[ { "msg_contents": "Dear list\n\nI am experiencing a rather severe degradation of insert performance\nstarting from an empty database:\n\n\n\t 120.000 mio SNPs imported in 28.9 sec - 4.16 mio/sec\n\t 120.000 mio SNPs imported in 40.9 sec - 2.93 mio/sec\n\t 120.000 mio SNPs imported in 49.7 sec - 2.41 mio/sec\n\t 120.000 mio SNPs imported in 58.8 sec - 2.04 mio/sec\n\t 120.000 mio SNPs imported in 68.9 sec - 1.74 mio/sec\n\t 120.000 mio SNPs imported in 77.0 sec - 1.56 mio/sec\n\t 120.000 mio SNPs imported in 85.1 sec - 1.41 mio/sec\n\t 120.000 mio SNPs imported in 94.0 sec - 1.28 mio/sec\n\t 120.000 mio SNPs imported in 103.4 sec - 1.16 mio/sec\n\t 120.000 mio SNPs imported in 108.9 sec - 1.10 mio/sec\n\t 120.000 mio SNPs imported in 117.2 sec - 1.02 mio/sec\n\t 120.000 mio SNPs imported in 122.1 sec - 0.98 mio/sec\n\t 120.000 mio SNPs imported in 132.6 sec - 0.90 mio/sec\n\t 120.000 mio SNPs imported in 142.0 sec - 0.85 mio/sec\n\t 120.000 mio SNPs imported in 147.3 sec - 0.81 mio/sec\n\t 120.000 mio SNPs imported in 154.4 sec - 0.78 mio/sec\n\t 120.000 mio SNPs imported in 163.9 sec - 0.73 mio/sec\n\t 120.000 mio SNPs imported in 170.1 sec - 0.71 mio/sec\n\t 120.000 mio SNPs imported in 179.1 sec - 0.67 mio/sec\n\t 120.000 mio SNPs imported in 186.1 sec - 0.64 mio/sec\n\neach line represents the insertion of 20000 records in two tables which is\nnot really a whole lot. Also, these 20000 get inserted in one program run.\nThe following lines are then again each the execution of that program.\nThe insert are a text string in one table and a bit varying of length packed\n24000 bits, also no big deal.\n\nAs can be seen the degradation is severe going from 29 sec up to 186 sec\nfor the same amount of data inserted.\n\nI have dropped the indices and primary keys, but that did not change the \npicture. 
Made commits every 100 records: also no effect.\nI have also played around with postgresql.conf but also this had no real \neffect (which is actually not surprising considering the small size of the \ndatabase).\n\nAt this stage the who database has a size of around 1GB.\n\nI am using pg 9.4\n\nany idea of what might be going on?\n\ncheers\n\nEildert\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Sep 2015 13:32:30 +0200", "msg_from": "Eildert Groeneveld <[email protected]>", "msg_from_op": true, "msg_subject": "degrading inser performance" }, { "msg_contents": "On 17.9.2015 13:32, Eildert Groeneveld wrote:\n> Dear list\n> \n> I am experiencing a rather severe degradation of insert performance\n> starting from an empty database:\n> \n> \n> \t 120.000 mio SNPs imported in 28.9 sec - 4.16 mio/sec\n> \t 120.000 mio SNPs imported in 40.9 sec - 2.93 mio/sec\n> \t 120.000 mio SNPs imported in 49.7 sec - 2.41 mio/sec\n> \t 120.000 mio SNPs imported in 58.8 sec - 2.04 mio/sec\n> \t 120.000 mio SNPs imported in 68.9 sec - 1.74 mio/sec\n> \t 120.000 mio SNPs imported in 77.0 sec - 1.56 mio/sec\n> \t 120.000 mio SNPs imported in 85.1 sec - 1.41 mio/sec\n> \t 120.000 mio SNPs imported in 94.0 sec - 1.28 mio/sec\n> \t 120.000 mio SNPs imported in 103.4 sec - 1.16 mio/sec\n> \t 120.000 mio SNPs imported in 108.9 sec - 1.10 mio/sec\n> \t 120.000 mio SNPs imported in 117.2 sec - 1.02 mio/sec\n> \t 120.000 mio SNPs imported in 122.1 sec - 0.98 mio/sec\n> \t 120.000 mio SNPs imported in 132.6 sec - 0.90 mio/sec\n> \t 120.000 mio SNPs imported in 142.0 sec - 0.85 mio/sec\n> \t 120.000 mio SNPs imported in 147.3 sec - 0.81 mio/sec\n> \t 120.000 mio SNPs imported in 154.4 sec - 0.78 mio/sec\n> \t 120.000 mio SNPs imported in 163.9 sec - 0.73 mio/sec\n> \t 120.000 mio SNPs imported in 170.1 sec - 0.71 mio/sec\n> \t 120.000 mio SNPs imported in 179.1 sec - 0.67 mio/sec\n> \t 120.000 mio SNPs imported in 186.1 sec - 0.64 mio/sec\n> \n> each line represents the insertion of 20000 records in two tables which is\n> not really a whole lot. Also, these 20000 get inserted in one program run.\n> The following lines are then again each the execution of that program.\n> The insert are a text string in one table and a bit varying of length packed\n> 24000 bits, also no big deal.\n> \n> As can be seen the degradation is severe going from 29 sec up to 186 sec\n> for the same amount of data inserted.\n> \n> I have dropped the indices and primary keys, but that did not change the \n> picture. 
Made commits every 100 records: also no effect.\n> I have also played around with postgresql.conf but also this had no real \n> effect (which is actually not surprising considering the small size of the \n> database).\n> \n> At this stage the who database has a size of around 1GB.\n> \n> I am using pg 9.4\n> \n> any idea of what might be going on?\n\n\nHello.\n\nJust a couple of questions...\n\nYou talk about two tables; have you also dropped FKs (you only mention indices\nand PK)?\n\nWhat SQL do you use for inserting the data:\n * one INSERT per row with autocommit\n * one INSERT per row inside BEGIN...COMMIT\n * one INSERT per bulk (20 000 rows)\n * one COPY per bulk (20 000 rows)\n?\n\nIs the loading of data the only activity on the server?\n\nSee also:\nhttp://www.postgresql.org/docs/9.4/static/populate.html\n\n\nHTH,\n\nLadislav Lenart\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Sep 2015 14:11:55 +0200", "msg_from": "Ladislav Lenart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degrading inser performance" }, { "msg_contents": "On Do, 2015-09-17 at 14:11 +0200, Ladislav Lenart wrote:\n> On 17.9.2015 13:32, Eildert Groeneveld wrote:\n> > Dear list\n> > \n> > I am experiencing a rather severe degradation of insert performance\n> > starting from an empty database:\n> > \n> > \n> > \t 120.000 mio SNPs imported in 28.9 sec - 4.16 mio/sec\n> > \t 120.000 mio SNPs imported in 40.9 sec - 2.93 mio/sec\n> > \t 120.000 mio SNPs imported in 49.7 sec - 2.41 mio/sec\n> > \t 120.000 mio SNPs imported in 58.8 sec - 2.04 mio/sec\n> > \t 120.000 mio SNPs imported in 68.9 sec - 1.74 mio/sec\n> > \t 120.000 mio SNPs imported in 77.0 sec - 1.56 mio/sec\n> > \t 120.000 mio SNPs imported in 85.1 sec - 1.41 mio/sec\n> > \t 120.000 mio SNPs imported in 94.0 sec - 1.28 mio/sec\n> > \t 120.000 mio SNPs imported in 103.4 sec - 1.16 mio/sec\n> > \t 120.000 mio SNPs imported in 108.9 sec - 1.10 mio/sec\n> > \t 120.000 mio SNPs imported in 117.2 sec - 1.02 mio/sec\n> > \t 120.000 mio SNPs imported in 122.1 sec - 0.98 mio/sec\n> > \t 120.000 mio SNPs imported in 132.6 sec - 0.90 mio/sec\n> > \t 120.000 mio SNPs imported in 142.0 sec - 0.85 mio/sec\n> > \t 120.000 mio SNPs imported in 147.3 sec - 0.81 mio/sec\n> > \t 120.000 mio SNPs imported in 154.4 sec - 0.78 mio/sec\n> > \t 120.000 mio SNPs imported in 163.9 sec - 0.73 mio/sec\n> > \t 120.000 mio SNPs imported in 170.1 sec - 0.71 mio/sec\n> > \t 120.000 mio SNPs imported in 179.1 sec - 0.67 mio/sec\n> > \t 120.000 mio SNPs imported in 186.1 sec - 0.64 mio/sec\n> > \n> > each line represents the insertion of 20000 records in two tables\n> > which is\n> > not really a whole lot. Also, these 20000 get inserted in one\n> > program run.\n> > The following lines are then again each the execution of that\n> > program.\n> > The insert are a text string in one table and a bit varying of\n> > length packed\n> > 24000 bits, also no big deal.\n> > \n> > As can be seen the degradation is severe going from 29 sec up to\n> > 186 sec\n> > for the same amount of data inserted.\n> > \n> > I have dropped the indices and primary keys, but that did not\n> > change the \n> > picture. 
Made commits every 100 records: also no effect.\n> > I have also played around with postgresql.conf but also this had no\n> > real \n> > effect (which is actually not surprising considering the small size\n> > of the \n> > database).\n> > \n> > At this stage the who database has a size of around 1GB.\n> > \n> > I am using pg 9.4\n> > \n> > any idea of what might be going on?\n> \n> \n> Hello.\n> \n> Just a couple of questions...\n> \n> You talk about two tables; have you also dropped FKs (you only\n> mention indices\n> and PK)?\nyes, they were all gone\n> \n> What SQL do you use for inserting the data:\nI go through ecpg\n> * one INSERT per row with autocommit\nyes\n> * one INSERT per row inside BEGIN...COMMIT\nalso this, same result as above\n> * one INSERT per bulk (20 000 rows)\n> * one COPY per bulk (20 000 rows)\ncopy does not fit so well, as it is not only initial populating.\n\n> Is the loading of data the only activity on the server?\nyes, it is. I have this \"feature\" on every machine\n\n> See also:\n> http://www.postgresql.org/docs/9.4/static/populate.html\nThanks, yes, I have been through this.\n\nmillions of records seem to be the staple diet of PG, here the \ndegradation starts already with the second 20000 record batch.\n> \ngreetings\n\nEildert\n> HTH,\n> \n> Ladislav Lenart\n> \n> \n-- \nEildert Groeneveld\n===================================================\nInstitute of Farm Animal Genetics (FLI)\nMariensee 31535 Neustadt Germany\nTel : (+49)(0)5034 871155 Fax : (+49)(0)5034 871143\ne-mail: [email protected] \nweb: http://vce.tzv.fal.de\n==================================================\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Sep 2015 14:19:25 +0200", "msg_from": "Eildert Groeneveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: degrading inser performance" }, { "msg_contents": "On Thu, Sep 17, 2015 at 9:19 AM, Eildert Groeneveld <\[email protected]> wrote:\n\n> > * one COPY per bulk (20 000 rows)\n> copy does not fit so well, as it is not only initial populating.\n>\n\nWhy do you say COPY doesn't fit? It seems to me that COPY fits perfectly\nfor your case, and would certainly make the load faster.\n\nI suspect (not sure though) that the degradation is most because you are\ninserting one row at a time, and, it needs to verify FSM (Free Space Map)\nfor each tuple inserted, when the table start to get more populated, this\nverification starts to become slower. If that is really the case, COPY\nwould certainly improve that, or even INSERT with many rows at once.\n\nRegards,\n-- \nMatheus de Oliveira\n\nOn Thu, Sep 17, 2015 at 9:19 AM, Eildert Groeneveld <[email protected]> wrote:\n>  * one COPY per bulk (20 000 rows)\ncopy does not fit so well, as it is not only initial populating.Why do you say COPY doesn't fit? It seems to me that COPY fits perfectly for your case, and would certainly make the load faster.I suspect (not sure though) that the degradation is most because you are inserting one row at a time, and, it needs to verify FSM (Free Space Map) for each tuple inserted, when the table start to get more populated, this verification starts to become slower. 
If that is really the case, COPY would certainly improve that, or even INSERT with many rows at once.Regards,-- Matheus de Oliveira", "msg_date": "Thu, 17 Sep 2015 11:21:15 -0300", "msg_from": "Matheus de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degrading inser performance" }, { "msg_contents": "Thanks for your input!\nOn Do, 2015-09-17 at 11:21 -0300, Matheus de Oliveira wrote:\n> \n> On Thu, Sep 17, 2015 at 9:19 AM, Eildert Groeneveld <\n> [email protected]> wrote:\n> > > * one COPY per bulk (20 000 rows)\n> > copy does not fit so well, as it is not only initial populating.\n> > \n> Why do you say COPY doesn't fit? It seems to me that COPY fits\n> perfectly for your case, and would certainly make the load faster.\nwell, more than one table needs to get populated and data is not really\navailable in one file.\n\n> I suspect (not sure though) that the degradation is most because you are inserting one row at a time, and, it needs to verify FSM (Free Space Map) for each tuple inserted, when the table start to get more populated, this verification starts to become slower. If that is really the case, COPY would certainly improve that, or even INSERT with many rows at once.\nallright, sounds reasonable. \nBut what is your experience: is it possible that \ninserting the first 20000 records takes 29 seconds while inserting lot\n20 (i.e. 9*20000 later) takes\n186.9 sec? after all we are talking only about 200000 records? That\ntake 6 times longer!!\nodd, anyone has an idea?\ngreetings\nEildert\n> > Regards,> -- \n> Matheus de Oliveira> \n> \n\n> \n\n> \n\n-- \nEildert Groeneveld\n===================================================\nInstitute of Farm Animal Genetics (FLI)\nMariensee 31535 Neustadt Germany\nTel : (+49)(0)5034 871155 Fax : (+49)(0)5034 871143\ne-mail: [email protected] \nweb: http://vce.tzv.fal.de\n==================================================\n\n\nThanks for your input!On Do, 2015-09-17 at 11:21 -0300, Matheus de Oliveira wrote:On Thu, Sep 17, 2015 at 9:19 AM, Eildert Groeneveld <[email protected]> wrote:\n>  * one COPY per bulk (20 000 rows)\ncopy does not fit so well, as it is not only initial populating.Why do you say COPY doesn't fit? It seems to me that COPY fits perfectly for your case, and would certainly make the load faster.well, more than one table needs to get populated and data is not really available in one file.I suspect (not sure though) that the degradation is most because you are inserting one row at a time, and, it needs to verify FSM (Free Space Map) for each tuple inserted, when the table start to get more populated, this verification starts to become slower. If that is really the case, COPY would certainly improve that, or even INSERT with many rows at once.allright, sounds reasonable. But what is your experience: is it possible that inserting the first 20000 records takes 29 seconds while inserting lot 20 (i.e. 9*20000 later) takes186.9 sec? after all we are talking only about 200000 records? 
That take 6 times longer!!odd, anyone has an idea?greetingsEildertRegards,-- Matheus de Oliveira\n\n-- \nEildert Groeneveld\n===================================================\nInstitute of Farm Animal Genetics (FLI)\nMariensee 31535 Neustadt Germany\nTel : (+49)(0)5034 871155 Fax : (+49)(0)5034 871143\ne-mail: [email protected] \nweb: http://vce.tzv.fal.de\n==================================================", "msg_date": "Thu, 17 Sep 2015 16:41:08 +0200", "msg_from": "Eildert Groeneveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: degrading inser performance" }, { "msg_contents": "Nobody has asked what kind of machine this is ???\n\nHard disks, memory, etc.\n\nWhat are your relevant settings in postgresql.conf ? Shared buffers,\ncheckpoints, etc.\n\nAlso how big are the inserts ? What else is this machine doing ? Is it bare\nhardware, or a VM ?\n\nDave Cramer\n\ndave.cramer(at)credativ(dot)ca\nhttp://www.credativ.ca\n\nOn 17 September 2015 at 10:41, Eildert Groeneveld <\[email protected]> wrote:\n\n> Thanks for your input!\n> On Do, 2015-09-17 at 11:21 -0300, Matheus de Oliveira wrote:\n>\n>\n> On Thu, Sep 17, 2015 at 9:19 AM, Eildert Groeneveld <\n> [email protected]> wrote:\n>\n> > * one COPY per bulk (20 000 rows)\n> copy does not fit so well, as it is not only initial populating.\n>\n>\n> Why do you say COPY doesn't fit? It seems to me that COPY fits perfectly\n> for your case, and would certainly make the load faster.\n>\n> well, more than one table needs to get populated and data is not really\n> available in one file.\n>\n>\n> I suspect (not sure though) that the degradation is most because you are\n> inserting one row at a time, and, it needs to verify FSM (Free Space Map)\n> for each tuple inserted, when the table start to get more populated, this\n> verification starts to become slower. If that is really the case, COPY\n> would certainly improve that, or even INSERT with many rows at once.\n>\n> allright, sounds reasonable.\n>\n> But what is your experience: is it possible that\n> inserting the first 20000 records takes 29 seconds while inserting lot 20\n> (i.e. 9*20000 later) takes\n> 186.9 sec? after all we are talking only about 200000 records? That take 6\n> times longer!!\n>\n> odd, anyone has an idea?\n>\n> greetings\n>\n> Eildert\n>\n>\n> Regards,\n> --\n> Matheus de Oliveira\n>\n>\n> --\n> Eildert Groeneveld\n> ===================================================\n> Institute of Farm Animal Genetics (FLI)\n> Mariensee 31535 Neustadt Germany\n> Tel : (+49)(0)5034 871155 Fax : (+49)(0)5034 871143\n> e-mail: [email protected]\n> web: http://vce.tzv.fal.de\n> ==================================================\n>\n>\n\nNobody has asked what kind of machine this is ???Hard disks, memory, etc.What are your relevant settings in postgresql.conf ? Shared buffers, checkpoints, etc.Also how big are the inserts ? What else is this machine doing ? Is it bare hardware, or a VM ?Dave Cramerdave.cramer(at)credativ(dot)cahttp://www.credativ.ca\nOn 17 September 2015 at 10:41, Eildert Groeneveld <[email protected]> wrote:Thanks for your input!On Do, 2015-09-17 at 11:21 -0300, Matheus de Oliveira wrote:On Thu, Sep 17, 2015 at 9:19 AM, Eildert Groeneveld <[email protected]> wrote:\n>  * one COPY per bulk (20 000 rows)\ncopy does not fit so well, as it is not only initial populating.Why do you say COPY doesn't fit? 
It seems to me that COPY fits perfectly for your case, and would certainly make the load faster.well, more than one table needs to get populated and data is not really available in one file.I suspect (not sure though) that the degradation is most because you are inserting one row at a time, and, it needs to verify FSM (Free Space Map) for each tuple inserted, when the table start to get more populated, this verification starts to become slower. If that is really the case, COPY would certainly improve that, or even INSERT with many rows at once.allright, sounds reasonable. But what is your experience: is it possible that inserting the first 20000 records takes 29 seconds while inserting lot 20 (i.e. 9*20000 later) takes186.9 sec? after all we are talking only about 200000 records? That take 6 times longer!!odd, anyone has an idea?greetingsEildertRegards,-- Matheus de Oliveira\n\n-- \nEildert Groeneveld\n===================================================\nInstitute of Farm Animal Genetics (FLI)\nMariensee 31535 Neustadt Germany\nTel : (+49)(0)5034 871155 Fax : (+49)(0)5034 871143\ne-mail: [email protected] \nweb: http://vce.tzv.fal.de\n==================================================", "msg_date": "Thu, 17 Sep 2015 16:13:02 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degrading inser performance" } ]
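A minimal sketch of the batching approaches suggested in the thread above (multi-row INSERT and COPY). The table and column names are hypothetical, since the thread does not show the actual schema, and the bit strings are shortened stand-ins for the real 24000-bit values:

    BEGIN;
    -- One multi-row INSERT per batch instead of one INSERT per row removes most of
    -- the per-statement round-trip and commit overhead.
    INSERT INTO snp_genotypes (animal_label, genotype_bits) VALUES
        ('animal_0001', B'0101'),
        ('animal_0002', B'0110'),
        ('animal_0003', B'1001');
    COMMIT;

    -- COPY does not need an intermediate file: the client streams rows over STDIN
    -- (tab-separated, terminated by a line containing only \.), so it can be used
    -- for ongoing loads as well as initial population.
    COPY snp_genotypes (animal_label, genotype_bits) FROM STDIN;

Whether batching alone removes the progressive slowdown reported above is a separate question, but it is usually the first thing to rule out before looking at the server itself.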
[ { "msg_contents": "Hi all. I am occasionally seeing really slow update/inserts into a fairly\nlarge table. By really slow I mean around 10-40 seconds,\nwhile the majority of queries take milliseconds. I cannot reproduce this\nproblem myself, but it is occurring multiple times a day\n(maybe 30 times).\n\nSystem Info\n---------------\nModel: Dell PowerEdge R420\nCPU: 12 core Intel(R) Xeon(R) @ 2.20GHz\nMemory: 16GB\nDisk: PERC H310 Mini Raid Controller using Raid 1\nOS: Ubuntu 14.04.3 LTS\n\nDB Settings\n----------------\n\n name current_setting\nsource\n-------------------------------+----------------------------------------+----------------------\n application_name | psql\n | client\n auto_explain.log_min_duration | 15s |\nconfiguration file\n checkpoint_segments | 16\n| configuration file\n client_encoding | UTF8\n | client\n DateStyle | ISO, YMD\n | configuration file\n default_text_search_config | pg_catalog.english\n| configuration file\n effective_cache_size | 8GB\n | configuration file\n external_pid_file |\n/var/run/postgresql/9.3-main.pid | configuration file\n hot_standby | on\n | configuration file\n lc_messages | en_CA.UTF-8\n | configuration file\n lc_monetary | en_CA.UTF-8\n | configuration file\n lc_numeric | en_CA.UTF-8\n | configuration file\n lc_time | en_CA.UTF-8\n | configuration file\n listen_addresses | localhost,x.x.x.x\n | configuration file\n log_autovacuum_min_duration | -1 |\nconfiguration file\n log_checkpoints | on\n | configuration file\n log_line_prefix | %m %p %v %x\n | configuration file\n log_lock_waits | on\n | configuration file\n log_min_duration_statement | 15s |\nconfiguration file\n log_timezone | UTC\n | configuration file\n max_connections | 100\n | configuration file\n max_stack_depth | 2MB\n | environment variable\n max_wal_senders | 3\n | configuration file\n pg_stat_statements.track | all\n | configuration file\n shared_buffers | 4GB\n | configuration file\n shared_preload_libraries | pg_stat_statements, auto_explain\n | configuration file\n ssl | on\n | configuration file\n TimeZone | UTC\n | configuration file\n track_activity_query_size | 2048\n| configuration file\n unix_socket_directories | /var/run/postgresql\n | configuration file\n wal_keep_segments | 32\n| configuration file\n wal_level | hot_standby\n | configuration file\n\nSchema\n-------\n\nTable \"public.documents\"\n Column | Type |\n Modifiers\n------------------------+-----------------------------+--------------------------------------------------------\n id | bigint | not null\ndefault nextval('documents_id_seq'::regclass)\n user_id | bigint | not null\n biller_id bigint | not null\n filename | character varying(255) | not null\n resource | character varying(255) | not null\n size | integer | not\nnull\n doc_type | character varying(255) | not null\n content_type | character varying(255) | not null\n account_name | character varying(255) |\n account_number | character varying(255) |\n bill_date | timestamp without time zone |\n due_date | date |\n amount | numeric(12,2) |\n amount | numeric(12,2) |\n amount | numeric(12,2) |\n amount | numeric(12,2) |\n amount | numeric(12,2) |\n paid | boolean |\n paid_date | timestamp without time zone |\n paid_amount | numeric(12,2) |\n contents | text |\n contents_search | tsvector |\n extra_data | text |\n created_at | timestamp without time zone | not null default\nnow()\n updated_at | timestamp without time zone | not null default now()\n billercred_id | bigint |\n folder_id | bigint |\n shasum | character varying(255) |\n intake_type 
| smallint | not null default 1\n page_count | smallint |\n notes | text |\n vendor_name | character varying(255) |\n invoice_number | character varying(255) |\n tax | numeric(12,2) |\n subtotal | numeric(12,2) |\n payment_account_number | character varying(255) |\n currency | character varying(3) |\n payment_method | payment_method |\n workflow_state | workflow_states | default\n'review'::workflow_states\n vendor_id | bigint |\n document_type | document_types |\nIndexes:\n \"documents_pkey\" PRIMARY KEY, btree (id)\n \"document_search_ix\" gin (contents_search)\n \"document_user_id_recvd_ix\" btree (user_id, bill_date DESC)\nForeign-key constraints:\n \"documents_biller_id_fkey\" FOREIGN KEY (biller_id) REFERENCES\nbillers(id) ON DELETE SET DEFAULT\n \"documents_billercred_id_fkey\" FOREIGN KEY (billercred_id) REFERENCES\nbillercreds(id) ON DELETE SET NULL\n \"documents_folder_id_fkey\" FOREIGN KEY (folder_id) REFERENCES\nfolders(id) ON DELETE CASCADE\n \"documents_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES users(id) ON\nDELETE CASCADE\n \"documents_vendor_id_fkey\" FOREIGN KEY (vendor_id) REFERENCES\nvendors(id) ON DELETE SET NULL\nReferenced by:\n TABLE \"document_billcom_actions\" CONSTRAINT\n\"document_billcom_actions_document_id_fkey\" FOREIGN KEY (document_id)\nREFERENCES documents(id) ON DELETE CASCADE\n TABLE \"document_box_actions\" CONSTRAINT\n\"document_box_actions_document_id_fkey\" FOREIGN KEY (document_id)\nREFERENCES documents(id) ON DELETE CASCADE\n TABLE \"document_email_forwarding_actions\" CONSTRAINT\n\"document_email_forwarding_actions_document_id_fkey\" FOREIGN KEY\n(document_id) REFERENCES documents(id) ON DELETE CASCADE\n TABLE \"document_qbo_actions\" CONSTRAINT\n\"document_qbo_actions_document_id_fkey\" FOREIGN KEY (document_id)\nREFERENCES documents(id) ON DELETE CASCADE\n TABLE \"document_xero_actions\" CONSTRAINT\n\"document_xero_actions_document_id_fkey\" FOREIGN KEY (document_id)\nREFERENCES documents(id) ON DELETE CASCADE\n TABLE \"document_xerofiles_actions\" CONSTRAINT\n\"document_xerofiles_actions_document_id_fkey\" FOREIGN KEY (document_id)\nREFERENCES documents(id) ON DELETE CASCADE\n TABLE \"documenttagmap\" CONSTRAINT \"documenttagmap_document_id_fkey\"\nFOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE\n TABLE \"synced_docs\" CONSTRAINT \"synced_docs_doc_id_fkey\" FOREIGN KEY\n(doc_id) REFERENCES documents(id) ON DELETE CASCADE\nTriggers:\n document_search_update BEFORE INSERT OR UPDATE ON documents FOR EACH\nROW EXECUTE PROCEDURE tsvector_update_trigger('contents_search',\n'pg_catalog.english', 'contents', 'filename', 'account_name',\n'account_number')\n document_updated_at_t BEFORE UPDATE ON documents FOR EACH ROW EXECUTE\nPROCEDURE update_updated_at_column()\n documents_count BEFORE INSERT OR DELETE ON documents FOR EACH ROW\nEXECUTE PROCEDURE count_trig()\n folder_document_count_trig BEFORE INSERT OR DELETE OR UPDATE ON\ndocuments FOR EACH ROW EXECUTE PROCEDURE update_folder_count()\n tags_in_trash_document_count_trig BEFORE DELETE OR UPDATE ON documents\nFOR EACH ROW EXECUTE PROCEDURE update_tag_trash_count()\n\nTable/Index Sizes\n-----------------\n\nCurrent size:\n6841 MB | pg_toast_17426\n2486 MB | document_search_ix\n2172 MB | documents\n188 MB | pg_toast_17426_index\n113 MB | document_user_id_recvd_ix\n76 MB | documents_pkey\n\nSize after building on a new machine from pg_dump:\n5564 MB | pg_toast_1599236\n1882 MB | documents\n1666 MB | document_search_ix\n73 MB | pg_toast_1599236_index\n40 MB | 
document_user_id_recvd_ix\n\nThings to know about the table/DB:\n----------------------------------\n- We currently inserting ~ 5000-10000 documents a day\n- We extract text from the documents and store it in the contents field for\nfull text search\n- In the last few months we've added a feature that allows users to update\ncolumns (date, amount, etc), so we're seeing a lot more updates to\n the table than before\n- 2 massive updates were done to the table in the last few months in which\na particular column was updated for each row in the table\n- We're running a pg_dump every hour which takes around 10 min.\n\nWhat I've tried\n---------------\n\nLogging checkpoints: The slow queries happen even in between checkpoints\n\nLogging locks: There are no logs indicating that the slow query is the\nresult of a lock\n\nExplain plan: Nothing strange, updates are using documents_pkey index\n\nTrying another machine: I switched over to our replica on another box and\nwas still seeing slow queries\n\nDoing backups less frequently: Slow queries occur even when backup is not\nrunning\n\nRebuilding document_search_ix index: The index rebuild reduced the index\nsize from about 4500MB to 1500MB. This didn't appear to reduce the slow\nqueries\n and the size is now at ~ 2400MB after\nless than a week\n\nRunning vacuum analyze after rebuilding index: Didn't appear to help\n\nMatching iotop output to slow query output: INSERTs are reading a lot of\ndata, 22MB in this case, but I don't know how to utilize this information.\n\n14:23:03 22739 be/4 postgres 702.11 K/s 66.87 K/s 0.00 % 25.55 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:04 22739 be/4 postgres 2.48 M/s 96.51 K/s 0.00 % 96.33 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:05 22739 be/4 postgres 1221.82 K/s 43.77 K/s 0.00 % 53.84 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:06 22739 be/4 postgres 1242.40 K/s 94.73 K/s 0.00 % 51.53 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:07 22739 be/4 postgres 1376.88 K/s 46.15 K/s 0.00 % 59.85 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:08 22739 be/4 postgres 563.74 K/s 37.09 K/s 0.00 % 29.03 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:09 22739 be/4 postgres 1267.57 K/s 81.54 K/s 0.00 % 47.95 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:10 22739 be/4 postgres 1080.95 K/s 59.43 K/s 0.00 % 48.21 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:11 22739 be/4 postgres 1041.17 K/s 125.98 K/s 0.00 % 49.86 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:13 22739 be/4 postgres 1019.91 K/s 138.55 K/s 0.00 % 42.42 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:14 22739 be/4 postgres 856.88 K/s 103.86 K/s 0.00 % 31.88 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:15 22739 be/4 postgres 1284.18 K/s 170.73 K/s 0.00 % 52.36 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:16 22739 be/4 postgres 1188.97 K/s 74.31 K/s 0.00 % 51.59 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:17 22739 be/4 postgres 1088.19 K/s 111.42 K/s 0.00 % 44.45 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:18 22739 be/4 postgres 1261.87 K/s 133.61 K/s 0.00 % 49.08 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:19 22739 be/4 postgres 1203.82 K/s 137.58 K/s 0.00 % 52.22 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:20 22739 be/4 postgres 1399.45 K/s 133.63 K/s 0.00 % 45.76 %\npostgres: ourusername ourdb 
192.x.x.x(51168) INSERT\n14:23:21 22739 be/4 postgres 1380.05 K/s 126.13 K/s 0.00 % 57.53 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n14:23:22 22739 be/4 postgres 1236.00 K/s 148.47 K/s 0.00 % 53.78 %\npostgres: ourusername ourdb 192.x.x.x(51168) INSERT\n\nWhat I haven't tried\n--------------------\n- more aggressive auto-vacuum\n- trying gist table for full text search index instead of gin\n- removing full text search altogether (are users don't use it very much)\n- rebuilding the production table\n- vacuum full\n\nAny help on what the issue might be or how to debug further would be\namazing. I'd like to understand this issue better,\nboth for my business as well as for my own understanding of databases.\n\nDave Stibrany\n\nHi all. I am occasionally seeing really slow update/inserts into a fairly large table. By really slow I mean around 10-40 seconds,while the majority of queries take milliseconds. I cannot reproduce this problem myself, but it is occurring multiple times a day(maybe 30 times).System Info---------------Model: Dell PowerEdge R420CPU: 12 core Intel(R) Xeon(R) @ 2.20GHzMemory: 16GBDisk: PERC H310 Mini Raid Controller using Raid 1OS: Ubuntu 14.04.3 LTSDB Settings----------------     name                          current_setting                       source-------------------------------+----------------------------------------+---------------------- application_name                      | psql                                   | client auto_explain.log_min_duration | 15s                                    | configuration file checkpoint_segments               | 16                                     | configuration file client_encoding                         | UTF8                                   | client DateStyle                                   | ISO, YMD                               | configuration file default_text_search_config       | pg_catalog.english                     | configuration file effective_cache_size                 | 8GB                                    | configuration file external_pid_file                        | /var/run/postgresql/9.3-main.pid       | configuration file hot_standby                               | on                                     | configuration file lc_messages                              | en_CA.UTF-8                            | configuration file lc_monetary                               | en_CA.UTF-8                            | configuration file lc_numeric                                 | en_CA.UTF-8                            | configuration file lc_time                                       | en_CA.UTF-8                            | configuration file listen_addresses                       | localhost,x.x.x.x                      | configuration file log_autovacuum_min_duration   | -1                                     | configuration file log_checkpoints                        | on                                     | configuration file log_line_prefix                           | %m %p %v %x                            | configuration file log_lock_waits                           | on                                     | configuration file log_min_duration_statement     | 15s                                    | configuration file log_timezone                            | UTC                                    | configuration file max_connections                     | 100                                    | configuration file max_stack_depth                     | 2MB                            
        | environment variable max_wal_senders                     | 3                                      | configuration file pg_stat_statements.track          | all                                    | configuration file shared_buffers                          | 4GB                                    | configuration file shared_preload_libraries           | pg_stat_statements, auto_explain       | configuration file ssl                                             | on                                     | configuration file TimeZone                                 | UTC                                    | configuration file track_activity_query_size         | 2048                                   | configuration file unix_socket_directories            | /var/run/postgresql                    | configuration file wal_keep_segments                 | 32                                     | configuration file wal_level                                   | hot_standby                            | configuration fileSchema-------Table \"public.documents\"         Column         |            Type             |                       Modifiers------------------------+-----------------------------+-------------------------------------------------------- id                          | bigint                        | not null default nextval('documents_id_seq'::regclass) user_id                 | bigint                        | not null biller_id                  bigint                         | not null filename               | character varying(255)      | not null resource               | character varying(255)      | not null size                      | integer                                | not null doc_type              | character varying(255)      | not null content_type        | character varying(255)      | not null account_name     | character varying(255)      | account_number  | character varying(255)      | bill_date                | timestamp without time zone | due_date              | date                              | amount                 | numeric(12,2)               | amount                 | numeric(12,2)               | amount                 | numeric(12,2)               | amount                 | numeric(12,2)               | amount                 | numeric(12,2)               | paid                      | boolean                     | paid_date             | timestamp without time zone | paid_amount        | numeric(12,2)               | contents               | text                                | contents_search  | tsvector                         | extra_data           | text                                | created_at            | timestamp without time zone | not null default now() updated_at           | timestamp without time zone | not null default now() billercred_id          | bigint                            | folder_id               | bigint                             | shasum                | character varying(255)      | intake_type          | smallint                    | not null default 1 page_count          | smallint                    | notes                   | text                        | vendor_name      | character varying(255)      | invoice_number  | character varying(255)      | tax                       | numeric(12,2)               | subtotal               | numeric(12,2)               | payment_account_number | character varying(255)      | currency              | character varying(3)        | payment_method     | payment_method        
      | workflow_state         | workflow_states             | default 'review'::workflow_states vendor_id                 | bigint                      | document_type        | document_types              |Indexes:    \"documents_pkey\" PRIMARY KEY, btree (id)    \"document_search_ix\" gin (contents_search)    \"document_user_id_recvd_ix\" btree (user_id, bill_date DESC)Foreign-key constraints:    \"documents_biller_id_fkey\" FOREIGN KEY (biller_id) REFERENCES billers(id) ON DELETE SET DEFAULT    \"documents_billercred_id_fkey\" FOREIGN KEY (billercred_id) REFERENCES billercreds(id) ON DELETE SET NULL    \"documents_folder_id_fkey\" FOREIGN KEY (folder_id) REFERENCES folders(id) ON DELETE CASCADE    \"documents_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE    \"documents_vendor_id_fkey\" FOREIGN KEY (vendor_id) REFERENCES vendors(id) ON DELETE SET NULLReferenced by:    TABLE \"document_billcom_actions\" CONSTRAINT \"document_billcom_actions_document_id_fkey\" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE    TABLE \"document_box_actions\" CONSTRAINT \"document_box_actions_document_id_fkey\" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE    TABLE \"document_email_forwarding_actions\" CONSTRAINT \"document_email_forwarding_actions_document_id_fkey\" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE    TABLE \"document_qbo_actions\" CONSTRAINT \"document_qbo_actions_document_id_fkey\" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE    TABLE \"document_xero_actions\" CONSTRAINT \"document_xero_actions_document_id_fkey\" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE    TABLE \"document_xerofiles_actions\" CONSTRAINT \"document_xerofiles_actions_document_id_fkey\" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE    TABLE \"documenttagmap\" CONSTRAINT \"documenttagmap_document_id_fkey\" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE    TABLE \"synced_docs\" CONSTRAINT \"synced_docs_doc_id_fkey\" FOREIGN KEY (doc_id) REFERENCES documents(id) ON DELETE CASCADETriggers:    document_search_update BEFORE INSERT OR UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('contents_search', 'pg_catalog.english', 'contents', 'filename', 'account_name', 'account_number')    document_updated_at_t BEFORE UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE update_updated_at_column()    documents_count BEFORE INSERT OR DELETE ON documents FOR EACH ROW EXECUTE PROCEDURE count_trig()    folder_document_count_trig BEFORE INSERT OR DELETE OR UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE update_folder_count()    tags_in_trash_document_count_trig BEFORE DELETE OR UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE update_tag_trash_count()Table/Index Sizes-----------------Current size:6841 MB        | pg_toast_17426 2486 MB        | document_search_ix2172 MB        | documents188 MB          | pg_toast_17426_index113 MB          | document_user_id_recvd_ix76 MB            | documents_pkeySize after building on a new machine from pg_dump:5564 MB        | pg_toast_15992361882 MB        | documents1666 MB        | document_search_ix                              73 MB            | pg_toast_1599236_index40 MB            | document_user_id_recvd_ixThings to know about the table/DB:----------------------------------- We currently inserting ~ 5000-10000 documents a day- We extract text from the documents and store it in the 
contents field for full text search- In the last few months we've added a feature that allows users to update columns (date, amount, etc), so we're seeing a lot more updates to  the table than before- 2 massive updates were done to the table in the last few months in which a particular column was updated for each row in the table- We're running a pg_dump every hour which takes around 10 min.What I've tried---------------Logging checkpoints: The slow queries happen even in between checkpointsLogging locks: There are no logs indicating that the slow query is the result of a lockExplain plan: Nothing strange, updates are using documents_pkey indexTrying another machine: I switched over to our replica on another box and was still seeing slow queriesDoing backups less frequently: Slow queries occur even when backup is not runningRebuilding document_search_ix index: The index rebuild reduced the index size from about 4500MB to 1500MB. This didn't appear to reduce the slow queries                                     and the size is now at ~ 2400MB after less than a weekRunning vacuum analyze after rebuilding index: Didn't appear to helpMatching iotop output to slow query output: INSERTs are reading a lot of data, 22MB in this case, but I don't know how to utilize this information.14:23:03 22739 be/4 postgres  702.11 K/s   66.87 K/s  0.00 % 25.55 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:04 22739 be/4 postgres    2.48 M/s   96.51 K/s  0.00 % 96.33 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:05 22739 be/4 postgres 1221.82 K/s   43.77 K/s  0.00 % 53.84 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:06 22739 be/4 postgres 1242.40 K/s   94.73 K/s  0.00 % 51.53 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:07 22739 be/4 postgres 1376.88 K/s   46.15 K/s  0.00 % 59.85 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:08 22739 be/4 postgres  563.74 K/s   37.09 K/s  0.00 % 29.03 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:09 22739 be/4 postgres 1267.57 K/s   81.54 K/s  0.00 % 47.95 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:10 22739 be/4 postgres 1080.95 K/s   59.43 K/s  0.00 % 48.21 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:11 22739 be/4 postgres 1041.17 K/s  125.98 K/s  0.00 % 49.86 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:13 22739 be/4 postgres 1019.91 K/s  138.55 K/s  0.00 % 42.42 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:14 22739 be/4 postgres  856.88 K/s  103.86 K/s  0.00 % 31.88 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:15 22739 be/4 postgres 1284.18 K/s  170.73 K/s  0.00 % 52.36 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:16 22739 be/4 postgres 1188.97 K/s   74.31 K/s  0.00 % 51.59 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:17 22739 be/4 postgres 1088.19 K/s  111.42 K/s  0.00 % 44.45 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:18 22739 be/4 postgres 1261.87 K/s  133.61 K/s  0.00 % 49.08 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:19 22739 be/4 postgres 1203.82 K/s  137.58 K/s  0.00 % 52.22 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:20 22739 be/4 postgres 1399.45 K/s  133.63 K/s  0.00 % 45.76 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:21 22739 be/4 postgres 1380.05 K/s  126.13 K/s  0.00 % 57.53 % postgres: ourusername ourdb 192.x.x.x(51168) INSERT14:23:22 22739 be/4 postgres 1236.00 K/s  148.47 K/s  0.00 % 53.78 % postgres: ourusername ourdb 
192.x.x.x(51168) INSERTWhat I haven't tried--------------------- more aggressive auto-vacuum- trying gist table for full text search index instead of gin- removing full text search altogether (are users don't use it very much)- rebuilding the production table- vacuum fullAny help on what the issue might be or how to debug further would be amazing. I'd like to understand this issue better,both for my business as well as for my own understanding of databases.Dave Stibrany", "msg_date": "Thu, 17 Sep 2015 15:14:43 -0400", "msg_from": "Dave Stibrany <[email protected]>", "msg_from_op": true, "msg_subject": "Occasional Really Slow Running Updates/Inserts" }, { "msg_contents": "On Thu, Sep 17, 2015 at 03:14:43PM -0400, Dave Stibrany wrote:\n> Hi all. I am occasionally seeing really slow update/inserts into a fairly\n> large table. By really slow I mean around 10-40 seconds,\n> while the majority of queries take milliseconds. I cannot reproduce this\n> problem myself, but it is occurring multiple times a day\n> (maybe 30 times).\n> \n> System Info\n> ---------------\n> Model: Dell PowerEdge R420\n> CPU: 12 core Intel(R) Xeon(R) @ 2.20GHz\n> Memory: 16GB\n> Disk: PERC H310 Mini Raid Controller using Raid 1\n> OS: Ubuntu 14.04.3 LTS\n> \n> DB Settings\n> ----------------\n> ... a lot of information deleted...\n\nHi Dave,\n\nThis search index is almost certainly the cause of your slowdowns:\n\n> Indexes:\n> \"document_search_ix\" gin (contents_search)\n\nWe observed similar odd slowdowns with a GIN text search index. We\nhad to disable the 'fastupdate' option for the index to stop the large\npauses by the index entry clean-up processing. There have been some\nrecent patches to address the penalty problem caused by the fastupdate\nprocessing.\n\n> What I haven't tried\n> --------------------\n> - more aggressive auto-vacuum\n> - trying gist table for full text search index instead of gin\n> - removing full text search altogether (are users don't use it very much)\n\nNope, keep using GIN. GIST is too slow for this usage. Just disable the\n'fastupdate' on the index:\n\nALTER INDEX document_search_ix SET (fastupdate = off);\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Sep 2015 15:12:51 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional Really Slow Running Updates/Inserts" }, { "msg_contents": "Thanks Ken, I'll give that a try.\n\nOn Thu, Sep 17, 2015 at 4:12 PM, [email protected] <[email protected]> wrote:\n\n> On Thu, Sep 17, 2015 at 03:14:43PM -0400, Dave Stibrany wrote:\n> > Hi all. I am occasionally seeing really slow update/inserts into a fairly\n> > large table. By really slow I mean around 10-40 seconds,\n> > while the majority of queries take milliseconds. I cannot reproduce this\n> > problem myself, but it is occurring multiple times a day\n> > (maybe 30 times).\n> >\n> > System Info\n> > ---------------\n> > Model: Dell PowerEdge R420\n> > CPU: 12 core Intel(R) Xeon(R) @ 2.20GHz\n> > Memory: 16GB\n> > Disk: PERC H310 Mini Raid Controller using Raid 1\n> > OS: Ubuntu 14.04.3 LTS\n> >\n> > DB Settings\n> > ----------------\n> > ... 
a lot of information deleted...\n>\n> Hi Dave,\n>\n> This search index is almost certainly the cause of your slowdowns:\n>\n> > Indexes:\n> > \"document_search_ix\" gin (contents_search)\n>\n> We observed similar odd slowdowns with a GIN text search index. We\n> had to disable the 'fastupdate' option for the index to stop the large\n> pauses by the index entry clean-up processing. There have been some\n> recent patches to address the penalty problem caused by the fastupdate\n> processing.\n>\n> > What I haven't tried\n> > --------------------\n> > - more aggressive auto-vacuum\n> > - trying gist table for full text search index instead of gin\n> > - removing full text search altogether (are users don't use it very much)\n>\n> Nope, keep using GIN. GIST is too slow for this usage. Just disable the\n> 'fastupdate' on the index:\n>\n> ALTER INDEX document_search_ix SET (fastupdate = off);\n>\n> Regards,\n> Ken\n>\n\nThanks Ken, I'll give that a try.On Thu, Sep 17, 2015 at 4:12 PM, [email protected] <[email protected]> wrote:On Thu, Sep 17, 2015 at 03:14:43PM -0400, Dave Stibrany wrote:\n> Hi all. I am occasionally seeing really slow update/inserts into a fairly\n> large table. By really slow I mean around 10-40 seconds,\n> while the majority of queries take milliseconds. I cannot reproduce this\n> problem myself, but it is occurring multiple times a day\n> (maybe 30 times).\n>\n> System Info\n> ---------------\n> Model: Dell PowerEdge R420\n> CPU: 12 core Intel(R) Xeon(R) @ 2.20GHz\n> Memory: 16GB\n> Disk: PERC H310 Mini Raid Controller using Raid 1\n> OS: Ubuntu 14.04.3 LTS\n>\n> DB Settings\n> ----------------\n> ... a lot of information deleted...\n\nHi Dave,\n\nThis search index is almost certainly the cause of your slowdowns:\n\n> Indexes:\n>     \"document_search_ix\" gin (contents_search)\n\nWe observed similar odd slowdowns with a GIN text search index. We\nhad to disable the 'fastupdate' option for the index to stop the large\npauses by the index entry clean-up processing. There have been some\nrecent patches to address the penalty problem caused by the fastupdate\nprocessing.\n\n> What I haven't tried\n> --------------------\n> - more aggressive auto-vacuum\n> - trying gist table for full text search index instead of gin\n> - removing full text search altogether (are users don't use it very much)\n\nNope, keep using GIN. GIST is too slow for this usage. Just disable the\n'fastupdate' on the index:\n\nALTER INDEX document_search_ix SET (fastupdate = off);\n\nRegards,\nKen", "msg_date": "Thu, 17 Sep 2015 18:14:26 -0400", "msg_from": "Dave Stibrany <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional Really Slow Running Updates/Inserts" } ]
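A short sketch expanding on the fastupdate suggestion above. The index name comes from the thread and the catalog query is standard, while gin_pending_list_limit and gin_clean_pending_list() only exist on later releases (9.5 and 9.6 respectively), so they are shown as commented-out options to verify against the server version rather than as settings for the 9.3 instance described here:

    -- Write GIN entries at insert/update time instead of in deferred pending-list
    -- clean-up bursts, which is what tends to cause sporadic multi-second stalls:
    ALTER INDEX document_search_ix SET (fastupdate = off);

    -- Confirm the reloption took effect:
    SELECT relname, reloptions FROM pg_class WHERE relname = 'document_search_ix';

    -- On 9.5+ the pending list can be capped (in kB) instead of disabled outright:
    -- ALTER INDEX document_search_ix SET (gin_pending_list_limit = 512);
    -- On 9.6+ it can also be flushed on demand, for example from a scheduled job:
    -- SELECT gin_clean_pending_list('document_search_ix');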
[ { "msg_contents": "Please how long does it take approximately to restore a 300 Go database using\npg_restore ? Are there benchmarks for that ?\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/dump-restoration-performance-tp5867370.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 25 Sep 2015 06:43:06 -0700 (MST)", "msg_from": "rlemaroi <[email protected]>", "msg_from_op": true, "msg_subject": "dump restoration performance" }, { "msg_contents": "On Fri, Sep 25, 2015 at 10:43 PM, rlemaroi <[email protected]> wrote:\n> Please how long does it take approximately to restore a 300 Go database using\n> pg_restore ? Are there benchmarks for that ?\n\nThat's not an exact science and this is really application-dependent.\nFor example the more your schema has index entries to rebuild at\nrestore the longer it would take. There are as well ways to tune the\nserver to perform a faster restore, by for example increasing\nmaintenance_work_mem, disabling autovacuum, moving wal_level to\nminimum, etc.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Sep 2015 11:38:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump restoration performance" } ]
[ { "msg_contents": "How do we measure queries per second (QPS), not transactions per second, in\nPostgreSQL without turning on full logging which has a performance penalty\nand can soak up lots of disk space?\n\nWe are using 8.4, but I'm interested in any version as well.\n\nThank you,\nAdam C. Scott\n\nHow do we measure queries per second (QPS), not transactions per second, in PostgreSQL without turning on full logging which has a performance penalty and can soak up lots of disk space?We are using 8.4, but I'm interested in any version as well.Thank you,Adam C. Scott", "msg_date": "Sat, 26 Sep 2015 10:24:54 -0600", "msg_from": "Adam Scott <[email protected]>", "msg_from_op": true, "msg_subject": "Queries Per Second (QPS)" }, { "msg_contents": "Le 26 sept. 2015 6:26 PM, \"Adam Scott\" <[email protected]> a écrit :\n>\n> How do we measure queries per second (QPS), not transactions per second,\nin PostgreSQL without turning on full logging which has a performance\npenalty and can soak up lots of disk space?\n>\n\nThe only way I can think of is to write an extension that will execute some\ncode at the end of the execution of a query.\n\nNote that this might get tricky. Do you want to count any query? Such as\nthose in explicit transactions and those in plpgsql functions? People might\nnot see this your way, which may explain why I don't know of any such\nextension.\n\n> We are using 8.4, but I'm interested in any version as well.\n>\n\nLe 26 sept. 2015 6:26 PM, \"Adam Scott\" <[email protected]> a écrit :\n>\n> How do we measure queries per second (QPS), not transactions per second, in PostgreSQL without turning on full logging which has a performance penalty and can soak up lots of disk space?\n>\nThe only way I can think of is to write an extension that will execute some code at the end of the execution of a query.\nNote that this might get tricky. Do you want to count any query? Such as those in explicit transactions and those in plpgsql functions? People might not see this your way, which may explain why I don't know of any such extension.\n> We are using 8.4, but I'm interested in any version as well.\n>", "msg_date": "Sun, 27 Sep 2015 08:02:31 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries Per Second (QPS)" }, { "msg_contents": "Le 27 sept. 2015 8:02 AM, \"Guillaume Lelarge\" <[email protected]> a\nécrit :\n>\n> Le 26 sept. 2015 6:26 PM, \"Adam Scott\" <[email protected]> a écrit :\n> >\n> > How do we measure queries per second (QPS), not transactions per\nsecond, in PostgreSQL without turning on full logging which has a\nperformance penalty and can soak up lots of disk space?\n> >\n>\n> The only way I can think of is to write an extension that will execute\nsome code at the end of the execution of a query.\n>\n> Note that this might get tricky. Do you want to count any query? Such as\nthose in explicit transactions and those in plpgsql functions? People might\nnot see this your way, which may explain why I don't know of any such\nextension.\n>\n\nThinking about this, such an extension already exists. It's\npg_stat_statements. You need to sum the count column of the\npg_stat_statements from time to time. The difference between two sums will\nbe your number of queries.\n\n> > We are using 8.4, but I'm interested in any version as well.\n> >\n\nLe 27 sept. 2015 8:02 AM, \"Guillaume Lelarge\" <[email protected]> a écrit :\n>\n> Le 26 sept. 
2015 6:26 PM, \"Adam Scott\" <[email protected]> a écrit :\n> >\n> > How do we measure queries per second (QPS), not transactions per second, in PostgreSQL without turning on full logging which has a performance penalty and can soak up lots of disk space?\n> >\n>\n> The only way I can think of is to write an extension that will execute some code at the end of the execution of a query.\n>\n> Note that this might get tricky. Do you want to count any query? Such as those in explicit transactions and those in plpgsql functions? People might not see this your way, which may explain why I don't know of any such extension.\n>\nThinking about this, such an extension already exists. It's pg_stat_statements. You need to sum the count column of the pg_stat_statements from time to time. The difference between two sums will be your number of queries.\n> > We are using 8.4, but I'm interested in any version as well.\n> >", "msg_date": "Sun, 27 Sep 2015 08:06:24 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries Per Second (QPS)" }, { "msg_contents": "On Sat, Sep 26, 2015 at 11:06 PM, Guillaume Lelarge <[email protected]>\nwrote:\n\n> Le 27 sept. 2015 8:02 AM, \"Guillaume Lelarge\" <[email protected]> a\n> écrit :\n> >\n> > Le 26 sept. 2015 6:26 PM, \"Adam Scott\" <[email protected]> a\n> écrit :\n> > >\n> > > How do we measure queries per second (QPS), not transactions per\n> second, in PostgreSQL without turning on full logging which has a\n> performance penalty and can soak up lots of disk space?\n> > >\n> >\n> > The only way I can think of is to write an extension that will execute\n> some code at the end of the execution of a query.\n> >\n> > Note that this might get tricky. Do you want to count any query? Such as\n> those in explicit transactions and those in plpgsql functions? People might\n> not see this your way, which may explain why I don't know of any such\n> extension.\n> >\n>\n> Thinking about this, such an extension already exists. It's\n> pg_stat_statements. You need to sum the count column of the\n> pg_stat_statements from time to time. The difference between two sums will\n> be your number of queries.\n>\n\nThat is what I was thinking, but the pg_stat_statement does discard\n statements sometimes, discarding the counts with them. You would have set\npg_stat_statements.max to a higher value than you ever expect to get\nreached.\n\nCheers,\n\nJeff\n\nOn Sat, Sep 26, 2015 at 11:06 PM, Guillaume Lelarge <[email protected]> wrote:Le 27 sept. 2015 8:02 AM, \"Guillaume Lelarge\" <[email protected]> a écrit :\n>\n> Le 26 sept. 2015 6:26 PM, \"Adam Scott\" <[email protected]> a écrit :\n> >\n> > How do we measure queries per second (QPS), not transactions per second, in PostgreSQL without turning on full logging which has a performance penalty and can soak up lots of disk space?\n> >\n>\n> The only way I can think of is to write an extension that will execute some code at the end of the execution of a query.\n>\n> Note that this might get tricky. Do you want to count any query? Such as those in explicit transactions and those in plpgsql functions? People might not see this your way, which may explain why I don't know of any such extension.\n>\nThinking about this, such an extension already exists. It's pg_stat_statements. You need to sum the count column of the pg_stat_statements from time to time. 
The difference between two sums will be your number of queries.That is what I was thinking, but the pg_stat_statement does discard  statements sometimes, discarding the counts with them.  You would have set pg_stat_statements.max to a higher value than you ever expect to get reached.Cheers,Jeff", "msg_date": "Sun, 27 Sep 2015 10:34:19 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries Per Second (QPS)" }, { "msg_contents": "On 09/26/2015 09:24 AM, Adam Scott wrote:\n> How do we measure queries per second (QPS), not transactions per second,\n> in PostgreSQL without turning on full logging which has a performance\n> penalty and can soak up lots of disk space?\n\nMeasure it from the client side. pgBench does this.\n\nIf you mean on your production workload, then I recommend using a\nconnection proxy which counts statements. A few exist for Postgres, for\nexample:\n\nVividCortex:\nhttps://www.vividcortex.com/blog/2015/05/13/announcing-vividcortex-network-analyzer-mysql-postgresql/\n\nWireShark: https://github.com/dalibo/pgshark\n\nYou'd need to measure how much one of these tools affects your QPS, of\ncourse, but that should be easily measurable on a test system.\n\nAlso, if the PostgresQL activity log is moved to a seperate SSD from the\ndatabase storage, I've found overhead in writing to it to be less than\n3% ... depending on the nature of your query traffic. Pathological\nsituations are mainly databases which have a high volume of very long\nqueries or failed connection attempts.\n\n> \n> We are using 8.4, but I'm interested in any version as well.\n\nYou are aware that 8.4 is EOL, yes? Not to mention missing 5 years of\nperformance improvements ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Sep 2015 10:06:46 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries Per Second (QPS)" } ]
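A minimal sketch of the sampling approach discussed in this thread, assuming pg_stat_statements is loaded via shared_preload_libraries: take a periodic snapshot of the summed calls column (the "count" column referred to above) and divide the delta by the elapsed time. The qps_snapshot table name and the sampling schedule are placeholders, and the figure undercounts whenever entries are evicted (hence the advice to raise pg_stat_statements.max) or the counters are reset.

    -- One-off setup: a small table to hold the periodic snapshots.
    CREATE TABLE qps_snapshot (
        taken_at timestamptz NOT NULL DEFAULT now(),
        calls    bigint      NOT NULL
    );

    -- Run from any scheduler (cron, pgAgent, ...) at a fixed interval.
    INSERT INTO qps_snapshot (calls)
    SELECT sum(calls) FROM pg_stat_statements;

    -- Queries per second between the two most recent snapshots.
    SELECT (calls - lag(calls) OVER (ORDER BY taken_at))
           / EXTRACT(EPOCH FROM taken_at - lag(taken_at) OVER (ORDER BY taken_at)) AS qps
    FROM qps_snapshot
    ORDER BY taken_at DESC
    LIMIT 1;

Note that pg_stat_statements tracks only top-level statements by default, so queries issued from inside plpgsql functions are counted only with pg_stat_statements.track = all, which matches the caveat about functions raised earlier in the thread.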
[ { "msg_contents": "I previously posted about par_psql, but I recently found another PG parallelism project which can do a few extra things that par_psql can’t: \r\n\r\nhttps://github.com/moat/pmpp\r\npmpp: Poor Man's Parallel Processing. \r\n\r\nCorey Huinker had the idea of using dblink async as a foundation for distributing queries. This allows parallelisation at the query level and across multiple dbs simultaneously. \r\nNice idea!\r\n\r\nGraeme Bell\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Sep 2015 08:41:01 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Another parallel postgres project..." }, { "msg_contents": "Thanks for the shout-out.\nThis is the project that I presented at PgConfUS 2015. It took a while for\nMoat's (http://moat.com) lawyers to come around to licensing the code, but\nthey finally did.\n\nOn Tue, Sep 29, 2015 at 4:41 AM, Graeme B. Bell <[email protected]>\nwrote:\n\n> I previously posted about par_psql, but I recently found another PG\n> parallelism project which can do a few extra things that par_psql can’t:\n>\n> https://github.com/moat/pmpp\n> pmpp: Poor Man's Parallel Processing.\n>\n> Corey Huinker had the idea of using dblink async as a foundation for\n> distributing queries. This allows parallelisation at the query level and\n> across multiple dbs simultaneously.\n> Nice idea!\n>\n> Graeme Bell\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks for the shout-out.This is the project that I presented at PgConfUS 2015. It took a while for Moat's (http://moat.com) lawyers to come around to licensing the code, but they finally did.On Tue, Sep 29, 2015 at 4:41 AM, Graeme B. Bell <[email protected]> wrote:I previously posted about par_psql, but I recently found another PG parallelism project which can do a few extra things that par_psql can’t:\n\nhttps://github.com/moat/pmpp\npmpp: Poor Man's Parallel Processing.\n\nCorey Huinker had the idea of using dblink async as a foundation for distributing queries. This allows parallelisation at the query level and across multiple dbs simultaneously.\nNice idea!\n\nGraeme Bell\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 29 Sep 2015 12:25:05 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Another parallel postgres project..." } ]
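The dblink-async mechanism that pmpp is built on can be sketched in a few statements. The worker names, connection string and big_table query below are purely illustrative (pmpp's own API wraps this bookkeeping), but they show the core idea: each remote connection is a separate backend, so the two queries run concurrently and each dblink_get_result() blocks only until its own connection finishes.

    -- Requires the dblink extension.
    SELECT dblink_connect('w1', 'dbname=mydb');
    SELECT dblink_connect('w2', 'dbname=mydb');

    -- Fire both halves of the work without waiting for either to finish.
    SELECT dblink_send_query('w1', 'SELECT count(*) FROM big_table WHERE id % 2 = 0');
    SELECT dblink_send_query('w2', 'SELECT count(*) FROM big_table WHERE id % 2 = 1');

    -- Collect the two partial results and combine them on the caller's side.
    SELECT * FROM dblink_get_result('w1') AS t(n bigint);
    SELECT * FROM dblink_get_result('w2') AS t(n bigint);

    SELECT dblink_disconnect('w1');
    SELECT dblink_disconnect('w2');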
[ { "msg_contents": "Hi,\n\nWe have got big slow down on our production plateform (PG 9.4.4).\n\nAfter analyzing wals with pg_xlogdump, we see lot of writing in Gin Indexes.\n\nWe suspect slow down are related to the write of pending update on the\nindex.\n\nSo, is there any method to see\n- what is the current config of gin_pending_list_limit on a given index ?\n- the current size of pending list on a given index ?\n\nRegards,\n\nBertrand\n\nHi,We have got big slow down on our production plateform (PG 9.4.4).After analyzing wals with pg_xlogdump, we see lot of writing in Gin Indexes.We suspect slow down are related to the write of pending update on the index.So, is there any method to see- what is the current config of gin_pending_list_limit on a given index ?- the current size of pending list on a given index ?Regards,Bertrand", "msg_date": "Tue, 29 Sep 2015 17:45:41 +0200", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem with gin index" }, { "msg_contents": "On Tue, Sep 29, 2015 at 05:45:41PM +0200, Bertrand Paquet wrote:\n> Hi,\n> \n> We have got big slow down on our production plateform (PG 9.4.4).\n> After analyzing wals with pg_xlogdump, we see lot of writing in Gin Indexes.\n> We suspect slow down are related to the write of pending update on the\n> index.\n> \n> So, is there any method to see\n> - what is the current config of gin_pending_list_limit on a given index ?\n> - the current size of pending list on a given index ?\n> \n> Regards,\n> Bertrand\n\nHi Bertrand,\n\nYou might try disabling fastupdate for the index. 9.5 has some work in\nthis area, but prior to that disabling it is the best fix. It certainly\nhelped our system with the same issue.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Sep 2015 11:12:16 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with gin index" }, { "msg_contents": "On Tue, Sep 29, 2015 at 8:45 AM, Bertrand Paquet <\[email protected]> wrote:\n\n> Hi,\n>\n> We have got big slow down on our production plateform (PG 9.4.4).\n>\n\nWhat is it slow compared to? Did your version change, or your\nworkload/usage change?\n\n\n>\n> After analyzing wals with pg_xlogdump, we see lot of writing in Gin\n> Indexes.\n>\n> We suspect slow down are related to the write of pending update on the\n> index.\n>\n> So, is there any method to see\n> - what is the current config of gin_pending_list_limit on a given index ?\n>\n\ngin_pending_list_limit will be introduced in 9.5. In 9.4 and before, there\nis no such parameter. Instead, the limit is tied to the setting of\nwork_mem in those versions.\n\n\n> - the current size of pending list on a given index ?\n>\n\nYou can use this from the pgstattuple contrib module:\n\nSELECT * FROM pgstatginindex('test_gin_index');\n\nYour best bet may be to turn off fastupdate. 
It will slow down most\ninserts/updates, but you will not have the huge latency spikes you get with\nfastupdate turned on.\n\nAlso, you might (or might not) have a higher overall throughput with\nfastupdate turned off, depending on a lot of things like the size of the\nindex, the size of ram and shared_buffers, the number of spindles in your\nRAID, the amount of parallelization in your insert/update activity, and the\ndistribution of \"keys\" among the data you are inserting/updating.\n\nCheers,\n\nJeff\n\nOn Tue, Sep 29, 2015 at 8:45 AM, Bertrand Paquet <[email protected]> wrote:Hi,We have got big slow down on our production plateform (PG 9.4.4).What is it slow compared to?  Did your version change, or your workload/usage change? After analyzing wals with pg_xlogdump, we see lot of writing in Gin Indexes.We suspect slow down are related to the write of pending update on the index.So, is there any method to see- what is the current config of gin_pending_list_limit on a given index ?gin_pending_list_limit will be introduced in 9.5.  In 9.4 and before, there is no such parameter.  Instead, the limit is tied to the setting of work_mem in those versions. - the current size of pending list on a given index ?You can use this from the pgstattuple contrib module:SELECT * FROM pgstatginindex('test_gin_index');Your best bet may be to turn off fastupdate.  It will slow down most inserts/updates, but you will not have the huge latency spikes you get with fastupdate turned on.  Also, you might (or might not) have a higher overall throughput with fastupdate turned off, depending on a lot of things like the size of the index, the size of ram and shared_buffers, the number of spindles in your RAID, the amount of parallelization in your insert/update activity, and the distribution of \"keys\" among the data you are inserting/updating.Cheers,Jeff", "msg_date": "Tue, 29 Sep 2015 10:17:31 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with gin index" }, { "msg_contents": "Thx you for your hints.\n\nI found lot of information in this thread\nhttp://postgresql.nabble.com/how-to-investigate-GIN-fast-updates-and-cleanup-cycles-td5863756.html\n\nCurrently, we are monitoring pending_pages (pgstatginindex works on 9.4.4),\nand run a vacuum every night. We hope it will solve the problem, without\ndisabling fast update.\n\nRegards,\n\nBertrand\n\n2015-09-29 19:17 GMT+02:00 Jeff Janes <[email protected]>:\n\n> On Tue, Sep 29, 2015 at 8:45 AM, Bertrand Paquet <\n> [email protected]> wrote:\n>\n>> Hi,\n>>\n>> We have got big slow down on our production plateform (PG 9.4.4).\n>>\n>\n> What is it slow compared to? Did your version change, or your\n> workload/usage change?\n>\n>\n>>\n>> After analyzing wals with pg_xlogdump, we see lot of writing in Gin\n>> Indexes.\n>>\n>> We suspect slow down are related to the write of pending update on the\n>> index.\n>>\n>> So, is there any method to see\n>> - what is the current config of gin_pending_list_limit on a given index ?\n>>\n>\n> gin_pending_list_limit will be introduced in 9.5. In 9.4 and before,\n> there is no such parameter. Instead, the limit is tied to the setting of\n> work_mem in those versions.\n>\n>\n>> - the current size of pending list on a given index ?\n>>\n>\n> You can use this from the pgstattuple contrib module:\n>\n> SELECT * FROM pgstatginindex('test_gin_index');\n>\n> Your best bet may be to turn off fastupdate. 
It will slow down most\n> inserts/updates, but you will not have the huge latency spikes you get with\n> fastupdate turned on.\n>\n> Also, you might (or might not) have a higher overall throughput with\n> fastupdate turned off, depending on a lot of things like the size of the\n> index, the size of ram and shared_buffers, the number of spindles in your\n> RAID, the amount of parallelization in your insert/update activity, and the\n> distribution of \"keys\" among the data you are inserting/updating.\n>\n> Cheers,\n>\n> Jeff\n>\n\nThx you for your hints.I found lot of information in this thread http://postgresql.nabble.com/how-to-investigate-GIN-fast-updates-and-cleanup-cycles-td5863756.htmlCurrently, we are monitoring pending_pages (pgstatginindex works on 9.4.4), and run a vacuum every night. We hope it will solve the problem, without disabling fast update.Regards,Bertrand2015-09-29 19:17 GMT+02:00 Jeff Janes <[email protected]>:On Tue, Sep 29, 2015 at 8:45 AM, Bertrand Paquet <[email protected]> wrote:Hi,We have got big slow down on our production plateform (PG 9.4.4).What is it slow compared to?  Did your version change, or your workload/usage change? After analyzing wals with pg_xlogdump, we see lot of writing in Gin Indexes.We suspect slow down are related to the write of pending update on the index.So, is there any method to see- what is the current config of gin_pending_list_limit on a given index ?gin_pending_list_limit will be introduced in 9.5.  In 9.4 and before, there is no such parameter.  Instead, the limit is tied to the setting of work_mem in those versions. - the current size of pending list on a given index ?You can use this from the pgstattuple contrib module:SELECT * FROM pgstatginindex('test_gin_index');Your best bet may be to turn off fastupdate.  It will slow down most inserts/updates, but you will not have the huge latency spikes you get with fastupdate turned on.  Also, you might (or might not) have a higher overall throughput with fastupdate turned off, depending on a lot of things like the size of the index, the size of ram and shared_buffers, the number of spindles in your RAID, the amount of parallelization in your insert/update activity, and the distribution of \"keys\" among the data you are inserting/updating.Cheers,Jeff", "msg_date": "Tue, 29 Sep 2015 22:51:33 +0200", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem with gin index" } ]
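A short sketch of the monitoring and the fastupdate change discussed above; the my_gin_index and my_table names are placeholders. pgstatginindex() comes from the pgstattuple contrib module, and on 9.4 the pending list is bounded by work_mem rather than by a gin_pending_list_limit setting.

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- Watch the size of the pending list; VACUUM (or autovacuum) merges it
    -- back into the main GIN structure.
    SELECT version, pending_pages, pending_tuples
    FROM pgstatginindex('my_gin_index');

    VACUUM my_table;

    -- Trade slightly slower individual writes for predictable latency:
    -- new entries go straight into the index instead of queueing up.
    ALTER INDEX my_gin_index SET (fastupdate = off);

Turning fastupdate off only affects future insertions; it does not flush entries already sitting in the pending list, so running one more VACUUM after the ALTER is the safe order of operations.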
[ { "msg_contents": "I am working in a public company who uses only open source applications and\ndatabases.I have a problem with our critical database which is write and\nread intensive.*version:* Postgresql-9.4*Hardware:* HP DL980 (8-processor,\n80 cores w/o hyper threading, 512GB RAM)*Operating system: *Red Hat\nEnterprise Linux Server release 6.4 (Santiago)*uname -a* : Linux host1\n2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64\nx86_64 GNU/LinuxSingle database with separate tablespace for main-data,\npg_xlog and indexesI have a database having 770GB size and expected to grow\nto 2TB within the next year.The database was running in a 2processor HP\nDL560 (16 cores) and as the transactions of the database were found\nincreasing, we have changed the hardware to DL980 with 8 processors and\n512GB RAM. *Problem* It is observed that at some times during moderate load\nthe CPU usage goes up to 400% and the users are not able to complete the\nqueries in expected time. But the load is contributed by some system process\nonly.The average connections are normally 50. But when this happens the\nconnections will shoot up to max-connections.*The sar command\noutput*07:20:01 IST CPU %user %nice %system %iowait \n%steal %idle07:30:01 IST all 0.73 0.00 0.37 \n0.58 0.00 98.3307:40:01 IST all 0.66 0.00 0.38 \n0.65 0.00 98.3107:50:01 IST all 0.27 0.00 0.27 \n0.01 0.00 99.4508:00:01 IST all 0.52 0.00 0.37 \n0.01 0.00 99.1008:10:01 IST all 1.54 0.00 0.70 \n0.02 0.00 97.7408:20:01 IST all 1.20 0.00 0.67 \n0.02 0.00 98.1008:30:01 IST all 1.48 0.00 0.77 \n0.03 0.00 97.7208:40:01 IST all 1.69 0.00 0.89 \n0.04 0.00 97.3908:50:01 IST all 1.71 0.00 0.94 \n0.04 0.00 97.3109:00:01 IST all 1.74 0.00 0.92 \n0.03 0.00 97.3109:10:01 IST all 2.32 0.00 1.06 \n0.04 0.00 96.5809:20:01 IST all 2.22 0.00 1.17 \n0.04 0.00 96.5709:30:02 IST all 2.20 0.00 6.68 \n0.06 0.00 91.0609:40:01 IST all 2.43 0.00 1.37 \n0.06 0.00 96.1409:50:01 IST all 3.23 0.00 2.06 \n0.08 0.00 94.6310:00:02 IST all 3.15 0.00 6.10 \n0.07 0.00 90.6710:10:01 IST all 4.94 0.00 5.20 \n0.29 0.00 89.5710:20:01 IST all 5.10 0.00 2.13 \n0.34 0.00 92.4310:30:01 IST all 5.60 0.00 2.42 \n0.18 0.00 91.8010:40:01 IST all 5.28 0.00 14.37 \n0.19 0.00 80.1610:50:01 IST all 4.52 0.00 28.48 \n0.23 0.00 66.7711:00:01 IST all 5.25 0.00 9.02 \n0.18 0.00 85.5511:10:01 IST all 5.77 0.00 4.96 \n0.27 0.00 89.0011:20:01 IST all 5.70 0.00 2.74 \n0.19 0.00 91.3711:30:01 IST all 5.72 0.00 5.91 \n0.20 0.00 88.1711:40:01 IST all 5.66 0.00 2.81 \n0.37 0.00 91.1511:50:01 IST all 5.90 0.00 8.80 \n0.10 0.00 85.1912:00:01 IST all 6.44 0.00 3.40 \n0.13 0.00 90.0312:10:01 IST all 7.18 0.00 4.52 \n0.11 0.00 88.1812:20:02 IST all 4.40 0.00 37.84 \n0.07 0.00 57.7012:30:01 IST all 5.66 0.00 2.98 \n0.10 0.00 91.2612:40:01 IST all 5.74 0.00 3.05 \n0.11 0.00 91.10Average: all 1.92 0.00 2.28 \n0.11 0.00 95.69Postgresql.confmax_connections = 500 (can be\nreduced)shared_buffers = 8500MBwork_mem = 50MBmaintenance_work_mem =\n8064MBcheckpoint_segments = 132checkpoint_timeout =\n30mincheckpoint_completion_target = 0.9 This over load happens 5-6 times a\nday.How to trace the cause of this problem?. My thoughts.1. some thing\nrelated to the numa systems memory management.2. 
Some thing related to the\nsize of shared buffers.Please helpAjayakumar.BS\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Multi-processor-server-overloads-occationally-with-system-process-while-running-postgresql-9-4-tp5868474.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nI am working in a public company who uses only open source applications and databases.\nI have a problem with our critical database which is write and read intensive.\n\n\nversion: Postgresql-9.4\nHardware: HP DL980 (8-processor, 80 cores w/o hyper threading, 512GB RAM)\nOperating system: Red Hat Enterprise Linux Server release 6.4 (Santiago)\nuname -a : Linux host1 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux\nSingle database with separate tablespace for main-data, pg_xlog and indexes\n\nI have a database having 770GB size and expected to grow to 2TB within the next year.\nThe database was running in a 2processor HP DL560 (16 cores) and as the transactions of the database were found increasing, we have changed the hardware to DL980 with 8 processors and 512GB RAM. \nProblem\n It is observed that at some times during moderate load the CPU usage goes up to 400% and the users are not able to complete the queries in expected time. But the load is contributed by some system process only.\nThe average connections are normally 50. But when this happens the connections will shoot up to max-connections.\n\nThe sar command output\n07:20:01 IST CPU %user %nice %system %iowait %steal %idle\n07:30:01 IST all 0.73 0.00 0.37 0.58 0.00 98.33\n07:40:01 IST all 0.66 0.00 0.38 0.65 0.00 98.31\n07:50:01 IST all 0.27 0.00 0.27 0.01 0.00 99.45\n08:00:01 IST all 0.52 0.00 0.37 0.01 0.00 99.10\n08:10:01 IST all 1.54 0.00 0.70 0.02 0.00 97.74\n08:20:01 IST all 1.20 0.00 0.67 0.02 0.00 98.10\n08:30:01 IST all 1.48 0.00 0.77 0.03 0.00 97.72\n08:40:01 IST all 1.69 0.00 0.89 0.04 0.00 97.39\n08:50:01 IST all 1.71 0.00 0.94 0.04 0.00 97.31\n09:00:01 IST all 1.74 0.00 0.92 0.03 0.00 97.31\n09:10:01 IST all 2.32 0.00 1.06 0.04 0.00 96.58\n09:20:01 IST all 2.22 0.00 1.17 0.04 0.00 96.57\n09:30:02 IST all 2.20 0.00 6.68 0.06 0.00 91.06\n09:40:01 IST all 2.43 0.00 1.37 0.06 0.00 96.14\n09:50:01 IST all 3.23 0.00 2.06 0.08 0.00 94.63\n10:00:02 IST all 3.15 0.00 6.10 0.07 0.00 90.67\n10:10:01 IST all 4.94 0.00 5.20 0.29 0.00 89.57\n10:20:01 IST all 5.10 0.00 2.13 0.34 0.00 92.43\n10:30:01 IST all 5.60 0.00 2.42 0.18 0.00 91.80\n10:40:01 IST all 5.28 0.00 14.37 0.19 0.00 80.16\n10:50:01 IST all 4.52 0.00 28.48 0.23 0.00 66.77\n11:00:01 IST all 5.25 0.00 9.02 0.18 0.00 85.55\n11:10:01 IST all 5.77 0.00 4.96 0.27 0.00 89.00\n11:20:01 IST all 5.70 0.00 2.74 0.19 0.00 91.37\n11:30:01 IST all 5.72 0.00 5.91 0.20 0.00 88.17\n11:40:01 IST all 5.66 0.00 2.81 0.37 0.00 91.15\n11:50:01 IST all 5.90 0.00 8.80 0.10 0.00 85.19\n12:00:01 IST all 6.44 0.00 3.40 0.13 0.00 90.03\n12:10:01 IST all 7.18 0.00 4.52 0.11 0.00 88.18\n12:20:02 IST all 4.40 0.00 37.84 0.07 0.00 57.70\n12:30:01 IST all 5.66 0.00 2.98 0.10 0.00 91.26\n12:40:01 IST all 5.74 0.00 3.05 0.11 0.00 91.10\nAverage: all 1.92 0.00 2.28 0.11 0.00 95.69\n\nPostgresql.conf\nmax_connections = 500 (can be reduced)\nshared_buffers = 8500MB\nwork_mem = 50MB\nmaintenance_work_mem = 8064MB\ncheckpoint_segments = 132\ncheckpoint_timeout = 30min\ncheckpoint_completion_target = 0.9\n\n \nThis over load happens 5-6 times a day.\n\nHow to trace the cause of this problem?. \n\nMy thoughts.\n1. 
some thing related to the numa systems memory management.\n2. Some thing related to the size of shared buffers.\n\nPlease help\n\nAjayakumar.BS\n\n\t\n\t\n\t\n\nView this message in context: Multi processor server overloads occationally with system process while running postgresql-9.4\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Sat, 3 Oct 2015 01:39:33 -0700 (MST)", "msg_from": "ajaykbs <[email protected]>", "msg_from_op": true, "msg_subject": "Multi processor server overloads occationally with system process\n while running postgresql-9.4" }, { "msg_contents": "On 03/10/15 21:39, ajaykbs wrote:\n> I am working in a public company who uses only open source \n> applications and databases. I have a problem with our critical \n> database which is write and read intensive. *version:* Postgresql-9.4 \n> *Hardware:* HP DL980 (8-processor, 80 cores w/o hyper threading, 512GB \n> RAM) *Operating system: *Red Hat Enterprise Linux Server release 6.4 \n> (Santiago) *uname -a* : Linux host1 2.6.32-358.el6.x86_64 #1 SMP Tue \n> Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux Single \n> database with separate tablespace for main-data, pg_xlog and indexes I \n> have a database having 770GB size and expected to grow to 2TB within \n> the next year. The database was running in a 2processor HP DL560 (16 \n> cores) and as the transactions of the database were found increasing, \n> we have changed the hardware to DL980 with 8 processors and 512GB RAM. \n> *Problem* It is observed that at some times during moderate load the \n> CPU usage goes up to 400% and the users are not able to complete the \n> queries in expected time. But the load is contributed by some system \n> process only. The average connections are normally 50. But when this \n> happens the connections will shoot up to max-connections. 
*The sar \n> command output* 07:20:01 IST CPU %user %nice %system %iowait %steal \n> %idle 07:30:01 IST all 0.73 0.00 0.37 0.58 0.00 98.33 07:40:01 IST all \n> 0.66 0.00 0.38 0.65 0.00 98.31 07:50:01 IST all 0.27 0.00 0.27 0.01 \n> 0.00 99.45 08:00:01 IST all 0.52 0.00 0.37 0.01 0.00 99.10 08:10:01 \n> IST all 1.54 0.00 0.70 0.02 0.00 97.74 08:20:01 IST all 1.20 0.00 0.67 \n> 0.02 0.00 98.10 08:30:01 IST all 1.48 0.00 0.77 0.03 0.00 97.72 \n> 08:40:01 IST all 1.69 0.00 0.89 0.04 0.00 97.39 08:50:01 IST all 1.71 \n> 0.00 0.94 0.04 0.00 97.31 09:00:01 IST all 1.74 0.00 0.92 0.03 0.00 \n> 97.31 09:10:01 IST all 2.32 0.00 1.06 0.04 0.00 96.58 09:20:01 IST all \n> 2.22 0.00 1.17 0.04 0.00 96.57 09:30:02 IST all 2.20 0.00 6.68 0.06 \n> 0.00 91.06 09:40:01 IST all 2.43 0.00 1.37 0.06 0.00 96.14 09:50:01 \n> IST all 3.23 0.00 2.06 0.08 0.00 94.63 10:00:02 IST all 3.15 0.00 6.10 \n> 0.07 0.00 90.67 10:10:01 IST all 4.94 0.00 5.20 0.29 0.00 89.57 \n> 10:20:01 IST all 5.10 0.00 2.13 0.34 0.00 92.43 10:30:01 IST all 5.60 \n> 0.00 2.42 0.18 0.00 91.80 10:40:01 IST all 5.28 0.00 14.37 0.19 0.00 \n> 80.16 10:50:01 IST all 4.52 0.00 28.48 0.23 0.00 66.77 11:00:01 IST \n> all 5.25 0.00 9.02 0.18 0.00 85.55 11:10:01 IST all 5.77 0.00 4.96 \n> 0.27 0.00 89.00 11:20:01 IST all 5.70 0.00 2.74 0.19 0.00 91.37 \n> 11:30:01 IST all 5.72 0.00 5.91 0.20 0.00 88.17 11:40:01 IST all 5.66 \n> 0.00 2.81 0.37 0.00 91.15 11:50:01 IST all 5.90 0.00 8.80 0.10 0.00 \n> 85.19 12:00:01 IST all 6.44 0.00 3.40 0.13 0.00 90.03 12:10:01 IST all \n> 7.18 0.00 4.52 0.11 0.00 88.18 12:20:02 IST all 4.40 0.00 37.84 0.07 \n> 0.00 57.70 12:30:01 IST all 5.66 0.00 2.98 0.10 0.00 91.26 12:40:01 \n> IST all 5.74 0.00 3.05 0.11 0.00 91.10 Average: all 1.92 0.00 2.28 \n> 0.11 0.00 95.69 Postgresql.conf max_connections = 500 (can be reduced) \n> shared_buffers = 8500MB work_mem = 50MB maintenance_work_mem = 8064MB \n> checkpoint_segments = 132 checkpoint_timeout = 30min \n> checkpoint_completion_target = 0.9 This over load happens 5-6 times a \n> day. How to trace the cause of this problem?. My thoughts. 1. some \n> thing related to the numa systems memory management. 2. Some thing \n> related to the size of shared buffers. Please help Ajayakumar.BS\n> ------------------------------------------------------------------------\n> View this message in context: Multi processor server overloads \n> occationally with system process while running postgresql-9.4 \n> <http://postgresql.nabble.com/Multi-processor-server-overloads-occationally-with-system-process-while-running-postgresql-9-4-tp5868474.html>\n> Sent from the PostgreSQL - performance mailing list archive \n> <http://postgresql.nabble.com/PostgreSQL-performance-f2050081.html> at \n> Nabble.com.\nA little bit of formatting might make the above a bit more readable... \nOne paragraph is hard to parse.\n\n\n-Gavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 3 Oct 2015 22:03:16 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multi processor server overloads occationally with\n system process while running postgresql-9.4" }, { "msg_contents": "Are you using any connection pooler in front of the database?\nOn 3 Oct 2015 17:04, \"Gavin Flower\" <[email protected]> wrote:\n\n> On 03/10/15 21:39, ajaykbs wrote:\n>\n>> I am working in a public company who uses only open source applications\n>> and databases. 
I have a problem with our critical database which is write\n>> and read intensive. *version:* Postgresql-9.4 *Hardware:* HP DL980\n>> (8-processor, 80 cores w/o hyper threading, 512GB RAM) *Operating system:\n>> *Red Hat Enterprise Linux Server release 6.4 (Santiago) *uname -a* : Linux\n>> host1 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64\n>> x86_64 x86_64 GNU/Linux Single database with separate tablespace for\n>> main-data, pg_xlog and indexes I have a database having 770GB size and\n>> expected to grow to 2TB within the next year. The database was running in a\n>> 2processor HP DL560 (16 cores) and as the transactions of the database were\n>> found increasing, we have changed the hardware to DL980 with 8 processors\n>> and 512GB RAM. *Problem* It is observed that at some times during moderate\n>> load the CPU usage goes up to 400% and the users are not able to complete\n>> the queries in expected time. But the load is contributed by some system\n>> process only. The average connections are normally 50. But when this\n>> happens the connections will shoot up to max-connections. *The sar command\n>> output* 07:20:01 IST CPU %user %nice %system %iowait %steal %idle 07:30:01\n>> IST all 0.73 0.00 0.37 0.58 0.00 98.33 07:40:01 IST all 0.66 0.00 0.38 0.65\n>> 0.00 98.31 07:50:01 IST all 0.27 0.00 0.27 0.01 0.00 99.45 08:00:01 IST all\n>> 0.52 0.00 0.37 0.01 0.00 99.10 08:10:01 IST all 1.54 0.00 0.70 0.02 0.00\n>> 97.74 08:20:01 IST all 1.20 0.00 0.67 0.02 0.00 98.10 08:30:01 IST all 1.48\n>> 0.00 0.77 0.03 0.00 97.72 08:40:01 IST all 1.69 0.00 0.89 0.04 0.00 97.39\n>> 08:50:01 IST all 1.71 0.00 0.94 0.04 0.00 97.31 09:00:01 IST all 1.74 0.00\n>> 0.92 0.03 0.00 97.31 09:10:01 IST all 2.32 0.00 1.06 0.04 0.00 96.58\n>> 09:20:01 IST all 2.22 0.00 1.17 0.04 0.00 96.57 09:30:02 IST all 2.20 0.00\n>> 6.68 0.06 0.00 91.06 09:40:01 IST all 2.43 0.00 1.37 0.06 0.00 96.14\n>> 09:50:01 IST all 3.23 0.00 2.06 0.08 0.00 94.63 10:00:02 IST all 3.15 0.00\n>> 6.10 0.07 0.00 90.67 10:10:01 IST all 4.94 0.00 5.20 0.29 0.00 89.57\n>> 10:20:01 IST all 5.10 0.00 2.13 0.34 0.00 92.43 10:30:01 IST all 5.60 0.00\n>> 2.42 0.18 0.00 91.80 10:40:01 IST all 5.28 0.00 14.37 0.19 0.00 80.16\n>> 10:50:01 IST all 4.52 0.00 28.48 0.23 0.00 66.77 11:00:01 IST all 5.25 0.00\n>> 9.02 0.18 0.00 85.55 11:10:01 IST all 5.77 0.00 4.96 0.27 0.00 89.00\n>> 11:20:01 IST all 5.70 0.00 2.74 0.19 0.00 91.37 11:30:01 IST all 5.72 0.00\n>> 5.91 0.20 0.00 88.17 11:40:01 IST all 5.66 0.00 2.81 0.37 0.00 91.15\n>> 11:50:01 IST all 5.90 0.00 8.80 0.10 0.00 85.19 12:00:01 IST all 6.44 0.00\n>> 3.40 0.13 0.00 90.03 12:10:01 IST all 7.18 0.00 4.52 0.11 0.00 88.18\n>> 12:20:02 IST all 4.40 0.00 37.84 0.07 0.00 57.70 12:30:01 IST all 5.66 0.00\n>> 2.98 0.10 0.00 91.26 12:40:01 IST all 5.74 0.00 3.05 0.11 0.00 91.10\n>> Average: all 1.92 0.00 2.28 0.11 0.00 95.69 Postgresql.conf max_connections\n>> = 500 (can be reduced) shared_buffers = 8500MB work_mem = 50MB\n>> maintenance_work_mem = 8064MB checkpoint_segments = 132 checkpoint_timeout\n>> = 30min checkpoint_completion_target = 0.9 This over load happens 5-6 times\n>> a day. How to trace the cause of this problem?. My thoughts. 1. some thing\n>> related to the numa systems memory management. 2. Some thing related to the\n>> size of shared buffers. 
Please help Ajayakumar.BS\n>> ------------------------------------------------------------------------\n>> View this message in context: Multi processor server overloads\n>> occationally with system process while running postgresql-9.4 <\n>> http://postgresql.nabble.com/Multi-processor-server-overloads-occationally-with-system-process-while-running-postgresql-9-4-tp5868474.html\n>> >\n>> Sent from the PostgreSQL - performance mailing list archive <\n>> http://postgresql.nabble.com/PostgreSQL-performance-f2050081.html> at\n>> Nabble.com.\n>>\n> A little bit of formatting might make the above a bit more readable...\n> One paragraph is hard to parse.\n>\n>\n> -Gavin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAre you using any connection pooler in front of the database?\nOn 3 Oct 2015 17:04, \"Gavin Flower\" <[email protected]> wrote:On 03/10/15 21:39, ajaykbs wrote:\n\nI am working in a public company who uses only open source applications and databases. I have a problem with our critical database which is write and read intensive. *version:* Postgresql-9.4 *Hardware:* HP DL980 (8-processor, 80 cores w/o hyper threading, 512GB RAM) *Operating system: *Red Hat Enterprise Linux Server release 6.4 (Santiago) *uname -a* : Linux host1 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux Single database with separate tablespace for main-data, pg_xlog and indexes I have a database having 770GB size and expected to grow to 2TB within the next year. The database was running in a 2processor HP DL560 (16 cores) and as the transactions of the database were found increasing, we have changed the hardware to DL980 with 8 processors and 512GB RAM. *Problem* It is observed that at some times during moderate load the CPU usage goes up to 400% and the users are not able to complete the queries in expected time. But the load is contributed by some system process only. The average connections are normally 50. But when this happens the connections will shoot up to max-connections. 
*The sar command output* 07:20:01 IST CPU %user %nice %system %iowait %steal %idle 07:30:01 IST all 0.73 0.00 0.37 0.58 0.00 98.33 07:40:01 IST all 0.66 0.00 0.38 0.65 0.00 98.31 07:50:01 IST all 0.27 0.00 0.27 0.01 0.00 99.45 08:00:01 IST all 0.52 0.00 0.37 0.01 0.00 99.10 08:10:01 IST all 1.54 0.00 0.70 0.02 0.00 97.74 08:20:01 IST all 1.20 0.00 0.67 0.02 0.00 98.10 08:30:01 IST all 1.48 0.00 0.77 0.03 0.00 97.72 08:40:01 IST all 1.69 0.00 0.89 0.04 0.00 97.39 08:50:01 IST all 1.71 0.00 0.94 0.04 0.00 97.31 09:00:01 IST all 1.74 0.00 0.92 0.03 0.00 97.31 09:10:01 IST all 2.32 0.00 1.06 0.04 0.00 96.58 09:20:01 IST all 2.22 0.00 1.17 0.04 0.00 96.57 09:30:02 IST all 2.20 0.00 6.68 0.06 0.00 91.06 09:40:01 IST all 2.43 0.00 1.37 0.06 0.00 96.14 09:50:01 IST all 3.23 0.00 2.06 0.08 0.00 94.63 10:00:02 IST all 3.15 0.00 6.10 0.07 0.00 90.67 10:10:01 IST all 4.94 0.00 5.20 0.29 0.00 89.57 10:20:01 IST all 5.10 0.00 2.13 0.34 0.00 92.43 10:30:01 IST all 5.60 0.00 2.42 0.18 0.00 91.80 10:40:01 IST all 5.28 0.00 14.37 0.19 0.00 80.16 10:50:01 IST all 4.52 0.00 28.48 0.23 0.00 66.77 11:00:01 IST all 5.25 0.00 9.02 0.18 0.00 85.55 11:10:01 IST all 5.77 0.00 4.96 0.27 0.00 89.00 11:20:01 IST all 5.70 0.00 2.74 0.19 0.00 91.37 11:30:01 IST all 5.72 0.00 5.91 0.20 0.00 88.17 11:40:01 IST all 5.66 0.00 2.81 0.37 0.00 91.15 11:50:01 IST all 5.90 0.00 8.80 0.10 0.00 85.19 12:00:01 IST all 6.44 0.00 3.40 0.13 0.00 90.03 12:10:01 IST all 7.18 0.00 4.52 0.11 0.00 88.18 12:20:02 IST all 4.40 0.00 37.84 0.07 0.00 57.70 12:30:01 IST all 5.66 0.00 2.98 0.10 0.00 91.26 12:40:01 IST all 5.74 0.00 3.05 0.11 0.00 91.10 Average: all 1.92 0.00 2.28 0.11 0.00 95.69 Postgresql.conf max_connections = 500 (can be reduced) shared_buffers = 8500MB work_mem = 50MB maintenance_work_mem = 8064MB checkpoint_segments = 132 checkpoint_timeout = 30min checkpoint_completion_target = 0.9 This over load happens 5-6 times a day. How to trace the cause of this problem?. My thoughts. 1. some thing related to the numa systems memory management. 2. Some thing related to the size of shared buffers. Please help Ajayakumar.BS\n------------------------------------------------------------------------\nView this message in context: Multi processor server overloads occationally with system process while running postgresql-9.4 <http://postgresql.nabble.com/Multi-processor-server-overloads-occationally-with-system-process-while-running-postgresql-9-4-tp5868474.html>\nSent from the PostgreSQL - performance mailing list archive <http://postgresql.nabble.com/PostgreSQL-performance-f2050081.html> at Nabble.com.\n\nA little bit of formatting might make the above a bit more readable...  One paragraph is hard to parse.\n\n\n-Gavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 3 Oct 2015 17:15:30 +0800", "msg_from": "Wei Shan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multi processor server overloads occationally with\n system process while running postgresql-9.4" }, { "msg_contents": "On 2015-10-03 01:39:33 -0700, ajaykbs wrote:\n> It is observed that at some times during moderate load\n> the CPU usage goes up to 400% and the users are not able to complete the\n> queries in expected time. 
But the load is contributed by some system process\n> only.The average connections are normally 50.\n\nThis email is nearly impossible to read.\n\nBut it sounds a bit like you need to disable transparent hugepages\nand/or zone_reclaim mode.\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 3 Oct 2015 11:26:30 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multi processor server overloads occationally with\n system process while running postgresql-9.4" }, { "msg_contents": "Sorry about the formatting.\nI am posting the same lines again.\n\nI am working in a public company who uses only open source applications and\ndatabases. I have a problem with our critical database which is write and\nread intensive. \n\nversion: Postgresql-9.4\n Hardware: HP DL980 (8-processor, 80 cores w/o hyper threading, 512GB RAM) \nOperating system: Red Hat Enterprise Linux Server release 6.4 (Santiago) \nuname -a : Linux host1 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST\n2013 x86_64 x86_64 x86_64 GNU/Linux Single database with separate tablespace\nfor main-data, pg_xlog and indexes \n\nI have a database having 770GB size and expected to grow to 2TB within the\nnext year. The database was running in a 2processor HP DL560 (16 cores) and\nas the transactions of the database were found increasing, we have changed\nthe hardware to DL980 with 8 processors and 512GB RAM.\n\n Problem It is observed that at some times during moderate load the CPU\nusage goes up to 400% and the users are not able to complete the queries in\nexpected time. But the load is contributed by some system process only. The\naverage connections are normally 50. 
But when this happens the connections\nwill shoot up to max-connections.\n\nsar command output\n\n07:20:01 IST CPU %user %nice %system %iowait %steal \n%idle\n07:30:01 IST all 0.73 0.00 0.37 0.58 0.00 \n98.33\n07:40:01 IST all 0.66 0.00 0.38 0.65 0.00 \n98.31\n07:50:01 IST all 0.27 0.00 0.27 0.01 0.00 \n99.45\n08:00:01 IST all 0.52 0.00 0.37 0.01 0.00 \n99.10\n08:10:01 IST all 1.54 0.00 0.70 0.02 0.00 \n97.74\n08:20:01 IST all 1.20 0.00 0.67 0.02 0.00 \n98.10\n08:30:01 IST all 1.48 0.00 0.77 0.03 0.00 \n97.72\n08:40:01 IST all 1.69 0.00 0.89 0.04 0.00 \n97.39\n08:50:01 IST all 1.71 0.00 0.94 0.04 0.00 \n97.31\n09:00:01 IST all 1.74 0.00 0.92 0.03 0.00 \n97.31\n09:10:01 IST all 2.32 0.00 1.06 0.04 0.00 \n96.58\n09:20:01 IST all 2.22 0.00 1.17 0.04 0.00 \n96.57\n09:30:02 IST all 2.20 0.00 6.68 0.06 0.00 \n91.06\n09:40:01 IST all 2.43 0.00 1.37 0.06 0.00 \n96.14\n09:50:01 IST all 3.23 0.00 2.06 0.08 0.00 \n94.63\n10:00:02 IST all 3.15 0.00 6.10 0.07 0.00 \n90.67\n10:10:01 IST all 4.94 0.00 5.20 0.29 0.00 \n89.57\n10:20:01 IST all 5.10 0.00 2.13 0.34 0.00 \n92.43\n10:30:01 IST all 5.60 0.00 2.42 0.18 0.00 \n91.80\n10:40:01 IST all 5.28 0.00 14.37 0.19 0.00 \n80.16\n10:50:01 IST all 4.52 0.00 28.48 0.23 0.00 \n66.77\n11:00:01 IST all 5.25 0.00 9.02 0.18 0.00 \n85.55\n11:10:01 IST all 5.77 0.00 4.96 0.27 0.00 \n89.00\n11:20:01 IST all 5.70 0.00 2.74 0.19 0.00 \n91.37\n11:30:01 IST all 5.72 0.00 5.91 0.20 0.00 \n88.17\n11:40:01 IST all 5.66 0.00 2.81 0.37 0.00 \n91.15\n11:50:01 IST all 5.90 0.00 8.80 0.10 0.00 \n85.19\n12:00:01 IST all 6.44 0.00 3.40 0.13 0.00 \n90.03\n12:10:01 IST all 7.18 0.00 4.52 0.11 0.00 \n88.18\n12:20:02 IST all 4.40 0.00 37.84 0.07 0.00 \n57.70\n12:30:01 IST all 5.66 0.00 2.98 0.10 0.00 \n91.26\n12:40:01 IST all 5.74 0.00 3.05 0.11 0.00 \n91.10\nAverage: all 1.92 0.00 2.28 0.11 0.00 \n95.69\n\n\nPostgresql.conf \nmax_connections = 500 (can be reduced) \nshared_buffers = 8500MB work_mem = 50MB \nmaintenance_work_mem = 8064MB \ncheckpoint_segments = 132\ncheckpoint_timeout = 30min \ncheckpoint_completion_target = 0.9 \n\nI am not using a connection pooler.\n\nThis over load happens 5-6 times a day. How to trace the cause of this\nproblem?. \n\nMy thoughts. \n1. some thing related to the numa systems memory management. \n2. Some thing related to the size of shared buffers. Please help \n\n\n\nAjayakumar.BS\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Multi-processor-server-overloads-occationally-with-system-process-while-running-postgresql-9-4-tp5868474p5868480.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 3 Oct 2015 02:34:10 -0700 (MST)", "msg_from": "ajaykbs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multi processor server overloads occationally with system\n process while running postgresql-9.4" }, { "msg_contents": "I have checked the transparent huge pages and zone reclaim mode and those are\nalready disabled.\n\nAs a trial and error method, I have reduced the shared buffer size from\n8500MB to 3000MB.\nThe CPU i/o wait is icreased a little. But the periodical over load has not\noccurred afterwards. (3 days passed without such situation). 
I shall report\nfurther developments.\n Thank you all for the great help.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Multi-processor-server-overloads-occationally-with-system-process-while-running-postgresql-9-4-tp5868474p5869047.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 6 Oct 2015 22:08:39 -0700 (MST)", "msg_from": "ajaykbs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multi processor server overloads occationally with system\n process while running postgresql-9.4" }, { "msg_contents": "On Tue, Oct 6, 2015 at 11:08 PM, ajaykbs <[email protected]> wrote:\n> I have checked the transparent huge pages and zone reclaim mode and those are\n> already disabled.\n>\n> As a trial and error method, I have reduced the shared buffer size from\n> 8500MB to 3000MB.\n> The CPU i/o wait is icreased a little. But the periodical over load has not\n> occurred afterwards. (3 days passed without such situation). I shall report\n> further developments.\n\nReduce max connections to something more reasonable like < 100 and get\na connection pooler in place (pgbouncer is simple to setup and use)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 7 Oct 2015 02:31:03 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Multi processor server overloads occationally with\n system process while running postgresql-9.4" }, { "msg_contents": "On Saturday, October 3, 2015 4:36 AM, ajaykbs <[email protected]> wrote:\n\n> version: Postgresql-9.4\n> Hardware: HP DL980 (8-processor, 80 cores w/o hyper threading, 512GB RAM)\n> Operating system: Red Hat Enterprise Linux Server release 6.4 (Santiago)\n> uname -a : Linux host1 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST\n> 2013 x86_64 x86_64 x86_64 GNU/Linux Single database with separate tablespace\n> for main-data, pg_xlog and indexes\n>\n> I have a database having 770GB size and expected to grow to 2TB within the\n> next year. The database was running in a 2processor HP DL560 (16 cores) and\n> as the transactions of the database were found increasing, we have changed\n> the hardware to DL980 with 8 processors and 512GB RAM.\n>\n> Problem It is observed that at some times during moderate load the CPU\n> usage goes up to 400% and the users are not able to complete the queries in\n> expected time. But the load is contributed by some system process only. The\n> average connections are normally 50. But when this happens the connections\n> will shoot up to max-connections.\n\nYou might find this thread interesting:\n\nhttp://www.postgresql.org/message-id/flat/[email protected]#[email protected]\n\nThe short version is that in existing production versions you can\neasily run in to such symptoms when you get to 8 or more CPU\npackages. 
The problem seems to be solved in the development\nversions of 9.5 (with changes not suitable for back-patching to a\nstable branch).\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 9 Oct 2015 13:01:53 +0000 (UTC)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Multi processor server overloads occationally\n with system process while running postgresql-9.4" } ]
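While one of these spikes is in progress, a snapshot of pg_stat_activity (9.4 column names, so waiting is still a boolean) helps separate two cases: backends genuinely busy executing, versus backends piling up behind heavyweight locks. High %system time with many active but non-waiting sessions points more towards the kernel/NUMA and multi-socket contention issues raised above than towards any individual slow query. The 60-character truncation below is only for readability.

    -- How many backends are in each state while the load is high.
    SELECT state, waiting, count(*)
    FROM pg_stat_activity
    GROUP BY state, waiting
    ORDER BY count(*) DESC;

    -- Sessions blocked on a lock, longest waiters first.
    SELECT pid, now() - query_start AS waited, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE waiting
    ORDER BY waited DESC;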
[ { "msg_contents": "I have configured postgresql.conf with parameters as below:\n\nlog_destination = 'stderr'\nlogging_collector = on\nlog_directory = 'pg_log'\nlisten_addresses = '*'\nlog_destination = 'stderr'\nlogging_collector = on\nlog_directory = 'pg_log'\nlog_rotation_age = 1d\nlog_rotation_size = 1024MB\nlisten_addresses = '*'\ncheckpoint_segments = 64\nwal_keep_segments = 128\nmax_connections = 9999\nmax_prepared_transactions = 9999\ncheckpoint_completion_target = 0.9\ndefault_statistics_target = 10\nmaintenance_work_mem = 1GB\neffective_cache_size = 64GB\nshared_buffers = 24GB\nwork_mem = 5MB\nwal_buffers = 8MB\nport = 40003\npooler_port = 40053\ngtm_host = 'node03'\ngtm_port = 10053\n\nAs you can see, I have set the shared_buffers to 24GB, but my server\nstill only use 4-5 GB average.\nI have 128GB RAM in a single server.\nMy database has 2 tables:\n- room (3GB size if pg_dump'ed)\n- message (17GB if pg_dump'ed)\n\nThe backend application is a messaging server, in average there will\nbe 40-180 connections to the postgres Server.\nThe traffic is quite almost-heavy.\n\nHow to make postgres-xl effectively utilizes the resource of RAM for\n9999 max_connections?\n\n\nThanks,\nFattahRozzaq\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 5 Oct 2015 21:51:24 +0700", "msg_from": "FattahRozzaq <[email protected]>", "msg_from_op": true, "msg_subject": "shared-buffers set to 24GB but the RAM only use 4-5 GB average" }, { "msg_contents": "On Mon, Oct 5, 2015 at 9:51 AM, FattahRozzaq <[email protected]> wrote:\n> I have configured postgresql.conf with parameters as below:\n>\n> log_destination = 'stderr'\n> logging_collector = on\n> log_directory = 'pg_log'\n> listen_addresses = '*'\n> log_destination = 'stderr'\n> logging_collector = on\n> log_directory = 'pg_log'\n> log_rotation_age = 1d\n> log_rotation_size = 1024MB\n> listen_addresses = '*'\n> checkpoint_segments = 64\n> wal_keep_segments = 128\n> max_connections = 9999\n> max_prepared_transactions = 9999\n> checkpoint_completion_target = 0.9\n> default_statistics_target = 10\n> maintenance_work_mem = 1GB\n> effective_cache_size = 64GB\n> shared_buffers = 24GB\n> work_mem = 5MB\n> wal_buffers = 8MB\n> port = 40003\n> pooler_port = 40053\n> gtm_host = 'node03'\n> gtm_port = 10053\n>\n> As you can see, I have set the shared_buffers to 24GB, but my server\n> still only use 4-5 GB average.\n\nHow did you measure that?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 5 Oct 2015 11:29:12 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average" }, { "msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of FattahRozzaq\r\nSent: Monday, October 05, 2015 10:51 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5 GB average\r\n\r\nI have configured postgresql.conf with parameters as below:\r\n\r\nlog_destination = 'stderr'\r\nlogging_collector = on\r\nlog_directory = 'pg_log'\r\nlisten_addresses = '*'\r\nlog_destination = 'stderr'\r\nlogging_collector = on\r\nlog_directory = 'pg_log'\r\nlog_rotation_age = 1d\r\nlog_rotation_size = 1024MB\r\nlisten_addresses 
= '*'\r\ncheckpoint_segments = 64\r\nwal_keep_segments = 128\r\nmax_connections = 9999\r\nmax_prepared_transactions = 9999\r\ncheckpoint_completion_target = 0.9\r\ndefault_statistics_target = 10\r\nmaintenance_work_mem = 1GB\r\neffective_cache_size = 64GB\r\nshared_buffers = 24GB\r\nwork_mem = 5MB\r\nwal_buffers = 8MB\r\nport = 40003\r\npooler_port = 40053\r\ngtm_host = 'node03'\r\ngtm_port = 10053\r\n\r\nAs you can see, I have set the shared_buffers to 24GB, but my server still only use 4-5 GB average.\r\nI have 128GB RAM in a single server.\r\nMy database has 2 tables:\r\n- room (3GB size if pg_dump'ed)\r\n- message (17GB if pg_dump'ed)\r\n\r\nThe backend application is a messaging server, in average there will be 40-180 connections to the postgres Server.\r\nThe traffic is quite almost-heavy.\r\n\r\nHow to make postgres-xl effectively utilizes the resource of RAM for\r\n9999 max_connections?\r\n\r\n\r\nThanks,\r\nFattahRozzaq\r\n____________________________________\r\n\r\nWhy are you looking at memory consumption?\r\nAre you experiencing performance problems?\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 5 Oct 2015 18:24:40 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5\n GB average" }, { "msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Igor Neyman\r\nSent: Monday, October 05, 2015 2:25 PM\r\nTo: FattahRozzaq <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5 GB average\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of FattahRozzaq\r\nSent: Monday, October 05, 2015 10:51 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5 GB average\r\n\r\nI have configured postgresql.conf with parameters as below:\r\n\r\nlog_destination = 'stderr'\r\nlogging_collector = on\r\nlog_directory = 'pg_log'\r\nlisten_addresses = '*'\r\nlog_destination = 'stderr'\r\nlogging_collector = on\r\nlog_directory = 'pg_log'\r\nlog_rotation_age = 1d\r\nlog_rotation_size = 1024MB\r\nlisten_addresses = '*'\r\ncheckpoint_segments = 64\r\nwal_keep_segments = 128\r\nmax_connections = 9999\r\nmax_prepared_transactions = 9999\r\ncheckpoint_completion_target = 0.9\r\ndefault_statistics_target = 10\r\nmaintenance_work_mem = 1GB\r\neffective_cache_size = 64GB\r\nshared_buffers = 24GB\r\nwork_mem = 5MB\r\nwal_buffers = 8MB\r\nport = 40003\r\npooler_port = 40053\r\ngtm_host = 'node03'\r\ngtm_port = 10053\r\n\r\nAs you can see, I have set the shared_buffers to 24GB, but my server still only use 4-5 GB average.\r\nI have 128GB RAM in a single server.\r\nMy database has 2 tables:\r\n- room (3GB size if pg_dump'ed)\r\n- message (17GB if pg_dump'ed)\r\n\r\nThe backend application is a messaging server, in average there will be 40-180 connections to the postgres Server.\r\nThe traffic is quite almost-heavy.\r\n\r\nHow to make postgres-xl effectively utilizes the resource of RAM for\r\n9999 max_connections?\r\n\r\n\r\nThanks,\r\nFattahRozzaq\r\n____________________________________\r\n\r\nWhy are you looking at memory consumption?\r\nAre you experiencing performance problems?\r\n\r\nRegards,\r\nIgor 
Neyman\r\n\r\n_______________________\r\n\r\nAlso,\r\nPostgres-xl has it's own mailing lists:\r\nhttp://sourceforge.net/p/postgres-xl/mailman/\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 5 Oct 2015 18:29:12 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5\n GB average" }, { "msg_contents": "@Merlin Moncure, I got the calculation using pg_tune. And I modified\nthe shared_buffers=24GB and the effective_cache_size=64GB\n\n@Igor Neyman,\nYes, I had performance problem which sometimes the response time took\n11ms, with the exactly same query it took 100ms, and the response time\nseems randomly fluctuating even with the exact same query.\n\nAny idea on how I should configure postgres to effectively utilize the\nhardware and reduce the response time to be quicker?\n*(RAM=128GB, CPU=24cores, RAID-1+0:SSD)\n\n\nThanks,\nFattahRozzaq\n*looking for answer*\n\nOn 06/10/2015, Igor Neyman <[email protected]> wrote:\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Igor Neyman\n> Sent: Monday, October 05, 2015 2:25 PM\n> To: FattahRozzaq <[email protected]>; [email protected]\n> Subject: Re: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5\n> GB average\n>\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of FattahRozzaq\n> Sent: Monday, October 05, 2015 10:51 AM\n> To: [email protected]\n> Subject: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5 GB\n> average\n>\n> I have configured postgresql.conf with parameters as below:\n>\n> log_destination = 'stderr'\n> logging_collector = on\n> log_directory = 'pg_log'\n> listen_addresses = '*'\n> log_destination = 'stderr'\n> logging_collector = on\n> log_directory = 'pg_log'\n> log_rotation_age = 1d\n> log_rotation_size = 1024MB\n> listen_addresses = '*'\n> checkpoint_segments = 64\n> wal_keep_segments = 128\n> max_connections = 9999\n> max_prepared_transactions = 9999\n> checkpoint_completion_target = 0.9\n> default_statistics_target = 10\n> maintenance_work_mem = 1GB\n> effective_cache_size = 64GB\n> shared_buffers = 24GB\n> work_mem = 5MB\n> wal_buffers = 8MB\n> port = 40003\n> pooler_port = 40053\n> gtm_host = 'node03'\n> gtm_port = 10053\n>\n> As you can see, I have set the shared_buffers to 24GB, but my server still\n> only use 4-5 GB average.\n> I have 128GB RAM in a single server.\n> My database has 2 tables:\n> - room (3GB size if pg_dump'ed)\n> - message (17GB if pg_dump'ed)\n>\n> The backend application is a messaging server, in average there will be\n> 40-180 connections to the postgres Server.\n> The traffic is quite almost-heavy.\n>\n> How to make postgres-xl effectively utilizes the resource of RAM for\n> 9999 max_connections?\n>\n>\n> Thanks,\n> FattahRozzaq\n> ____________________________________\n>\n> Why are you looking at memory consumption?\n> Are you experiencing performance problems?\n>\n> Regards,\n> Igor Neyman\n>\n> _______________________\n>\n> Also,\n> Postgres-xl has it's own mailing lists:\n> http://sourceforge.net/p/postgres-xl/mailman/\n>\n> Regards,\n> Igor Neyman\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 6 Oct 2015 16:33:06 +0700", "msg_from": "FattahRozzaq <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average" }, { "msg_contents": "On Tue, Oct 6, 2015 at 3:33 AM, FattahRozzaq <[email protected]> wrote:\n> @Merlin Moncure, I got the calculation using pg_tune. And I modified\n> the shared_buffers=24GB and the effective_cache_size=64GB\n>\n> @Igor Neyman,\n> Yes, I had performance problem which sometimes the response time took\n> 11ms, with the exactly same query it took 100ms, and the response time\n> seems randomly fluctuating even with the exact same query.\n>\n> Any idea on how I should configure postgres to effectively utilize the\n> hardware and reduce the response time to be quicker?\n> *(RAM=128GB, CPU=24cores, RAID-1+0:SSD)\n\nOK I'm gonna copy and paste some stuff from previous messages since\ntop-posting kinda messed up the formatting.\n\nFirst, this line:\n\n>> max_connections = 9999\n\nWhen you are experiencing this problem, how many connections are\nthere? There's a bell shaped curve for performance, and the peak is\nWAY less than 9999. The IPC / shared memory performance etc will drop\noff very quickly after a few dozen or at most a hundred or so\nconnections. If your application layer needs to keep more than a\ncouple dozen connections open, then it's a REAL good idea to throw a\nconnection pooler between the app and the db. I recommend pgbouncer as\nit's very easy to setup.\n\nBUT more important than that, it appears you're looking for a \"go\nfaster\" knob, and there may or may not be one for what you're doing.\n\nI'd recommend profiling your db server under load to see what's going\non. What does iostat, iotop, top, etc show you when this is happening?\nAre you running out of IO? Memory, CPU? What does \"explain analyze\nslowquerygoeshere\" tell you?\n\nI would recommend you consider reducing shared_buffers unless you have\nsome concrete proof that 24GB is helping. Big shared_buffers have\nmaintenance costs that affect write speeds, and slow writing can make\neverything kind of back up behind it. Typically something under 1GB\nis fine. PostgreSQL relies on the OS to cache most read data. So\ntrying to crank up shared_buffers to do the same job is often either\ncounter-productive or of no real gain.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 6 Oct 2015 09:10:03 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average" }, { "msg_contents": "On Tue, Oct 6, 2015 at 10:10 AM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Oct 6, 2015 at 3:33 AM, FattahRozzaq <[email protected]> wrote:\n>> @Merlin Moncure, I got the calculation using pg_tune. 
And I modified\n>> the shared_buffers=24GB and the effective_cache_size=64GB\n>>\n>> @Igor Neyman,\n>> Yes, I had performance problem which sometimes the response time took\n>> 11ms, with the exactly same query it took 100ms, and the response time\n>> seems randomly fluctuating even with the exact same query.\n>>\n>> Any idea on how I should configure postgres to effectively utilize the\n>> hardware and reduce the response time to be quicker?\n>> *(RAM=128GB, CPU=24cores, RAID-1+0:SSD)\n>\n> OK I'm gonna copy and paste some stuff from previous messages since\n> top-posting kinda messed up the formatting.\n>\n> First, this line:\n>\n>>> max_connections = 9999\n>\n> When you are experiencing this problem, how many connections are\n> there? There's a bell shaped curve for performance, and the peak is\n> WAY less than 9999. The IPC / shared memory performance etc will drop\n> off very quickly after a few dozen or at most a hundred or so\n> connections. If your application layer needs to keep more than a\n> couple dozen connections open, then it's a REAL good idea to throw a\n> connection pooler between the app and the db. I recommend pgbouncer as\n> it's very easy to setup.\n>\n> BUT more important than that, it appears you're looking for a \"go\n> faster\" knob, and there may or may not be one for what you're doing.\n>\n> I'd recommend profiling your db server under load to see what's going\n> on. What does iostat, iotop, top, etc show you when this is happening?\n> Are you running out of IO? Memory, CPU? What does \"explain analyze\n> slowquerygoeshere\" tell you?\n>\n> I would recommend you consider reducing shared_buffers unless you have\n> some concrete proof that 24GB is helping. Big shared_buffers have\n> maintenance costs that affect write speeds, and slow writing can make\n> everything kind of back up behind it. Typically something under 1GB\n> is fine. PostgreSQL relies on the OS to cache most read data. So\n> trying to crank up shared_buffers to do the same job is often either\n> counter-productive or of no real gain.\n\nThis is spot on. 9999 max_connections is gross overconfiguration\n(unless your server has 9999 cores....). If you need to support a\nlarge number of hopefully idle clients, you need to immediately\nexplore pgbouncer.\n\nAlso, OP did not answer my previous question correctly: \"how did you\nmeasure that?\" was asking how you determined that the server was only\nusing 4-5GB. Reason for that question is that measuring shared memory\nusage is a little more complex than it looks on the surface. It's\npretty common for sysadmins unfamiliar with it to under- or over-\ncount usage. We need to confirm your measurements with a some\ndiagnostics from utilities like 'top'.\n\nIf your performance issues are in fact cache related (which is really\nstorage), you have the traditional mitigation strategies: prewarm the\ncache, buy faster storage etc.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 6 Oct 2015 15:35:36 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average" }, { "msg_contents": "On 10/06/2015 02:33 AM, FattahRozzaq wrote:\n> @Merlin Moncure, I got the calculation using pg_tune. And I modified\n> the shared_buffers=24GB and the effective_cache_size=64GB\n\nI really need to get Greg to take down pg_tune. 
It's way out of date.\n\nProbably, I should replace it.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 6 Oct 2015 22:35:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB\n average" } ]
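The pooling advice in the thread above (keep the number of live backends small instead of raising max_connections into the thousands) can be prototyped on the client side before putting pgbouncer in front of the server. A minimal sketch using psycopg2's built-in pool; the DSN, pool bounds and query are illustrative placeholders rather than values taken from the thread.

# Minimal client-side pooling sketch (psycopg2). The DSN and pool sizes
# are placeholders, not settings quoted anywhere in this thread.
import psycopg2
from psycopg2 import pool

# A small, fixed set of real server connections shared by all worker
# threads, instead of one backend per client as with max_connections=9999.
db_pool = pool.ThreadedConnectionPool(
    minconn=2,
    maxconn=20,
    dsn="dbname=app user=app host=127.0.0.1 port=5432",
)

def run_query(sql, params=None):
    conn = db_pool.getconn()          # borrow a pooled connection
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            rows = cur.fetchall()
        conn.commit()                 # end the (read-only) transaction
        return rows
    finally:
        db_pool.putconn(conn)         # always hand the connection back

# rows = run_query("SELECT count(*) FROM message")
# db_pool.closeall()  # at application shutdown

A server-side pooler such as pgbouncer gives the same effect for clients that cannot share an in-process pool, which is why it is the usual recommendation in the replies above.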
[ { "msg_contents": "We have a system which is constantly importing flat file data feeds into\nnormalized tables in a DB warehouse over 10-20 connections. Each data feed\nrow results in a single transaction of multiple single row writes to\nmultiple normalized tables.\n\n \n\nThe more columns in the feed row, the more write operations, longer the\ntransaction.\n\n \n\nOperators are noticing that splitting a single feed of say - 100 columns -\ninto two consecutive feeds of 50 columns improves performance dramatically.\nI am wondering whether the multi-threaded and very busy import environment\ncauses non-linear performance degradation for longer transactions. Would the\noperators be advised to rewrite the feeds to result in more smaller\ntransactions rather than fewer, longer ones?\n\n \n\nCarlo\n\n\nWe have a system which is constantly importing flat file data feeds into normalized tables in a DB warehouse over 10-20 connections. Each data feed row results in a single transaction of multiple single row writes to multiple normalized tables. The more columns in the feed row, the more write operations, longer the transaction. Operators are noticing that splitting a single feed of say – 100 columns – into two consecutive feeds of 50 columns improves performance dramatically. I am wondering whether the multi-threaded and very busy import environment causes non-linear performance degradation for longer transactions. Would the operators be advised to rewrite the feeds to result in more smaller transactions rather than fewer, longer ones? Carlo", "msg_date": "Mon, 5 Oct 2015 23:10:49 -0400", "msg_from": "\"Carlo\" <[email protected]>", "msg_from_op": true, "msg_subject": "One long transaction or multiple short transactions?" }, { "msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Carlo\nSent: Monday, October 05, 2015 11:11 PM\nTo: [email protected]\nSubject: [PERFORM] One long transaction or multiple short transactions?\n\nWe have a system which is constantly importing flat file data feeds into normalized tables in a DB warehouse over 10-20 connections. Each data feed row results in a single transaction of multiple single row writes to multiple normalized tables.\n\nThe more columns in the feed row, the more write operations, longer the transaction.\n\nOperators are noticing that splitting a single feed of say - 100 columns - into two consecutive feeds of 50 columns improves performance dramatically. I am wondering whether the multi-threaded and very busy import environment causes non-linear performance degradation for longer transactions. Would the operators be advised to rewrite the feeds to result in more smaller transactions rather than fewer, longer ones?\n\nCarlo\n\n\n\n? over 10-20 connections\n\nHow many cores do you have on that machine?\nTest if limiting number of simultaneous feeds, like bringing their number down to half of your normal connections has the same positive effect.\n\nRegards,\nIgor Neyman\n\n\n\n\n\n\n\n\n\n \n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Carlo\nSent: Monday, October 05, 2015 11:11 PM\nTo: [email protected]\nSubject: [PERFORM] One long transaction or multiple short transactions?\n\n\n \nWe have a system which is constantly importing flat file data feeds into normalized tables in a DB warehouse over 10-20 connections. 
Each data feed row results in a single transaction of multiple single row writes to\n multiple normalized tables.\n \nThe more columns in the feed row, the more write operations, longer the transaction.\n \nOperators are noticing that splitting a single feed of say – 100 columns – into two consecutive feeds of 50 columns improves performance dramatically. I am wondering whether the multi-threaded and very busy import environment\n causes non-linear performance degradation for longer transactions. Would the operators be advised to rewrite the feeds to result in more smaller transactions rather than fewer, longer ones?\n \nCarlo\n\n \n\n \nØ \nover 10-20 connections\n \nHow many cores do you have on that machine?\nTest if limiting number of simultaneous feeds, like bringing their number down to half of your normal connections has the same positive effect.\n \nRegards,\nIgor Neyman", "msg_date": "Tue, 6 Oct 2015 13:10:01 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": ">> How many cores do you have on that machine?\n\nTest if limiting number of simultaneous feeds, like bringing their number\ndown to half of your normal connections has the same positive effect.\n\n<< \n\n \n\nI am told 32 cores on a LINUX VM. The operators have tried limiting the\nnumber of threads. They feel that the number of connections is optimal.\nHowever, under the same conditions they noticed a sizable boost in\nperformance if the same import was split into two successive imports which\nhad shorter transactions.\n\n \n\nI am just looking to see if there is any reason to think that lock\ncontention (or anything else) over longer vs. shorter single-row-write\ntransactions under the same conditions might explain this. \n\n \n\nCarlo\n\n \n\nFrom: Igor Neyman [mailto:[email protected]] \nSent: October 6, 2015 9:10 AM\nTo: Carlo; [email protected]\nSubject: RE: [PERFORM] One long transaction or multiple short transactions?\n\n \n\n \n\n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Carlo\nSent: Monday, October 05, 2015 11:11 PM\nTo: [email protected]\nSubject: [PERFORM] One long transaction or multiple short transactions?\n\n \n\nWe have a system which is constantly importing flat file data feeds into\nnormalized tables in a DB warehouse over 10-20 connections. Each data feed\nrow results in a single transaction of multiple single row writes to\nmultiple normalized tables.\n\n \n\nThe more columns in the feed row, the more write operations, longer the\ntransaction.\n\n \n\nOperators are noticing that splitting a single feed of say – 100 columns –\ninto two consecutive feeds of 50 columns improves performance dramatically.\nI am wondering whether the multi-threaded and very busy import environment\ncauses non-linear performance degradation for longer transactions. 
Would the\noperators be advised to rewrite the feeds to result in more smaller\ntransactions rather than fewer, longer ones?\n\n \n\nCarlo\n\n \n\n \n\nØ over 10-20 connections\n\n \n\nHow many cores do you have on that machine?\n\nTest if limiting number of simultaneous feeds, like bringing their number\ndown to half of your normal connections has the same positive effect.\n\n \n\nRegards,\n\nIgor Neyman\n", "msg_date": "Wed, 7 Oct 2015 19:40:31 -0400", "msg_from": "\"Carlo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "Sounds like a locking problem, but assuming you aren’t sherlock holmes and simply want to get the thing working as soon as possible: \n\nStick a fast SSD in there (whether you stay on VM or physical). If you have enough I/O, you may be able to solve the problem with brute force.\nSSDs are a lot cheaper than your time. \n\nSuggest you forward this to your operators: a talk I have about optimising multi-threaded work in postgres: \n\n http://graemebell.net/foss4gcomo.pdf (Slides: “Input/Output” in the middle of the talk and also the slides at the end labelled “For Techies\")\n\nGraeme Bell\n\np.s. You mentioned a VM. Consider making the machine physical and not VM. You’ll get a performance boost and remove the risk of DB corruption from untrustworthy VM fsyncs. 
One day there will be a power cut or O/S crash during these your writes and with a VM you’ve a reasonable chance of nuking your DB because VM virtualised storage often doesn’t honour fsync (for performance reasons), but it’s fundamental to correct operation of PG. \n\n\n\n> On 08 Oct 2015, at 01:40, Carlo <[email protected]> wrote:\n> \n> \n> I am told 32 cores on a LINUX VM. The operators have tried limiting the number of threads. They feel that the number of connections is optimal. However, under the same conditions they noticed a sizable boost in performance if the same import was split into two successive imports which had shorter transactions.\n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 08:54:55 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": ">> Sounds like a locking problem\n\nThis is what I am trying to get at. The reason that I am not addressing\nhardware or OS configuration concerns is that this is not my environment,\nbut my client's. The client is running my import software and has a choice\nof how long the transactions can be. They are going for long transactions,\nand I am trying to determine whether there is a penalty for single long\ntransactions over a configuration which would allow for more successive\nshort transactions. (keep in mind all reads and writes are single-row). \n\nThere are other people working on hardware and OS configuration, and that's\nwhy I can't want to get into a general optimization discussion because the\nclient is concerned with just this question.\n\n-----Original Message-----\nFrom: Graeme B. Bell [mailto:[email protected]] \nSent: October 8, 2015 4:55 AM\nTo: Carlo\nCc: [email protected]\nSubject: Re: [PERFORM] One long transaction or multiple short transactions?\n\nSounds like a locking problem, but assuming you aren't sherlock holmes and\nsimply want to get the thing working as soon as possible: \n\nStick a fast SSD in there (whether you stay on VM or physical). If you have\nenough I/O, you may be able to solve the problem with brute force.\nSSDs are a lot cheaper than your time. \n\nSuggest you forward this to your operators: a talk I have about optimising\nmulti-threaded work in postgres: \n\n http://graemebell.net/foss4gcomo.pdf (Slides: \"Input/Output\" in the\nmiddle of the talk and also the slides at the end labelled \"For Techies\")\n\nGraeme Bell\n\np.s. You mentioned a VM. Consider making the machine physical and not VM.\nYou'll get a performance boost and remove the risk of DB corruption from\nuntrustworthy VM fsyncs. One day there will be a power cut or O/S crash\nduring these your writes and with a VM you've a reasonable chance of nuking\nyour DB because VM virtualised storage often doesn't honour fsync (for\nperformance reasons), but it's fundamental to correct operation of PG. \n\n\n\n> On 08 Oct 2015, at 01:40, Carlo <[email protected]> wrote:\n> \n> \n> I am told 32 cores on a LINUX VM. The operators have tried limiting the\nnumber of threads. 
They feel that the number of connections is optimal.\nHowever, under the same conditions they noticed a sizable boost in\nperformance if the same import was split into two successive imports which\nhad shorter transactions.\n> \n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 11:08:55 -0400", "msg_from": "\"Carlo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:\n> >> Sounds like a locking problem\n> \n> This is what I am trying to get at. The reason that I am not addressing\n> hardware or OS configuration concerns is that this is not my environment,\n> but my client's. The client is running my import software and has a choice\n> of how long the transactions can be. They are going for long transactions,\n> and I am trying to determine whether there is a penalty for single long\n> transactions over a configuration which would allow for more successive\n> short transactions. (keep in mind all reads and writes are single-row). \n> \n> There are other people working on hardware and OS configuration, and that's\n> why I can't want to get into a general optimization discussion because the\n> client is concerned with just this question.\n> \n\nHi Carlo,\n\nSince the read/writes are basically independent, which is what I take your\n\"single-row\" comment to mean, by batching them you are balancing two\nopposing factors. First, larger batches allow you to consolodate I/O and\nother resource requests to make them more efficient per row. Second, larger\nbatches require more locking as the number of rows updated grows. It may\nvery well be the case that by halving your batch size that the system can\nprocess them more quickly than a single batch that is twice the size.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 12:00:25 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] \nSent: October 8, 2015 1:00 PM\nTo: Carlo\nCc: [email protected]\nSubject: Re: [PERFORM] One long transaction or multiple short transactions?\n\nOn Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:\n> >> Sounds like a locking problem\n> \n> This is what I am trying to get at. The reason that I am not \n> addressing hardware or OS configuration concerns is that this is not \n> my environment, but my client's. The client is running my import \n> software and has a choice of how long the transactions can be. They \n> are going for long transactions, and I am trying to determine whether \n> there is a penalty for single long transactions over a configuration \n> which would allow for more successive short transactions. 
(keep in mind\nall reads and writes are single-row).\n> \n> There are other people working on hardware and OS configuration, and \n> that's why I can't want to get into a general optimization discussion \n> because the client is concerned with just this question.\n> \n\nOn October 8, 2015 1:00 PM Ken wrote:\n> Hi Carlo,\n\n> Since the read/writes are basically independent, which is what I take your\n\"single-row\" comment to mean, by batching them you are balancing two \n> opposing factors. First, larger batches allow you to consolodate I/O and\nother resource requests to make them more efficient per row. Second, larger \n> batches require more locking as the number of rows updated grows. It may\nvery well be the case that by halving your batch size that the system can \n> process them more quickly than a single batch that is twice the size.\n\nJust to clarify, one transaction of this type may take longer to commit than\ntwo successive transactions of half the size?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 17:43:11 -0400", "msg_from": "\"Carlo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "On Thu, Oct 08, 2015 at 05:43:11PM -0400, Carlo wrote:\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] \n> Sent: October 8, 2015 1:00 PM\n> To: Carlo\n> Cc: [email protected]\n> Subject: Re: [PERFORM] One long transaction or multiple short transactions?\n> \n> On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:\n> > >> Sounds like a locking problem\n> > \n> > This is what I am trying to get at. The reason that I am not \n> > addressing hardware or OS configuration concerns is that this is not \n> > my environment, but my client's. The client is running my import \n> > software and has a choice of how long the transactions can be. They \n> > are going for long transactions, and I am trying to determine whether \n> > there is a penalty for single long transactions over a configuration \n> > which would allow for more successive short transactions. (keep in mind\n> all reads and writes are single-row).\n> > \n> > There are other people working on hardware and OS configuration, and \n> > that's why I can't want to get into a general optimization discussion \n> > because the client is concerned with just this question.\n> > \n> \n> On October 8, 2015 1:00 PM Ken wrote:\n> > Hi Carlo,\n> \n> > Since the read/writes are basically independent, which is what I take your\n> \"single-row\" comment to mean, by batching them you are balancing two \n> > opposing factors. First, larger batches allow you to consolodate I/O and\n> other resource requests to make them more efficient per row. Second, larger \n> > batches require more locking as the number of rows updated grows. 
It may\n> very well be the case that by halving your batch size that the system can \n> > process them more quickly than a single batch that is twice the size.\n> \n> Just to clarify, one transaction of this type may take longer to commit than\n> two successive transactions of half the size?\n> \n\nYes, but where the optimum count is located should be determined by testing.\nJust varying the batch size and note where the performance is at a maximum.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 16:59:30 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "Le 08/10/2015 01:40, Carlo a écrit :\n>\n> >> How many cores do you have on that machine?\n>\n> Test if limiting number of simultaneous feeds, like bringing their \n> number down to half of your normal connections has the same positive \n> effect.\n>\n> <<\n>\n> I am told 32 cores on a LINUX VM. The operators have tried limiting \n> the number of threads. They feel that the number of connections is \n> optimal. However, under the same conditions they noticed a sizable \n> boost in performance if the same import was split into two successive \n> imports which had shorter transactions.\n>\n> I am just looking to see if there is any reason to think that lock \n> contention (or anything else) over longer vs. shorter single-row-write \n> transactions under the same conditions might explain this.\n>\nI don't think inserts can cause contention on the server. Insert do not \nlock tables during the transaction. You may have contention on sequence \nbut it won't vary with transaction size.\n\nHave you checked the resource usage (CPU,memory) on the client side ?\n\nHow do you insert rows ? Do you use plain postgres API ?\n\nRegards,\nLaurent", "msg_date": "Fri, 09 Oct 2015 08:45:28 +0200", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "\n> I don't think inserts can cause contention on the server. Insert do not lock tables during the transaction. 
You may have contention on sequence but it won't vary with transaction size.\n\nPerhaps there could be a trigger on inserts which creates some lock contention?\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 9 Oct 2015 08:33:21 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "On 10/9/15 3:33 AM, Graeme B. Bell wrote:\n>\n>> I don't think inserts can cause contention on the server. Insert do not lock tables during the transaction. You may have contention on sequence but it won't vary with transaction size.\n>\n> Perhaps there could be a trigger on inserts which creates some lock contention?\n\nExcept inserts *do* take a lot of locks, just not user-level locks. \nOperations like finding a page to insert into, seeing if that page is in \nshared buffers, loading the page into shared buffers, modifying a shared \nbuffer, getting the relation extension lock if you need to add a new \npage. Then there's a whole pile of additional locking you could be \nlooking at for inserting into any indexes.\n\nNow, most of the locks I described up there are transaction-aware, but \nthere's other things happening at a transaction level that could alter \nthat locking. So it wouldn't surprise me if you're seeing radically \ndifferent behavior based on transaction duration.\n\nAlso, it sounds like perhaps longer transactions are involving more \ntables? Is this a star schema you're dealing with?\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 17 Oct 2015 10:26:01 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" }, { "msg_contents": "On 2015-10-17 10:26:01 -0500, Jim Nasby wrote:\n> Except inserts *do* take a lot of locks, just not user-level locks.\n> Operations like finding a page to insert into, seeing if that page is in\n> shared buffers, loading the page into shared buffers, modifying a shared\n> buffer, getting the relation extension lock if you need to add a new page.\n> Then there's a whole pile of additional locking you could be looking at for\n> inserting into any indexes.\n> \n> Now, most of the locks I described up there are transaction-aware\n\nMissing *not*?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 17 Oct 2015 19:13:38 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" 
}, { "msg_contents": "On 10/17/15 12:13 PM, Andres Freund wrote:\n> On 2015-10-17 10:26:01 -0500, Jim Nasby wrote:\n>> Except inserts *do* take a lot of locks, just not user-level locks.\n>> Operations like finding a page to insert into, seeing if that page is in\n>> shared buffers, loading the page into shared buffers, modifying a shared\n>> buffer, getting the relation extension lock if you need to add a new page.\n>> Then there's a whole pile of additional locking you could be looking at for\n>> inserting into any indexes.\n>>\n>> Now, most of the locks I described up there are transaction-aware\n>\n> Missing *not*?\n\nOops. Yes, they're *not* transaction-aware.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 17 Oct 2015 15:43:29 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One long transaction or multiple short transactions?" } ]
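Ken's closing suggestion in the thread above, varying the commit batch size and measuring where throughput peaks, is easy to script. A rough timing harness along those lines, again assuming psycopg2; the table, columns, row counts and DSN are stand-ins rather than the poster's actual feed schema.

# Rough sketch for finding the optimum commit size empirically.
# Table name, columns and DSN below are placeholders.
import time
import psycopg2

def load_rows(conn, rows, batch_size):
    # Single-row INSERTs, as in the import described above,
    # committing once every batch_size rows.
    with conn.cursor() as cur:
        for i, row in enumerate(rows, 1):
            cur.execute(
                "INSERT INTO feed_rows (col_a, col_b) VALUES (%s, %s)",
                row,
            )
            if i % batch_size == 0:
                conn.commit()
    conn.commit()  # flush any trailing partial batch

def time_batch_size(dsn, rows, batch_size):
    conn = psycopg2.connect(dsn)
    try:
        start = time.time()
        load_rows(conn, rows, batch_size)
        return time.time() - start
    finally:
        conn.close()

if __name__ == "__main__":
    test_rows = [("a", "b")] * 100000
    for size in (100, 1000, 10000, 100000):
        elapsed = time_batch_size("dbname=test", test_rows, size)
        print("batch_size=%-7d %6.1fs" % (size, elapsed))

Running the same row set at several batch sizes, at the real concurrency level, shows directly whether two half-size transactions beat one large one on a given system.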
[ { "msg_contents": "Response from you all are very precious.\n\n@Merlin,\nI'm misunderstood the question.\nYes, I didn't measure it. I only monitor RAM and CPU using htop (I also use\nnmon for disk IO, iftop for the network utilization).\nDid 1 connection need 1 core dedicatedly?\n(I was having 40-80 connections in stable condition. And when the problem\nhappened the connections would be more than 150)\n\n@Scott,\nJust before the problem happened and when the problem happened, my server\ndidn't running out of IO/RAM/CPU.\nThe SSD IO total usage was 25-50% (I use nmon to monitor the disk IO)\nThe RAM total usage was 4-5GB of total 128GB (I monitor it using htop)\nThe CPU was 100% in 2 cores, 70% in 3 cores, the other 19 cores were under\n5% (I monitor it using htop)\nThe network interface utilization was only 300-400 Mbps of total 1Gbps (I\nmonitor it using iftop)\nSo, maybe the 128GB RAM will never all be fully use by PostgreSQL?\n\nI will test PostgreSQL with pg_bouncer in an identical logical (OS,\nsoftwares, etc) condition and physical resource condition.\n\n\nRespect,\nFattahRozzaq\n\nResponse from you all are very precious.@Merlin,I'm misunderstood the question. Yes, I didn't measure it. I only monitor RAM and CPU using htop (I also use nmon for disk IO, iftop for the network utilization).Did 1 connection need 1 core dedicatedly? (I was having 40-80 connections in stable condition. And when the problem happened the connections would be more than 150)@Scott,Just before the problem happened and when the problem happened, my server didn't running out of IO/RAM/CPU.The SSD IO total usage was 25-50% (I use nmon to monitor the disk IO)The RAM total usage was 4-5GB of total 128GB (I monitor it using htop)The CPU was 100% in 2 cores, 70% in 3 cores, the other 19 cores were under 5% (I monitor it using htop)The network interface utilization was only 300-400 Mbps of total 1Gbps (I monitor it using iftop)So, maybe the 128GB RAM will never all be fully use by PostgreSQL?I will test PostgreSQL with pg_bouncer in an identical logical (OS, softwares, etc) condition and physical resource condition.Respect,FattahRozzaq", "msg_date": "Wed, 7 Oct 2015 17:29:26 +0700", "msg_from": "FattahRozzaq <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average" }, { "msg_contents": "On Wed, Oct 7, 2015 at 4:29 AM, FattahRozzaq <[email protected]> wrote:\n> Response from you all are very precious.\n>\n> @Merlin,\n> I'm misunderstood the question.\n> Yes, I didn't measure it. I only monitor RAM and CPU using htop (I also use\n> nmon for disk IO, iftop for the network utilization).\n> Did 1 connection need 1 core dedicatedly?\n> (I was having 40-80 connections in stable condition. And when the problem\n> happened the connections would be more than 150)\n>\n> @Scott,\n> Just before the problem happened and when the problem happened, my server\n> didn't running out of IO/RAM/CPU.\n> The SSD IO total usage was 25-50% (I use nmon to monitor the disk IO)\n> The RAM total usage was 4-5GB of total 128GB (I monitor it using htop)\n> The CPU was 100% in 2 cores, 70% in 3 cores, the other 19 cores were under\n> 5% (I monitor it using htop)\n> The network interface utilization was only 300-400 Mbps of total 1Gbps (I\n> monitor it using iftop)\n> So, maybe the 128GB RAM will never all be fully use by PostgreSQL?\n\nCheck what vmstat 10 has to say, specifically the in and cs columns,\nwhich is interrupts and context switches per second. 
What you'll\nlikely see is it ranging from 10k to 20k normally and spiking to 10 or\n100 times when this is happening. That's the typical symptom that your\nOS is spending all its time trying to switch between 1,000 processes\ninstead of servicing a handful at a time.\n\nYou should also see a huge uptick in pg processes that are active at\nonce, either in top or via pg_stat_activity.\n\nDon't worry about making PostgreSQL use all your RAM, the OS will do\nthat for you, worry about getting PostgreSQL to process as many\nqueries per second as it can. And you do that by using a connection\npooler. I have machines with 350G dbs on machines with > 512GB RAM,\nand eventually the whole db is in kernel cache and the only IO is when\nblocks get written to disk. But the kernel only caches the parts of\nthe db that get read. If your db isn't reading more than a few dozen\ngigabytes then that's how much memory will be used to cache the db.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 7 Oct 2015 07:01:04 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average" }, { "msg_contents": "On Wed, Oct 7, 2015 at 5:29 AM, FattahRozzaq <[email protected]> wrote:\n> Response from you all are very precious.\n>\n> @Merlin,\n> I'm misunderstood the question.\n> Yes, I didn't measure it. I only monitor RAM and CPU using htop (I also use\n\nCan you be a little more specific. What values did you look at and\nhow did you sum them up? Assuming your measurement was correct, you\nmight be looking at simple prewarm issue in terms of getting shared\nbuffers stuffed up. There are some tactics to warm up shared buffers\n(like pg_prewarm), but it's not clear that would be useful in your\ncase.\n\nOne cause (especially with older kernels) of low memory utilization is\nmisconfigured NUMA. Note this would only affect the backing o/s\ncache, not pg's shared buffers.\n\nVery first thing you need to figure out is if your measured issues are\ncoming from storage or not. iowait % above single digits suggests\nthis. With fast SSD it's pretty difficult to max out storage,\nespecially when reading data, but it's always the first thing to look\nat. Context switch issues (as Scott notes) as another major\npotential cause of performance variability, as is server internal\ncontention. But rule out storage first.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 7 Oct 2015 08:59:49 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average" } ]
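Scott's tip in the thread above, watching the in and cs columns of vmstat for interrupt and context-switch storms, can also be logged over time so spikes can be matched against the slow responses later. A small Linux-only sketch that reads the kernel's context-switch counter from /proc/stat; the interval and threshold are arbitrary example values.

# Logs context switches per second (the figure vmstat shows as "cs").
# Linux-only; the alert threshold below is an arbitrary example.
import time

def read_ctxt():
    # /proc/stat contains a line like "ctxt 123456789": total context
    # switches since boot.
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line found in /proc/stat")

def watch(interval=10, threshold=100000):
    prev = read_ctxt()
    while True:
        time.sleep(interval)
        cur = read_ctxt()
        rate = (cur - prev) / float(interval)
        note = "  <-- possible connection pile-up" if rate > threshold else ""
        print("%s cs/s=%.0f%s" % (time.strftime("%H:%M:%S"), rate, note))
        prev = cur

if __name__ == "__main__":
    watch()

A sustained jump of one or two orders of magnitude while response times fluctuate is the pattern described above, and points back at the connection count rather than at shared_buffers.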
[ { "msg_contents": "Hi,\n\nI use postgresql often but I'm not very familiar with how it works internal.\n\nI've made a small script to backup files from different computers to a \npostgresql database.\nSort of a versioning networked backup system.\nIt works with large objects (oid in table, linked to large object), \nwhich I import using psycopg\n\nIt works well but slow.\n\nThe database (9.2.9) on the server (freebsd10) runs on a zfs mirror.\nIf I copy a file to the mirror using scp I get 37MB/sec\nMy script achieves something like 7 or 8MB/sec on large (+100MB) files.\n\nI've never used postgresql for something like this, is there something I \ncan do to speed things up ?\nIt's not a huge problem as it's only the initial run that takes a while \n(after that, most files are already in the db).\nStill it would be nice if it would be a little faster.\ncpu is mostly idle on the server, filesystem is running 100%.\nThis is a seperate postgresql server (I've used freebsd profiles to have \n2 postgresql server running) so I can change this setup so it will work \nbetter for this application.\n\nI've read different suggestions online but I'm unsure which is best, \nthey all speak of files which are only a few Kb, not 100MB or bigger.\n\nps. english is not my native language\n\nthx\nBram\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 08 Oct 2015 11:17:49 +0200", "msg_from": "Bram Van Steenlandt <[email protected]>", "msg_from_op": true, "msg_subject": "large object write performance" }, { "msg_contents": "Seems a bit slow.\n\n1. Can you share the script (the portion that does the file transfer) to the list? Maybe you’re doing something unusual there by mistake.\nSimilarly the settings you’re using for scp. \n\n2. What’s the network like?\nFor example, what if the underlying network is only capable of 10MB/s peak, and scp is using compression and the files are highly compressible?\nHave you tried storing zip or gzip’d versions of the file into postgres? (that’s probably a good idea anyway)\n\n3. ZFS performance can depend on available memory and use of caches (memory + L2ARC for reading, ZIL cache for writing).\nMaybe put an intel SSD in there (or a pair of them) and use it as a ZIL cache. \n\n4. Use dd to measure the write performance of ZFS doing a local write to the machine. What speed do you get?\n\n5. Transfer a zip’d file over the network using scp. What speed do you get?\n\n6. Is your postgres running all the time or do you start it before this test? Perhaps check if any background tasks are running when you use postgres - autovacuum, autoanalyze etc. 
\n\nGraeme Bell\n\n> On 08 Oct 2015, at 11:17, Bram Van Steenlandt <[email protected]> wrote:\n> \n> Hi,\n> \n> I use postgresql often but I'm not very familiar with how it works internal.\n> \n> I've made a small script to backup files from different computers to a postgresql database.\n> Sort of a versioning networked backup system.\n> It works with large objects (oid in table, linked to large object), which I import using psycopg\n> \n> It works well but slow.\n> \n> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.\n> If I copy a file to the mirror using scp I get 37MB/sec\n> My script achieves something like 7 or 8MB/sec on large (+100MB) files.\n> \n> I've never used postgresql for something like this, is there something I can do to speed things up ?\n> It's not a huge problem as it's only the initial run that takes a while (after that, most files are already in the db).\n> Still it would be nice if it would be a little faster.\n> cpu is mostly idle on the server, filesystem is running 100%.\n> This is a seperate postgresql server (I've used freebsd profiles to have 2 postgresql server running) so I can change this setup so it will work better for this application.\n> \n> I've read different suggestions online but I'm unsure which is best, they all speak of files which are only a few Kb, not 100MB or bigger.\n> \n> ps. english is not my native language\n> \n> thx\n> Bram\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 09:45:16 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n> On 08 Oct 2015, at 11:17, Bram Van Steenlandt <[email protected]> wrote:\n> \n> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.\n> If I copy a file to the mirror using scp I get 37MB/sec\n> My script achieves something like 7 or 8MB/sec on large (+100MB) files.\n\n\nThis may help - great blog article about ZFS with postgres and how use you can zfs compression to boost i/o performance substantially.\nIf your machine is making a lot of smaller writes in postgres (as opposed to presumably large writes by scp) then this may alleviate things a bit.\n\nhttps://www.citusdata.com/blog/64-zfs-compression\n\nGraeme Bell\n\np.s. Apologies for top-posting on my previous message.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 10:25:02 +0000", "msg_from": "\"Graeme B. 
Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n>> First the database was on a partition where compression was enabled, I changed it to an uncompressed one to see if it makes a difference thinking maybe the cpu couldn't handle the load.\n> It made little difference in my case.\n> \n> My regular gmirror partition seems faster:\n> dd bs=8k count=25600 if=/dev/zero of=./test\n> 25600+0 records in\n> 25600+0 records out\n> 209715200 bytes transferred in 1.513112 secs (138598612 bytes/sec)\n> \n> the zfs compressed partition also goes faster:\n> dd bs=8k count=25600 if=/dev/zero of=./test\n> 25600+0 records in\n> 25600+0 records out\n> 209715200 bytes transferred in 0.979065 secs (214199479 bytes/sec)\n> but this one didn't really go that fast in my test (maybe 10%)\n\n\nPlease can you run iozone and look for low random write performance with small blocks? (4k)\nhttp://www.slashroot.in/linux-file-system-read-write-performance-test\n\nAlso please can you CC to the list with your replies to my on-list emails?\n\nGraeme Bell\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 11:21:50 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n\nOp 08-10-15 om 13:21 schreef Graeme B. Bell:\n>>> First the database was on a partition where compression was enabled, I changed it to an uncompressed one to see if it makes a difference thinking maybe the cpu couldn't handle the load.\n>> It made little difference in my case.\n>>\n>> My regular gmirror partition seems faster:\n>> dd bs=8k count=25600 if=/dev/zero of=./test\n>> 25600+0 records in\n>> 25600+0 records out\n>> 209715200 bytes transferred in 1.513112 secs (138598612 bytes/sec)\n>>\n>> the zfs compressed partition also goes faster:\n>> dd bs=8k count=25600 if=/dev/zero of=./test\n>> 25600+0 records in\n>> 25600+0 records out\n>> 209715200 bytes transferred in 0.979065 secs (214199479 bytes/sec)\n>> but this one didn't really go that fast in my test (maybe 10%)\n>\n> Please can you run iozone and look for low random write performance with small blocks? 
(4k)\n> http://www.slashroot.in/linux-file-system-read-write-performance-test\nLike this ?\n\ngmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)\nzfs uncompressed (iozone -s 4 -a /datapool/data) = 650136\nzfs compressed (iozone -s 4 -a /datapool/data) = 676345\n\n> Also please can you CC to the list with your replies to my on-list emails?\n>\n> Graeme Bell\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 08 Oct 2015 13:33:22 +0200", "msg_from": "Bram Van Steenlandt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large object write performance" }, { "msg_contents": ">> \n>> \n> Like this ?\n> \n> gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)\n> zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136\n> zfs compressed (iozone -s 4 -a /datapool/data) = 676345\n\n\nIf you can get the complete tables (as in the images on the blog post) with random performance compared to sequential etc, different block sizes, that would be very interesting.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 11:37:41 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n\nOp 08-10-15 om 13:13 schreef Graeme B. Bell:\n>> 1. The part is \"fobj = lobject(db.db,0,\"r\",0,fpath)\", I don't think there is anything there\n> Can you include the surrounding code please (e.g. setting up the db connection) so we can see what's happening, any sync/commit type stuff afterwards.\nconnect:\n    self.db = psycopg2.connect(dbname=self.confighelp.get(\"dbname\"),user=self.confighelp.get(\"dbuser\"),password=self.confighelp.get(\"dbpassword\"),host=self.confighelp.get(\"dbhost\"),port=int(self.confighelp.get(\"dbport\")),sslmode=self.confighelp.get(\"dbsslmode\"))\n\n\nupload:\n    self.statusupdate(\"Backing up %s (%s)\"%(fpath,nicesizeprint(size)))\n    starttime =datetime.datetime.now()\n    try:\n        fobj = lobject(db.db,0,\"r\",0,fpath)\n    except psycopg2.OperationalError,e:\n        if e.__str__().find(\"could not open file\")>-1:\n            badfiles.append([fpath,str(e).rstrip(\"\\n\").rstrip(\"\\r\")])\n            self.statusupdate(\"Can't backup %s\"%fpath)\n        else:\n            self.emsg = str(e)\n            return False\n    except Exception,e:\n        self.emsg= str(e)\n        return False\n    else:\n        cursor.execute(\"insert into ${table} (set,path,modtime,size,file,basepath) values (%s,%s,%s,%s,%s,%s)\".replace(\"${table}\",tablename),[bset,urpath,modtime,size,fobj.oid,path])\n        db.commit()\n>\n>> 2.gigabit ethernet, the scp copy I did was over the network to that harddrive using\n>> scp FreeBSD-10.1-RELEASE-amd64-dvd1.iso x.x.x.x:/datapool/db/test\n>>\n>> 3.I agree but if scp can write to the drive at 37mb/sec, I should be able to achieve more than 8mb/sec.\n>> I can indeed speed up the ZFS but it's more the difference between scp and my script that bugs me.\n> It is either being caused by\n>\n> a) your script\n> b) postgres working in a different way to scp\n>\n> To solve the first you need to send us more of your script\n> To solve the second, it may be possible to reconfigure postgres but you may have to reconfigure your OS or hardware to be more suitable for the type of thing postgres does.\n>\n> Put simply, scp does the absolute 
minimum of work to put the data onto the disk without any safety measures.\n> Postgres is doing other things in the background - analyzing things, keep a synchronous log for rollback etc.. Crucially it�s using it�s own internal storage format.\n> You�re comparing chalk with cheese and expecting them to taste quite similar.\nI agree, my question is also more, what can I do to make it easier for \npostgresql, can I turn things off that will speed things up.\n>\n> If you try the advice I gave + read the blog post, about configuring ZFS to be friendly to the type of activity postgres likes to do, you may see some improvement.\n>\n> If the problem is your script you�ll need to send a greater amount of the code so it can be examined.\n>\n>> 4.\n>> dd bs=1M count=256 if=/dev/zero of=./test\n>> 256+0 records in\n>> 256+0 records out\n>> 268435456 bytes transferred in 5.401050 secs (49700605 bytes/sec)\n> good\n>\n>> 5. a tgz file with scp is 33.8MB/sec.\n> (you can speed that up probably by changing to a lightweightcompression algorithm)\n>\n>> 6. the server is running all the time, speed varies , it's between 5 and 8mb/sec actually (depending also on the number of clients performing a backup).\n> How is the connection to postgres being made, incidentally?\n>\n> Graeme.\n>\n>\n>\n>\n>\n>>\n>>\n>> Op 08-10-15 om 11:45 schreef Graeme B. Bell:\n>>> Seems a bit slow.\n>>>\n>>> 1. Can you share the script (the portion that does the file transfer) to the list? Maybe you�re doing something unusual there by mistake.\n>>> Similarly the settings you�re using for scp.\n>>>\n>>> 2. What�s the network like?\n>>> For example, what if the underlying network is only capable of 10MB/s peak, and scp is using compression and the files are highly compressible?\n>>> Have you tried storing zip or gzip�d versions of the file into postgres? (that�s probably a good idea anyway)\n>>>\n>>> 3. ZFS performance can depend on available memory and use of caches (memory + L2ARC for reading, ZIL cache for writing).\n>>> Maybe put an intel SSD in there (or a pair of them) and use it as a ZIL cache.\n>>>\n>>> 4. Use dd to measure the write performance of ZFS doing a local write to the machine. What speed do you get?\n>>>\n>>> 5. Transfer a zip�d file over the network using scp. What speed do you get?\n>>>\n>>> 6. Is your postgres running all the time or do you start it before this test? 
Perhaps check if any background tasks are running when you use postgres - autovacuum, autoanalyze etc.\n>>>\n>>> Graeme Bell\n>>>\n>>>> On 08 Oct 2015, at 11:17, Bram Van Steenlandt <[email protected]> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> I use postgresql often but I'm not very familiar with how it works internal.\n>>>>\n>>>> I've made a small script to backup files from different computers to a postgresql database.\n>>>> Sort of a versioning networked backup system.\n>>>> It works with large objects (oid in table, linked to large object), which I import using psycopg\n>>>>\n>>>> It works well but slow.\n>>>>\n>>>> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.\n>>>> If I copy a file to the mirror using scp I get 37MB/sec\n>>>> My script achieves something like 7 or 8MB/sec on large (+100MB) files.\n>>>>\n>>>> I've never used postgresql for something like this, is there something I can do to speed things up ?\n>>>> It's not a huge problem as it's only the initial run that takes a while (after that, most files are already in the db).\n>>>> Still it would be nice if it would be a little faster.\n>>>> cpu is mostly idle on the server, filesystem is running 100%.\n>>>> This is a seperate postgresql server (I've used freebsd profiles to have 2 postgresql server running) so I can change this setup so it will work better for this application.\n>>>>\n>>>> I've read different suggestions online but I'm unsure which is best, they all speak of files which are only a few Kb, not 100MB or bigger.\n>>>>\n>>>> ps. english is not my native language\n>>>>\n>>>> thx\n>>>> Bram\n>>>>\n>>>>\n>>>> -- \n>>>> Sent via pgsql-performance mailing list ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 08 Oct 2015 13:50:35 +0200", "msg_from": "Bram Van Steenlandt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n\nOp 08-10-15 om 13:37 schreef Graeme B. 
Bell:\n>>>\n>> Like this ?\n>>\n>> gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)\n>> zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136\n>> zfs compressed (iozone -s 4 -a /datapool/data) = 676345\n>\n> If you can get the complete tables (as in the images on the blog post) with random performance compared to sequential etc, different block sizes, that would be very interesting.\n>\n>\n>\nLike this ?\n\n Command line used: iozone -a /dev/mirror/gm0s1e\n Output is in Kbytes/sec\n Time Resolution = 0.000001 seconds.\n Processor cache size set to 1024 Kbytes.\n Processor cache line size set to 32 bytes.\n File stride size set to 17 * record size.\n random \nrandom bkwd record stride\n KB reclen write rewrite read reread read \nwrite read rewrite read fwrite frewrite fread freread\n 64 4 321554 913649 2689580 2892445 2133730 \n874936 1879725 913649 1933893 1013707 1066042 2006158 2561267\n 64 8 603489 1722886 4897948 5860307 3791156 \n1947927 3022727 1879725 3791156 1828508 1768283 3541098 4564786\n 64 16 1083249 3363612 7940539 9006179 7100397 \n3541098 3958892 3165299 7100397 3588436 3363612 4897948 6421025\n 64 32 1392258 5283570 12902017 1082152412902017 \n4897948 5860307 4564786 7100397 4207076 4274062 5389653 7940539\n 64 64 2772930 5735102 20962191 2096219115972885 \n7940539 7940539 7940539 12902017 7100397 5860307 4274062 9318832\n 128 4 253503 1093856 2727923 2714132 2420455 \n1048974 2286447 1057236 2058509 948860 1051027 2464907 2558895\n 128 8 465304 1975201 4934216 5325799 4557257 \n1975201 2286447 1133103 4267461 1878447 1975201 3657016 4557257\n 128 16 821147 3468030 6114306 8036304 6114306 \n3468030 5122535 3359523 6727225 3560017 2969325 5122535 6406138\n 128 32 1504659 6045455 11720614 1254204311470204 \n5325799 8036304 6045455 9977956 5847904 6114306 6727225 8548124\n 128 64 2621367 5784891 16365173 1588107815881078 \n8036304 9795896 8036304 9129573 8036304 7082197 6727225 9795896\n 128 128 4444086 9129573 18637664 \n10779307180123591056714011470204 10779307 14200794 10567140 9795896 \n3380677 6812590\n 256 4 390763 951219 2692393 2692393 2325094 \n958864 2310087 955451 2305128 984356 1004618 2435861 2441400\n 256 8 736041 1894373 4572895 5022044 4422226 \n1729594 3823789 1729594 4132864 1778290 1729594 4197489 4332998\n 256 16 1280084 3368013 7120034 8024634 7120034 \n3043436 6249745 3275543 7120034 3410808 3285566 5117791 6249745\n 256 32 2152625 4734192 11091721 1222861210245071 \n5540300 8815202 5569035 9868433 5687020 5810112 6554972 8208677\n 256 64 3245838 7518900 16072608 1709624916072608 \n802463411695808 8534922 13454450 7518900 8024634 8271916 9868433\n 256 128 490728410651598 17096249 \n19591792170962491024507113454450 11695808 17096249 11091721 8815202 \n7314033 10245071\n 256 256 5022044 9868433 18259146 1959179218259146 \n986843314353744 10651598 16828306 10651598 9518507 3326279 10245071\n 512 4 533297 948197 2695115 2768068 2327124 \n944860 2287463 930933 2394591 984720 988345 2486072 2497638\n 512 8 953248 1931527 5019764 5127637 4494470 \n1882427 4228947 1875849 4420457 1938502 1938502 4061007 4420457\n 512 16 1571169 3510074 7758090 7900804 7091952 \n3437042 6652558 3366987 6931711 3533174 3437042 5951911 6246213\n 512 32 2327124 5951911 11620224 1249949011138071 \n5822804 9814569 5807059 11138071 5886650 5822804 6821616 7647578\n 512 64 3163620 7647578 15471149 1654383213872122 \n801881212427157 8992598 16049269 8394979 8394979 8992598 9859630\n 512 128 5951911 9859630 17069844 \n19037014176304041137404015143846 12797441 
18229030 11620224 10641343 \n8665997 10235583\n 512 256 582280410641343 18385093 \n18385093176304041069433615471149 11943357 17630404 10911694 10911694 \n7758090 9859630\n 512 512 601863610434519 18869737 1955712419037014 \n985963015930214 11138071 13872122 10911694 10235583 3415178 10044089\n 1024 4 643652 991266 2409105 2716948 2354947 \n940269 2290886 942952 2338280 992412 1004951 2364021 2472911\n 1024 8 1120580 1858644 4904018 5042191 4415030 \n1837965 3999762 1774922 4146499 1822368 1806273 4215688 4321737\n 1024 16 1769073 3469823 7699755 7755368 7054742 \n3355952 6787181 3311958 6874084 3458646 3390391 6059442 6172653\n 1024 32 2474336 5472650 11132462 1124909110255274 \n6059442 9946527 5782087 10039528 5417427 5917516 7927135 8262639\n 1024 64 3200886 8061038 15516181 1678995915516181 \n832671514044758 9220512 15977962 8525047 8457895 9464330 10039528\n 1024 128 681951111368190 17985196 \n18936769171246801124909116280798 12790037 17688906 11249091 11645609 \n9220512 10429596\n 1024 256 683035611132462 17985196 \n18608584179851961148983916789959 12639479 18608584 11368190 11132462 \n8326715 9832672\n 1024 512 706634911132462 18936769 \n19276739189367691101822614618393 11773301 15080341 10906310 11018226 \n7319232 10134284\n 1024 1024 691837611018226 20088179 \n20471166200881791148983914618393 11249091 19276739 11520658 11249091 \n3579719 10354166\n 2048 4 711805 991702 2534049 2709070 2378293 \n961939 2250555 923767 2193096 986917 950339 2316716 2479912\n 2048 8 1201758 1880950 4611288 4911886 4385291 \n1843410 4063729 1836709 3993821 1802788 1861789 3946119 4221501\n 2048 16 1826943 3330973 6738230 7336772 6588354 \n3287628 6321679 3221057 6583305 3282603 3151337 4994712 5719737\n 2048 32 2569678 5884299 10620516 1183503311015480 \n5719737 9062970 5596757 9567698 5549749 5389574 6895083 7846081\n 2048 64 3061485 8431376 15652050 1600194315289867 \n856590114226322 9228493 14109484 8292998 8531869 9525260 10037248\n 2048 128 736823811502234 17828628 \n18284015177917011198363016783585 13748169 18600754 11502234 12118884 \n9898453 10439809\n 2048 256 721354811058022 17645509 \n18441025178286281107227517221003 13138359 18763275 11502234 11305435 \n8987113 9898453\n 2048 512 728698011502234 18480699 \n19486895191395401183503316274804 12416686 18763275 11305435 10834854 \n8431376 10240672\n 2048 1024 731802011246230 19665344 \n19893055197104681157976317828628 12561952 19531203 11564174 11440955 \n7903836 10240672\n 2048 2048 687301610834854 20079056 \n20079056202685681203399518284015 12187663 16915790 12118884 11851361 \n3221057 9349021\n 4096 4 747054 977305 2315406 2418063 2156702 \n914269 2125483 933241 2066675 978976 984869 2203449 2288267\n 4096 8 1283631 1856920 4087714 4442909 3968731 \n1791845 3828122 1793342 3630721 1877210 1835102 3713919 3886139\n 4096 16 1890015 3303309 6041154 6758923 6342230 \n3290024 5918366 3248345 5231694 3316701 3245890 5145513 5551194\n 4096 32 2612190 5565581 8829180 10059615 9685316 \n5800471 8189447 5744227 7831080 5657217 5572803 6512939 7314299\n 4096 64 3185110 8393502 13790167 1511240514026607 \n869512013086376 9593374 14118827 8518356 8660056 8883968 9394037\n 4096 128 759900211500738 15692162 \n17974227175878061140908716058868 13746032 17148903 11609543 10806269 \n9756822 10394367\n 4096 256 751589211164417 15219509 \n17427236176601231128913414271297 13126371 18358372 11042425 11409087 \n8426437 9658092\n 4096 512 759900211222762 16179861 \n18358372188003521160954315508016 13126371 18698043 11940367 10923071 \n7161842 8883968\n 4096 1024 
768397211841605 17569819 \n18698043192210331194036717217649 13176709 19796907 11736440 11348794 \n7998800 9920205\n 4096 2048 622502810476771 15006799 \n16585996169962131167264713604523 12220656 14177082 11193514 10895362 \n5572803 7746336\n 4096 4096 6503078 8770583 13518879 \n13615304134343071071862212015527 11007051 11252164 10606129 10136778 \n2151300 5657217\n 8192 4 771889 955105 2348612 2440182 2196239 \n945954 2090414 927318 2088254 978497 991142 2209230 2277469\n 8192 8 1300529 1853795 3994479 4271046 3844314 \n1794916 3645670 1816072 3541581 1855096 1805195 3594563 3803462\n 8192 16 1934800 3308654 5911041 6558696 6158977 \n3240626 5546540 3316639 5350479 3345053 3308654 5025700 5382328\n 8192 32 2606405 5520695 8062248 9329772 9092750 \n5523358 7656244 5825851 7699133 5843686 5509188 6281710 6923450\n 8192 64 3195123 8358395 9954134 1226342613105371 \n852428510625170 9603578 12120667 8393104 8283835 7976157 8734477\n 8192 128 740702210922396 11160086 \n130854071363045811820445 9798016 13428037 11719650 11620560 11178240 \n6708520 7656244\n 8192 256 771989110964221 10908526 \n12703218132162711167187712082308 13125396 14025495 11604860 10582628 \n7379976 8175428\n 8192 512 716752810612044 11538609 \n13723005143236851176781612172193 13277557 18706245 11084481 11084481 \n7013901 8079310\n 8192 1024 780937610964221 11939561 \n13839076149272971160094211906462 13344594 19692587 11755738 10922396 \n7067279 7976157\n 8192 2048 777227910612044 12373836 \n13125396136088641128470610612044 12245943 19359718 11393220 11070196 \n6569983 7537004\n 8192 4096 6590145 9309549 11284706 \n11993741118571571107019610638329 11573589 11873547 10343693 9011667 \n3030813 5839713\n 8192 8192 5810089 7394270 8974009 9582152 9692982 \n8471741 9373038 8549738 9102385 8167655 7992856 2040751 4627695\n 16384 4 797664 996356 2344270 2377523 2100240 \n939190 1983772 924869 2043652 991711 989241 2179301 2193911\n 16384 8 1333457 1847357 4009817 4115479 3644859 \n1780859 2637395 1687493 2962198 1776256 1739477 3552531 3661171\n 16384 16 1927517 3250099 5797250 6093337 5595681 \n3185763 5414049 3311334 5348732 3299885 3276130 4880365 5024159\n 16384 32 2621499 5470509 7915343 8398033 8023471 \n5333373 7606924 5820327 7508843 5587946 5454010 6234299 6574261\n 16384 64 3153165 8171351 9757400 1042973110363669 \n8003847 8743152 8519852 8870685 7620421 7988030 7282069 7494921\n 16384 128 754015110729361 10214248 \n10878833111146101083594710246230 13780259 11291763 11753336 11100247 \n7356911 7606082\n 16384 256 773188310766343 10388737 \n10944673108582061067270510570916 13256568 10829117 10800184 10715976 \n6998761 7330229\n 16384 512 779061210894355 11076987 \n11571262117957031090126811009550 13440658 11563473 11100247 10652851 \n7145769 7370324\n 16384 1024 772753610771405 11480405 \n11897816119766121084449811002499 13097393 19931365 11465082 9858182 \n6687502 7651816\n 16384 2048 783591710284566 11370232 \n12208535120649261147465411251090 13676046 19507030 11138029 10715976 \n6843338 7334141\n 16384 4096 6798654 8646345 10646250 \n10806978111002471111461010232499 11711273 13919826 10618284 9193480 \n3981705 6065906\n 16384 8192 6021260 8098167 8932953 9287914 9367679 \n8640909 8997280 8962078 9587240 8444472 8079126 2755413 4765300\n 16384 16384 5647647 7077327 8581561 8586923 8542092 \n7095596 8389831 6993063 8586923 7086815 6989507 2007885 4131065\n 32768 64 296167 194585 8750280 9778787 9140169 \n175768 9378422 9394448 9640910 274952 189845 7000117 7141984\n 32768 128 191663 317953 9672120 10252961 9965984 
\n271105 9649033 13523721 9963094 222901 191480 6869610 7125322\n 32768 256 196003 135504 9897089 1038230610314518 \n201612 9902793 13017757 10085185 111844 155051 6792540 7006184\n 32768 512 128074 101213 10246082 1068332910821278 \n9750910376035 13144750 10680008 154413 84475 6947043 7184169\n 32768 1024 153009 158275 10743456 1123711311203222 \n14983610638673 13144750 10944503 150476 159293 6865492 7281608\n 32768 2048 177640 157394 11050985 1132321211222432 \n16127010601744 13319269 19456785 169390 158441 6502967 7128278\n 32768 4096 163017 140807 9726193 1013202510465315 \n146558 9697370 11915151 12977195 151240 146040 4748311 6107636\n 32768 8192 143250 144060 8868284 8875156 8889507 \n135154 8023380 8450082 9975387 122431 164131 3359724 4659136\n 32768 16384 133217 199384 8124883 8114809 8157676 \n162634 8088543 6801280 8357082 216484 124653 2700292 4270570\n 65536 64 132478 87352 1119848 679995 810266 \n71092 1086543 9482862 1268087 104467 98471 1633166 743856\n 65536 128 218970 184752 1148488 521940 1220248 \n227708 315986 13741967 1164772 191161 170112 312986 907814\n 65536 256 215160 167094 1151432 485369 435206 \n150642 1724542 13018870 1636316 169468 178463 534948 766691\n 65536 512 184008 183956 856355 1264028 1931097 \n183766 755799 12990568 642429 169232 129864 212418 551519\n 65536 1024 204725 154571 944974 543736 900059 \n18120110606164 13244698 10909669 169260 157705 605011 789797\n 65536 2048 170298 172145 839204 1151939 5369148 \n150545 541055 13263872 6960907 178095 146453 417150 939140\n 65536 4096 143556 166245 877391 1350281 1739504 \n193326 9432425 11317061 15090076 172834 172315 697406 823770\n 65536 8192 167896 154215 569887 518313 925923 \n181461 746474 8010708 10573526 218476 183726 600098 841748\n 65536 16384 194580 129553 1377676 1397768 1612321 \n186326 918167 6857237 8487080 162869 189240 501541 1154532\n\nzfs compressed:\n Auto Mode\n Command line used: iozone -a /datapool/data\n Output is in Kbytes/sec\n Time Resolution = 0.000001 seconds.\n Processor cache size set to 1024 Kbytes.\n Processor cache line size set to 32 bytes.\n File stride size set to 17 * record size.\n random \nrandom bkwd record stride\n KB reclen write rewrite read reread read \nwrite read rewrite read fwrite frewrite fread freread\n 64 4 320021 926260 2116903 2892445 2133730 \n818885 1892980 874936 1933893 969761 1033216 2067979 2561267\n 64 8 587635 1991276 5389653 5860307 4564786 \n1933893 3588436 2006158 4274062 2006158 1599680 3541098 4564786\n 64 16 1066042 3541098 7940539 9006179 7100397 \n3363612 4564786 3541098 6271021 3588436 3057153 4897948 6421025\n 64 32 1518251 5389653 12902017 1597288510821524 \n5860307 5283570 4897948 9006179 5860307 4274062 4897948 7940539\n 64 64 3588436 7100397 15972885 2096219115972885 \n7100397 7940539 6421025 9006179 6421025 6421025 3203069 9006179\n 128 4 248573 1024942 2558895 2770150 2377579 \n976473 2098744 1024942 2238774 1038825 1040839 2336194 2511022\n 128 8 486381 1967960 5122535 5325799 4407601 \n1911894 3867787 1939522 4267461 1885042 1778862 4012317 4407601\n 128 16 913347 3560017 7476717 8548124 7176872 \n3468030 6045455 3560017 6812590 3560017 3657016 4267461 6114306\n 128 32 1622919 6114306 9795896 14200794 9795896 \n6406138 7582312 5603747 8548124 5603747 5545860 6045455 8036304\n 128 64 2511022 6406138 15881078 1801235916365173 \n7582312 8548124 8548124 10567140 7582312 5847904 6406138 9795896\n 128 128 4934216 8548124 20804356 \n258040351863766410567140 9795896 9795896 11470204 9129573 7582312 \n4596273 9977956\n 256 4 390195 
980760 2754556 2754556 2413957 \n969251 2330140 962301 2345409 1012194 1008392 2463808 2533571\n 256 8 744204 1718521 4929815 5117791 4404088 \n1817419 4054829 1814348 4197489 1881098 1881098 4070199 4264168\n 256 16 1334162 3236056 7518900 8024634 7120034 \n3275543 5347168 3285566 6761355 3197509 3326279 5687020 6073004\n 256 32 2247235 4907284 11695808 1281227711569783 \n5971678 8815202 5810112 9868433 5841722 4998665 6398720 8208677\n 256 64 3654598 6891544 15164624 1825914614953435 \n796510712228612 8815202 14353744 8271916 7314033 6891544 9868433\n 256 128 5569035 9434868 18259146 1995591317096249 \n986843312228612 11207494 16072608 10651598 9192546 6936061 10245071\n 256 256 542826510245071 18259146 \n19955913182591461024507113454450 10245071 9434868 10245071 9868433 \n3236056 10245071\n 512 4 535558 990168 2613128 2722449 2381315 \n907717 2327124 946109 2347475 1004057 839252 2402629 2460437\n 512 8 957072 1880778 4784885 4742616 4123387 \n1842059 4099771 1759070 4091959 1809465 1910902 4091959 4340054\n 512 16 1574625 3346002 7620440 7871843 7115451 \n3239989 6571132 3220553 6821616 3415178 3122224 4973263 5639315\n 512 32 2424328 5951911 10856530 1221509711138071 \n5384786 9814569 5886650 10641343 5509113 5807059 7540170 8234036\n 512 64 3388236 8394979 16049269 1706984414240069 \n810965812797441 8394979 14628067 8394979 8140399 8844453 9814569\n 512 128 6652558 9681823 17630404 \n19037014170698441085653014628067 12427157 17069844 11374040 10434519 \n8528336 10235583\n 512 256 617437710235583 17630404 \n18229030154711491069433615471149 10434519 17069844 10044089 10044089 \n7435738 9997331\n 512 512 641411910694336 19736867 \n19736867188697371091169417069844 11374040 18229030 11138071 10641343 \n3325278 10235583\n 1024 4 635274 957884 2606476 2723840 2387677 \n925877 2348509 937600 2353657 990352 983548 2491561 2503178\n 1024 8 1099636 1861867 4854136 4972145 4374559 \n1834824 4146499 1815435 4265934 1888889 1841906 3764854 4356809\n 1024 16 1741810 3251778 7420395 7811791 7008693 \n3220084 5690162 3149251 6692005 3241960 3447541 5853003 6059442\n 1024 32 2553783 6093831 11132462 1193690711132462 \n544489810557785 5917516 10134284 5623115 6059442 8000971 8199542\n 1024 64 3191372 8457895 15743686 1678995915295157 \n845789514044758 9064828 15516181 8246774 8408221 9064828 9765601\n 1024 128 722079011773301 17985196 \n18689559167899591177330116280798 12639479 18936769 11614118 11018226 \n9320560 10039528\n 1024 256 686310010230845 17402221 \n18291580179851961101822614618393 12037272 17985196 10990032 10769573 \n8610501 9946527\n 1024 512 691837610906310 18608584 \n19719260189367691124909115516181 11903823 18608584 10255274 10662628 \n7699755 10230845\n 1024 1024 673397411132462 19629138 \n20088179200881791136819017331995 11645609 15295157 11103681 11132462 \n3541348 10230845\n 2048 4 715839 993193 2476337 2656292 2348384 \n958397 2207184 944591 2185842 985106 994688 2329912 2495039\n 2048 8 1211077 1865428 4502520 4983122 4394265 \n1812296 4146110 1838675 3792790 1817281 1815360 3786104 4240255\n 2048 16 1850160 3251539 6459541 7698414 6759439 \n3221057 6498636 3185225 6401772 3407614 3240499 5212953 6024617\n 2048 32 2506688 5433896 8940345 10780463 9796850 \n5578583 9431138 5785224 9944290 5640860 5884299 6990474 7903836\n 2048 64 3089009 8398403 14514788 1551073713748169 \n819020510037248 8749118 12274742 8390200 8456277 8866519 9796850\n 2048 128 703628311015480 16525279 \n18441025169157901124623015538795 12805399 19355169 11187641 11380325 \n9349021 10240672\n 2048 256 
731802011072275 16783585 \n18441025172210031170600616001943 12786337 18441025 11305435 11320334 \n8784909 9944290\n 2048 512 637800511380325 18129656 \n19311656189704641100137217221003 12729493 18641120 11320334 11260973 \n8325147 10096235\n 2048 1024 731802011129659 18641120 \n20079056198930551132033417681831 12416686 15399510 11246230 11187641 \n7503399 10143926\n 2048 2048 708854110780463 19139540 \n20079056197104681132033413639023 11706006 10401883 12033995 10179991 \n2691246 6207471\n 4096 4 763183 1001461 2427973 2556602 2249320 \n952753 2185509 927949 2140045 976195 971063 2230918 2316968\n 4096 8 1274016 1549857 3767675 4476482 4149921 \n1761711 3536549 1787929 3627654 1847535 1813981 3552638 4034911\n 4096 16 1862759 3279348 6151473 6782940 6216019 \n3227596 5792648 3150647 5627567 3298235 3245277 5120972 5490871\n 4096 32 2550150 5753847 8514134 9989424 9636422 \n5784846 8865630 5934722 8205092 5744227 5596402 6693092 7447471\n 4096 64 3190434 8463799 13890515 1552202815287223 \n849728913434307 9503161 13790167 8660056 8426437 8426437 9593374\n 4096 128 718580711378861 15219509 \n17880690176419881140908714787213 13702178 14941541 11470025 11539362 \n9019223 10065509\n 4096 256 709968911164417 11800935 \n13935585146235831160170315273632 13046624 18456987 10950922 10867793 \n8660056 9729195\n 4096 512 680443210833527 16380411 \n17880690179554421143947514673544 12919082 18536646 11508442 10840363 \n7969117 9461292\n 4096 1024 653275210580003 14636041 \n16380411169292201190726416195114 12997272 19135397 11570449 10950922 \n6941906 8811067\n 4096 2048 618914610444923 15179168 \n16846218170807031190726411157166 12151506 16650294 11539362 11222762 \n5627567 7715028\n 4096 4096 7004170 8447153 12803544 \n13834587138345871109949910978915 10806269 11193514 10476771 9729195 \n2104652 5672159\n 8192 4 787763 968540 2323361 2395440 2126115 \n925444 2085592 925544 1259160 881045 928897 2158708 2212217\n 8192 8 1292360 1831854 3938618 4190311 3837016 \n1800843 3484475 1807949 3468296 1820498 1823687 3511181 3795898\n 8192 16 1924397 3307062 5769116 6460047 6085887 \n3219976 5526912 3279287 5378957 3339526 3301977 4931921 5299316\n 8192 32 2604627 5584401 8167655 9414128 9224572 \n5610847 7985426 5958192 7612143 5855636 4703073 6253130 6919267\n 8192 64 3186234 8393104 10721317 1272203212508980 \n861404110450659 9558162 11719650 8876637 8150218 8007759 8725605\n 8192 128 734683810751510 11503841 \n13318731136575481158529611683783 13539152 13231539 11719650 10836280 \n7712960 8409538\n 8192 256 723697210967720 11192805 \n12840894135231661177184812302946 13344594 12879400 11569692 10734715 \n7379976 8273861\n 8192 512 691926710849967 11314434 \n13323895141235021158529612082308 13040710 18706245 11378128 10569606 \n7148143 8257953\n 8192 1024 7809376 9991764 11751717 \n13772510146037291170368210724663 13085407 18405631 11359320 9742452 \n6466126 7877411\n 8192 2048 6621897 9332306 11906462 \n13517846139288381168775811523131 13277557 17731260 10925870 9254386 \n5610847 7255309\n 8192 4096 6532510 9163071 11344319 \n119437111183673311236730 9879717 11329356 12298542 10100441 9052027 \n2905979 4976927\n 8192 8192 5713476 7295362 9635897 9834473 9823226 \n8752276 9329772 8835550 8952964 8505294 7868391 2036276 4699856\n 16384 4 793602 976339 2332889 2359401 2083683 \n942152 2068691 932702 2025521 958861 970616 2174129 2191253\n 16384 8 1316621 1848798 3948077 4064604 3661171 \n1769350 2598108 1710213 2951511 1761143 1782199 3517256 3618375\n 16384 16 1972328 3152008 5724330 6010201 5645791 \n3092991 
5383933 3236628 5321396 3254717 3246874 4906849 5104015\n 16384 32 2640841 5582045 7907146 8449664 8187902 \n5510869 7568386 5893706 7447000 5665340 5476176 6232038 6524947\n 16384 64 2665838 8274664 8972610 8972610 9036322 \n7055528 9597952 9431966 9746329 8095305 8019726 7113960 7617042\n 16384 128 771365711046716 10771405 \n11610361117372771139285310596998 13711520 11299189 11594690 10997217 \n7105134 7022362\n 16384 256 782788410806978 10694297 \n11432656115790601113802910800184 13341500 11604480 11474654 10878833 \n7013761 7417260\n 16384 512 782788410815482 11276939 \n11995428123467251135332611123606 13243794 11932938 11238211 10401316 \n7086815 7550922\n 16384 1024 782788410793399 11545987 \n12109573123556051133086211076987 13150025 19618409 11345828 9876601 \n6714948 7627187\n 16384 2048 770241710652851 11803808 \n12199866120649261098842511306626 13419661 19031618 11062721 10131423 \n6798654 7223891\n 16384 4096 7016626 9224331 10577425 10679339 9372790 \n9683156 9959624 11653678 13708785 10583941 9224331 3717820 5677040\n 16384 8192 6101994 8316724 9282896 9454025 9399713 \n8914412 8646345 9092516 9581893 8784505 7824319 2855363 4753105\n 16384 16384 5559465 7136122 8550595 8550595 8546342 \n7120594 8368375 7034583 8507196 7240636 7211761 2019925 4372634\n 32768 64 243907 165911 9274630 9861582 9280267 \n208814 9375223 9365002 9410529 186161 131414 6936524 7145325\n 32768 128 163009 181056 9843218 1042562210217137 \n115074 9583767 13523721 10032915 195318 168350 7037755 7212823\n 32768 256 231553 177221 10017558 1052220110543188 \n18010110057878 13023925 10455761 159521 190742 6866864 7063435\n 32768 512 207887 177964 10376035 1087866610941018 \n18631510452580 12849799 10929707 191960 187956 6937925 7183043\n 32768 1024 196158 173361 10691639 1113424411141465 \n18963810529456 13044940 11191362 203682 189378 6892692 7330935\n 32768 2048 237930 226379 10832365 1122609911205962 \n21207610279033 12559754 19129112 216887 159554 6477223 6991215\n 32768 4096 193082 158767 10070406 1041219510340126 \n194637 9402803 11530301 14609034 188553 179426 4703301 5991547\n 32768 8192 217095 193194 8598099 8836923 8856284 \n123392 8308582 8794513 10123069 137619 161559 3499061 4637441\n 32768 16384 217085 168641 8133057 8099029 8155256 \n192300 8056774 7074342 8535623 211167 193018 2688092 4250627\n 65536 64 167194 116265 884415 771563 807690 \n72593 582097 9309120 1168386 130574 103927 1840891 609206\n 65536 128 167245 166455 753365 391203 592647 \n188236 780460 13523592 1175768 173657 135994 386873 491893\n 65536 256 176472 124844 778808 740561 1506751 \n214556 1726405 13075846 1580359 179845 153255 650261 934371\n 65536 512 190142 161068 806635 653568 1771099 \n19837010474010 13133323 10690727 249879 187587 620307 594468\n 65536 1024 224246 204830 3870058 544544 505776 \n166446 652924 13366423 2091792 177375 161484 447076 593080\n 65536 2048 161621 181740 1059648 547429 1389677 \n224177 2081355 13325600 1670514 220996 187846 636748 1005292\n 65536 4096 199632 171528 964230 607314 2108075 \n195447 656213 11079852 15059487 182302 158450 477527 870713\n 65536 8192 161818 171580 975398 1232598 2454906 \n209137 475163 8736876 10549988 158130 203418 533033 864671\n 65536 16384 210449 171927 2208017 1089886 1552475 \n185855 8126998 7225489 8411761 190138 259959 573670 815569\n\nzfs uncompressed:\n\n Auto Mode\n Command line used: iozone -a /datapool/db\n Output is in Kbytes/sec\n Time Resolution = 0.000001 seconds.\n Processor cache size set to 1024 Kbytes.\n Processor cache line size set to 32 
bytes.\n File stride size set to 17 * record size.\n random \nrandom bkwd record stride\n KB reclen write rewrite read reread read \nwrite read rewrite read fwrite frewrite fread freread\n 64 4 321554 926260 2561267 2892445 2467108 \n942521 2006158 998622 2358717 1033216 1013707 2278628 2561267\n 64 8 546928 1562436 4564786 5860307 4274062 \n1780008 2561267 1734015 3738358 1828508 1828508 3363612 4564786\n 64 16 1033216 3057153 7100397 9318832 7100397 \n3022727 5389653 3363612 7100397 3588436 3541098 4274062 6421025\n 64 32 1768283 5283570 10402178 1290201712902017 \n5283570 7100397 5860307 9318832 5735102 5283570 5283570 9006179\n 64 64 2662899 6421025 15972885 2096219115972885 \n8182586 7100397 7100397 9006179 6421025 7100397 4274062 10402178\n 128 4 253025 971174 2660335 2784517 2409592 \n992724 2132084 1009524 2286447 1057236 1024942 2367096 2558895\n 128 8 484625 1939522 4557257 5325799 4407601 \n1778862 3759450 1911894 4407601 2004703 1967960 3867787 4407601\n 128 16 895074 2905056 7476717 8548124 7176872 \n3380677 4934216 3277486 6406138 3199360 3468030 5122535 6406138\n 128 32 1451764 4444086 12842051 1254204311720614 \n5784891 7917784 6045455 9129573 5784891 4717434 6727225 8548124\n 128 64 2770150 7082197 15881078 2164304915881078 \n7582312 9129573 8036304 14200794 7917784 7176872 6406138 9129573\n 128 128 5122535 9795896 18637664 \n216430491863766410779307 9129573 9795896 10567140 9129573 8548124 \n4557257 9795896\n 256 4 390337 917880 2557711 2665657 2345409 \n889002 2131261 894929 2266207 921029 976301 2366082 2371308\n 256 8 729045 1755037 4572895 5022044 4422226 \n1662639 4009406 1766587 4132864 1826695 1729594 3935921 4332998\n 256 16 1368162 3236056 7314033 7965107 7073132 \n3087188 6437081 3316006 7120034 3410808 3454704 5841722 6249745\n 256 32 2305128 5841722 8815202 1222861210651598 \n5217259 8534922 5428265 9868433 5569035 4819184 6554972 8271916\n 256 64 3654598 6719046 16072608 1825914614953435 \n796510710245071 8271916 14353744 7965107 6891544 8024634 10245071\n 256 128 6107548 8534922 18259146 \n19591792170962491065159813454450 11695808 14353744 9868433 10245071 \n7518900 9778562\n 256 256 5217259 8534922 17096249 \n19591792182591461024507112812277 10245071 17096249 10245071 9434868 \n3326279 9868433\n 512 4 530006 988801 2198475 2337255 2016784 \n869143 2073249 850217 2198475 914287 896725 2449212 2486072\n 512 8 951558 1753325 4882800 5019764 4375425 \n1778004 4271001 1789859 4228947 1862832 1766304 4163357 4340054\n 512 16 1585086 3140488 7309196 7988981 7115451 \n3064306 6843354 3325278 6821616 3409755 3158966 5624545 6104175\n 512 32 2474613 6018636 11138071 1214600911138071 \n527889210235583 5822804 10235583 5509113 5951911 7647578 8234036\n 512 64 3284589 8362289 15471149 1706984412797441 \n730919612215097 9304292 13438092 8234036 7871843 5331314 9681823\n 512 128 601863610235583 17630404 \n19037014176304041091169415037801 12499490 18229030 11138071 10856530 \n8955098 10235583\n 512 256 657113210235583 18385093 \n18229030183850931091169414628067 11620224 16543832 10434519 10485468 \n7647578 9814569\n 512 512 6395018 9681823 19557124 \n19557124176304041085653015930214 11138071 12797441 10694336 9638369 \n3259661 10235583\n 1024 4 644811 973296 2694787 2753527 2403712 \n956178 2300703 935151 2311849 956178 994250 2432298 2503178\n 1024 8 1105865 1874871 4694950 5024495 4211554 \n1780809 4195100 1738286 4003490 1802483 1855433 4228138 4304412\n 1024 16 1747480 3220084 7306780 7631350 6830356 \n3301774 6640274 3284101 6744549 3379719 2156318 5917516 6059442\n 
1024 32 2497356 5417427 11614118 1207110311132462 \n591751610557785 5917516 10662628 6025439 5417427 7869040 8262639\n 1024 64 3210456 8457895 15516181 1628079815240881 \n812201414044758 9320560 15743686 8457895 8391792 9402175 9946527\n 1024 128 702014911398360 16531459 \n19020633176889061113246216280798 13142265 18608584 11520658 11132462 \n9569770 10454984\n 1024 256 691837610230845 17985196 \n18608584167899591124909116531459 12071103 18291580 10906310 11132462 \n8391792 9946527\n 1024 512 687408411489839 18291580 \n19276739190206331101822615080341 11936907 18608584 11132462 11278631 \n7472033 10134284\n 1024 1024 716059711249091 19719260 \n20471166192767391139836017985196 11489839 15801608 10878686 11398360 \n3447541 10255274\n 2048 4 713876 995610 2639154 2730599 2389540 \n962370 2104960 928861 2115848 981280 958718 2311106 2461436\n 2048 8 1212615 1910655 4385291 4900677 4374126 \n1807720 2933915 1629410 3117032 1685356 1779262 3887189 4196751\n 2048 16 1865428 3195891 6802261 7643611 6759439 \n3266376 6225466 3219850 6058611 3379461 3324528 5277002 5920802\n 2048 32 2572756 5937172 10289741 1198363010959265 \n5416763 9349021 5816563 9752360 5507054 5904523 7582884 7903836\n 2048 64 3230749 8398403 15181774 1624402814840791 \n819020514132698 9308497 14226322 8357547 8190205 9567698 9944290\n 2048 128 741915010889797 16915790 \n18480699178286281118764115854271 13301113 18970464 11260973 11502234 \n9189005 10503637\n 2048 256 741915011502234 16274804 \n17977882177917011138032516525279 12979541 18284015 10889797 11502234 \n8940345 9898453\n 2048 512 750339910848538 18600754 \n19486895191395401101548017501701 12882215 18804350 11690076 11001372 \n7529708 9525260\n 2048 1024 752970811380325 18928661 \n20268568198930551144095516525279 12561952 18600754 11642543 11502234 \n7135648 10096235\n 2048 2048 723176710401883 19710468 \n20079056198470921151765714417342 11900619 16274804 11706006 11320334 \n2959183 7754008\n 4096 4 763760 963331 2367414 2620558 2356697 \n951434 2128907 916709 2081448 982953 956147 2203449 2399488\n 4096 8 1286033 1835886 3972402 4708354 4311344 \n1822835 3717133 1794840 3628421 1863567 1820517 3567392 4043458\n 4096 16 1916795 3192212 5867831 7237268 6908408 \n3332786 5802430 3212508 5284803 3357537 3258202 4887857 5761565\n 4096 32 2587405 5475122 8447153 1089536210923071 \n5744227 8276241 5736555 7906767 5902100 5468151 6169144 7558881\n 4096 64 3105091 8324363 10840363 1452467515112405 \n882918014636041 9545402 14838300 8793028 8535284 9503161 9891647\n 4096 128 708504911378861 14889742 \n16715094159396711147002513980948 13126371 15273632 11439475 10833527 \n9287391 9920205\n 4096 256 759900211378861 15635038 \n17806558179554421160170316380411 13340420 18358372 11007051 11222762 \n8173861 9202815\n 4096 512 751589211378861 16996213 \n18030821182997071119351417356809 13086376 19050521 11809047 10985935 \n7742845 9100443\n 4096 1024 770119411409087 17974227 \n18718416188622761134879416380411 12841826 17974227 11252164 10394367 \n6758923 8569343\n 4096 2048 671401810580003 16396044 \n17516078176419881157044913879294 12919082 14889742 10752164 11007051 \n5572803 7656576\n 4096 4096 6575257 8865630 14026607 \n14026607137460321088845610089154 10586522 9989424 10745439 9119767 \n2018832 4718699\n 8192 4 781243 955557 2312727 2346687 2123487 \n929198 2095130 901906 2036276 977661 978831 2167833 2235244\n 8192 8 1299742 1807949 3942233 4231596 3846036 \n1801598 3632566 1798110 3532479 1825334 1842562 3541946 3754421\n 8192 16 1923858 3259997 5843686 6475875 6122759 
\n3211548 5421393 3234221 5301769 3292800 3275536 4815791 5312425\n 8192 32 2603837 3044239 7494262 8874344 8876637 \n5305863 7287625 5732541 7223279 5637544 5316535 6277120 6930432\n 8192 64 3148282 8315913 9857043 1215067113170677 \n862485210263360 9742452 11585296 8779111 8224350 7187018 8461310\n 8192 128 766307411329356 10384332 \n12298542127409021170368212320592 13587338 13744963 11857157 10939784 \n6643663 7262978\n 8192 256 765624410995799 10569606 \n13951461142701451188998210778492 13386185 12722032 11207409 11013422 \n7230880 7899143\n 8192 512 762735111131163 11604860 \n13744963143416211182451312280959 13105371 18870623 10088578 10088578 \n7155586 8334066\n 8192 1024 766478411024023 11993741 \n14025495152791051142732212211126 13298112 19886343 11363077 9954134 \n6616796 8159896\n 8192 2048 771469110721317 11788003 \n13449061139571281147311111840812 13344594 17767937 11178240 10678002 \n6429825 7313997\n 8192 4096 6434642 8549738 11538609 1168775810939784 \n989394210139188 9905351 11329356 9638600 8436380 2913371 4902367\n 8192 8192 5669170 7155586 8867473 9649428 9798016 \n8846925 9309549 8752276 9279379 8309879 7118525 1969738 4413692\n 16384 4 792495 984268 2316766 2361104 2081852 \n925853 1630092 906950 1783772 935317 938421 2179854 2200374\n 16384 8 1326302 1838411 3962191 3989333 3671146 \n1778371 3602441 1829211 3431888 1831210 1831649 3523568 3599234\n 16384 16 1933700 3307986 5756940 6045627 5655548 \n3196878 5331717 3297985 5350398 3271451 3194946 4825871 4990229\n 16384 32 2567627 3320614 7621266 8311694 7710195 \n5091536 7578401 5677040 7522817 5387732 5324694 6136321 6392030\n 16384 64 3104729 8221207 9972633 1086335510659461 \n8195714 9526105 9552590 10045523 8266700 8098167 7160661 7443774\n 16384 128 776859410967380 10530420 \n11425053116201781129176310482232 13472278 11425053 11457436 10842786 \n7317740 7666328\n 16384 256 770673710793399 10592098 \n11362712115966461116698810715976 13254011 11522755 11229029 10596998 \n6966125 7337273\n 16384 512 776157410646250 10815482 \n11482323117533361121436910960383 13097393 11221694 10871948 10930746 \n7089009 7386962\n 16384 1024 781275510757915 11466995 \n11811923120374521090818910851347 12850032 19668946 11368351 9787975 \n6820923 7540151\n 16384 2048 780565510715976 11846540 \n12163157120649261095165010101637 12440603 17730371 9971186 10357421 \n6728097 7173368\n 16384 4096 6445388 9377906 10442410 \n11260308111308131096563010020621 11695328 12073405 8240925 8414486 \n3670166 5107050\n 16384 8192 5608926 7089740 8775530 9335863 9508969 \n9047029 8537847 8229083 9372790 8705493 7301412 2804440 4768607\n 16384 16384 5589764 6913563 8752060 8760986 8728714 \n7053355 8507196 7347471 8175239 6901759 7167383 1996103 4312539\n 32768 64 230915 200119 9184145 9700108 9122575 \n214993 9181077 9335740 9435078 241375 197798 6866521 7176666\n 32768 128 155408 210286 9555781 10088146 9909219 \n218021 9651743 13380213 10119343 215878 188418 7124952 7314548\n 32768 256 245438 211358 10005161 1038622910502100 \n20165610076313 12936885 10259083 216026 194716 6795563 7019782\n 32768 512 224770 172055 10275190 1086404710785611 \n17750310502100 12881111 10864906 236196 189975 6934774 7189054\n 32768 1024 224670 244041 10822130 1123343911210532 \n21732110673373 12941757 11183167 132705 249347 6802626 7291653\n 32768 2048 279628 277185 11028815 1135408111222432 \n21980110828951 13097150 18213953 182474 246020 6290744 7106532\n 32768 4096 223770 189444 9589786 984956710024133 \n180631 9245930 9640234 12449399 126386 217381 4209100 5310869\n 
32768 8192 289548 212970 8714770 8977184 8963133 \n192414 8001892 8984813 10246845 179510 205728 3512116 4721559\n 32768 16384 259355 207628 8124883 8245278 8228987 \n197801 8139318 7117573 8648416 189515 263243 2684574 4287356\n 65536 64 238936 129767 2090408 525987 826151 \n90448 677164 9394385 1025262 116969 121666 1137896 1044564\n 65536 128 191346 154689 976531 790086 1150584 \n184619 426549 13396389 1152161 160297 182615 631276 781643\n 65536 256 117481 130199 853377 817880 1524796 \n136433 9955086 13098901 482389 124845 133903 471437 893848\n 65536 512 172186 115005 928350 730833 1262443 \n182679 204828 13075846 2076276 132742 112534 420170 816657\n 65536 1024 131621 122795 1853653 1544641 781978 \n112777 316975 13207787 1555603 121322 161941 465593 874324\n 65536 2048 115612 154134 977858 1074909 1142198 \n128591 485873 13086429 2637786 137100 106794 589607 839053\n 65536 4096 108742 93844 538163 521837 904195 \n127249 512677 9472079 12253935 126808 64501 379450 910309\n 65536 8192 114983 124150 633228 934308 1544459 \n134346 400188 8122916 10597577 139909 114255 483149 1073549\n 65536 16384 118519 122137 1466259 758280 1260452 \n123904 520225 7203336 8145301 132301 113137 673200 579078\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 08 Oct 2015 14:09:58 +0200", "msg_from": "Bram Van Steenlandt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n> On 08 Oct 2015, at 13:50, Bram Van Steenlandt <[email protected]> wrote:\n>>> 1. The part is \"fobj = lobject(db.db,0,\"r\",0,fpath)\", I don't think there is anything there\n\nRe: lobject\n\nhttp://initd.org/psycopg/docs/usage.html#large-objects\n\n\"Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions.”\n\nSee *\n\nlobject seems to default to string handling in Python\nThat’s going to be slow.\nTry using lo_import / export?\n\nGraeme Bell\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 12:10:38 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large object write performance" }, { "msg_contents": "Op 08-10-15 om 14:10 schreef Graeme B. Bell:\n>> On 08 Oct 2015, at 13:50, Bram Van Steenlandt <[email protected]> wrote:\n>>>> 1. 
The part is \"fobj = lobject(db.db,0,\"r\",0,fpath)\", I don't think there is anything there\n> Re: lobject\n>\n> http://initd.org/psycopg/docs/usage.html#large-objects\n>\n> \"Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions.�\n>\n> See *\nI was under the impression they meant that the lobject was using \nlo_import and lo_export.\nI can't seem to find how to use lo_import en export, I searched google \nand came to the conclusion the lobject was the way to go.\n >>> x.lo_import()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'psycopg2._psycopg.connection' object has no attribute \n'lo_import'\n\n >>> from psycopg2.extensions import lo_importTraceback (most recent \ncall last):\n File \"<stdin>\", line 1, in <module>\nImportError: cannot import name lo_import\n\nAlso:\nhttp://initd.org/psycopg/docs/connection.html\n\n|lobject|([/oid/[, /mode/[, /new_oid/[, /new_file/[, /lobject_factory/]]]]])\n\n Return a new database large object as a |lobject|\n <http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.lobject>\n instance.\n\n See Access to PostgreSQL large objects\n <http://initd.org/psycopg/docs/usage.html#large-objects> for an\n overview.\n\n Parameters: \t\n\n * *oid* � The OID of the object to read or write. 0 to create a\n new large object and and have its OID assigned automatically.\n * *mode* � Access mode to the object, see below.\n * *new_oid* � Create a new object using the specified OID. The\n function raises |OperationalError|\n <http://initd.org/psycopg/docs/module.html#psycopg2.OperationalError>\n if the OID is already in use. Default is 0, meaning assign a new\n one automatically.\n * *new_file* � The name of a file to be imported in the the\n database (using the |lo_import()|\n <http://www.postgresql.org/docs/current/static/lo-interfaces.html#LO-IMPORT>\n function)\n * *lobject_factory* � Subclass of |lobject|\n <http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.lobject>\n to be instantiated.\n\n\n>\n> lobject seems to default to string handling in Python\n> That�s going to be slow.\n> Try using lo_import / export?\n>\n> Graeme Bell\n>\n\n\n\n\n\n\n\n\n\nOp 08-10-15 om 14:10 schreef Graeme B.\n Bell:\n\n\n\n\n\nOn 08 Oct 2015, at 13:50, Bram Van Steenlandt <[email protected]> wrote:\n\n\n\n1. 
The part is \"fobj = lobject(db.db,0,\"r\",0,fpath)\", I don't think there is anything there\n\n\n\n\n\nRe: lobject\n\nhttp://initd.org/psycopg/docs/usage.html#large-objects\n\n\"Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions.�\n\nSee *\n\n I was under the impression they meant that the lobject was using\n lo_import and lo_export.\n I can't seem to find how to use lo_import en export, I searched\n google and came to the conclusion the lobject was the way to go.\n >>> x.lo_import()\n Traceback (most recent call last):\n � File \"<stdin>\", line 1, in <module>\n AttributeError: 'psycopg2._psycopg.connection' object has no\n attribute 'lo_import'\n\n >>> from psycopg2.extensions import lo_importTraceback\n (most recent call last):\n � File \"<stdin>\", line 1, in <module>\n ImportError: cannot import name lo_import\n\n Also:\nhttp://initd.org/psycopg/docs/connection.html\n\n\nlobject([oid[,\n mode[, new_oid[, new_file[, lobject_factory]]]]])\n\nReturn a new database large object as a lobject\n instance.\nSee Access\n to PostgreSQL large objects for an overview.\n\n\n\n\n\nParameters:\n\n\noid � The OID of the object to\n read or write. 0 to create\n a new large object and and have its OID assigned\n automatically.\nmode � Access mode to the object,\n see below.\nnew_oid � Create a new object\n using the specified OID. The\n function raises OperationalError\n if the OID is already\n in use. Default is 0, meaning assign a new one\n automatically.\nnew_file � The name of a file to\n be imported in the the database\n (using the lo_import()\n function)\nlobject_factory � Subclass of\n lobject to be\n instantiated.\n\n\n\n\n\n\n\n\n\n\n\nlobject seems to default to string handling in Python\nThat�s going to be slow.\nTry using lo_import / export?\n\nGraeme Bell", "msg_date": "Thu, 08 Oct 2015 14:29:28 +0200", "msg_from": "Bram Van Steenlandt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n>> \n>> http://initd.org/psycopg/docs/usage.html#large-objects\n>> \n>> \n>> \"Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions.”\n>> \n>> See *\n>> \n> I was under the impression they meant that the lobject was using lo_import and lo_export.\n> I can't seem to find how to use lo_import en export, I searched google and came to the conclusion the lobject was the way to go.\n> >>> x.lo_import()\n> Traceback (most recent call last):\n> File \"<stdin>\", line 1, in <module>\n> AttributeError: 'psycopg2._psyco\n\nBram,\n\nI recommend posting this as a question on a python/psycopg mailing list, for advice.\nYou are probably not the first person to encounter it.\n\nGraeme Bell\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 8 Oct 2015 13:10:00 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large object write performance" }, { "msg_contents": "\n\nOp 08-10-15 om 15:10 schreef Graeme B. 
Bell:\n>>> http://initd.org/psycopg/docs/usage.html#large-objects\n>>>\n>>>\n>>> \"Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions.\"\n>>>\n>>> See *\n>>>\n>> I was under the impression they meant that the lobject was using lo_import and lo_export.\n>> I can't seem to find how to use lo_import and lo_export, I searched google and came to the conclusion the lobject was the way to go.\n>>>>> x.lo_import()\n>> Traceback (most recent call last):\n>> File \"<stdin>\", line 1, in <module>\n>> AttributeError: 'psycopg2._psyco\n> Bram,\n>\n> I recommend posting this as a question on a python/psycopg mailing list, for advice.\n> You are probably not the first person to encounter it.\n>\n> Graeme Bell\n>\n>\n>\nHi,\n\nI tried \\lo_import with psql and it's not faster, the same 5.5 megabytes/sec.\n\nBram\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 08 Oct 2015 16:13:58 +0200", "msg_from": "Bram Van Steenlandt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: large object write performance" } ]
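A footnote to the thread above: the two client-side import paths discussed there are psycopg2's lobject() with the new_file argument (which the documentation quoted in the thread says is backed by lo_import()) and streaming the file manually through lobject.write() in large chunks. A minimal sketch comparing them could look roughly like the following; the connection string, file path and chunk size are illustrative assumptions, not values from the thread. Since \lo_import from psql showed the same ~5.5 MB/s, a test like this mainly confirms whether the remaining bottleneck is client overhead or server-side I/O.

import os
import time
import psycopg2

conn = psycopg2.connect("dbname=backupdb")  # assumed DSN, not from the thread

def import_via_lo_import(path):
    # new_file tells psycopg2 to create the large object via lo_import()
    lobj = conn.lobject(0, "r", 0, path)
    conn.commit()
    return lobj.oid

def import_via_chunked_write(path, chunk_size=1024 * 1024):
    # stream the file in 1 MB chunks to reduce per-call round trips
    lobj = conn.lobject(0, "wb")
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            lobj.write(chunk)
    lobj.close()
    conn.commit()
    return lobj.oid

for fn in (import_via_lo_import, import_via_chunked_write):
    start = time.time()
    oid = fn("/tmp/bigfile.bin")  # assumed test file
    mb = os.path.getsize("/tmp/bigfile.bin") / (1024.0 * 1024.0)
    print("%s: oid %s, %.1f MB/s" % (fn.__name__, oid, mb / (time.time() - start)))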
[ { "msg_contents": "Hail there,\n\nShort question:\nWhy would pg optimizer choose a worst (slower) query plan for a\nquery with 'LIMIT 1' instead of, say, 'LIMIT 3'?\n\nComplete scenario:\nQuery: 'SELECT * FROM a WHERE a.b_id = 42 ORDER BY created LIMIT 1'\n- b_id is a FK to b;\n- created is a datetime with the time of the creation of the row;\n- both 'b' and 'created' are indexed separately\n\nThis query, with the LIMIT 1, uses the index on created, which is much\nslower (10x) than if it used the index on b_id\n\nIf I change the LIMIT from 1 to 3 pg starts using the index on b_id.\n\nAlready tried running REINDEX and VACUUM ANALYZE on both A and B.\nNothing changed.\n\nWhy does this happen?\nIs there any way I can hint/force the optimizer to use b_id index?\n\nThanks\n\n-- \nMarcio Ribeiro\n\nHail there,Short question:Why would pg optimizer choose a worst (slower) query plan for aquery with 'LIMIT 1' instead of, say, 'LIMIT 3'?Complete scenario:Query: 'SELECT * FROM a WHERE a.b_id = 42 ORDER BY created LIMIT 1'- b_id is a FK to b;- created is a datetime with the time of the creation of the row;- both 'b' and 'created' are indexed separatelyThis query, with the LIMIT 1, uses the index on created, which is muchslower (10x) than if it used the index on b_idIf I change the LIMIT from 1 to 3 pg starts using the index on b_id.Already tried running REINDEX and VACUUM ANALYZE on both A and B.Nothing changed.Why does this happen?Is there any way I can hint/force the optimizer to use b_id index?Thanks-- Marcio Ribeiro", "msg_date": "Sat, 10 Oct 2015 05:52:35 -0300", "msg_from": "Marcio Ribeiro <[email protected]>", "msg_from_op": true, "msg_subject": "LIMIT 1 poor query plan" }, { "msg_contents": "Marcio Ribeiro <[email protected]> writes:\n> Short question:\n> Why would pg optimizer choose a worst (slower) query plan for a\n> query with 'LIMIT 1' instead of, say, 'LIMIT 3'?\n\n> Complete scenario:\n> Query: 'SELECT * FROM a WHERE a.b_id = 42 ORDER BY created LIMIT 1'\n> - b_id is a FK to b;\n> - created is a datetime with the time of the creation of the row;\n> - both 'b' and 'created' are indexed separately\n\n> This query, with the LIMIT 1, uses the index on created, which is much\n> slower (10x) than if it used the index on b_id\n\nIt's trying to avoid a sort; or to be less anthropomorphic, the estimated\ncost of scanning the \"created\" index until it hits the first row with\nb_id=42 is less than the estimated cost of collecting all the rows with\nb_id=42 and then sorting them by \"created\". 
The estimates unfortunately\nare kind of shaky because it's hard to predict how many rows will get\nskipped before finding one with b_id=42.\n\nIf you do this type of query often enough to care about its performance,\nyou could consider creating a two-column index on (b_id, created)\n(in that order).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 10 Oct 2015 10:45:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT 1 poor query plan" }, { "msg_contents": "Yes, the composite index nailed it.\n\nThanks mate :)\n\nOn Sat, Oct 10, 2015 at 12:45 PM, Tom Lane <[email protected]> wrote:\n\n> Marcio Ribeiro <[email protected]> writes:\n> > Short question:\n> > Why would pg optimizer choose a worse (slower) query plan for a\n> > query with 'LIMIT 1' instead of, say, 'LIMIT 3'?\n>\n> > Complete scenario:\n> > Query: 'SELECT * FROM a WHERE a.b_id = 42 ORDER BY created LIMIT 1'\n> > - b_id is a FK to b;\n> > - created is a datetime with the time of the creation of the row;\n> > - both 'b' and 'created' are indexed separately\n>\n> > This query, with the LIMIT 1, uses the index on created, which is much\n> > slower (10x) than if it used the index on b_id\n>\n> It's trying to avoid a sort; or to be less anthropomorphic, the estimated\n> cost of scanning the \"created\" index until it hits the first row with\n> b_id=42 is less than the estimated cost of collecting all the rows with\n> b_id=42 and then sorting them by \"created\". The estimates unfortunately\n> are kind of shaky because it's hard to predict how many rows will get\n> skipped before finding one with b_id=42.\n>\n> If you do this type of query often enough to care about its performance,\n> you could consider creating a two-column index on (b_id, created)\n> (in that order).\n>\n> regards, tom lane\n>\n\n\n\n-- \nMarcio Ribeiro\n", "msg_date": "Sat, 10 Oct 2015 14:45:20 -0300", "msg_from": "Marcio Ribeiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT 1 poor query plan" } ]
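For reference, the two-column index Tom Lane recommends in the thread above can be sketched in a few lines of SQL. The table and column names (a, b_id, created) come straight from the thread; the index name is an arbitrary illustration, and the EXPLAIN is only there to verify the new plan.

CREATE INDEX a_b_id_created_idx ON a (b_id, created);

-- With (b_id, created), the planner can apply the b_id = 42 condition and
-- read rows already ordered by created, so LIMIT 1 stops after the first
-- matching index entry instead of walking the whole created index.
EXPLAIN ANALYZE
SELECT * FROM a WHERE a.b_id = 42 ORDER BY created LIMIT 1;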
[ { "msg_contents": "Hi there,\n\nIf it's possible, I would really appreciate any hints or help on an issue\nI've been facing lately.\nI'm running two instance of Postgres locally: 9.4.4 (operational db) and\n9.5beta1 (analytical db). I've already imported schema to analytical db and\nwhile doing the following query I find very different query plans being\nexecuted:\n\nQuery:\n\nEXPLAIN ANALYZE VERBOSE SELECT\n o.id AS id,\n o.company_id AS company_id,\n o.created_at::date AS created_at,\n COALESCE(o.assignee_id, 0) AS assignee_id,\n (o.tax_treatment)::text AS tax_treatment,\n COALESCE(o.tax_override, 0) AS tax_override,\n COALESCE(o.stock_location_id, 0) AS stock_location_id,\n COALESCE(l.label, 'N/A')::text AS stock_location_name,\n COALESCE(sa.country, 'N/A')::text AS shipping_address_country,\n COALESCE(o.tags, ARRAY[]::text[]) AS tags\n FROM orders AS o\n INNER JOIN locations AS l ON l.id = o.stock_location_id\n INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id\n WHERE o.account_id = <some_value> AND l.account_id = <another_value>\nLIMIT 10;\n\n\nPlan when I run it locally on operational db:\n\nLimit (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397\nrows=10 loops=1)\nOutput: o.id, o.company_id, ((o.created_at)::date),\n(COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n(COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)),\n((COALESCE(l.label, 'N/A'::character varying))::text),\n((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags,\n'{}'::character varying[]))\n-> Nested Loop (cost=747.62..811.46 rows=1 width=76) (actual\ntime=28.208..28.395 rows=10 loops=1)\n Output: o.id, o.company_id, (o.created_at)::date,\nCOALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\nCOALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0),\n(COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country,\n'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character\nvarying[])\n -> Nested Loop (cost=747.19..807.15 rows=1 width=73) (actual\ntime=28.164..28.211 rows=10 loops=1)\n Output: o.id, o.company_id, o.created_at, o.assignee_id,\no.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\no.shipping_address_id, l.label\n -> Index Scan using index_locations_on_account_id on\npublic.locations l (cost=0.29..8.31 rows=1 width=20) (actual\ntime=0.025..0.025 rows=1 loops=1)\n Output: l.id, l.address1, l.address2, l.city, l.country,\nl.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, l.longitude,\nl.created_at, l.updated_at, l.account_id, l.holds_stock\n Index Cond: (l.account_id = 18799)\n -> Bitmap Heap Scan on public.orders o (cost=746.90..798.71\nrows=13 width=57) (actual time=28.133..28.176 rows=10 loops=1)\n Output: o.id, o.account_id, o.company_id, o.status,\no.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\no.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\no.shipping_address_id, o.payment_status, o.email, o.fulfillment_status,\no.phone_number, o.assignee_id, o.tax_treatment, o.tax_override,\no.tax_label_override, o.stock_location_id, o.currency_id, o.source,\no.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search,\no.default_price_list_id, o.contact_id, o.return_status, o.tags,\no.packed_status, o.returning_status, o.shippability_status,\no.backordering_status\n Recheck Cond: ((o.stock_location_id = l.id) AND\n(o.account_id = 18799))\n Heap Blocks: exact=7\n -> BitmapAnd (cost=746.90..746.90 rows=13 
width=0)\n(actual time=23.134..23.134 rows=0 loops=1)\n -> Bitmap Index Scan on\nindex_orders_on_stock_location_id_manual (cost=0.00..18.02 rows=745\nwidth=0) (actual time=9.282..9.282 rows=40317 loops=1)\n Index Cond: (o.stock_location_id = l.id)\n -> Bitmap Index Scan on index_orders_on_account_id\n (cost=0.00..718.94 rows=38735 width=0) (actual time=9.856..9.856\nrows=40317 loops=1)\n Index Cond: (o.account_id = 18799)\n -> Index Scan using addresses_pkey on public.addresses sa\n (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1\nloops=10)\n Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country,\nsa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label,\nsa.status, sa.address2, sa.phone_number, sa.email, sa.company_name,\nsa.latitude, sa.longitude, sa.first_name, sa.last_name\n Index Cond: (sa.id = o.shipping_address_id)\n Planning time: 1.136 ms\n Execution time: 28.621 ms\n(23 rows)\n\nPlan when I run it from analytical db via FDW:\n\nLimit (cost=300.00..339.95 rows=1 width=1620) (actual\ntime=7630.240..82368.326 rows=10 loops=1)\n Output: o.id, o.company_id, ((o.created_at)::date),\n(COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n(COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id,\n0)), ((COALESCE(l.label, 'N/A'::character varying))::\ntext), ((COALESCE(sa.country, 'N/A'::character varying))::text),\n(COALESCE(o.tags, '{}'::character varying[]))\n -> Nested Loop (cost=300.00..339.95 rows=1 width=1620) (actual\ntime=7630.238..82368.314 rows=10 loops=1)\n Output: o.id, o.company_id, (o.created_at)::date,\nCOALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\nCOALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0),\n(COALESCE(l.label, 'N/A'::character varying))::text,\n (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags,\n'{}'::character varying[])\n Join Filter: (o.shipping_address_id = sa.id)\n Rows Removed by Join Filter: 19227526\n -> Nested Loop (cost=200.00..223.58 rows=1 width=1108) (actual\ntime=69.758..69.812 rows=10 loops=1)\n Output: o.id, o.company_id, o.created_at, o.assignee_id,\no.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\no.shipping_address_id, l.label\n Join Filter: (o.stock_location_id = l.id)\n Rows Removed by Join Filter: 18\n -> Foreign Scan on remote.orders o (cost=100.00..111.67\nrows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)\n Output: o.id, o.account_id, o.company_id, o.status,\no.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\no.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\no.shipping_address\n_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number,\no.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override,\no.stock_location_id, o.currency_id, o.source, o.source_url, o.demo,\no.invoice_status, o.ship_at, o\n.source_id, o.search, o.default_price_list_id, o.contact_id,\no.return_status, o.tags, o.packed_status, o.returning_status,\no.shippability_status, o.backordering_status\n Remote SQL: SELECT id, company_id, created_at,\nshipping_address_id, assignee_id, tax_treatment, tax_override,\nstock_location_id, tags FROM public.orders WHERE ((account_id = 18799))\n -> Foreign Scan on remote.locations l (cost=100.00..111.90\nrows=1 width=520) (actual time=0.174..0.174 rows=3 loops=10)\n Output: l.id, l.address1, l.address2, l.city,\nl.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\nl.longitude, l.created_at, l.updated_at, 
l.account_id, l.holds_stock\n                     Remote SQL: SELECT id, label FROM public.locations\nWHERE ((account_id = 18799))\n         ->  Foreign Scan on remote.addresses sa  (cost=100.00..114.50\nrows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)\n               Output: sa.id, sa.company_id, sa.address1, sa.city,\nsa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\nsa.label, sa.status, sa.address2, sa.phone_number, sa.email,\nsa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name\n               Remote SQL: SELECT id, country FROM public.addresses\n Planning time: 0.209 ms\n Execution time: 82391.610 ms\n(21 rows)\n\nTime: 82393.211 ms\n\nWhat am I doing wrong? I would really appreciate any guidance. Thank you\nvery much for taking the time to help me with this.\n\nBest Regards,\nMohammad\n
(actual time=28.133..28.176 rows=10 loops=1)                  Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                  Recheck Cond: ((o.stock_location_id = l.id) AND (o.account_id = 18799))                  Heap Blocks: exact=7                  ->  BitmapAnd  (cost=746.90..746.90 rows=13 width=0) (actual time=23.134..23.134 rows=0 loops=1)                        ->  Bitmap Index Scan on index_orders_on_stock_location_id_manual  (cost=0.00..18.02 rows=745 width=0) (actual time=9.282..9.282 rows=40317 loops=1)                              Index Cond: (o.stock_location_id = l.id)                        ->  Bitmap Index Scan on index_orders_on_account_id  (cost=0.00..718.94 rows=38735 width=0) (actual time=9.856..9.856 rows=40317 loops=1)                              Index Cond: (o.account_id = 18799)      ->  Index Scan using addresses_pkey on public.addresses sa  (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1 loops=10)            Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name            Index Cond: (sa.id = o.shipping_address_id) Planning time: 1.136 ms Execution time: 28.621 ms(23 rows)Plan when I run it from analytical db via FDW:Limit  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.240..82368.326 rows=10 loops=1)   Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))   ->  Nested Loop  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.238..82368.314 rows=10 loops=1)         Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])         Join Filter: (o.shipping_address_id = sa.id)         Rows Removed by Join Filter: 19227526         ->  Nested Loop  (cost=200.00..223.58 rows=1 width=1108) (actual time=69.758..69.812 rows=10 loops=1)               Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label               Join Filter: (o.stock_location_id = l.id)               Rows Removed by Join Filter: 18               ->  Foreign Scan on remote.orders o  (cost=100.00..111.67 rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)                     Output: o.id, o.account_id, o.company_id, o.status, 
o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                     Remote SQL: SELECT id, company_id, created_at, shipping_address_id, assignee_id, tax_treatment, tax_override, stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))               ->  Foreign Scan on remote.locations l  (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3 loops=10)                     Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                     Remote SQL: SELECT id, label FROM public.locations WHERE ((account_id = 18799))         ->  Foreign Scan on remote.addresses sa  (cost=100.00..114.50 rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)               Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name               Remote SQL: SELECT id, country FROM public.addresses Planning time: 0.209 ms Execution time: 82391.610 ms(21 rows)Time: 82393.211 msWhat am I doing wrong ? really appreciate any guidance possible. Thank you very much for taking the time to helping me with this.Best Regards,Mohammad", "msg_date": "Sun, 11 Oct 2015 16:05:38 +0800", "msg_from": "Mohammad Habbab <[email protected]>", "msg_from_op": true, "msg_subject": "3000x Slower query when using Foreign Data Wrapper vs. local" }, { "msg_contents": "Hi Mohammad,\n I think it's not enable\n\"use_remote_estimate\" during the creation of the foreign table\n\nhttp://www.postgresql.org/docs/9.4/static/postgres-fdw.html\n\nuse_remote_estimate\n\nThis option, which can be specified for a foreign table or a foreign\nserver, controls whether postgres_fdw issues remote EXPLAIN commands to\nobtain cost estimates. A setting for a foreign table overrides any setting\nfor its server, but only for that table. The default is false.\n\n\ntry it\n\n\nBye\n\n\n2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:\n\n> Hi there,\n>\n> If it's possible, I would really appreciate any hints or help on an issue\n> I've been facing lately.\n> I'm running two instance of Postgres locally: 9.4.4 (operational db) and\n> 9.5beta1 (analytical db). 
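(As a minimal sketch of the use_remote_estimate advice above, not taken from the original thread: the server name app_db is illustrative, while remote.orders is one of the foreign tables shown in the plans.)

    -- enable remote EXPLAIN-based costing for every foreign table on this server
    ALTER SERVER app_db OPTIONS (ADD use_remote_estimate 'true');

    -- or enable it for a single foreign table only
    ALTER FOREIGN TABLE remote.orders OPTIONS (ADD use_remote_estimate 'true');

(If the option was already set once, use SET instead of ADD.)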
I've already imported schema to analytical db and\n> while doing the following query I find very different query plans being\n> executed:\n>\n> Query:\n>\n> EXPLAIN ANALYZE VERBOSE SELECT\n> o.id AS id,\n> o.company_id AS company_id,\n> o.created_at::date AS created_at,\n> COALESCE(o.assignee_id, 0) AS assignee_id,\n> (o.tax_treatment)::text AS tax_treatment,\n> COALESCE(o.tax_override, 0) AS tax_override,\n> COALESCE(o.stock_location_id, 0) AS stock_location_id,\n> COALESCE(l.label, 'N/A')::text AS stock_location_name,\n> COALESCE(sa.country, 'N/A')::text AS shipping_address_country,\n> COALESCE(o.tags, ARRAY[]::text[]) AS tags\n> FROM orders AS o\n> INNER JOIN locations AS l ON l.id = o.stock_location_id\n> INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id\n> WHERE o.account_id = <some_value> AND l.account_id = <another_value>\n> LIMIT 10;\n>\n>\n> Plan when I run it locally on operational db:\n>\n> Limit (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397\n> rows=10 loops=1)\n> Output: o.id, o.company_id, ((o.created_at)::date),\n> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n> (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)),\n> ((COALESCE(l.label, 'N/A'::character varying))::text),\n> ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags,\n> '{}'::character varying[]))\n> -> Nested Loop (cost=747.62..811.46 rows=1 width=76) (actual\n> time=28.208..28.395 rows=10 loops=1)\n> Output: o.id, o.company_id, (o.created_at)::date,\n> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n> COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0),\n> (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country,\n> 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character\n> varying[])\n> -> Nested Loop (cost=747.19..807.15 rows=1 width=73) (actual\n> time=28.164..28.211 rows=10 loops=1)\n> Output: o.id, o.company_id, o.created_at, o.assignee_id,\n> o.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\n> o.shipping_address_id, l.label\n> -> Index Scan using index_locations_on_account_id on\n> public.locations l (cost=0.29..8.31 rows=1 width=20) (actual\n> time=0.025..0.025 rows=1 loops=1)\n> Output: l.id, l.address1, l.address2, l.city,\n> l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\n> l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock\n> Index Cond: (l.account_id = 18799)\n> -> Bitmap Heap Scan on public.orders o (cost=746.90..798.71\n> rows=13 width=57) (actual time=28.133..28.176 rows=10 loops=1)\n> Output: o.id, o.account_id, o.company_id, o.status,\n> o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\n> o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\n> o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status,\n> o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override,\n> o.tax_label_override, o.stock_location_id, o.currency_id, o.source,\n> o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search,\n> o.default_price_list_id, o.contact_id, o.return_status, o.tags,\n> o.packed_status, o.returning_status, o.shippability_status,\n> o.backordering_status\n> Recheck Cond: ((o.stock_location_id = l.id) AND\n> (o.account_id = 18799))\n> Heap Blocks: exact=7\n> -> BitmapAnd (cost=746.90..746.90 rows=13 width=0)\n> (actual time=23.134..23.134 rows=0 loops=1)\n> -> Bitmap Index Scan on\n> index_orders_on_stock_location_id_manual 
(cost=0.00..18.02 rows=745\n> width=0) (actual time=9.282..9.282 rows=40317 loops=1)\n> Index Cond: (o.stock_location_id = l.id)\n> -> Bitmap Index Scan on\n> index_orders_on_account_id (cost=0.00..718.94 rows=38735 width=0) (actual\n> time=9.856..9.856 rows=40317 loops=1)\n> Index Cond: (o.account_id = 18799)\n> -> Index Scan using addresses_pkey on public.addresses sa\n> (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1\n> loops=10)\n> Output: sa.id, sa.company_id, sa.address1, sa.city,\n> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n> sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name\n> Index Cond: (sa.id = o.shipping_address_id)\n> Planning time: 1.136 ms\n> Execution time: 28.621 ms\n> (23 rows)\n>\n> Plan when I run it from analytical db via FDW:\n>\n> Limit (cost=300.00..339.95 rows=1 width=1620) (actual\n> time=7630.240..82368.326 rows=10 loops=1)\n> Output: o.id, o.company_id, ((o.created_at)::date),\n> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n> (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id,\n> 0)), ((COALESCE(l.label, 'N/A'::character varying))::\n> text), ((COALESCE(sa.country, 'N/A'::character varying))::text),\n> (COALESCE(o.tags, '{}'::character varying[]))\n> -> Nested Loop (cost=300.00..339.95 rows=1 width=1620) (actual\n> time=7630.238..82368.314 rows=10 loops=1)\n> Output: o.id, o.company_id, (o.created_at)::date,\n> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n> COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0),\n> (COALESCE(l.label, 'N/A'::character varying))::text,\n> (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags,\n> '{}'::character varying[])\n> Join Filter: (o.shipping_address_id = sa.id)\n> Rows Removed by Join Filter: 19227526\n> -> Nested Loop (cost=200.00..223.58 rows=1 width=1108) (actual\n> time=69.758..69.812 rows=10 loops=1)\n> Output: o.id, o.company_id, o.created_at, o.assignee_id,\n> o.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\n> o.shipping_address_id, l.label\n> Join Filter: (o.stock_location_id = l.id)\n> Rows Removed by Join Filter: 18\n> -> Foreign Scan on remote.orders o (cost=100.00..111.67\n> rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)\n> Output: o.id, o.account_id, o.company_id, o.status,\n> o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\n> o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\n> o.shipping_address\n> _id, o.payment_status, o.email, o.fulfillment_status, o.phone_number,\n> o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override,\n> o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo,\n> o.invoice_status, o.ship_at, o\n> .source_id, o.search, o.default_price_list_id, o.contact_id,\n> o.return_status, o.tags, o.packed_status, o.returning_status,\n> o.shippability_status, o.backordering_status\n> Remote SQL: SELECT id, company_id, created_at,\n> shipping_address_id, assignee_id, tax_treatment, tax_override,\n> stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))\n> -> Foreign Scan on remote.locations l\n> (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3\n> loops=10)\n> Output: l.id, l.address1, l.address2, l.city,\n> l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\n> l.longitude, l.created_at, l.updated_at, l.account_id, 
l.holds_stock\n> Remote SQL: SELECT id, label FROM public.locations\n> WHERE ((account_id = 18799))\n> -> Foreign Scan on remote.addresses sa (cost=100.00..114.50\n> rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)\n> Output: sa.id, sa.company_id, sa.address1, sa.city,\n> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n> sa.company_name, sa.latitude, sa.l\n> ongitude, sa.first_name, sa.last_name\n> Remote SQL: SELECT id, country FROM public.addresses\n> Planning time: 0.209 ms\n> Execution time: 82391.610 ms\n> (21 rows)\n>\n> Time: 82393.211 ms\n>\n> What am I doing wrong ? really appreciate any guidance possible. Thank you\n> very much for taking the time to helping me with this.\n>\n> Best Regards,\n> Mohammad\n>\n\n\n\n-- \nMatteo Durighetto\n\n- - - - - - - - - - - - - - - - - - - - - - -\n\nItalian PostgreSQL User Group <http://www.itpug.org/index.it.html>\nItalian Community for Geographic Free/Open-Source Software\n<http://www.gfoss.it>\n\nHi Mohammad,                                 I think it's not enable \"use_remote_estimate\" during the creation of the foreign table http://www.postgresql.org/docs/9.4/static/postgres-fdw.htmluse_remote_estimate\nThis option, which can be specified for a foreign\n table or a foreign server, controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost\n estimates. A setting for a foreign table overrides any\n setting for its server, but only for that table. The\n default is false.try itBye2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:Hi there, If it's possible, I would really appreciate any hints or help on an issue I've been facing lately.I'm running two instance of Postgres locally: 9.4.4 (operational db) and 9.5beta1 (analytical db). 
I've already imported schema to analytical db and while doing the following query I find very different query plans being executed:Query:EXPLAIN ANALYZE VERBOSE SELECT    o.id AS id,    o.company_id AS company_id,    o.created_at::date AS created_at,    COALESCE(o.assignee_id, 0) AS assignee_id,    (o.tax_treatment)::text AS tax_treatment,    COALESCE(o.tax_override, 0) AS tax_override,    COALESCE(o.stock_location_id, 0) AS stock_location_id,    COALESCE(l.label, 'N/A')::text AS stock_location_name,    COALESCE(sa.country, 'N/A')::text AS shipping_address_country,    COALESCE(o.tags, ARRAY[]::text[]) AS tags  FROM orders AS o    INNER JOIN locations AS l ON l.id = o.stock_location_id    INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id  WHERE o.account_id = <some_value> AND l.account_id = <another_value> LIMIT 10;Plan when I run it locally on operational db:Limit  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397 rows=10 loops=1)Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))->  Nested Loop  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.395 rows=10 loops=1)      Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])      ->  Nested Loop  (cost=747.19..807.15 rows=1 width=73) (actual time=28.164..28.211 rows=10 loops=1)            Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label            ->  Index Scan using index_locations_on_account_id on public.locations l  (cost=0.29..8.31 rows=1 width=20) (actual time=0.025..0.025 rows=1 loops=1)                  Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                  Index Cond: (l.account_id = 18799)            ->  Bitmap Heap Scan on public.orders o  (cost=746.90..798.71 rows=13 width=57) (actual time=28.133..28.176 rows=10 loops=1)                  Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                  Recheck Cond: ((o.stock_location_id = l.id) AND (o.account_id = 18799))                  Heap Blocks: exact=7                  ->  BitmapAnd  (cost=746.90..746.90 rows=13 width=0) (actual time=23.134..23.134 rows=0 loops=1)                        ->  Bitmap Index Scan on index_orders_on_stock_location_id_manual  (cost=0.00..18.02 
rows=745 width=0) (actual time=9.282..9.282 rows=40317 loops=1)                              Index Cond: (o.stock_location_id = l.id)                        ->  Bitmap Index Scan on index_orders_on_account_id  (cost=0.00..718.94 rows=38735 width=0) (actual time=9.856..9.856 rows=40317 loops=1)                              Index Cond: (o.account_id = 18799)      ->  Index Scan using addresses_pkey on public.addresses sa  (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1 loops=10)            Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name            Index Cond: (sa.id = o.shipping_address_id) Planning time: 1.136 ms Execution time: 28.621 ms(23 rows)Plan when I run it from analytical db via FDW:Limit  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.240..82368.326 rows=10 loops=1)   Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))   ->  Nested Loop  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.238..82368.314 rows=10 loops=1)         Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])         Join Filter: (o.shipping_address_id = sa.id)         Rows Removed by Join Filter: 19227526         ->  Nested Loop  (cost=200.00..223.58 rows=1 width=1108) (actual time=69.758..69.812 rows=10 loops=1)               Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label               Join Filter: (o.stock_location_id = l.id)               Rows Removed by Join Filter: 18               ->  Foreign Scan on remote.orders o  (cost=100.00..111.67 rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)                     Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                     Remote SQL: SELECT id, company_id, created_at, shipping_address_id, assignee_id, tax_treatment, tax_override, stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))               ->  Foreign Scan on remote.locations l  (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3 loops=10)                     Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, 
l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                     Remote SQL: SELECT id, label FROM public.locations WHERE ((account_id = 18799))         ->  Foreign Scan on remote.addresses sa  (cost=100.00..114.50 rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)               Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name               Remote SQL: SELECT id, country FROM public.addresses Planning time: 0.209 ms Execution time: 82391.610 ms(21 rows)Time: 82393.211 msWhat am I doing wrong ? really appreciate any guidance possible. Thank you very much for taking the time to helping me with this.Best Regards,Mohammad\n-- Matteo Durighetto - - - - - - - - - - - - - - - - - - - - - - -Italian PostgreSQL User GroupItalian Community for Geographic Free/Open-Source Software", "msg_date": "Sun, 11 Oct 2015 11:42:51 +0200", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3000x Slower query when using Foreign Data Wrapper vs. local" }, { "msg_contents": "Awesome ! Thank you very much, that solved it :) . But, do you have any\nidea why this isn't enabled by default ?\nAs a first time user for FDW I would assume that usage of remote estimates\nwould be enabled by default because they would be more authoritative and\nmore representative of access patterns. Correct ?\n\nBest Regards,\nMohammad\n\nOn Sun, Oct 11, 2015 at 5:42 PM, desmodemone <[email protected]> wrote:\n\n> Hi Mohammad,\n> I think it's not enable\n> \"use_remote_estimate\" during the creation of the foreign table\n>\n> http://www.postgresql.org/docs/9.4/static/postgres-fdw.html\n>\n> use_remote_estimate\n>\n> This option, which can be specified for a foreign table or a foreign\n> server, controls whether postgres_fdw issues remote EXPLAIN commands to\n> obtain cost estimates. A setting for a foreign table overrides any setting\n> for its server, but only for that table. The default is false.\n>\n>\n> try it\n>\n>\n> Bye\n>\n>\n> 2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:\n>\n>> Hi there,\n>>\n>> If it's possible, I would really appreciate any hints or help on an issue\n>> I've been facing lately.\n>> I'm running two instance of Postgres locally: 9.4.4 (operational db) and\n>> 9.5beta1 (analytical db). 
I've already imported schema to analytical db and\n>> while doing the following query I find very different query plans being\n>> executed:\n>>\n>> Query:\n>>\n>> EXPLAIN ANALYZE VERBOSE SELECT\n>> o.id AS id,\n>> o.company_id AS company_id,\n>> o.created_at::date AS created_at,\n>> COALESCE(o.assignee_id, 0) AS assignee_id,\n>> (o.tax_treatment)::text AS tax_treatment,\n>> COALESCE(o.tax_override, 0) AS tax_override,\n>> COALESCE(o.stock_location_id, 0) AS stock_location_id,\n>> COALESCE(l.label, 'N/A')::text AS stock_location_name,\n>> COALESCE(sa.country, 'N/A')::text AS shipping_address_country,\n>> COALESCE(o.tags, ARRAY[]::text[]) AS tags\n>> FROM orders AS o\n>> INNER JOIN locations AS l ON l.id = o.stock_location_id\n>> INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id\n>> WHERE o.account_id = <some_value> AND l.account_id = <another_value>\n>> LIMIT 10;\n>>\n>>\n>> Plan when I run it locally on operational db:\n>>\n>> Limit (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397\n>> rows=10 loops=1)\n>> Output: o.id, o.company_id, ((o.created_at)::date),\n>> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n>> (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)),\n>> ((COALESCE(l.label, 'N/A'::character varying))::text),\n>> ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags,\n>> '{}'::character varying[]))\n>> -> Nested Loop (cost=747.62..811.46 rows=1 width=76) (actual\n>> time=28.208..28.395 rows=10 loops=1)\n>> Output: o.id, o.company_id, (o.created_at)::date,\n>> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n>> COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0),\n>> (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country,\n>> 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character\n>> varying[])\n>> -> Nested Loop (cost=747.19..807.15 rows=1 width=73) (actual\n>> time=28.164..28.211 rows=10 loops=1)\n>> Output: o.id, o.company_id, o.created_at, o.assignee_id,\n>> o.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\n>> o.shipping_address_id, l.label\n>> -> Index Scan using index_locations_on_account_id on\n>> public.locations l (cost=0.29..8.31 rows=1 width=20) (actual\n>> time=0.025..0.025 rows=1 loops=1)\n>> Output: l.id, l.address1, l.address2, l.city,\n>> l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\n>> l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock\n>> Index Cond: (l.account_id = 18799)\n>> -> Bitmap Heap Scan on public.orders o (cost=746.90..798.71\n>> rows=13 width=57) (actual time=28.133..28.176 rows=10 loops=1)\n>> Output: o.id, o.account_id, o.company_id, o.status,\n>> o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\n>> o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\n>> o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status,\n>> o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override,\n>> o.tax_label_override, o.stock_location_id, o.currency_id, o.source,\n>> o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search,\n>> o.default_price_list_id, o.contact_id, o.return_status, o.tags,\n>> o.packed_status, o.returning_status, o.shippability_status,\n>> o.backordering_status\n>> Recheck Cond: ((o.stock_location_id = l.id) AND\n>> (o.account_id = 18799))\n>> Heap Blocks: exact=7\n>> -> BitmapAnd (cost=746.90..746.90 rows=13 width=0)\n>> (actual time=23.134..23.134 rows=0 loops=1)\n>> -> Bitmap 
Index Scan on\n>> index_orders_on_stock_location_id_manual (cost=0.00..18.02 rows=745\n>> width=0) (actual time=9.282..9.282 rows=40317 loops=1)\n>> Index Cond: (o.stock_location_id = l.id)\n>> -> Bitmap Index Scan on\n>> index_orders_on_account_id (cost=0.00..718.94 rows=38735 width=0) (actual\n>> time=9.856..9.856 rows=40317 loops=1)\n>> Index Cond: (o.account_id = 18799)\n>> -> Index Scan using addresses_pkey on public.addresses sa\n>> (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1\n>> loops=10)\n>> Output: sa.id, sa.company_id, sa.address1, sa.city,\n>> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n>> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n>> sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name\n>> Index Cond: (sa.id = o.shipping_address_id)\n>> Planning time: 1.136 ms\n>> Execution time: 28.621 ms\n>> (23 rows)\n>>\n>> Plan when I run it from analytical db via FDW:\n>>\n>> Limit (cost=300.00..339.95 rows=1 width=1620) (actual\n>> time=7630.240..82368.326 rows=10 loops=1)\n>> Output: o.id, o.company_id, ((o.created_at)::date),\n>> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n>> (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id,\n>> 0)), ((COALESCE(l.label, 'N/A'::character varying))::\n>> text), ((COALESCE(sa.country, 'N/A'::character varying))::text),\n>> (COALESCE(o.tags, '{}'::character varying[]))\n>> -> Nested Loop (cost=300.00..339.95 rows=1 width=1620) (actual\n>> time=7630.238..82368.314 rows=10 loops=1)\n>> Output: o.id, o.company_id, (o.created_at)::date,\n>> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n>> COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0),\n>> (COALESCE(l.label, 'N/A'::character varying))::text,\n>> (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags,\n>> '{}'::character varying[])\n>> Join Filter: (o.shipping_address_id = sa.id)\n>> Rows Removed by Join Filter: 19227526\n>> -> Nested Loop (cost=200.00..223.58 rows=1 width=1108) (actual\n>> time=69.758..69.812 rows=10 loops=1)\n>> Output: o.id, o.company_id, o.created_at, o.assignee_id,\n>> o.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\n>> o.shipping_address_id, l.label\n>> Join Filter: (o.stock_location_id = l.id)\n>> Rows Removed by Join Filter: 18\n>> -> Foreign Scan on remote.orders o (cost=100.00..111.67\n>> rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)\n>> Output: o.id, o.account_id, o.company_id, o.status,\n>> o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\n>> o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\n>> o.shipping_address\n>> _id, o.payment_status, o.email, o.fulfillment_status, o.phone_number,\n>> o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override,\n>> o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo,\n>> o.invoice_status, o.ship_at, o\n>> .source_id, o.search, o.default_price_list_id, o.contact_id,\n>> o.return_status, o.tags, o.packed_status, o.returning_status,\n>> o.shippability_status, o.backordering_status\n>> Remote SQL: SELECT id, company_id, created_at,\n>> shipping_address_id, assignee_id, tax_treatment, tax_override,\n>> stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))\n>> -> Foreign Scan on remote.locations l\n>> (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3\n>> loops=10)\n>> Output: l.id, l.address1, l.address2, l.city,\n>> l.country, 
l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\n>> l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock\n>> Remote SQL: SELECT id, label FROM public.locations\n>> WHERE ((account_id = 18799))\n>> -> Foreign Scan on remote.addresses sa (cost=100.00..114.50\n>> rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)\n>> Output: sa.id, sa.company_id, sa.address1, sa.city,\n>> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n>> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n>> sa.company_name, sa.latitude, sa.l\n>> ongitude, sa.first_name, sa.last_name\n>> Remote SQL: SELECT id, country FROM public.addresses\n>> Planning time: 0.209 ms\n>> Execution time: 82391.610 ms\n>> (21 rows)\n>>\n>> Time: 82393.211 ms\n>>\n>> What am I doing wrong ? really appreciate any guidance possible. Thank\n>> you very much for taking the time to helping me with this.\n>>\n>> Best Regards,\n>> Mohammad\n>>\n>\n>\n>\n> --\n> Matteo Durighetto\n>\n> - - - - - - - - - - - - - - - - - - - - - - -\n>\n> Italian PostgreSQL User Group <http://www.itpug.org/index.it.html>\n> Italian Community for Geographic Free/Open-Source Software\n> <http://www.gfoss.it>\n>\n\n\n\n-- \nMohammad Habbab\nBangsar, KL, Malaysia\nMobile No. +601111582144\nEmail: [email protected]\nLinkedIn: https://www.linkedin.com/in/mohammadhabbab\n\nAwesome ! Thank you very much, that solved it :) . But, do you have any idea why this isn't enabled by default ?As a first time user for FDW I would assume that usage of remote estimates would be enabled by default because they would be more authoritative and more representative of access patterns. Correct ?Best Regards,Mohammad On Sun, Oct 11, 2015 at 5:42 PM, desmodemone <[email protected]> wrote:Hi Mohammad,                                 I think it's not enable \"use_remote_estimate\" during the creation of the foreign table http://www.postgresql.org/docs/9.4/static/postgres-fdw.htmluse_remote_estimate\nThis option, which can be specified for a foreign\n table or a foreign server, controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost\n estimates. A setting for a foreign table overrides any\n setting for its server, but only for that table. The\n default is false.try itBye2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:Hi there, If it's possible, I would really appreciate any hints or help on an issue I've been facing lately.I'm running two instance of Postgres locally: 9.4.4 (operational db) and 9.5beta1 (analytical db). 
I've already imported schema to analytical db and while doing the following query I find very different query plans being executed:Query:EXPLAIN ANALYZE VERBOSE SELECT    o.id AS id,    o.company_id AS company_id,    o.created_at::date AS created_at,    COALESCE(o.assignee_id, 0) AS assignee_id,    (o.tax_treatment)::text AS tax_treatment,    COALESCE(o.tax_override, 0) AS tax_override,    COALESCE(o.stock_location_id, 0) AS stock_location_id,    COALESCE(l.label, 'N/A')::text AS stock_location_name,    COALESCE(sa.country, 'N/A')::text AS shipping_address_country,    COALESCE(o.tags, ARRAY[]::text[]) AS tags  FROM orders AS o    INNER JOIN locations AS l ON l.id = o.stock_location_id    INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id  WHERE o.account_id = <some_value> AND l.account_id = <another_value> LIMIT 10;Plan when I run it locally on operational db:Limit  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397 rows=10 loops=1)Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))->  Nested Loop  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.395 rows=10 loops=1)      Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])      ->  Nested Loop  (cost=747.19..807.15 rows=1 width=73) (actual time=28.164..28.211 rows=10 loops=1)            Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label            ->  Index Scan using index_locations_on_account_id on public.locations l  (cost=0.29..8.31 rows=1 width=20) (actual time=0.025..0.025 rows=1 loops=1)                  Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                  Index Cond: (l.account_id = 18799)            ->  Bitmap Heap Scan on public.orders o  (cost=746.90..798.71 rows=13 width=57) (actual time=28.133..28.176 rows=10 loops=1)                  Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                  Recheck Cond: ((o.stock_location_id = l.id) AND (o.account_id = 18799))                  Heap Blocks: exact=7                  ->  BitmapAnd  (cost=746.90..746.90 rows=13 width=0) (actual time=23.134..23.134 rows=0 loops=1)                        ->  Bitmap Index Scan on index_orders_on_stock_location_id_manual  (cost=0.00..18.02 
rows=745 width=0) (actual time=9.282..9.282 rows=40317 loops=1)                              Index Cond: (o.stock_location_id = l.id)                        ->  Bitmap Index Scan on index_orders_on_account_id  (cost=0.00..718.94 rows=38735 width=0) (actual time=9.856..9.856 rows=40317 loops=1)                              Index Cond: (o.account_id = 18799)      ->  Index Scan using addresses_pkey on public.addresses sa  (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1 loops=10)            Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name            Index Cond: (sa.id = o.shipping_address_id) Planning time: 1.136 ms Execution time: 28.621 ms(23 rows)Plan when I run it from analytical db via FDW:Limit  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.240..82368.326 rows=10 loops=1)   Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))   ->  Nested Loop  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.238..82368.314 rows=10 loops=1)         Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])         Join Filter: (o.shipping_address_id = sa.id)         Rows Removed by Join Filter: 19227526         ->  Nested Loop  (cost=200.00..223.58 rows=1 width=1108) (actual time=69.758..69.812 rows=10 loops=1)               Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label               Join Filter: (o.stock_location_id = l.id)               Rows Removed by Join Filter: 18               ->  Foreign Scan on remote.orders o  (cost=100.00..111.67 rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)                     Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                     Remote SQL: SELECT id, company_id, created_at, shipping_address_id, assignee_id, tax_treatment, tax_override, stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))               ->  Foreign Scan on remote.locations l  (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3 loops=10)                     Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, 
l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                     Remote SQL: SELECT id, label FROM public.locations WHERE ((account_id = 18799))         ->  Foreign Scan on remote.addresses sa  (cost=100.00..114.50 rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)               Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name               Remote SQL: SELECT id, country FROM public.addresses Planning time: 0.209 ms Execution time: 82391.610 ms(21 rows)Time: 82393.211 msWhat am I doing wrong ? really appreciate any guidance possible. Thank you very much for taking the time to helping me with this.Best Regards,Mohammad\n-- Matteo Durighetto - - - - - - - - - - - - - - - - - - - - - - -Italian PostgreSQL User GroupItalian Community for Geographic Free/Open-Source Software\n\n-- Mohammad HabbabBangsar, KL, MalaysiaMobile No. +601111582144Email: [email protected]: https://www.linkedin.com/in/mohammadhabbab", "msg_date": "Sun, 11 Oct 2015 18:10:17 +0800", "msg_from": "Mohammad Habbab <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3000x Slower query when using Foreign Data Wrapper vs. local" }, { "msg_contents": "2015-10-11 12:10 GMT+02:00 Mohammad Habbab <[email protected]>:\n\n> Awesome ! Thank you very much, that solved it :) . But, do you have any\n> idea why this isn't enabled by default ?\n> As a first time user for FDW I would assume that usage of remote estimates\n> would be enabled by default because they would be more authoritative and\n> more representative of access patterns. Correct ?\n>\n> Best Regards,\n> Mohammad\n>\n> On Sun, Oct 11, 2015 at 5:42 PM, desmodemone <[email protected]>\n> wrote:\n>\n>> Hi Mohammad,\n>> I think it's not enable\n>> \"use_remote_estimate\" during the creation of the foreign table\n>>\n>> http://www.postgresql.org/docs/9.4/static/postgres-fdw.html\n>>\n>> use_remote_estimate\n>>\n>> This option, which can be specified for a foreign table or a foreign\n>> server, controls whether postgres_fdw issues remote EXPLAIN commands to\n>> obtain cost estimates. A setting for a foreign table overrides any setting\n>> for its server, but only for that table. The default is false.\n>>\n>>\n>> try it\n>>\n>>\n>> Bye\n>>\n>>\n>> 2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:\n>>\n>>> Hi there,\n>>>\n>>> If it's possible, I would really appreciate any hints or help on an\n>>> issue I've been facing lately.\n>>> I'm running two instance of Postgres locally: 9.4.4 (operational db) and\n>>> 9.5beta1 (analytical db). 
I've already imported schema to analytical db and\n>>> while doing the following query I find very different query plans being\n>>> executed:\n>>>\n>>> Query:\n>>>\n>>> EXPLAIN ANALYZE VERBOSE SELECT\n>>> o.id AS id,\n>>> o.company_id AS company_id,\n>>> o.created_at::date AS created_at,\n>>> COALESCE(o.assignee_id, 0) AS assignee_id,\n>>> (o.tax_treatment)::text AS tax_treatment,\n>>> COALESCE(o.tax_override, 0) AS tax_override,\n>>> COALESCE(o.stock_location_id, 0) AS stock_location_id,\n>>> COALESCE(l.label, 'N/A')::text AS stock_location_name,\n>>> COALESCE(sa.country, 'N/A')::text AS shipping_address_country,\n>>> COALESCE(o.tags, ARRAY[]::text[]) AS tags\n>>> FROM orders AS o\n>>> INNER JOIN locations AS l ON l.id = o.stock_location_id\n>>> INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id\n>>> WHERE o.account_id = <some_value> AND l.account_id = <another_value>\n>>> LIMIT 10;\n>>>\n>>>\n>>> Plan when I run it locally on operational db:\n>>>\n>>> Limit (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397\n>>> rows=10 loops=1)\n>>> Output: o.id, o.company_id, ((o.created_at)::date),\n>>> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n>>> (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)),\n>>> ((COALESCE(l.label, 'N/A'::character varying))::text),\n>>> ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags,\n>>> '{}'::character varying[]))\n>>> -> Nested Loop (cost=747.62..811.46 rows=1 width=76) (actual\n>>> time=28.208..28.395 rows=10 loops=1)\n>>> Output: o.id, o.company_id, (o.created_at)::date,\n>>> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n>>> COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0),\n>>> (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country,\n>>> 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character\n>>> varying[])\n>>> -> Nested Loop (cost=747.19..807.15 rows=1 width=73) (actual\n>>> time=28.164..28.211 rows=10 loops=1)\n>>> Output: o.id, o.company_id, o.created_at, o.assignee_id,\n>>> o.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\n>>> o.shipping_address_id, l.label\n>>> -> Index Scan using index_locations_on_account_id on\n>>> public.locations l (cost=0.29..8.31 rows=1 width=20) (actual\n>>> time=0.025..0.025 rows=1 loops=1)\n>>> Output: l.id, l.address1, l.address2, l.city,\n>>> l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\n>>> l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock\n>>> Index Cond: (l.account_id = 18799)\n>>> -> Bitmap Heap Scan on public.orders o\n>>> (cost=746.90..798.71 rows=13 width=57) (actual time=28.133..28.176 rows=10\n>>> loops=1)\n>>> Output: o.id, o.account_id, o.company_id, o.status,\n>>> o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\n>>> o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\n>>> o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status,\n>>> o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override,\n>>> o.tax_label_override, o.stock_location_id, o.currency_id, o.source,\n>>> o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search,\n>>> o.default_price_list_id, o.contact_id, o.return_status, o.tags,\n>>> o.packed_status, o.returning_status, o.shippability_status,\n>>> o.backordering_status\n>>> Recheck Cond: ((o.stock_location_id = l.id) AND\n>>> (o.account_id = 18799))\n>>> Heap Blocks: exact=7\n>>> -> BitmapAnd (cost=746.90..746.90 
rows=13 width=0)\n>>> (actual time=23.134..23.134 rows=0 loops=1)\n>>> -> Bitmap Index Scan on\n>>> index_orders_on_stock_location_id_manual (cost=0.00..18.02 rows=745\n>>> width=0) (actual time=9.282..9.282 rows=40317 loops=1)\n>>> Index Cond: (o.stock_location_id = l.id)\n>>> -> Bitmap Index Scan on\n>>> index_orders_on_account_id (cost=0.00..718.94 rows=38735 width=0) (actual\n>>> time=9.856..9.856 rows=40317 loops=1)\n>>> Index Cond: (o.account_id = 18799)\n>>> -> Index Scan using addresses_pkey on public.addresses sa\n>>> (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1\n>>> loops=10)\n>>> Output: sa.id, sa.company_id, sa.address1, sa.city,\n>>> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n>>> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n>>> sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name\n>>> Index Cond: (sa.id = o.shipping_address_id)\n>>> Planning time: 1.136 ms\n>>> Execution time: 28.621 ms\n>>> (23 rows)\n>>>\n>>> Plan when I run it from analytical db via FDW:\n>>>\n>>> Limit (cost=300.00..339.95 rows=1 width=1620) (actual\n>>> time=7630.240..82368.326 rows=10 loops=1)\n>>> Output: o.id, o.company_id, ((o.created_at)::date),\n>>> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n>>> (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id,\n>>> 0)), ((COALESCE(l.label, 'N/A'::character varying))::\n>>> text), ((COALESCE(sa.country, 'N/A'::character varying))::text),\n>>> (COALESCE(o.tags, '{}'::character varying[]))\n>>> -> Nested Loop (cost=300.00..339.95 rows=1 width=1620) (actual\n>>> time=7630.238..82368.314 rows=10 loops=1)\n>>> Output: o.id, o.company_id, (o.created_at)::date,\n>>> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n>>> COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0),\n>>> (COALESCE(l.label, 'N/A'::character varying))::text,\n>>> (COALESCE(sa.country, 'N/A'::character varying))::text,\n>>> COALESCE(o.tags, '{}'::character varying[])\n>>> Join Filter: (o.shipping_address_id = sa.id)\n>>> Rows Removed by Join Filter: 19227526\n>>> -> Nested Loop (cost=200.00..223.58 rows=1 width=1108)\n>>> (actual time=69.758..69.812 rows=10 loops=1)\n>>> Output: o.id, o.company_id, o.created_at, o.assignee_id,\n>>> o.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\n>>> o.shipping_address_id, l.label\n>>> Join Filter: (o.stock_location_id = l.id)\n>>> Rows Removed by Join Filter: 18\n>>> -> Foreign Scan on remote.orders o (cost=100.00..111.67\n>>> rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)\n>>> Output: o.id, o.account_id, o.company_id,\n>>> o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at,\n>>> o.user_id, o.notes, o.created_at, o.updated_at, o.order_number,\n>>> o.billing_address_id, o.shipping_address\n>>> _id, o.payment_status, o.email, o.fulfillment_status, o.phone_number,\n>>> o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override,\n>>> o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo,\n>>> o.invoice_status, o.ship_at, o\n>>> .source_id, o.search, o.default_price_list_id, o.contact_id,\n>>> o.return_status, o.tags, o.packed_status, o.returning_status,\n>>> o.shippability_status, o.backordering_status\n>>> Remote SQL: SELECT id, company_id, created_at,\n>>> shipping_address_id, assignee_id, tax_treatment, tax_override,\n>>> stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))\n>>> -> Foreign Scan on remote.locations l\n>>> 
(cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3\n>>> loops=10)\n>>> Output: l.id, l.address1, l.address2, l.city,\n>>> l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\n>>> l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock\n>>> Remote SQL: SELECT id, label FROM public.locations\n>>> WHERE ((account_id = 18799))\n>>> -> Foreign Scan on remote.addresses sa (cost=100.00..114.50\n>>> rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)\n>>> Output: sa.id, sa.company_id, sa.address1, sa.city,\n>>> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n>>> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n>>> sa.company_name, sa.latitude, sa.l\n>>> ongitude, sa.first_name, sa.last_name\n>>> Remote SQL: SELECT id, country FROM public.addresses\n>>> Planning time: 0.209 ms\n>>> Execution time: 82391.610 ms\n>>> (21 rows)\n>>>\n>>> Time: 82393.211 ms\n>>>\n>>> What am I doing wrong ? really appreciate any guidance possible. Thank\n>>> you very much for taking the time to helping me with this.\n>>>\n>>> Best Regards,\n>>> Mohammad\n>>>\n>>\n>>\n>>\n>> --\n>> Matteo Durighetto\n>>\n>> - - - - - - - - - - - - - - - - - - - - - - -\n>>\n>> Italian PostgreSQL User Group <http://www.itpug.org/index.it.html>\n>> Italian Community for Geographic Free/Open-Source Software\n>> <http://www.gfoss.it>\n>>\n>\n>\n>\n> --\n> Mohammad Habbab\n> Bangsar, KL, Malaysia\n> Mobile No. +601111582144\n> Email: [email protected]\n> LinkedIn: https://www.linkedin.com/in/mohammadhabbab\n>\n\n\nHi,\n I am not sure why, by the way I think because you could have the\nlocal tables mixed with the foreign tables, so in that case, you have to use\nthe local cost base optimizer [if you not rewrite query with CTE with only\nthe fdw tables and use so the use_remote_estimate], and so you need local\nstatistics of local and remote table [ infact you could also analyze fdw\ntable and store the statistics in local dictionary ].\n\nIn your case I see you have all fdw tables, so it makes more sense to use\nremote cost base optmizer.\n\n\nHave a nice day\n-- \nMatteo Durighetto\n\n- - - - - - - - - - - - - - - - - - - - - - -\n\nItalian PostgreSQL User Group <http://www.itpug.org/index.it.html>\nItalian Community for Geographic Free/Open-Source Software\n<http://www.gfoss.it>\n\n2015-10-11 12:10 GMT+02:00 Mohammad Habbab <[email protected]>:Awesome ! Thank you very much, that solved it :) . But, do you have any idea why this isn't enabled by default ?As a first time user for FDW I would assume that usage of remote estimates would be enabled by default because they would be more authoritative and more representative of access patterns. Correct ?Best Regards,Mohammad On Sun, Oct 11, 2015 at 5:42 PM, desmodemone <[email protected]> wrote:Hi Mohammad,                                 I think it's not enable \"use_remote_estimate\" during the creation of the foreign table http://www.postgresql.org/docs/9.4/static/postgres-fdw.htmluse_remote_estimate\nThis option, which can be specified for a foreign\n table or a foreign server, controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost\n estimates. A setting for a foreign table overrides any\n setting for its server, but only for that table. 
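(A minimal sketch of the two approaches described above, reusing the foreign tables from this thread and assuming no other objects: postgres_fdw allows ANALYZE on a foreign table, which samples the remote data and stores statistics in the local catalogs for the local planner, or costing can be delegated to the remote server per table.)

    -- keep planning local, but collect local statistics for the foreign tables
    ANALYZE remote.orders;
    ANALYZE remote.addresses;

    -- or ask the remote server for cost estimates on each foreign scan
    ALTER FOREIGN TABLE remote.addresses OPTIONS (ADD use_remote_estimate 'true');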
The\n default is false.try itBye2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:Hi there, If it's possible, I would really appreciate any hints or help on an issue I've been facing lately.I'm running two instance of Postgres locally: 9.4.4 (operational db) and 9.5beta1 (analytical db). I've already imported schema to analytical db and while doing the following query I find very different query plans being executed:Query:EXPLAIN ANALYZE VERBOSE SELECT    o.id AS id,    o.company_id AS company_id,    o.created_at::date AS created_at,    COALESCE(o.assignee_id, 0) AS assignee_id,    (o.tax_treatment)::text AS tax_treatment,    COALESCE(o.tax_override, 0) AS tax_override,    COALESCE(o.stock_location_id, 0) AS stock_location_id,    COALESCE(l.label, 'N/A')::text AS stock_location_name,    COALESCE(sa.country, 'N/A')::text AS shipping_address_country,    COALESCE(o.tags, ARRAY[]::text[]) AS tags  FROM orders AS o    INNER JOIN locations AS l ON l.id = o.stock_location_id    INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id  WHERE o.account_id = <some_value> AND l.account_id = <another_value> LIMIT 10;Plan when I run it locally on operational db:Limit  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397 rows=10 loops=1)Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))->  Nested Loop  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.395 rows=10 loops=1)      Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])      ->  Nested Loop  (cost=747.19..807.15 rows=1 width=73) (actual time=28.164..28.211 rows=10 loops=1)            Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label            ->  Index Scan using index_locations_on_account_id on public.locations l  (cost=0.29..8.31 rows=1 width=20) (actual time=0.025..0.025 rows=1 loops=1)                  Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                  Index Cond: (l.account_id = 18799)            ->  Bitmap Heap Scan on public.orders o  (cost=746.90..798.71 rows=13 width=57) (actual time=28.133..28.176 rows=10 loops=1)                  Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                  Recheck Cond: 
((o.stock_location_id = l.id) AND (o.account_id = 18799))                  Heap Blocks: exact=7                  ->  BitmapAnd  (cost=746.90..746.90 rows=13 width=0) (actual time=23.134..23.134 rows=0 loops=1)                        ->  Bitmap Index Scan on index_orders_on_stock_location_id_manual  (cost=0.00..18.02 rows=745 width=0) (actual time=9.282..9.282 rows=40317 loops=1)                              Index Cond: (o.stock_location_id = l.id)                        ->  Bitmap Index Scan on index_orders_on_account_id  (cost=0.00..718.94 rows=38735 width=0) (actual time=9.856..9.856 rows=40317 loops=1)                              Index Cond: (o.account_id = 18799)      ->  Index Scan using addresses_pkey on public.addresses sa  (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1 loops=10)            Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name            Index Cond: (sa.id = o.shipping_address_id) Planning time: 1.136 ms Execution time: 28.621 ms(23 rows)Plan when I run it from analytical db via FDW:Limit  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.240..82368.326 rows=10 loops=1)   Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))   ->  Nested Loop  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.238..82368.314 rows=10 loops=1)         Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])         Join Filter: (o.shipping_address_id = sa.id)         Rows Removed by Join Filter: 19227526         ->  Nested Loop  (cost=200.00..223.58 rows=1 width=1108) (actual time=69.758..69.812 rows=10 loops=1)               Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label               Join Filter: (o.stock_location_id = l.id)               Rows Removed by Join Filter: 18               ->  Foreign Scan on remote.orders o  (cost=100.00..111.67 rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)                     Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                     Remote SQL: SELECT id, company_id, created_at, shipping_address_id, assignee_id, tax_treatment, tax_override, stock_location_id, tags FROM 
public.orders WHERE ((account_id = 18799))               ->  Foreign Scan on remote.locations l  (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3 loops=10)                     Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                     Remote SQL: SELECT id, label FROM public.locations WHERE ((account_id = 18799))         ->  Foreign Scan on remote.addresses sa  (cost=100.00..114.50 rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)               Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name               Remote SQL: SELECT id, country FROM public.addresses Planning time: 0.209 ms Execution time: 82391.610 ms(21 rows)Time: 82393.211 msWhat am I doing wrong ? really appreciate any guidance possible. Thank you very much for taking the time to helping me with this.Best Regards,Mohammad\n-- Matteo Durighetto - - - - - - - - - - - - - - - - - - - - - - -Italian PostgreSQL User GroupItalian Community for Geographic Free/Open-Source Software\n\n-- Mohammad HabbabBangsar, KL, MalaysiaMobile No. +601111582144Email: [email protected]: https://www.linkedin.com/in/mohammadhabbab\n\nHi,         I am not sure why, by the way I think \nbecause you could have the local tables mixed with the foreign tables, \nso in that case, you have to usethe local cost base optimizer [if \nyou not rewrite query with CTE with only the fdw tables and use so the  \nuse_remote_estimate], and so you need local statistics of local and \nremote table [ infact you could also analyze fdw table and store the \nstatistics in local dictionary ].In your case I see you have all fdw tables, so it makes more sense to use remote cost base optmizer.Have a nice day-- Matteo Durighetto - - - - - - - - - - - - - - - - - - - - - - -Italian PostgreSQL User GroupItalian Community for Geographic Free/Open-Source Software", "msg_date": "Sun, 11 Oct 2015 12:48:58 +0200", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3000x Slower query when using Foreign Data Wrapper vs. local" }, { "msg_contents": "> I am not sure why, by the way I think because you could have the local\ntables mixed with the foreign tables, so in that case, you have to use\n> the local cost base optimizer\nOh right ! yep, that explains it. Thank you very much !\n\nBest Regards,\nMohammad\n\nOn Sun, Oct 11, 2015 at 6:48 PM, desmodemone <[email protected]> wrote:\n\n>\n>\n> 2015-10-11 12:10 GMT+02:00 Mohammad Habbab <[email protected]>:\n>\n>> Awesome ! Thank you very much, that solved it :) . But, do you have any\n>> idea why this isn't enabled by default ?\n>> As a first time user for FDW I would assume that usage of remote\n>> estimates would be enabled by default because they would be more\n>> authoritative and more representative of access patterns. 
Correct ?\n>>\n>> Best Regards,\n>> Mohammad\n>>\n>> On Sun, Oct 11, 2015 at 5:42 PM, desmodemone <[email protected]>\n>> wrote:\n>>\n>>> Hi Mohammad,\n>>> I think it's not enable\n>>> \"use_remote_estimate\" during the creation of the foreign table\n>>>\n>>> http://www.postgresql.org/docs/9.4/static/postgres-fdw.html\n>>>\n>>> use_remote_estimate\n>>>\n>>> This option, which can be specified for a foreign table or a foreign\n>>> server, controls whether postgres_fdw issues remote EXPLAIN commands to\n>>> obtain cost estimates. A setting for a foreign table overrides any setting\n>>> for its server, but only for that table. The default is false.\n>>>\n>>>\n>>> try it\n>>>\n>>>\n>>> Bye\n>>>\n>>>\n>>> 2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:\n>>>\n>>>> Hi there,\n>>>>\n>>>> If it's possible, I would really appreciate any hints or help on an\n>>>> issue I've been facing lately.\n>>>> I'm running two instance of Postgres locally: 9.4.4 (operational db)\n>>>> and 9.5beta1 (analytical db). I've already imported schema to analytical db\n>>>> and while doing the following query I find very different query plans being\n>>>> executed:\n>>>>\n>>>> Query:\n>>>>\n>>>> EXPLAIN ANALYZE VERBOSE SELECT\n>>>> o.id AS id,\n>>>> o.company_id AS company_id,\n>>>> o.created_at::date AS created_at,\n>>>> COALESCE(o.assignee_id, 0) AS assignee_id,\n>>>> (o.tax_treatment)::text AS tax_treatment,\n>>>> COALESCE(o.tax_override, 0) AS tax_override,\n>>>> COALESCE(o.stock_location_id, 0) AS stock_location_id,\n>>>> COALESCE(l.label, 'N/A')::text AS stock_location_name,\n>>>> COALESCE(sa.country, 'N/A')::text AS shipping_address_country,\n>>>> COALESCE(o.tags, ARRAY[]::text[]) AS tags\n>>>> FROM orders AS o\n>>>> INNER JOIN locations AS l ON l.id = o.stock_location_id\n>>>> INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id\n>>>> WHERE o.account_id = <some_value> AND l.account_id = <another_value>\n>>>> LIMIT 10;\n>>>>\n>>>>\n>>>> Plan when I run it locally on operational db:\n>>>>\n>>>> Limit (cost=747.62..811.46 rows=1 width=76) (actual\n>>>> time=28.208..28.397 rows=10 loops=1)\n>>>> Output: o.id, o.company_id, ((o.created_at)::date),\n>>>> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n>>>> (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)),\n>>>> ((COALESCE(l.label, 'N/A'::character varying))::text),\n>>>> ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags,\n>>>> '{}'::character varying[]))\n>>>> -> Nested Loop (cost=747.62..811.46 rows=1 width=76) (actual\n>>>> time=28.208..28.395 rows=10 loops=1)\n>>>> Output: o.id, o.company_id, (o.created_at)::date,\n>>>> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n>>>> COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0),\n>>>> (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country,\n>>>> 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character\n>>>> varying[])\n>>>> -> Nested Loop (cost=747.19..807.15 rows=1 width=73) (actual\n>>>> time=28.164..28.211 rows=10 loops=1)\n>>>> Output: o.id, o.company_id, o.created_at, o.assignee_id,\n>>>> o.tax_treatment, o.tax_override, o.stock_location_id, o.tags,\n>>>> o.shipping_address_id, l.label\n>>>> -> Index Scan using index_locations_on_account_id on\n>>>> public.locations l (cost=0.29..8.31 rows=1 width=20) (actual\n>>>> time=0.025..0.025 rows=1 loops=1)\n>>>> Output: l.id, l.address1, l.address2, l.city,\n>>>> l.country, l.zip_code, l.suburb, l.state, l.label, l.status, 
l.latitude,\n>>>> l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock\n>>>> Index Cond: (l.account_id = 18799)\n>>>> -> Bitmap Heap Scan on public.orders o\n>>>> (cost=746.90..798.71 rows=13 width=57) (actual time=28.133..28.176 rows=10\n>>>> loops=1)\n>>>> Output: o.id, o.account_id, o.company_id, o.status,\n>>>> o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id,\n>>>> o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id,\n>>>> o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status,\n>>>> o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override,\n>>>> o.tax_label_override, o.stock_location_id, o.currency_id, o.source,\n>>>> o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search,\n>>>> o.default_price_list_id, o.contact_id, o.return_status, o.tags,\n>>>> o.packed_status, o.returning_status, o.shippability_status,\n>>>> o.backordering_status\n>>>> Recheck Cond: ((o.stock_location_id = l.id) AND\n>>>> (o.account_id = 18799))\n>>>> Heap Blocks: exact=7\n>>>> -> BitmapAnd (cost=746.90..746.90 rows=13 width=0)\n>>>> (actual time=23.134..23.134 rows=0 loops=1)\n>>>> -> Bitmap Index Scan on\n>>>> index_orders_on_stock_location_id_manual (cost=0.00..18.02 rows=745\n>>>> width=0) (actual time=9.282..9.282 rows=40317 loops=1)\n>>>> Index Cond: (o.stock_location_id = l.id)\n>>>> -> Bitmap Index Scan on\n>>>> index_orders_on_account_id (cost=0.00..718.94 rows=38735 width=0) (actual\n>>>> time=9.856..9.856 rows=40317 loops=1)\n>>>> Index Cond: (o.account_id = 18799)\n>>>> -> Index Scan using addresses_pkey on public.addresses sa\n>>>> (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1\n>>>> loops=10)\n>>>> Output: sa.id, sa.company_id, sa.address1, sa.city,\n>>>> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n>>>> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n>>>> sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name\n>>>> Index Cond: (sa.id = o.shipping_address_id)\n>>>> Planning time: 1.136 ms\n>>>> Execution time: 28.621 ms\n>>>> (23 rows)\n>>>>\n>>>> Plan when I run it from analytical db via FDW:\n>>>>\n>>>> Limit (cost=300.00..339.95 rows=1 width=1620) (actual\n>>>> time=7630.240..82368.326 rows=10 loops=1)\n>>>> Output: o.id, o.company_id, ((o.created_at)::date),\n>>>> (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text),\n>>>> (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id,\n>>>> 0)), ((COALESCE(l.label, 'N/A'::character varying))::\n>>>> text), ((COALESCE(sa.country, 'N/A'::character varying))::text),\n>>>> (COALESCE(o.tags, '{}'::character varying[]))\n>>>> -> Nested Loop (cost=300.00..339.95 rows=1 width=1620) (actual\n>>>> time=7630.238..82368.314 rows=10 loops=1)\n>>>> Output: o.id, o.company_id, (o.created_at)::date,\n>>>> COALESCE(o.assignee_id, 0), (o.tax_treatment)::text,\n>>>> COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0),\n>>>> (COALESCE(l.label, 'N/A'::character varying))::text,\n>>>> (COALESCE(sa.country, 'N/A'::character varying))::text,\n>>>> COALESCE(o.tags, '{}'::character varying[])\n>>>> Join Filter: (o.shipping_address_id = sa.id)\n>>>> Rows Removed by Join Filter: 19227526\n>>>> -> Nested Loop (cost=200.00..223.58 rows=1 width=1108)\n>>>> (actual time=69.758..69.812 rows=10 loops=1)\n>>>> Output: o.id, o.company_id, o.created_at,\n>>>> o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id,\n>>>> o.tags, o.shipping_address_id, 
l.label\n>>>> Join Filter: (o.stock_location_id = l.id)\n>>>> Rows Removed by Join Filter: 18\n>>>> -> Foreign Scan on remote.orders o\n>>>> (cost=100.00..111.67 rows=1 width=592) (actual time=68.009..68.014 rows=10\n>>>> loops=1)\n>>>> Output: o.id, o.account_id, o.company_id,\n>>>> o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at,\n>>>> o.user_id, o.notes, o.created_at, o.updated_at, o.order_number,\n>>>> o.billing_address_id, o.shipping_address\n>>>> _id, o.payment_status, o.email, o.fulfillment_status, o.phone_number,\n>>>> o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override,\n>>>> o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo,\n>>>> o.invoice_status, o.ship_at, o\n>>>> .source_id, o.search, o.default_price_list_id, o.contact_id,\n>>>> o.return_status, o.tags, o.packed_status, o.returning_status,\n>>>> o.shippability_status, o.backordering_status\n>>>> Remote SQL: SELECT id, company_id, created_at,\n>>>> shipping_address_id, assignee_id, tax_treatment, tax_override,\n>>>> stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))\n>>>> -> Foreign Scan on remote.locations l\n>>>> (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3\n>>>> loops=10)\n>>>> Output: l.id, l.address1, l.address2, l.city,\n>>>> l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude,\n>>>> l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock\n>>>> Remote SQL: SELECT id, label FROM public.locations\n>>>> WHERE ((account_id = 18799))\n>>>> -> Foreign Scan on remote.addresses sa (cost=100.00..114.50\n>>>> rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)\n>>>> Output: sa.id, sa.company_id, sa.address1, sa.city,\n>>>> sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state,\n>>>> sa.label, sa.status, sa.address2, sa.phone_number, sa.email,\n>>>> sa.company_name, sa.latitude, sa.l\n>>>> ongitude, sa.first_name, sa.last_name\n>>>> Remote SQL: SELECT id, country FROM public.addresses\n>>>> Planning time: 0.209 ms\n>>>> Execution time: 82391.610 ms\n>>>> (21 rows)\n>>>>\n>>>> Time: 82393.211 ms\n>>>>\n>>>> What am I doing wrong ? really appreciate any guidance possible. Thank\n>>>> you very much for taking the time to helping me with this.\n>>>>\n>>>> Best Regards,\n>>>> Mohammad\n>>>>\n>>>\n>>>\n>>>\n>>> --\n>>> Matteo Durighetto\n>>>\n>>> - - - - - - - - - - - - - - - - - - - - - - -\n>>>\n>>> Italian PostgreSQL User Group <http://www.itpug.org/index.it.html>\n>>> Italian Community for Geographic Free/Open-Source Software\n>>> <http://www.gfoss.it>\n>>>\n>>\n>>\n>>\n>> --\n>> Mohammad Habbab\n>> Bangsar, KL, Malaysia\n>> Mobile No. 
+601111582144\n>> Email: [email protected]\n>> LinkedIn: https://www.linkedin.com/in/mohammadhabbab\n>>\n>\n>\n> Hi,\n> I am not sure why, by the way I think because you could have the\n> local tables mixed with the foreign tables, so in that case, you have to use\n> the local cost base optimizer [if you not rewrite query with CTE with only\n> the fdw tables and use so the use_remote_estimate], and so you need local\n> statistics of local and remote table [ infact you could also analyze fdw\n> table and store the statistics in local dictionary ].\n>\n> In your case I see you have all fdw tables, so it makes more sense to use\n> remote cost base optmizer.\n>\n>\n> Have a nice day\n> --\n> Matteo Durighetto\n>\n> - - - - - - - - - - - - - - - - - - - - - - -\n>\n> Italian PostgreSQL User Group <http://www.itpug.org/index.it.html>\n> Italian Community for Geographic Free/Open-Source Software\n> <http://www.gfoss.it>\n>\n\n\n\n-- \nMohammad Habbab\nBangsar, KL, Malaysia\nMobile No. +601111582144\nEmail: [email protected]\nLinkedIn: https://www.linkedin.com/in/mohammadhabbab\n\n> I am not sure why, by the way I think because you could have the local tables mixed with the foreign tables, so in that case, you have to use> the local cost base optimizerOh right ! yep, that explains it. Thank you very much !Best Regards,Mohammad On Sun, Oct 11, 2015 at 6:48 PM, desmodemone <[email protected]> wrote:2015-10-11 12:10 GMT+02:00 Mohammad Habbab <[email protected]>:Awesome ! Thank you very much, that solved it :) . But, do you have any idea why this isn't enabled by default ?As a first time user for FDW I would assume that usage of remote estimates would be enabled by default because they would be more authoritative and more representative of access patterns. Correct ?Best Regards,Mohammad On Sun, Oct 11, 2015 at 5:42 PM, desmodemone <[email protected]> wrote:Hi Mohammad,                                 I think it's not enable \"use_remote_estimate\" during the creation of the foreign table http://www.postgresql.org/docs/9.4/static/postgres-fdw.htmluse_remote_estimate\nThis option, which can be specified for a foreign\n table or a foreign server, controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost\n estimates. A setting for a foreign table overrides any\n setting for its server, but only for that table. The\n default is false.try itBye2015-10-11 10:05 GMT+02:00 Mohammad Habbab <[email protected]>:Hi there, If it's possible, I would really appreciate any hints or help on an issue I've been facing lately.I'm running two instance of Postgres locally: 9.4.4 (operational db) and 9.5beta1 (analytical db). 
I've already imported schema to analytical db and while doing the following query I find very different query plans being executed:Query:EXPLAIN ANALYZE VERBOSE SELECT    o.id AS id,    o.company_id AS company_id,    o.created_at::date AS created_at,    COALESCE(o.assignee_id, 0) AS assignee_id,    (o.tax_treatment)::text AS tax_treatment,    COALESCE(o.tax_override, 0) AS tax_override,    COALESCE(o.stock_location_id, 0) AS stock_location_id,    COALESCE(l.label, 'N/A')::text AS stock_location_name,    COALESCE(sa.country, 'N/A')::text AS shipping_address_country,    COALESCE(o.tags, ARRAY[]::text[]) AS tags  FROM orders AS o    INNER JOIN locations AS l ON l.id = o.stock_location_id    INNER JOIN addresses AS sa ON sa.id = o.shipping_address_id  WHERE o.account_id = <some_value> AND l.account_id = <another_value> LIMIT 10;Plan when I run it locally on operational db:Limit  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.397 rows=10 loops=1)Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, 0::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))->  Nested Loop  (cost=747.62..811.46 rows=1 width=76) (actual time=28.208..28.395 rows=10 loops=1)      Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, 0::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])      ->  Nested Loop  (cost=747.19..807.15 rows=1 width=73) (actual time=28.164..28.211 rows=10 loops=1)            Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label            ->  Index Scan using index_locations_on_account_id on public.locations l  (cost=0.29..8.31 rows=1 width=20) (actual time=0.025..0.025 rows=1 loops=1)                  Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                  Index Cond: (l.account_id = 18799)            ->  Bitmap Heap Scan on public.orders o  (cost=746.90..798.71 rows=13 width=57) (actual time=28.133..28.176 rows=10 loops=1)                  Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                  Recheck Cond: ((o.stock_location_id = l.id) AND (o.account_id = 18799))                  Heap Blocks: exact=7                  ->  BitmapAnd  (cost=746.90..746.90 rows=13 width=0) (actual time=23.134..23.134 rows=0 loops=1)                        ->  Bitmap Index Scan on index_orders_on_stock_location_id_manual  (cost=0.00..18.02 
rows=745 width=0) (actual time=9.282..9.282 rows=40317 loops=1)                              Index Cond: (o.stock_location_id = l.id)                        ->  Bitmap Index Scan on index_orders_on_account_id  (cost=0.00..718.94 rows=38735 width=0) (actual time=9.856..9.856 rows=40317 loops=1)                              Index Cond: (o.account_id = 18799)      ->  Index Scan using addresses_pkey on public.addresses sa  (cost=0.43..4.30 rows=1 width=11) (actual time=0.015..0.016 rows=1 loops=10)            Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name            Index Cond: (sa.id = o.shipping_address_id) Planning time: 1.136 ms Execution time: 28.621 ms(23 rows)Plan when I run it from analytical db via FDW:Limit  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.240..82368.326 rows=10 loops=1)   Output: o.id, o.company_id, ((o.created_at)::date), (COALESCE(o.assignee_id, 0)), ((o.tax_treatment)::text), (COALESCE(o.tax_override, '0'::numeric)), (COALESCE(o.stock_location_id, 0)), ((COALESCE(l.label, 'N/A'::character varying))::text), ((COALESCE(sa.country, 'N/A'::character varying))::text), (COALESCE(o.tags, '{}'::character varying[]))   ->  Nested Loop  (cost=300.00..339.95 rows=1 width=1620) (actual time=7630.238..82368.314 rows=10 loops=1)         Output: o.id, o.company_id, (o.created_at)::date, COALESCE(o.assignee_id, 0), (o.tax_treatment)::text, COALESCE(o.tax_override, '0'::numeric), COALESCE(o.stock_location_id, 0), (COALESCE(l.label, 'N/A'::character varying))::text, (COALESCE(sa.country, 'N/A'::character varying))::text, COALESCE(o.tags, '{}'::character varying[])         Join Filter: (o.shipping_address_id = sa.id)         Rows Removed by Join Filter: 19227526         ->  Nested Loop  (cost=200.00..223.58 rows=1 width=1108) (actual time=69.758..69.812 rows=10 loops=1)               Output: o.id, o.company_id, o.created_at, o.assignee_id, o.tax_treatment, o.tax_override, o.stock_location_id, o.tags, o.shipping_address_id, l.label               Join Filter: (o.stock_location_id = l.id)               Rows Removed by Join Filter: 18               ->  Foreign Scan on remote.orders o  (cost=100.00..111.67 rows=1 width=592) (actual time=68.009..68.014 rows=10 loops=1)                     Output: o.id, o.account_id, o.company_id, o.status, o.invoice_number, o.reference_number, o.due_at, o.issued_at, o.user_id, o.notes, o.created_at, o.updated_at, o.order_number, o.billing_address_id, o.shipping_address_id, o.payment_status, o.email, o.fulfillment_status, o.phone_number, o.assignee_id, o.tax_treatment, o.tax_override, o.tax_label_override, o.stock_location_id, o.currency_id, o.source, o.source_url, o.demo, o.invoice_status, o.ship_at, o.source_id, o.search, o.default_price_list_id, o.contact_id, o.return_status, o.tags, o.packed_status, o.returning_status, o.shippability_status, o.backordering_status                     Remote SQL: SELECT id, company_id, created_at, shipping_address_id, assignee_id, tax_treatment, tax_override, stock_location_id, tags FROM public.orders WHERE ((account_id = 18799))               ->  Foreign Scan on remote.locations l  (cost=100.00..111.90 rows=1 width=520) (actual time=0.174..0.174 rows=3 loops=10)                     Output: l.id, l.address1, l.address2, l.city, l.country, l.zip_code, l.suburb, l.state, l.label, l.status, l.latitude, 
l.longitude, l.created_at, l.updated_at, l.account_id, l.holds_stock                     Remote SQL: SELECT id, label FROM public.locations WHERE ((account_id = 18799))         ->  Foreign Scan on remote.addresses sa  (cost=100.00..114.50 rows=150 width=520) (actual time=0.634..8029.415 rows=1922754 loops=10)               Output: sa.id, sa.company_id, sa.address1, sa.city, sa.country, sa.zip_code, sa.created_at, sa.updated_at, sa.suburb, sa.state, sa.label, sa.status, sa.address2, sa.phone_number, sa.email, sa.company_name, sa.latitude, sa.longitude, sa.first_name, sa.last_name               Remote SQL: SELECT id, country FROM public.addresses Planning time: 0.209 ms Execution time: 82391.610 ms(21 rows)Time: 82393.211 msWhat am I doing wrong ? really appreciate any guidance possible. Thank you very much for taking the time to helping me with this.Best Regards,Mohammad\n-- Matteo Durighetto - - - - - - - - - - - - - - - - - - - - - - -Italian PostgreSQL User GroupItalian Community for Geographic Free/Open-Source Software\n\n-- Mohammad HabbabBangsar, KL, MalaysiaMobile No. +601111582144Email: [email protected]: https://www.linkedin.com/in/mohammadhabbab\n\nHi,         I am not sure why, by the way I think \nbecause you could have the local tables mixed with the foreign tables, \nso in that case, you have to usethe local cost base optimizer [if \nyou not rewrite query with CTE with only the fdw tables and use so the  \nuse_remote_estimate], and so you need local statistics of local and \nremote table [ infact you could also analyze fdw table and store the \nstatistics in local dictionary ].In your case I see you have all fdw tables, so it makes more sense to use remote cost base optmizer.Have a nice day-- Matteo Durighetto - - - - - - - - - - - - - - - - - - - - - - -Italian PostgreSQL User GroupItalian Community for Geographic Free/Open-Source Software\n\n-- Mohammad HabbabBangsar, KL, MalaysiaMobile No. +601111582144Email: [email protected]: https://www.linkedin.com/in/mohammadhabbab", "msg_date": "Sun, 11 Oct 2015 18:58:16 +0800", "msg_from": "Mohammad Habbab <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3000x Slower query when using Foreign Data Wrapper vs. local" } ]
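The fix in the thread above is the postgres_fdw use_remote_estimate option. A minimal sketch of switching it on, either for the whole foreign server or per table — the server name operational_fdw is an assumption (the actual name is not shown in the thread), while the remote.* table names come from the plans above:

-- Assumed server name; substitute the foreign server defined on the analytical db.
ALTER SERVER operational_fdw OPTIONS (ADD use_remote_estimate 'true');

-- Or only for the tables in the slow join (names taken from the EXPLAIN output above).
ALTER FOREIGN TABLE remote.orders    OPTIONS (ADD use_remote_estimate 'true');
ALTER FOREIGN TABLE remote.locations OPTIONS (ADD use_remote_estimate 'true');
ALTER FOREIGN TABLE remote.addresses OPTIONS (ADD use_remote_estimate 'true');

-- As suggested above, local statistics can also be gathered for a foreign table.
ANALYZE remote.addresses;

The option defaults to false because each remote EXPLAIN adds a planning-time round trip to the remote server; it tends to pay off in cases like this one, where the generic local estimates are badly wrong.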
[ { "msg_contents": "Hi guys,\n\nI've been doing some design investigation and ran into an interesting snag\nI didn't expect to find on 9.4 (and earlier). I wrote a quick python script\nto fork multiple simultaneous COPY commands to several separate tables and\nfound that performance apparently degrades based on how many COPY commands\nare running.\n\nFor instance, in the logs with one COPY, I see about one second to import\n100k rows. At two processes, it's 2 seconds. At four processes, it's 4\nseconds. This is for each process. Thus loading 400k rows takes 16 seconds\ncumulatively. To me, it looked like some kind of locking issue, but\npg_locks showed no waits during the load. In trying to figure this out, I\nran across this discussion:\n\nhttp://www.postgresql.org/message-id/CAB7nPqQJeASxDr0Rt9CJiaf9OnfjoJstyk18iw+oXi-OBO4gYA@mail.gmail.com\n\nWhich came after this:\n\nhttp://forums.enterprisedb.com/posts/list/4048.page\n\nIt would appear I'm running into whatever issue the xloginsert_slots patch\ntried to address, but not much discussion exists afterwards. It's like the\npatch just kinda vanished into the ether even though it (apparently)\nmassively improves PG's ability to scale data import.\n\nI should note that setting wal_level to minimal, or doing the load on\nunlogged tables completely resolves this issue. However, those are not\nacceptable settings in a production environment. Is there any other way to\nget normal parallel COPY performance, or is that just currently impossible?\n\nI also know 9.5 underwent a lot of locking improvements, so it might not be\nrelevant. I just haven't gotten a chance to repeat my tests with 9.5 just\nyet.\n\n-- \nShaun Thomas\[email protected]\n\nHi guys,I've been doing some design investigation and ran into an interesting snag I didn't expect to find on 9.4 (and earlier). I wrote a quick python script to fork multiple simultaneous COPY commands to several separate tables and found that performance apparently degrades based on how many COPY commands are running.For instance, in the logs with one COPY, I see about one second to import 100k rows. At two processes, it's 2 seconds. At four processes, it's 4 seconds. This is for each process. Thus loading 400k rows takes 16 seconds cumulatively. To me, it looked like some kind of locking issue, but pg_locks showed no waits during the load. In trying to figure this out, I ran across this discussion:http://www.postgresql.org/message-id/CAB7nPqQJeASxDr0Rt9CJiaf9OnfjoJstyk18iw+oXi-OBO4gYA@mail.gmail.comWhich came after this:http://forums.enterprisedb.com/posts/list/4048.pageIt would appear I'm running into whatever issue the xloginsert_slots patch tried to address, but not much discussion exists afterwards. It's like the patch just kinda vanished into the ether even though it (apparently) massively improves PG's ability to scale data import.I should note that setting wal_level to minimal, or doing the load on unlogged tables completely resolves this issue. However, those are not acceptable settings in a production environment. Is there any other way to get normal parallel COPY performance, or is that just currently impossible?I also know 9.5 underwent a lot of locking improvements, so it might not be relevant. 
I just haven't gotten a chance to repeat my tests with 9.5 just yet.-- Shaun [email protected]", "msg_date": "Mon, 12 Oct 2015 13:17:53 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Having some problems with concurrent COPY commands" }, { "msg_contents": "Hi,\n\nOn 2015-10-12 13:17:53 -0500, Shaun Thomas wrote:\n> It would appear I'm running into whatever issue the xloginsert_slots patch\n> tried to address, but not much discussion exists afterwards.\n\nThat patch is merged, it's just that the number of slots is\nhardcoded. You can recompile postgres with different values by changing\n#define NUM_XLOGINSERT_LOCKS 8\nin xlog.c to a different value. A restart is enough afterwards.\n\n> Is there any other way to\n> get normal parallel COPY performance, or is that just currently impossible?\n> \n> I also know 9.5 underwent a lot of locking improvements, so it might\n> not be relevant. I just haven't gotten a chance to repeat my tests\n> with 9.5 just yet.\n\nHard to say anything substantive without further information. Any chance\nyou could provide profiles of such a run? If yes, I can help you with\ninstructions. I'm just to lazy to write them up if not.\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 12 Oct 2015 20:28:08 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Having some problems with concurrent COPY commands" }, { "msg_contents": "On Mon, Oct 12, 2015 at 11:17 AM, Shaun Thomas <[email protected]> wrote:\n\n> Hi guys,\n>\n> I've been doing some design investigation and ran into an interesting snag\n> I didn't expect to find on 9.4 (and earlier). I wrote a quick python script\n> to fork multiple simultaneous COPY commands to several separate tables and\n> found that performance apparently degrades based on how many COPY commands\n> are running.\n>\n> For instance, in the logs with one COPY, I see about one second to import\n> 100k rows. At two processes, it's 2 seconds. At four processes, it's 4\n> seconds. This is for each process. Thus loading 400k rows takes 16 seconds\n> cumulatively. To me, it looked like some kind of locking issue, but\n> pg_locks showed no waits during the load. In trying to figure this out, I\n> ran across this discussion:\n>\n>\n> http://www.postgresql.org/message-id/CAB7nPqQJeASxDr0Rt9CJiaf9OnfjoJstyk18iw+oXi-OBO4gYA@mail.gmail.com\n>\n> Which came after this:\n>\n> http://forums.enterprisedb.com/posts/list/4048.page\n>\n> It would appear I'm running into whatever issue the xloginsert_slots patch\n> tried to address, but not much discussion exists afterwards. It's like the\n> patch just kinda vanished into the ether even though it (apparently)\n> massively improves PG's ability to scale data import.\n>\n> I should note that setting wal_level to minimal, or doing the load on\n> unlogged tables completely resolves this issue. However, those are not\n> acceptable settings in a production environment. Is there any other way to\n> get normal parallel COPY performance, or is that just currently impossible?\n>\n> I also know 9.5 underwent a lot of locking improvements, so it might not\n> be relevant. I just haven't gotten a chance to repeat my tests with 9.5\n> just yet.\n>\n\n\nCan you provide the test script? Also, have you tuned your database for\nhigh io throughput? 
What is your storage system like?\n\nOn Mon, Oct 12, 2015 at 11:17 AM, Shaun Thomas <[email protected]> wrote:Hi guys,I've been doing some design investigation and ran into an interesting snag I didn't expect to find on 9.4 (and earlier). I wrote a quick python script to fork multiple simultaneous COPY commands to several separate tables and found that performance apparently degrades based on how many COPY commands are running.For instance, in the logs with one COPY, I see about one second to import 100k rows. At two processes, it's 2 seconds. At four processes, it's 4 seconds. This is for each process. Thus loading 400k rows takes 16 seconds cumulatively. To me, it looked like some kind of locking issue, but pg_locks showed no waits during the load. In trying to figure this out, I ran across this discussion:http://www.postgresql.org/message-id/CAB7nPqQJeASxDr0Rt9CJiaf9OnfjoJstyk18iw+oXi-OBO4gYA@mail.gmail.comWhich came after this:http://forums.enterprisedb.com/posts/list/4048.pageIt would appear I'm running into whatever issue the xloginsert_slots patch tried to address, but not much discussion exists afterwards. It's like the patch just kinda vanished into the ether even though it (apparently) massively improves PG's ability to scale data import.I should note that setting wal_level to minimal, or doing the load on unlogged tables completely resolves this issue. However, those are not acceptable settings in a production environment. Is there any other way to get normal parallel COPY performance, or is that just currently impossible?I also know 9.5 underwent a lot of locking improvements, so it might not be relevant. I just haven't gotten a chance to repeat my tests with 9.5 just yet.Can you provide the test script?  Also, have you tuned your database for high io throughput?  What is your storage system like?", "msg_date": "Mon, 12 Oct 2015 11:56:23 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Having some problems with concurrent COPY commands" }, { "msg_contents": "On Mon, Oct 12, 2015 at 1:28 PM, Andres Freund <[email protected]> wrote:\n\n> Any chance\n> you could provide profiles of such a run?\n\nThis is as simple as I could make it reliably. With one copy running,\nthe thread finishes in about 1 second. With 2, it's 1.5s each, and\nwith all 4, it's a little over 3s for each according to the logs. I\nhave log_min_duration_statement set to 1000, so it's pretty obvious.\nThe scary part is that it's not even scaling linearly; performance is\nactually getting *worse* with each subsequent thread.\n\nRegarding performance, all of this fits in memory. The tables are only\n100k rows with the COPY statement. The machine itself is 8 CPUs with\n32GB of RAM, so it's not an issue of hardware. So far as I can tell,\nit happens on every version I've tested on, from 9.2 to 9.4. I also\ntake back what I said about wal_level. Setting it to minimal does\nnothing. Disabling archive_mode and setting max_wal_senders to 0 also\ndoes nothing. With 4 concurrent processes, each takes 3 seconds, for a\ntotal of 12 seconds to import 400k rows when it would take 4 seconds\nto do sequentially. 
Sketchy.\n\nCOPY (\n SELECT id, id % 100, id % 1000, now() - (id || 's')::INTERVAL\n FROM generate_series(1, 100000) a(id)\n) TO '/tmp/loadtest1.csv';\n\nCREATE TABLE test_copy (\n id SERIAL PRIMARY KEY,\n location VARCHAR NOT NULL,\n reading BIGINT NOT NULL,\n reading_date TIMESTAMP NOT NULL\n);\n\nCREATE INDEX idx_test_copy_location ON test_copy (location);\nCREATE INDEX idx_test_copy_date ON test_copy (reading_date);\n\nCREATE TABLE test_copy2 (LIKE test_copy INCLUDING INDEXES);\nCREATE TABLE test_copy3 (LIKE test_copy INCLUDING INDEXES);\nCREATE TABLE test_copy4 (LIKE test_copy INCLUDING INDEXES);\n\npsql -c \"COPY test_copy FROM '/tmp/loadtest1.csv'\" &>/dev/null &\npsql -c \"COPY test_copy2 FROM '/tmp/loadtest1.csv'\" &>/dev/null &\npsql -c \"COPY test_copy3 FROM '/tmp/loadtest1.csv'\" &>/dev/null &\npsql -c \"COPY test_copy4 FROM '/tmp/loadtest1.csv'\" &>/dev/null &\n\n\n-- \nShaun Thomas\[email protected]\nhttp://bonesmoses.org/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 12 Oct 2015 15:14:26 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Having some problems with concurrent COPY commands" }, { "msg_contents": "On 10/12/2015 11:14 PM, Shaun Thomas wrote:\n> On Mon, Oct 12, 2015 at 1:28 PM, Andres Freund <[email protected]> wrote:\n>\n>> Any chance\n>> you could provide profiles of such a run?\n>\n> This is as simple as I could make it reliably. With one copy running,\n> the thread finishes in about 1 second. With 2, it's 1.5s each, and\n> with all 4, it's a little over 3s for each according to the logs. I\n> have log_min_duration_statement set to 1000, so it's pretty obvious.\n> The scary part is that it's not even scaling linearly; performance is\n> actually getting *worse* with each subsequent thread.\n>\n> Regarding performance, all of this fits in memory. The tables are only\n> 100k rows with the COPY statement. The machine itself is 8 CPUs with\n> 32GB of RAM, so it's not an issue of hardware. So far as I can tell,\n> it happens on every version I've tested on, from 9.2 to 9.4. I also\n> take back what I said about wal_level. Setting it to minimal does\n> nothing. Disabling archive_mode and setting max_wal_senders to 0 also\n> does nothing. With 4 concurrent processes, each takes 3 seconds, for a\n> total of 12 seconds to import 400k rows when it would take 4 seconds\n> to do sequentially. Sketchy.\n\nI was not able reproduce that behaviour on my laptop. I bumped the \nnumber of rows in your script 100000, to make it run a bit longer. \nAttached is the script I used. The total wallclock time the COPYs takes \non 9.4 is about 8 seconds for a single COPY, and 12 seconds for 4 \nconcurrent COPYs. So it's not scaling as well as you might hope, but \nit's certainly not worse-than-serial either, as you you're seeing.\n\nIf you're seeing this on 9.2 and 9.4 alike, this can't be related to the \nXLogInsert scaling patch, although you might've found a case where that \npatch didn't help where it should've. I ran \"perf\" to profile the test \ncase, and it looks like about 80% of the CPU time is spent in the b-tree \ncomparison function. That doesn't leave much scope for XLogInsert \nscalability to matter one way or another.\n\nI have no explanation for what you're seeing though. A bad spinlock \nimplementation perhaps? Anything special about the hardware at all? Can \nyou profile it on your system? 
Which collation?\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 13 Oct 2015 12:32:38 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Having some problems with concurrent COPY commands" }, { "msg_contents": "On Tue, Oct 13, 2015 at 2:32 AM, Heikki Linnakangas <[email protected]> wrote:\n> 80% of the CPU time is spent in the b-tree comparison function.\n\nIn the logs, my duration per COPY command increases from about 1400ms\nfor one process to about 3800ms when I have four running concurrently.\nThat's really my only data point, unfortunately. Strace isn't super\nhelpful because it just says 70-ish% of the time is wait4, but that's\nnot significantly different than the results using one process.\n\nEverything else is bog standard default. I bumped up\ncheckpoint_segments to avoid checkpoints during the test, and\nincreased work_mem and maintenance_work_mem to avoid disk affecting\nresults, and still got the same behavior.\n\nI wasn't blaming the patch. :) I thought it didn't get merged or\nsomething, and after learning that wasn't the case, my only theory\nwent out the window. I'm a bit stuck, because this seems incredibly\nwrong. I'll keep digging.\n\n-- \nShaun Thomas\[email protected]\nhttp://bonesmoses.org/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 13 Oct 2015 07:14:01 -0700", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Having some problems with concurrent COPY commands" }, { "msg_contents": "On 2015-10-13 07:14:01 -0700, Shaun Thomas wrote:\n> On Tue, Oct 13, 2015 at 2:32 AM, Heikki Linnakangas <[email protected]> wrote:\n> > 80% of the CPU time is spent in the b-tree comparison function.\n> \n> In the logs, my duration per COPY command increases from about 1400ms\n> for one process to about 3800ms when I have four running concurrently.\n> That's really my only data point, unfortunately. Strace isn't super\n> helpful because it just says 70-ish% of the time is wait4, but that's\n> not significantly different than the results using one process.\n\nPlease run a profile. Compile postgres with CFLAGS='-O2 -fno-omit-frame-pointer'\nas an argument to configure. That'll allow us to get a hierarchical profile.\n\nThen first do a plain cpu profile:\nperf record -g -a sleep 5\nperf report > somefile\n\nThen let's look at lock contention:\nperf record -g -a -e syscalls:sys_enter_semop sleep 5\nperf report > somefile\n\nand send the results.\n\nThanks,\n\nAndres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 13 Oct 2015 16:23:36 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Having some problems with concurrent COPY commands" }, { "msg_contents": "On Tue, Oct 13, 2015 at 7:23 AM, Andres Freund <[email protected]> wrote:\n> and send the results.\n\nWhelp, I'm an idiot. I can't account for how I did it, but I can only\nassume I didn't export my ports in the tests properly. I ran\neverything again and there's a marked difference between 9.3 and 9.4.\nThe parallel copy times still inflate, but only from 1.4s to 2.5s at 4\nprocs. 
Though it gets a bit dicey after that.\n\nI tried to see what the growth curve looks like, but the numbers are\nwildly inconsistent after 4 procs. Even at 6, it went anywhere from\n4.3 to 7s for each COPY, even while no checkpoint is running. COPY\ntime definitely increases with each additional process though, which\nis likely expected. I was hoping the lock improvements in 9.5 would\nimprove this area too, but performance is the same on 9.5 (yes I'm\nsure this time).\n\nI can still send the perfs, but I suspect they're not exceptionally\nuseful anymore. :)\n\nAs a side note, using INSERT instead scales almost exactly linearly.\nThis would be useful, except that INSERT is already at least a\nmagnitude slower than COPY. Hah.\n\n-- \nShaun Thomas\[email protected]\nhttp://bonesmoses.org/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 13 Oct 2015 12:33:11 -0700", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Having some problems with concurrent COPY commands" }, { "msg_contents": "On 14 October 2015 at 08:33, Shaun Thomas <[email protected]> wrote:\n\n> On Tue, Oct 13, 2015 at 7:23 AM, Andres Freund <[email protected]> wrote:\n> > and send the results.\n>\n> Whelp, I'm an idiot. I can't account for how I did it, but I can only\n> assume I didn't export my ports in the tests properly. I ran\n> everything again and there's a marked difference between 9.3 and 9.4.\n> The parallel copy times still inflate, but only from 1.4s to 2.5s at 4\n> procs. Though it gets a bit dicey after that.\n>\n>\n>\nDo the times still inflate in the same way if you perform the COPY before\nadding the indexes to the table?\n\n--\n David Rowley http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\n PostgreSQL Development, 24x7 Support, Training & Services\n\nOn 14 October 2015 at 08:33, Shaun Thomas <[email protected]> wrote:On Tue, Oct 13, 2015 at 7:23 AM, Andres Freund <[email protected]> wrote:\n> and send the results.\n\nWhelp, I'm an idiot. I can't account for how I did it, but I can only\nassume I didn't export my ports in the tests properly. I ran\neverything again and there's a marked difference between 9.3 and 9.4.\nThe parallel copy times still inflate, but only from 1.4s to 2.5s at 4\nprocs. Though it gets a bit dicey after that.\nDo the times still inflate in the same way if you perform the COPY before adding the indexes to the table?-- David Rowley                   http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 14 Oct 2015 10:44:28 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Having some problems with concurrent COPY commands" } ]
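David's question at the end of the thread — whether the times still inflate when the indexes are built after the load — can be tested with a small variation of the script above. This is only a sketch reusing the thread's table, index, and file names, not a measured result:

-- Create the tables bare, load concurrently, then index (shown for one table;
-- the same applies to test_copy2..test_copy4).
CREATE TABLE test_copy (
    id           SERIAL PRIMARY KEY,
    location     VARCHAR NOT NULL,
    reading      BIGINT NOT NULL,
    reading_date TIMESTAMP NOT NULL
);

COPY test_copy FROM '/tmp/loadtest1.csv';

-- Building the two secondary indexes after the COPY avoids their per-row
-- maintenance during the load, which is where Heikki's profile put most of
-- the CPU time (b-tree comparisons).
CREATE INDEX idx_test_copy_location ON test_copy (location);
CREATE INDEX idx_test_copy_date ON test_copy (reading_date);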
[ { "msg_contents": "https://medium.com/@c2c/nodejs-a-quick-optimization-advice-7353b820c92e\r\n\r\n100% performance boost, for mysterious reasons that may be worth knowing about… \r\n\r\nGraeme Bell\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 13 Oct 2015 08:47:09 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": true, "msg_subject": "V8 optimisation (if you're using javascript in postgres)" } ]
[ { "msg_contents": "I have a very complex SELECT for which I use PREPARE and then EXECUTE.\nThe first five times I run \"explain (analyze, buffers) execute ...\" in\npsql, it takes about 1s. Starting with the sixth execution, the plan\nchanges and execution time doubles or more. The slower plan is used from\nthen on. If I DEALLOCATE the prepared statement and PREPARE again, the\ncycle is reset and I get five good executions again.\n\nThis behavior is utterly mystifying to me since I can see no reason for\nPostgres to change its plan after an arbitrary number of executions,\nespecially for the worse. When I did the experiment on a development\nsystem, Postgres was doing nothing apart from the interactively executed\nstatements. No data were inserted, no settings were changed and no other\nclients were active in any way. Is there some threshold for five or six\nexecutions of the same query?\n\nWithout delving into the plans themselves yet, what could possibly cause\nthe prepared statement to be re-planned? I have seen the same behavior\non Postgres 9.2.10 and 9.4.1.\n-- \nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 Oct 2015 03:38:55 -0400", "msg_from": "Jonathan Rogers <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT slows down on sixth execution" }, { "msg_contents": "Jonathan Rogers wrote:\r\n> I have a very complex SELECT for which I use PREPARE and then EXECUTE.\r\n> The first five times I run \"explain (analyze, buffers) execute ...\" in\r\n> psql, it takes about 1s. Starting with the sixth execution, the plan\r\n> changes and execution time doubles or more. The slower plan is used from\r\n> then on. If I DEALLOCATE the prepared statement and PREPARE again, the\r\n> cycle is reset and I get five good executions again.\r\n> \r\n> This behavior is utterly mystifying to me since I can see no reason for\r\n> Postgres to change its plan after an arbitrary number of executions,\r\n> especially for the worse. When I did the experiment on a development\r\n> system, Postgres was doing nothing apart from the interactively executed\r\n> statements. No data were inserted, no settings were changed and no other\r\n> clients were active in any way. Is there some threshold for five or six\r\n> executions of the same query?\r\n> \r\n> Without delving into the plans themselves yet, what could possibly cause\r\n> the prepared statement to be re-planned? I have seen the same behavior\r\n> on Postgres 9.2.10 and 9.4.1.\r\n\r\nYou are encountering \"custom plans\", introduced in 9.2.\r\n\r\nWhen a statement with parameters is executed, PostgreSQL will not only generate\r\na generic plan, but for the first 5 executions it will substitute the arguments\r\nand generate and execute a custom plan for that.\r\n\r\nAfter 5 executions, the cost of the generic plan is compared to the average\r\nof the costs of the custom plans. If the cost is less, the generic plan will\r\nbe used from that point on. 
If the cost is more, a custom plan will be used.\r\n\r\nSo what you encounter is probably caused by bad estimates for either\r\nthe custom plan or the generic plan.\r\n\r\nLook at the EXPLAIN ANALYZE output for both the custom plan (one of the\r\nfirst five executions) and the generic plan (the one used from the sixth\r\ntime on) and see if you can find and fix the cause for the misestimate.\r\n\r\nOther than that, you could stop using prepared statements, but that is\r\nprobably not the optimal solution.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 Oct 2015 09:00:03 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "Hi\n\n2015-10-14 9:38 GMT+02:00 Jonathan Rogers <[email protected]>:\n\n> I have a very complex SELECT for which I use PREPARE and then EXECUTE.\n> The first five times I run \"explain (analyze, buffers) execute ...\" in\n> psql, it takes about 1s. Starting with the sixth execution, the plan\n> changes and execution time doubles or more. The slower plan is used from\n> then on. If I DEALLOCATE the prepared statement and PREPARE again, the\n> cycle is reset and I get five good executions again.\n>\n> This behavior is utterly mystifying to me since I can see no reason for\n> Postgres to change its plan after an arbitrary number of executions,\n> especially for the worse. When I did the experiment on a development\n> system, Postgres was doing nothing apart from the interactively executed\n> statements. No data were inserted, no settings were changed and no other\n> clients were active in any way. Is there some threshold for five or six\n> executions of the same query?\n>\n\nyes, there is. PostgreSQL try to run custom plans five times (optimized for\nspecific parameters) and then compare average cost with cost of generic\nplan. If generic plan is cheaper, then PostgreSQL will use generic plan\n(that is optimized for most common value (not for currently used value)).\n\nsee\nhttps://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c\n, function choose_custom_plan\n\nWhat I know, this behave isn't possible to change from outside. Shouldn't\nbe hard to write a extension for own PREPARE function, that set\nCURSOR_OPT_CUSTOM_PLAN option\n\nRegards\n\nPavel\n\n\n>\n> Without delving into the plans themselves yet, what could possibly cause\n> the prepared statement to be re-planned? I have seen the same behavior\n> on Postgres 9.2.10 and 9.4.1.\n> --\n> Jonathan Rogers\n> Socialserve.com by Emphasys Software\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi2015-10-14 9:38 GMT+02:00 Jonathan Rogers <[email protected]>:I have a very complex SELECT for which I use PREPARE and then EXECUTE.\nThe first five times I run \"explain (analyze, buffers) execute ...\" in\npsql, it takes about 1s. Starting with the sixth execution, the plan\nchanges and execution time doubles or more. The slower plan is used from\nthen on. 
If I DEALLOCATE the prepared statement and PREPARE again, the\ncycle is reset and I get five good executions again.\n\nThis behavior is utterly mystifying to me since I can see no reason for\nPostgres to change its plan after an arbitrary number of executions,\nespecially for the worse. When I did the experiment on a development\nsystem, Postgres was doing nothing apart from the interactively executed\nstatements. No data were inserted, no settings were changed and no other\nclients were active in any way. Is there some threshold for five or six\nexecutions of the same query?yes, there is. PostgreSQL try to run custom plans five times (optimized for specific parameters) and then compare average cost with cost of generic plan. If generic plan is cheaper, then PostgreSQL will use generic plan (that is optimized for most common value (not for currently used value)). see https://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c  , function choose_custom_planWhat I know, this behave isn't possible to change from outside. Shouldn't be hard to write a extension for own PREPARE function, that set CURSOR_OPT_CUSTOM_PLAN optionRegardsPavel \n\nWithout delving into the plans themselves yet, what could possibly cause\nthe prepared statement to be re-planned? I have seen the same behavior\non Postgres 9.2.10 and 9.4.1.\n--\nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 14 Oct 2015 11:01:43 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "On 10/14/2015 05:00 AM, Albe Laurenz wrote:\n> Jonathan Rogers wrote:\n>> I have a very complex SELECT for which I use PREPARE and then EXECUTE.\n>> The first five times I run \"explain (analyze, buffers) execute ...\" in\n>> psql, it takes about 1s. Starting with the sixth execution, the plan\n>> changes and execution time doubles or more. The slower plan is used from\n>> then on. If I DEALLOCATE the prepared statement and PREPARE again, the\n>> cycle is reset and I get five good executions again.\n>>\n>> This behavior is utterly mystifying to me since I can see no reason for\n>> Postgres to change its plan after an arbitrary number of executions,\n>> especially for the worse. When I did the experiment on a development\n>> system, Postgres was doing nothing apart from the interactively executed\n>> statements. No data were inserted, no settings were changed and no other\n>> clients were active in any way. Is there some threshold for five or six\n>> executions of the same query?\n>>\n>> Without delving into the plans themselves yet, what could possibly cause\n>> the prepared statement to be re-planned? I have seen the same behavior\n>> on Postgres 9.2.10 and 9.4.1.\n> \n> You are encountering \"custom plans\", introduced in 9.2.\n> \n> When a statement with parameters is executed, PostgreSQL will not only generate\n> a generic plan, but for the first 5 executions it will substitute the arguments\n> and generate and execute a custom plan for that.\n> \n> After 5 executions, the cost of the generic plan is compared to the average\n> of the costs of the custom plans. If the cost is less, the generic plan will\n> be used from that point on. 
If the cost is more, a custom plan will be used.\n> \n> So what you encounter is probably caused by bad estimates for either\n> the custom plan or the generic plan.\n\nThanks. That does explain what I've seen.\n\n> \n> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the\n> first five executions) and the generic plan (the one used from the sixth\n> time on) and see if you can find and fix the cause for the misestimate.\n\nYes, I have been looking at both plans and can see where they diverge.\nHow could I go about figuring out why Postgres fails to see the large\ndifference in plan execution time? I use exactly the same parameters\nevery time I execute the prepared statement, so how would Postgres come\nto think that those are not the norm?\n\n> \n> Other than that, you could stop using prepared statements, but that is\n> probably not the optimal solution.\n\nThis is probably what I'll end up doing. The statement preparation is\nthe result of a custom layer that does so universally and I'll probably\njust turn that feature off.\n\n-- \nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 Oct 2015 11:28:29 -0400", "msg_from": "Jonathan Rogers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "Jonathan Rogers wrote:\r\n>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the\r\n>> first five executions) and the generic plan (the one used from the sixth\r\n>> time on) and see if you can find and fix the cause for the misestimate.\r\n> \r\n> Yes, I have been looking at both plans and can see where they diverge.\r\n> How could I go about figuring out why Postgres fails to see the large\r\n> difference in plan execution time? 
I use exactly the same parameters\r\n> every time I execute the prepared statement, so how would Postgres come\r\n> to think that those are not the norm?\r\n\r\nPostgreSQL does not consider the actual query execution time, it only\r\ncompares its estimates for there general and the custom plan.\r\nAlso, it does not keep track of the parameter values you supply,\r\nonly of the average custom plan query cost estimate.\r\n\r\nThe problem is either that the planner underestimates the cost of\r\nthe generic plan or overestimates the cost of the custom plans.\r\n\r\nIf you look at the EXPLAIN ANALYZE outputs (probably with\r\nhttp://explain.depesz.com ), are there any row count estimates that\r\ndiffer significantly from reality?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 16 Oct 2015 12:37:56 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "On 10/16/2015 08:37 AM, Albe Laurenz wrote:\n> Jonathan Rogers wrote:\n>>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the\n>>> first five executions) and the generic plan (the one used from the sixth\n>>> time on) and see if you can find and fix the cause for the misestimate.\n>>\n>> Yes, I have been looking at both plans and can see where they diverge.\n>> How could I go about figuring out why Postgres fails to see the large\n>> difference in plan execution time? I use exactly the same parameters\n>> every time I execute the prepared statement, so how would Postgres come\n>> to think that those are not the norm?\n> \n> PostgreSQL does not consider the actual query execution time, it only\n> compares its estimates for there general and the custom plan.\n> Also, it does not keep track of the parameter values you supply,\n> only of the average custom plan query cost estimate.\n\nOK, that makes more sense then. It's somewhat tedious for the purpose of\ntesting to execute a prepared statement six times to see the plan which\nneeds to be optimized. Unfortunately, there doesn't seem to be any way\nto force use of a generic plan in SQL based on Pavel Stehule's reply.\n\n> \n> The problem is either that the planner underestimates the cost of\n> the generic plan or overestimates the cost of the custom plans.\n> \n> If you look at the EXPLAIN ANALYZE outputs (probably with\n> http://explain.depesz.com ), are there any row count estimates that\n> differ significantly from reality?\n\nNow that I've read the help about \"rows x\" to understand what it means,\nI can see that while both plans underestimate returned rows, the generic\none underestimates them by a much larger factor. In this case, the\nsolution is to avoid preparing the query to ensure a custom plan is used\nevery time.\n\nSince the planner is significantly underestimating row counts even when\nmaking custom plans, I will continue to try to improve the planner's\ninformation. My default_statistics_target is currently 500. 
I suppose I\nshould experiment with increasing it for certain columns.\n\nThanks for the pointers.\n\n-- \nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 16 Oct 2015 22:14:37 -0400", "msg_from": "Jonathan Rogers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "On 10/14/2015 05:01 AM, Pavel Stehule wrote:\n> Hi\n> \n> 2015-10-14 9:38 GMT+02:00 Jonathan Rogers <[email protected]\n> <mailto:[email protected]>>:\n> \n> I have a very complex SELECT for which I use PREPARE and then EXECUTE.\n> The first five times I run \"explain (analyze, buffers) execute ...\" in\n> psql, it takes about 1s. Starting with the sixth execution, the plan\n> changes and execution time doubles or more. The slower plan is used from\n> then on. If I DEALLOCATE the prepared statement and PREPARE again, the\n> cycle is reset and I get five good executions again.\n> \n> This behavior is utterly mystifying to me since I can see no reason for\n> Postgres to change its plan after an arbitrary number of executions,\n> especially for the worse. When I did the experiment on a development\n> system, Postgres was doing nothing apart from the interactively executed\n> statements. No data were inserted, no settings were changed and no other\n> clients were active in any way. Is there some threshold for five or six\n> executions of the same query?\n> \n> \n> yes, there is. PostgreSQL try to run custom plans five times (optimized\n> for specific parameters) and then compare average cost with cost of\n> generic plan. If generic plan is cheaper, then PostgreSQL will use\n> generic plan (that is optimized for most common value (not for currently\n> used value)).\n> \n> see\n> https://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c \n> , function choose_custom_plan\n> \n> What I know, this behave isn't possible to change from outside.\n> Shouldn't be hard to write a extension for own PREPARE function, that\n> set CURSOR_OPT_CUSTOM_PLAN option\n\nThanks for the link. I can see the hard-coded \"5\" right there. I looked\nin the docs a bit and found the server C function \"SPI_prepare_cursor\"\nwhich allows explicit selection of a custom or generic plan. However, if\nI understand you correctly, there is currently no SQL interface to\nexplicitly control what type of plan is used.\n\nSo, the solution for my particular query is to avoid preparing it,\nensuring it gets a custom plan every time. 
The decision to prepare it\ncame from a client-side layer which defaults to preparing everything\nrather than any specific reason and we're now reconsidering that policy.\n\n-- \nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 16 Oct 2015 22:29:56 -0400", "msg_from": "Jonathan Rogers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "2015-10-17 4:29 GMT+02:00 Jonathan Rogers <[email protected]>:\n\n> On 10/14/2015 05:01 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > 2015-10-14 9:38 GMT+02:00 Jonathan Rogers <[email protected]\n> > <mailto:[email protected]>>:\n> >\n> > I have a very complex SELECT for which I use PREPARE and then\n> EXECUTE.\n> > The first five times I run \"explain (analyze, buffers) execute ...\"\n> in\n> > psql, it takes about 1s. Starting with the sixth execution, the plan\n> > changes and execution time doubles or more. The slower plan is used\n> from\n> > then on. If I DEALLOCATE the prepared statement and PREPARE again,\n> the\n> > cycle is reset and I get five good executions again.\n> >\n> > This behavior is utterly mystifying to me since I can see no reason\n> for\n> > Postgres to change its plan after an arbitrary number of executions,\n> > especially for the worse. When I did the experiment on a development\n> > system, Postgres was doing nothing apart from the interactively\n> executed\n> > statements. No data were inserted, no settings were changed and no\n> other\n> > clients were active in any way. Is there some threshold for five or\n> six\n> > executions of the same query?\n> >\n> >\n> > yes, there is. PostgreSQL try to run custom plans five times (optimized\n> > for specific parameters) and then compare average cost with cost of\n> > generic plan. If generic plan is cheaper, then PostgreSQL will use\n> > generic plan (that is optimized for most common value (not for currently\n> > used value)).\n> >\n> > see\n> >\n> https://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c\n> > , function choose_custom_plan\n> >\n> > What I know, this behave isn't possible to change from outside.\n> > Shouldn't be hard to write a extension for own PREPARE function, that\n> > set CURSOR_OPT_CUSTOM_PLAN option\n>\n> Thanks for the link. I can see the hard-coded \"5\" right there. I looked\n> in the docs a bit and found the server C function \"SPI_prepare_cursor\"\n> which allows explicit selection of a custom or generic plan. However, if\n> I understand you correctly, there is currently no SQL interface to\n> explicitly control what type of plan is used.\n>\n> So, the solution for my particular query is to avoid preparing it,\n> ensuring it gets a custom plan every time. The decision to prepare it\n> came from a client-side layer which defaults to preparing everything\n> rather than any specific reason and we're now reconsidering that policy.\n>\n\nI was not 100% correct - you can use a parametrized queries via PQexecParams\nhttp://www.postgresql.org/docs/9.4/static/libpq-exec.html\n\nIf this function is accessable from your environment, then you should to\nuse it. It is protection against SQL injection, and it doesn't use generic\nplan. 
For your case the using of prepared statements is contra productive.\n\nAny other solution is client side prepared statements - lot of API used by\ndefault.\n\nRegards\n\nPavel\n\n\n\n\n>\n> --\n> Jonathan Rogers\n> Socialserve.com by Emphasys Software\n> [email protected]\n>\n\n2015-10-17 4:29 GMT+02:00 Jonathan Rogers <[email protected]>:On 10/14/2015 05:01 AM, Pavel Stehule wrote:\n> Hi\n>\n> 2015-10-14 9:38 GMT+02:00 Jonathan Rogers <[email protected]\n> <mailto:[email protected]>>:\n>\n>     I have a very complex SELECT for which I use PREPARE and then EXECUTE.\n>     The first five times I run \"explain (analyze, buffers) execute ...\" in\n>     psql, it takes about 1s. Starting with the sixth execution, the plan\n>     changes and execution time doubles or more. The slower plan is used from\n>     then on. If I DEALLOCATE the prepared statement and PREPARE again, the\n>     cycle is reset and I get five good executions again.\n>\n>     This behavior is utterly mystifying to me since I can see no reason for\n>     Postgres to change its plan after an arbitrary number of executions,\n>     especially for the worse. When I did the experiment on a development\n>     system, Postgres was doing nothing apart from the interactively executed\n>     statements. No data were inserted, no settings were changed and no other\n>     clients were active in any way. Is there some threshold for five or six\n>     executions of the same query?\n>\n>\n> yes, there is. PostgreSQL try to run custom plans five times (optimized\n> for specific parameters) and then compare average cost with cost of\n> generic plan. If generic plan is cheaper, then PostgreSQL will use\n> generic plan (that is optimized for most common value (not for currently\n> used value)).\n>\n> see\n> https://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c\n> , function choose_custom_plan\n>\n> What I know, this behave isn't possible to change from outside.\n> Shouldn't be hard to write a extension for own PREPARE function, that\n> set CURSOR_OPT_CUSTOM_PLAN option\n\nThanks for the link. I can see the hard-coded \"5\" right there. I looked\nin the docs a bit and found the server C function \"SPI_prepare_cursor\"\nwhich allows explicit selection of a custom or generic plan. However, if\nI understand you correctly, there is currently no SQL interface to\nexplicitly control what type of plan is used.\n\nSo, the solution for my particular query is to avoid preparing it,\nensuring it gets a custom plan every time. The decision to prepare it\ncame from a client-side layer which defaults to preparing everything\nrather than any specific reason and we're now reconsidering that policy.I was not 100% correct - you can use a parametrized queries via PQexecParams http://www.postgresql.org/docs/9.4/static/libpq-exec.htmlIf this function is accessable from your environment, then you should to use it. It is protection against SQL injection, and it doesn't use generic plan. 
For your case the using of prepared statements is contra productive.Any other solution is client side prepared statements - lot of API used by default.RegardsPavel  \n\n--\nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]", "msg_date": "Sat, 17 Oct 2015 05:51:44 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "On 2015-10-14 03:00, Albe Laurenz wrote:\n> \n> You are encountering \"custom plans\", introduced in 9.2.\n> \n> When a statement with parameters is executed, PostgreSQL will not only generate\n> a generic plan, but for the first 5 executions it will substitute the arguments\n> and generate and execute a custom plan for that.\n\nWow! Thanks. I feel this should be documented a bit better.\n\nShouldn't this be explained in at least as much details as in your\nexplanation, in the sql-prepare document?\n\nYves.\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 17 Oct 2015 07:29:57 -0600", "msg_from": "Yves Dorfsman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "2015-10-17 15:29 GMT+02:00 Yves Dorfsman <[email protected]>:\n\n> On 2015-10-14 03:00, Albe Laurenz wrote:\n> >\n> > You are encountering \"custom plans\", introduced in 9.2.\n> >\n> > When a statement with parameters is executed, PostgreSQL will not only\n> generate\n> > a generic plan, but for the first 5 executions it will substitute the\n> arguments\n> > and generate and execute a custom plan for that.\n>\n> Wow! Thanks. I feel this should be documented a bit better.\n>\n> Shouldn't this be explained in at least as much details as in your\n> explanation, in the sql-prepare document?\n>\n\nprobably - some section about benefits and risks can be useful - but it is\ntask for somebody with better English than is mine :)\n\nRegards\n\nPavel\n\n\n>\n> Yves.\n> --\n> http://yves.zioup.com\n> gpg: 4096R/32B0F416\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2015-10-17 15:29 GMT+02:00 Yves Dorfsman <[email protected]>:On 2015-10-14 03:00, Albe Laurenz wrote:\n>\n> You are encountering \"custom plans\", introduced in 9.2.\n>\n> When a statement with parameters is executed, PostgreSQL will not only generate\n> a generic plan, but for the first 5 executions it will substitute the arguments\n> and generate and execute a custom plan for that.\n\nWow! Thanks. 
I feel this should be documented a bit better.\n\nShouldn't this be explained in at least as much details as in your\nexplanation, in the sql-prepare document?probably - some section about benefits and risks can be useful - but it is task for somebody with better English than is mine :)RegardsPavel \n\nYves.\n--\nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 17 Oct 2015 16:33:23 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "On Fri, Oct 16, 2015 at 9:14 PM, Jonathan Rogers\n<[email protected]> wrote:\n> On 10/16/2015 08:37 AM, Albe Laurenz wrote:\n>> Jonathan Rogers wrote:\n>>>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the\n>>>> first five executions) and the generic plan (the one used from the sixth\n>>>> time on) and see if you can find and fix the cause for the misestimate.\n>>>\n>>> Yes, I have been looking at both plans and can see where they diverge.\n>>> How could I go about figuring out why Postgres fails to see the large\n>>> difference in plan execution time? I use exactly the same parameters\n>>> every time I execute the prepared statement, so how would Postgres come\n>>> to think that those are not the norm?\n>>\n>> PostgreSQL does not consider the actual query execution time, it only\n>> compares its estimates for there general and the custom plan.\n>> Also, it does not keep track of the parameter values you supply,\n>> only of the average custom plan query cost estimate.\n>\n> OK, that makes more sense then. It's somewhat tedious for the purpose of\n> testing to execute a prepared statement six times to see the plan which\n> needs to be optimized. Unfortunately, there doesn't seem to be any way\n> to force use of a generic plan in SQL based on Pavel Stehule's reply.\n\nYeah. In the worst case, a query can fail in the generic plan because\nit depends on the arguments for dubious things like\n\nSELECT\n CASE WHEN _arg = 'TEXT' THEN foo::text ...\n\nI'm ok with why those things must fail, but it'd sure be nice to be\nable to control the switch to the generic plan.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 19 Oct 2015 11:47:30 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "Jonathan Rogers schrieb am 17.10.2015 um 04:14:\n>>> Yes, I have been looking at both plans and can see where they diverge.\n>>> How could I go about figuring out why Postgres fails to see the large\n>>> difference in plan execution time? I use exactly the same parameters\n>>> every time I execute the prepared statement, so how would Postgres come\n>>> to think that those are not the norm?\n>>\n>> PostgreSQL does not consider the actual query execution time, it only\n>> compares its estimates for there general and the custom plan.\n>> Also, it does not keep track of the parameter values you supply,\n>> only of the average custom plan query cost estimate.\n> \n> OK, that makes more sense then. It's somewhat tedious for the purpose of\n> testing to execute a prepared statement six times to see the plan which\n> needs to be optimized. 
Unfortunately, there doesn't seem to be any way\n> to force use of a generic plan in SQL based on Pavel Stehule's reply.\n\n\nIf you are using JDBC the threshold can be changed:\n\n https://jdbc.postgresql.org/documentation/94/server-prepare.html\n https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGStatement.html#setPrepareThreshold%28int%29\n\nAs I don't think JDBC is using anything \"exotic\" I would be surprised if this \ncan't be changed with other programming environments also.\n\nThomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 20 Oct 2015 08:55:02 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "2015-10-20 8:55 GMT+02:00 Thomas Kellerer <[email protected]>:\n\n> Jonathan Rogers schrieb am 17.10.2015 um 04:14:\n> >>> Yes, I have been looking at both plans and can see where they diverge.\n> >>> How could I go about figuring out why Postgres fails to see the large\n> >>> difference in plan execution time? I use exactly the same parameters\n> >>> every time I execute the prepared statement, so how would Postgres come\n> >>> to think that those are not the norm?\n> >>\n> >> PostgreSQL does not consider the actual query execution time, it only\n> >> compares its estimates for there general and the custom plan.\n> >> Also, it does not keep track of the parameter values you supply,\n> >> only of the average custom plan query cost estimate.\n> >\n> > OK, that makes more sense then. It's somewhat tedious for the purpose of\n> > testing to execute a prepared statement six times to see the plan which\n> > needs to be optimized. Unfortunately, there doesn't seem to be any way\n> > to force use of a generic plan in SQL based on Pavel Stehule's reply.\n>\n>\n> If you are using JDBC the threshold can be changed:\n>\n> https://jdbc.postgresql.org/documentation/94/server-prepare.html\n>\n> https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGStatement.html#setPrepareThreshold%28int%29\n>\n> As I don't think JDBC is using anything \"exotic\" I would be surprised if\n> this\n> can't be changed with other programming environments also.\n>\n\nThis is some different - you can switch between server side prepared\nstatements and client side prepared statements in JDBC. It doesn't change\nthe behave of server side prepared statements in Postgres.\n\nPavel\n\n\n>\n> Thomas\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2015-10-20 8:55 GMT+02:00 Thomas Kellerer <[email protected]>:Jonathan Rogers schrieb am 17.10.2015 um 04:14:\n>>> Yes, I have been looking at both plans and can see where they diverge.\n>>> How could I go about figuring out why Postgres fails to see the large\n>>> difference in plan execution time? I use exactly the same parameters\n>>> every time I execute the prepared statement, so how would Postgres come\n>>> to think that those are not the norm?\n>>\n>> PostgreSQL does not consider the actual query execution time, it only\n>> compares its estimates for there general and the custom plan.\n>> Also, it does not keep track of the parameter values you supply,\n>> only of the average custom plan query cost estimate.\n>\n> OK, that makes more sense then. 
It's somewhat tedious for the purpose of\n> testing to execute a prepared statement six times to see the plan which\n> needs to be optimized. Unfortunately, there doesn't seem to be any way\n> to force use of a generic plan in SQL based on Pavel Stehule's reply.\n\n\nIf you are using JDBC the threshold can be changed:\n\n   https://jdbc.postgresql.org/documentation/94/server-prepare.html\n   https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGStatement.html#setPrepareThreshold%28int%29\n\nAs I don't think JDBC is using anything \"exotic\" I would be surprised if this\ncan't be changed with other programming environments also.This is some different - you can switch between server side prepared statements and client side prepared statements in JDBC.  It doesn't change the behave of server side prepared statements in Postgres.Pavel \n\nThomas\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 20 Oct 2015 09:45:44 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "On 10/20/2015 03:45 AM, Pavel Stehule wrote:\n> \n> \n> 2015-10-20 8:55 GMT+02:00 Thomas Kellerer <[email protected]\n> <mailto:[email protected]>>:\n> \n> Jonathan Rogers schrieb am 17.10.2015 um 04:14:\n> >>> Yes, I have been looking at both plans and can see where they\n> diverge.\n> >>> How could I go about figuring out why Postgres fails to see the\n> large\n> >>> difference in plan execution time? I use exactly the same parameters\n> >>> every time I execute the prepared statement, so how would\n> Postgres come\n> >>> to think that those are not the norm?\n> >>\n> >> PostgreSQL does not consider the actual query execution time, it only\n> >> compares its estimates for there general and the custom plan.\n> >> Also, it does not keep track of the parameter values you supply,\n> >> only of the average custom plan query cost estimate.\n> >\n> > OK, that makes more sense then. It's somewhat tedious for the\n> purpose of\n> > testing to execute a prepared statement six times to see the plan\n> which\n> > needs to be optimized. Unfortunately, there doesn't seem to be any way\n> > to force use of a generic plan in SQL based on Pavel Stehule's reply.\n> \n> \n> If you are using JDBC the threshold can be changed:\n> \n> https://jdbc.postgresql.org/documentation/94/server-prepare.html\n> \n> https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGStatement.html#setPrepareThreshold%28int%29\n> \n> As I don't think JDBC is using anything \"exotic\" I would be\n> surprised if this\n> can't be changed with other programming environments also.\n> \n> \n> This is some different - you can switch between server side prepared\n> statements and client side prepared statements in JDBC. It doesn't\n> change the behave of server side prepared statements in Postgres.\n\nI am using psycopg2 with a layer on top which can automatically PREPARE\nstatements, so I guess that implements something similar to the JDBC\ninterface. 
I did solve my problem by turning off the automatic preparation.\n\n-- \nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 20 Oct 2015 10:48:22 -0400", "msg_from": "Jonathan Rogers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT slows down on sixth execution" }, { "msg_contents": "2015-10-20 16:48 GMT+02:00 Jonathan Rogers <[email protected]>:\n\n> On 10/20/2015 03:45 AM, Pavel Stehule wrote:\n> >\n> >\n> > 2015-10-20 8:55 GMT+02:00 Thomas Kellerer <[email protected]\n> > <mailto:[email protected]>>:\n> >\n> > Jonathan Rogers schrieb am 17.10.2015 um 04:14:\n> > >>> Yes, I have been looking at both plans and can see where they\n> > diverge.\n> > >>> How could I go about figuring out why Postgres fails to see the\n> > large\n> > >>> difference in plan execution time? I use exactly the same\n> parameters\n> > >>> every time I execute the prepared statement, so how would\n> > Postgres come\n> > >>> to think that those are not the norm?\n> > >>\n> > >> PostgreSQL does not consider the actual query execution time, it\n> only\n> > >> compares its estimates for there general and the custom plan.\n> > >> Also, it does not keep track of the parameter values you supply,\n> > >> only of the average custom plan query cost estimate.\n> > >\n> > > OK, that makes more sense then. It's somewhat tedious for the\n> > purpose of\n> > > testing to execute a prepared statement six times to see the plan\n> > which\n> > > needs to be optimized. Unfortunately, there doesn't seem to be any\n> way\n> > > to force use of a generic plan in SQL based on Pavel Stehule's\n> reply.\n> >\n> >\n> > If you are using JDBC the threshold can be changed:\n> >\n> > https://jdbc.postgresql.org/documentation/94/server-prepare.html\n> >\n> >\n> https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGStatement.html#setPrepareThreshold%28int%29\n> >\n> > As I don't think JDBC is using anything \"exotic\" I would be\n> > surprised if this\n> > can't be changed with other programming environments also.\n> >\n> >\n> > This is some different - you can switch between server side prepared\n> > statements and client side prepared statements in JDBC. It doesn't\n> > change the behave of server side prepared statements in Postgres.\n>\n> I am using psycopg2 with a layer on top which can automatically PREPARE\n> statements, so I guess that implements something similar to the JDBC\n> interface. I did solve my problem by turning off the automatic preparation.\n>\n\nyes, you did off server side prepared statements.\n\nPavel\n\n\n>\n> --\n> Jonathan Rogers\n> Socialserve.com by Emphasys Software\n> [email protected]\n>\n\n2015-10-20 16:48 GMT+02:00 Jonathan Rogers <[email protected]>:On 10/20/2015 03:45 AM, Pavel Stehule wrote:\n>\n>\n> 2015-10-20 8:55 GMT+02:00 Thomas Kellerer <[email protected]\n> <mailto:[email protected]>>:\n>\n>     Jonathan Rogers schrieb am 17.10.2015 um 04:14:\n>     >>> Yes, I have been looking at both plans and can see where they\n>     diverge.\n>     >>> How could I go about figuring out why Postgres fails to see the\n>     large\n>     >>> difference in plan execution time? 
I use exactly the same parameters\n>     >>> every time I execute the prepared statement, so how would\n>     Postgres come\n>     >>> to think that those are not the norm?\n>     >>\n>     >> PostgreSQL does not consider the actual query execution time, it only\n>     >> compares its estimates for there general and the custom plan.\n>     >> Also, it does not keep track of the parameter values you supply,\n>     >> only of the average custom plan query cost estimate.\n>     >\n>     > OK, that makes more sense then. It's somewhat tedious for the\n>     purpose of\n>     > testing to execute a prepared statement six times to see the plan\n>     which\n>     > needs to be optimized. Unfortunately, there doesn't seem to be any way\n>     > to force use of a generic plan in SQL based on Pavel Stehule's reply.\n>\n>\n>     If you are using JDBC the threshold can be changed:\n>\n>        https://jdbc.postgresql.org/documentation/94/server-prepare.html\n>\n>      https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGStatement.html#setPrepareThreshold%28int%29\n>\n>     As I don't think JDBC is using anything \"exotic\" I would be\n>     surprised if this\n>     can't be changed with other programming environments also.\n>\n>\n> This is some different - you can switch between server side prepared\n> statements and client side prepared statements in JDBC.  It doesn't\n> change the behave of server side prepared statements in Postgres.\n\nI am using psycopg2 with a layer on top which can automatically PREPARE\nstatements, so I guess that implements something similar to the JDBC\ninterface. I did solve my problem by turning off the automatic preparation.yes, you did off server side prepared statements.Pavel \n\n--\nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]", "msg_date": "Tue, 20 Oct 2015 17:01:54 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT slows down on sixth execution" } ]
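For reference, the plan switch described in this thread is easy to reproduce from psql. The following sketch is illustrative only: the table, column, and parameter names are hypothetical and not taken from the original report.

PREPARE q(int) AS
    SELECT count(*) FROM demo_table WHERE category_id = $1;

-- The first five EXECUTEs are planned for the actual parameter value
-- (custom plans). Starting with the sixth, the cached generic plan is
-- chosen if its estimated cost is no higher than the average estimated
-- cost of the custom plans, which is where the plan can suddenly change.
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);  -- plan may switch here

-- Dropping the prepared statement resets the cycle, as observed above.
DEALLOCATE q;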
[ { "msg_contents": "I have a table with about 900 partitions and 67 million rows.\nAnd I found that my query takes too much time!\n------------------------------------------------------------------------------\nexplain ( ANALYZE,VERBOSE,BUFFERS )\n select report_id from cars.\"all\"\n WHERE\n report_datetime = '2015-10-14 00:02:02+03'::timestamptz AND\n report_uuid = 'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid\n------------------------------------------------------------------------------\nAppend (cost=0.00..4.43 rows=2 width=4) (actual time=0.023..0.023 \nrows=1 loops=1)\n Buffers: shared hit=4\n -> Seq Scan on cars.all (cost=0.00..0.00 rows=1 width=4) (actual \ntime=0.002..0.002 rows=0 loops=1)\n Output: all.report_id\n Filter: ((all.report_datetime = '2015-10-14 \n00:02:02+03'::timestamp with time zone) AND (all.report_uuid = \n'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid))\n -> Index Scan using day_151014_uuid_idx on cars.day_151014 \n(cost=0.42..4.43 rows=1 width=4) (actual time=0.020..0.020 rows=1 loops=1)\n Output: day_151014.report_id\n Index Cond: (day_151014.report_uuid = \n'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid)\n Filter: (day_151014.report_datetime = '2015-10-14 \n00:02:02+03'::timestamp with time zone)\n Buffers: shared hit=4\nTotal runtime: 0.096 ms\n------------------------------------------------------------------------------\n\nThis query takes about 500 ms in total, but the same query against a single partition takes only 12 ms:\n\nselect report_id from cars.day_151014\n WHERE\n report_datetime = '2015-10-14 00:02:02+03'::timestamptz AND\n report_uuid = 'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid\n------------------------------------------------------------------------------\n explain ( ANALYZE,VERBOSE,BUFFERS )\n select report_id from cars.day_151014\n WHERE\n report_datetime = '2015-10-14 00:02:02+03'::timestamptz AND\n report_uuid = 'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid\n------------------------------------------------------------------------------\nIndex Scan using day_151014_uuid_idx on cars.day_151014 (cost=0.42..4.43 \nrows=1 width=4) (actual time=0.022..0.023 rows=1 loops=1)\n Output: report_id\n Index Cond: (day_151014.report_uuid = \n'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid)\n Filter: (day_151014.report_datetime = '2015-10-14 \n00:02:02+03'::timestamp with time zone)\n Buffers: shared hit=4\nTotal runtime: 0.045 ms\n------------------------------------------------------------------------------\n\nThe query plans seem fine, so why is the actual query so slow?\n\np.s. 
PostgreSQL 9.3.9 x86_64 on Oracle Linux Server release 6.6 \n(3.8.13-68.3.2.el6uek.x86_64)\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 17 Oct 2015 01:02:54 +0300", "msg_from": "Vladimir Yavoskiy <[email protected]>", "msg_from_op": true, "msg_subject": "query partitioned table is very slow" }, { "msg_contents": "Vladimir Yavoskiy <[email protected]> writes:\n> I have a table with about 900 partitions and 67 million rows.\n> And I found that my query takes too much time!\n\nThat's about 100X partitions too many for that amount of rows.\nPartitions are a good thing in small doses, otherwise planning\ntime will kill you.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 16 Oct 2015 19:02:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query partitioned table is very slow" } ]
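As a follow-up to the planning-overhead point above: 9.3 does not yet print a "Planning time" line in EXPLAIN output, but the effect can still be measured from psql. The sketch below reuses the table and column names from the report; everything else is illustrative.

-- Count the child partitions attached to the parent table.
SELECT count(*) AS n_partitions
FROM pg_inherits
WHERE inhparent = 'cars."all"'::regclass;

-- A bare EXPLAIN (without ANALYZE) spends its time almost entirely in
-- parsing and planning, so \timing exposes the planning cost: planning
-- over ~900 children should take far longer than planning against a
-- single child table.
\timing on
EXPLAIN select report_id from cars."all"
 WHERE report_datetime = '2015-10-14 00:02:02+03'::timestamptz AND
       report_uuid = 'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid;
EXPLAIN select report_id from cars.day_151014
 WHERE report_datetime = '2015-10-14 00:02:02+03'::timestamptz AND
       report_uuid = 'f6b08f92-0d5d-28b0-81c3-0c20c4ca3038'::uuid;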
[ { "msg_contents": "Version:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n\nQuery Plan\nhttp://explain.depesz.com/s/4s37\n\nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n\n\n\n\n\n\n\n\n\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.", "msg_date": "Tue, 20 Oct 2015 17:34:35 +0000", "msg_from": "Jamie Koceniak <[email protected]>", "msg_from_op": true, "msg_subject": "Recursive query performance issue" }, { "msg_contents": "Hi\n\n\n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n> Version:\n>\n>\n> -----------------------------------------------------------------------------------------------\n>\n> PostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n> 4.7.2-5) 4.7.2, 64-bit\n>\n>\n>\n> Query Plan\n>\n> http://explain.depesz.com/s/4s37\n>\n>\n>\n> Normally, this query takes around 200-300 ms to execute.\n>\n> However when several queries are run concurrently, query performance drops\n> to 30-60 seconds.\n>\n>\n>\n\nthere can be few reasons:\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n2. issues with cache stability - is there high IO load? You can try to\nincrease effective_cache_size (or decrease if you have not enough memory)\n\nRegards\n\nPavel\n\n\n>\n>\n\nHi2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n there can be few reasons:1. locking - are you sure, so your queries don't wait on locks?2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)RegardsPavel", "msg_date": "Wed, 21 Oct 2015 09:03:37 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "On Tue, Oct 20, 2015 at 12:34 PM, Jamie Koceniak\n<[email protected]> wrote:\n> Version:\n>\n> -----------------------------------------------------------------------------------------------\n>\n> PostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n> 4.7.2-5) 4.7.2, 64-bit\n>\n> Query Plan\n>\n> http://explain.depesz.com/s/4s37\n>\n> Normally, this query takes around 200-300 ms to execute.\n>\n> However when several queries are run concurrently, query performance drops\n> to 30-60 seconds.\n\nPlease define 'several'. Essential information here is a capture of\n'top' and possibly 'perf top'. 
Also if the problem is storage related\niostat can be very useful (or vmstat in a pinch)\n\nFYI you can use pgbench with -f mode to measure concurrency\nperformance of any query.\n\nThe very first thing to rule out is a storage bottleneck via measured\niowait. Assuming that's the case, this problem is interesting if:\n*) Scaling is much worse than it should be\n*) You can confirm this on more modern postgres (interesting problems\nare only interesting if they are unsolved)\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 21 Oct 2015 08:22:56 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "Hi Pavel,\r\n\r\nThanks for the reply.\r\n\r\n1. The queries aren’t waiting on any locks.\r\nThe query has a recursive join that uses a table with only 80k records and that table is not updated often.\r\n\r\n2. The I/O load was not high. CPU utilization was very high and load was very high.\r\nWe have a large effective_cache_size = 512GB (25% of total memory)\r\n\r\nThanks,\r\nJamie\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Wednesday, October 21, 2015 12:04 AM\r\nTo: Jamie Koceniak\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\nHi\r\n\r\n\r\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nVersion:\r\n-----------------------------------------------------------------------------------------------\r\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\r\n\r\nQuery Plan\r\nhttp://explain.depesz.com/s/4s37\r\n\r\nNormally, this query takes around 200-300 ms to execute.\r\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\r\n\r\n\r\nthere can be few reasons:\r\n1. locking - are you sure, so your queries don't wait on locks?\r\n2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\r\nRegards\r\nPavel\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nHi Pavel,\n \nThanks for the reply.\n \n1. The queries aren’t waiting on any locks.\r\n\nThe query has a recursive join that uses a table with only 80k records and that table is not updated often.\n \n2. The I/O load was not high. CPU utilization was very high and load was very high.\nWe have a large effective_cache_size = 512GB (25% of total memory)\n \nThanks,\nJamie\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 12:04 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\nHi\n\n\n\n \n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n \n\n\n\n \n\n\nthere can be few reasons:\n\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n\n2. issues with cache stability - is there high IO load? 
You can try to increase effective_cache_size (or decrease if you have not enough memory)\n\n\nRegards\n\n\nPavel", "msg_date": "Wed, 21 Oct 2015 17:55:14 +0000", "msg_from": "Jamie Koceniak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n> Hi Pavel,\n>\n>\n>\n> Thanks for the reply.\n>\n>\n>\n> 1. The queries aren’t waiting on any locks.\n>\n> The query has a recursive join that uses a table with only 80k records and\n> that table is not updated often.\n>\n>\n>\n> 2. The I/O load was not high. CPU utilization was very high and load was\n> very high.\n>\n> We have a large effective_cache_size = 512GB (25% of total memory)\n>\n\nso your server has 2TB RAM? It is not usual server - so this issue can be\npretty strange :(\n\nWhat is size of shared memory? Probably is significantly lower than\neffective_cache_size? Try to reduce effective cache size to be lower than\nshared buffers\n\nRegards\n\nPavel\n\n\n\n>\n>\n> Thanks,\n>\n> Jamie\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]]\n> *Sent:* Wednesday, October 21, 2015 12:04 AM\n> *To:* Jamie Koceniak\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Recursive query performance issue\n>\n>\n>\n> Hi\n>\n>\n>\n> 2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n>\n> Version:\n>\n>\n> -----------------------------------------------------------------------------------------------\n>\n> PostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n> 4.7.2-5) 4.7.2, 64-bit\n>\n>\n>\n> Query Plan\n>\n> http://explain.depesz.com/s/4s37\n>\n>\n>\n> Normally, this query takes around 200-300 ms to execute.\n>\n> However when several queries are run concurrently, query performance drops\n> to 30-60 seconds.\n>\n>\n>\n>\n>\n> there can be few reasons:\n>\n> 1. locking - are you sure, so your queries don't wait on locks?\n>\n> 2. issues with cache stability - is there high IO load? You can try to\n> increase effective_cache_size (or decrease if you have not enough memory)\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>\n\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nThanks for the reply.\n \n1. The queries aren’t waiting on any locks.\n\nThe query has a recursive join that uses a table with only 80k records and that table is not updated often.\n \n2. The I/O load was not high. CPU utilization was very high and load was very high.\nWe have a large effective_cache_size = 512GB (25% of total memory)so your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :( What is size of shared memory? Probably is significantly lower than effective_cache_size? 
Try to reduce effective cache size to be lower than shared buffersRegardsPavel \n \nThanks,\nJamie\n \nFrom: Pavel Stehule [mailto:[email protected]]\n\nSent: Wednesday, October 21, 2015 12:04 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\nHi\n\n\n\n \n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n \n\n\n\n \n\n\nthere can be few reasons:\n\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n\n2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\n\n\nRegards\n\n\nPavel", "msg_date": "Wed, 21 Oct 2015 20:23:43 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "Ok\r\n\r\ndf -h /dev/shm\r\nFilesystem Size Used Avail Use% Mounted on\r\ntmpfs 406G 0 406G 0% /run/shm\r\n\r\nOk I will try lowering it.\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Wednesday, October 21, 2015 11:24 AM\r\nTo: Jamie Koceniak\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\n\r\n\r\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nHi Pavel,\r\n\r\nThanks for the reply.\r\n\r\n1. The queries aren’t waiting on any locks.\r\nThe query has a recursive join that uses a table with only 80k records and that table is not updated often.\r\n\r\n2. The I/O load was not high. CPU utilization was very high and load was very high.\r\nWe have a large effective_cache_size = 512GB (25% of total memory)\r\n\r\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\r\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? Try to reduce effective cache size to be lower than shared buffers\r\nRegards\r\nPavel\r\n\r\n\r\n\r\nThanks,\r\nJamie\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]<mailto:[email protected]>]\r\nSent: Wednesday, October 21, 2015 12:04 AM\r\nTo: Jamie Koceniak\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\nHi\r\n\r\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nVersion:\r\n-----------------------------------------------------------------------------------------------\r\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\r\n\r\nQuery Plan\r\nhttp://explain.depesz.com/s/4s37\r\n\r\nNormally, this query takes around 200-300 ms to execute.\r\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\r\n\r\n\r\nthere can be few reasons:\r\n1. locking - are you sure, so your queries don't wait on locks?\r\n2. issues with cache stability - is there high IO load? 
You can try to increase effective_cache_size (or decrease if you have not enough memory)\r\nRegards\r\nPavel\r\n\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nOk\r\n\n \ndf -h /dev/shm\nFilesystem      Size  Used Avail Use% Mounted on\ntmpfs           406G     0  406G   0% /run/shm\n \nOk I will try lowering it.\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 11:24 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\n \n\n \n\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nThanks for the reply.\n \n1. The queries aren’t waiting on any locks.\r\n\nThe query has a recursive join that uses a table with only 80k records and that table is not updated\r\n often.\n \n2. The I/O load was not high. CPU utilization was very high and load was very high.\nWe have a large effective_cache_size = 512GB (25% of total memory)\n\n\n\n \n\n\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\r\n\n\n\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? Try to reduce effective cache size to be lower than shared buffers\n\n\nRegards\n\n\nPavel\n\n\n\r\n \n\n\n\n\n \nThanks,\nJamie\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 12:04 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\nHi\n\n \n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n \n\n\n\n \n\n\nthere can be few reasons:\n\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n\n2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\n\n\nRegards\n\n\nPavel", "msg_date": "Wed, 21 Oct 2015 18:40:23 +0000", "msg_from": "Jamie Koceniak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "Hi Pavel,\r\n\r\nOr were you referring to SHMMAX?\r\n\r\nThanks\r\n\r\nFrom: Jamie Koceniak\r\nSent: Wednesday, October 21, 2015 11:40 AM\r\nTo: 'Pavel Stehule'\r\nCc: [email protected]\r\nSubject: RE: [PERFORM] Recursive query performance issue\r\n\r\nOk\r\n\r\ndf -h /dev/shm\r\nFilesystem Size Used Avail Use% Mounted on\r\ntmpfs 406G 0 406G 0% /run/shm\r\n\r\nOk I will try lowering it.\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Wednesday, October 21, 2015 11:24 AM\r\nTo: Jamie Koceniak\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\n\r\n\r\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nHi Pavel,\r\n\r\nThanks for the reply.\r\n\r\n1. The queries aren’t waiting on any locks.\r\nThe query has a recursive join that uses a table with only 80k records and that table is not updated often.\r\n\r\n2. The I/O load was not high. 
CPU utilization was very high and load was very high.\r\nWe have a large effective_cache_size = 512GB (25% of total memory)\r\n\r\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\r\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? Try to reduce effective cache size to be lower than shared buffers\r\nRegards\r\nPavel\r\n\r\n\r\n\r\nThanks,\r\nJamie\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]<mailto:[email protected]>]\r\nSent: Wednesday, October 21, 2015 12:04 AM\r\nTo: Jamie Koceniak\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\nHi\r\n\r\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nVersion:\r\n-----------------------------------------------------------------------------------------------\r\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\r\n\r\nQuery Plan\r\nhttp://explain.depesz.com/s/4s37\r\n\r\nNormally, this query takes around 200-300 ms to execute.\r\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\r\n\r\n\r\nthere can be few reasons:\r\n1. locking - are you sure, so your queries don't wait on locks?\r\n2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\r\nRegards\r\nPavel\r\n\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nHi Pavel,\n \nOr were you referring to SHMMAX?\n \nThanks\n \n\n\nFrom: Jamie Koceniak\r\n\nSent: Wednesday, October 21, 2015 11:40 AM\nTo: 'Pavel Stehule'\nCc: [email protected]\nSubject: RE: [PERFORM] Recursive query performance issue\n\n\n \nOk\r\n\n \ndf -h /dev/shm\nFilesystem      Size  Used Avail Use% Mounted on\ntmpfs           406G     0  406G   0% /run/shm\n \nOk I will try lowering it.\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 11:24 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\n \n\n \n\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nThanks for the reply.\n \n1. The queries aren’t waiting on any locks.\r\n\nThe query has a recursive join that uses a table with only 80k records and that table is not updated\r\n often.\n \n2. The I/O load was not high. CPU utilization was very high and load was very high.\nWe have a large effective_cache_size = 512GB (25% of total memory)\n\n\n\n \n\n\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\r\n\n\n\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? 
Try to reduce effective cache size to be lower than shared buffers\n\n\nRegards\n\n\nPavel\n\n\n\r\n \n\n\n\n\n \nThanks,\nJamie\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 12:04 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\nHi\n\n \n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n \n\n\n\n \n\n\nthere can be few reasons:\n\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n\n2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\n\n\nRegards\n\n\nPavel", "msg_date": "Wed, 21 Oct 2015 18:51:33 +0000", "msg_from": "Jamie Koceniak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "2015-10-21 20:51 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n> Hi Pavel,\n>\n>\n>\n> Or were you referring to SHMMAX?\n>\n\nvalue of shared_buffers - run SQL statements SHOW shared_buffers;\n\nRegards\n\nPavel\n\n>\n>\n> Thanks\n>\n>\n>\n> *From:* Jamie Koceniak\n> *Sent:* Wednesday, October 21, 2015 11:40 AM\n> *To:* 'Pavel Stehule'\n> *Cc:* [email protected]\n> *Subject:* RE: [PERFORM] Recursive query performance issue\n>\n>\n>\n> Ok\n>\n>\n>\n> df -h /dev/shm\n>\n> Filesystem Size Used Avail Use% Mounted on\n>\n> tmpfs 406G 0 406G 0% /run/shm\n>\n>\n>\n> Ok I will try lowering it.\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]\n> <[email protected]>]\n> *Sent:* Wednesday, October 21, 2015 11:24 AM\n>\n> *To:* Jamie Koceniak\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Recursive query performance issue\n>\n>\n>\n>\n>\n>\n>\n> 2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n>\n> Hi Pavel,\n>\n>\n>\n> Thanks for the reply.\n>\n>\n>\n> 1. The queries aren’t waiting on any locks.\n>\n> The query has a recursive join that uses a table with only 80k records and\n> that table is not updated often.\n>\n>\n>\n> 2. The I/O load was not high. CPU utilization was very high and load was\n> very high.\n>\n> We have a large effective_cache_size = 512GB (25% of total memory)\n>\n>\n>\n> so your server has 2TB RAM? It is not usual server - so this issue can be\n> pretty strange :(\n>\n> What is size of shared memory? Probably is significantly lower than\n> effective_cache_size? 
Try to reduce effective cache size to be lower than\n> shared buffers\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n> Thanks,\n>\n> Jamie\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]]\n> *Sent:* Wednesday, October 21, 2015 12:04 AM\n> *To:* Jamie Koceniak\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Recursive query performance issue\n>\n>\n>\n> Hi\n>\n>\n>\n> 2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n>\n> Version:\n>\n>\n> -----------------------------------------------------------------------------------------------\n>\n> PostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n> 4.7.2-5) 4.7.2, 64-bit\n>\n>\n>\n> Query Plan\n>\n> http://explain.depesz.com/s/4s37\n>\n>\n>\n> Normally, this query takes around 200-300 ms to execute.\n>\n> However when several queries are run concurrently, query performance drops\n> to 30-60 seconds.\n>\n>\n>\n>\n>\n> there can be few reasons:\n>\n> 1. locking - are you sure, so your queries don't wait on locks?\n>\n> 2. issues with cache stability - is there high IO load? You can try to\n> increase effective_cache_size (or decrease if you have not enough memory)\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\n2015-10-21 20:51 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nOr were you referring to SHMMAX?value of shared_buffers - run SQL statements SHOW shared_buffers;RegardsPavel \n \nThanks\n \n\n\nFrom: Jamie Koceniak\n\nSent: Wednesday, October 21, 2015 11:40 AM\nTo: 'Pavel Stehule'\nCc: [email protected]\nSubject: RE: [PERFORM] Recursive query performance issue\n\n\n \nOk\n\n \ndf -h /dev/shm\nFilesystem      Size  Used Avail Use% Mounted on\ntmpfs           406G     0  406G   0% /run/shm\n \nOk I will try lowering it.\n \nFrom: Pavel Stehule [mailto:[email protected]]\n\nSent: Wednesday, October 21, 2015 11:24 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\n \n\n \n\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nThanks for the reply.\n \n1. The queries aren’t waiting on any locks.\n\nThe query has a recursive join that uses a table with only 80k records and that table is not updated\n often.\n \n2. The I/O load was not high. CPU utilization was very high and load was very high.\nWe have a large effective_cache_size = 512GB (25% of total memory)\n\n\n\n \n\n\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\n\n\n\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? Try to reduce effective cache size to be lower than shared buffers\n\n\nRegards\n\n\nPavel\n\n\n\n \n\n\n\n\n \nThanks,\nJamie\n \nFrom: Pavel Stehule [mailto:[email protected]]\n\nSent: Wednesday, October 21, 2015 12:04 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\nHi\n\n \n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n \n\n\n\n \n\n\nthere can be few reasons:\n\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n\n2. 
issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\n\n\nRegards\n\n\nPavel", "msg_date": "Wed, 21 Oct 2015 21:25:59 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "adama_prod=# SHOW shared_buffers;\r\nshared_buffers\r\n----------------\r\n64GB\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Wednesday, October 21, 2015 12:26 PM\r\nTo: Jamie Koceniak\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\n\r\n\r\n2015-10-21 20:51 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nHi Pavel,\r\n\r\nOr were you referring to SHMMAX?\r\n\r\nvalue of shared_buffers - run SQL statements SHOW shared_buffers;\r\nRegards\r\nPavel\r\n\r\nThanks\r\n\r\nFrom: Jamie Koceniak\r\nSent: Wednesday, October 21, 2015 11:40 AM\r\nTo: 'Pavel Stehule'\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: RE: [PERFORM] Recursive query performance issue\r\n\r\nOk\r\n\r\ndf -h /dev/shm\r\nFilesystem Size Used Avail Use% Mounted on\r\ntmpfs 406G 0 406G 0% /run/shm\r\n\r\nOk I will try lowering it.\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Wednesday, October 21, 2015 11:24 AM\r\n\r\nTo: Jamie Koceniak\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\n\r\n\r\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nHi Pavel,\r\n\r\nThanks for the reply.\r\n\r\n1. The queries aren’t waiting on any locks.\r\nThe query has a recursive join that uses a table with only 80k records and that table is not updated often.\r\n\r\n2. The I/O load was not high. CPU utilization was very high and load was very high.\r\nWe have a large effective_cache_size = 512GB (25% of total memory)\r\n\r\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\r\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? Try to reduce effective cache size to be lower than shared buffers\r\nRegards\r\nPavel\r\n\r\n\r\n\r\nThanks,\r\nJamie\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]<mailto:[email protected]>]\r\nSent: Wednesday, October 21, 2015 12:04 AM\r\nTo: Jamie Koceniak\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\nHi\r\n\r\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]<mailto:[email protected]>>:\r\nVersion:\r\n-----------------------------------------------------------------------------------------------\r\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\r\n\r\nQuery Plan\r\nhttp://explain.depesz.com/s/4s37\r\n\r\nNormally, this query takes around 200-300 ms to execute.\r\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\r\n\r\n\r\nthere can be few reasons:\r\n1. locking - are you sure, so your queries don't wait on locks?\r\n2. issues with cache stability - is there high IO load? 
You can try to increase effective_cache_size (or decrease if you have not enough memory)\r\nRegards\r\nPavel\r\n\r\n\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nadama_prod=# SHOW shared_buffers;\nshared_buffers\n----------------\n64GB\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 12:26 PM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\n \n\n \n\n2015-10-21 20:51 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nOr were you referring to SHMMAX?\n\n\n\n \n\n\nvalue of shared_buffers - run SQL statements SHOW shared_buffers;\n\n\nRegards\n\n\nPavel \n\n\n\n\n \nThanks\n \n\n\nFrom: Jamie Koceniak\r\n\nSent: Wednesday, October 21, 2015 11:40 AM\nTo: 'Pavel Stehule'\nCc: [email protected]\nSubject: RE: [PERFORM] Recursive query performance issue\n\n\n \nOk\r\n\n \ndf -h /dev/shm\nFilesystem      Size  Used Avail Use% Mounted on\ntmpfs           406G     0  406G   0% /run/shm\n \nOk I will try lowering it.\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 11:24 AM\n\n\n\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n\n\n\n\n \n\n \n\n \n\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nThanks for the reply.\n \n1. The queries aren’t waiting on any locks.\r\n\nThe query has a recursive join that uses a table with only 80k records and that table is not updated\r\n often.\n \n2. The I/O load was not high. CPU utilization was very high and load was very high.\nWe have a large effective_cache_size = 512GB (25% of total memory)\n\n\n\n \n\n\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\r\n\n\n\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? Try to reduce effective cache size to be lower than shared buffers\n\n\nRegards\n\n\nPavel\n\n\n\r\n \n\n\n\n\n \nThanks,\nJamie\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Wednesday, October 21, 2015 12:04 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\nHi\n\n \n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n \n\n\n\n \n\n\nthere can be few reasons:\n\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n\n2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\n\n\nRegards\n\n\nPavel", "msg_date": "Wed, 21 Oct 2015 19:32:38 +0000", "msg_from": "Jamie Koceniak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "2015-10-21 21:32 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n> adama_prod=# SHOW shared_buffers;\n>\n> shared_buffers\n>\n> ----------------\n>\n> 64GB\n>\n\ncan you try to increase shared buffers to 200GB and decrease effective\ncache size to 180GB? 
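for example, a minimal sketch of the postgresql.conf change I mean (the numbers are only illustrative, not a recommendation):\n\nshared_buffers = 200GB            # needs a restart; on 9.1 the kernel SHMMAX/SHMALL limits must also allow a segment this large\neffective_cache_size = 180GB      # only a planner hint, a reload is enough\n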
If it is possibly - I am not sure, if this setting is\ngood fro production usage, but the result can be interesting for bottleneck\nidentification.\n\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]]\n> *Sent:* Wednesday, October 21, 2015 12:26 PM\n>\n> *To:* Jamie Koceniak\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Recursive query performance issue\n>\n>\n>\n>\n>\n>\n>\n> 2015-10-21 20:51 GMT+02:00 Jamie Koceniak <[email protected]>:\n>\n> Hi Pavel,\n>\n>\n>\n> Or were you referring to SHMMAX?\n>\n>\n>\n> value of shared_buffers - run SQL statements SHOW shared_buffers;\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n> Thanks\n>\n>\n>\n> *From:* Jamie Koceniak\n> *Sent:* Wednesday, October 21, 2015 11:40 AM\n> *To:* 'Pavel Stehule'\n> *Cc:* [email protected]\n> *Subject:* RE: [PERFORM] Recursive query performance issue\n>\n>\n>\n> Ok\n>\n>\n>\n> df -h /dev/shm\n>\n> Filesystem Size Used Avail Use% Mounted on\n>\n> tmpfs 406G 0 406G 0% /run/shm\n>\n>\n>\n> Ok I will try lowering it.\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]\n> <[email protected]>]\n> *Sent:* Wednesday, October 21, 2015 11:24 AM\n>\n>\n> *To:* Jamie Koceniak\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Recursive query performance issue\n>\n>\n>\n>\n>\n>\n>\n> 2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n>\n> Hi Pavel,\n>\n>\n>\n> Thanks for the reply.\n>\n>\n>\n> 1. The queries aren’t waiting on any locks.\n>\n> The query has a recursive join that uses a table with only 80k records and\n> that table is not updated often.\n>\n>\n>\n> 2. The I/O load was not high. CPU utilization was very high and load was\n> very high.\n>\n> We have a large effective_cache_size = 512GB (25% of total memory)\n>\n>\n>\n> so your server has 2TB RAM? It is not usual server - so this issue can be\n> pretty strange :(\n>\n> What is size of shared memory? Probably is significantly lower than\n> effective_cache_size? Try to reduce effective cache size to be lower than\n> shared buffers\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n> Thanks,\n>\n> Jamie\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]]\n> *Sent:* Wednesday, October 21, 2015 12:04 AM\n> *To:* Jamie Koceniak\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Recursive query performance issue\n>\n>\n>\n> Hi\n>\n>\n>\n> 2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n>\n> Version:\n>\n>\n> -----------------------------------------------------------------------------------------------\n>\n> PostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n> 4.7.2-5) 4.7.2, 64-bit\n>\n>\n>\n> Query Plan\n>\n> http://explain.depesz.com/s/4s37\n>\n>\n>\n> Normally, this query takes around 200-300 ms to execute.\n>\n> However when several queries are run concurrently, query performance drops\n> to 30-60 seconds.\n>\n>\n>\n>\n>\n> there can be few reasons:\n>\n> 1. locking - are you sure, so your queries don't wait on locks?\n>\n> 2. issues with cache stability - is there high IO load? You can try to\n> increase effective_cache_size (or decrease if you have not enough memory)\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\n2015-10-21 21:32 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nadama_prod=# SHOW shared_buffers;\nshared_buffers\n----------------\n64GBcan you try to increase shared buffers to 200GB and decrease effective cache size to 180GB? 
If it is possibly - I am not sure, if this setting is good fro production usage, but the result can be interesting for bottleneck identification. \n \nFrom: Pavel Stehule [mailto:[email protected]]\n\nSent: Wednesday, October 21, 2015 12:26 PM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\n \n\n \n\n2015-10-21 20:51 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nOr were you referring to SHMMAX?\n\n\n\n \n\n\nvalue of shared_buffers - run SQL statements SHOW shared_buffers;\n\n\nRegards\n\n\nPavel \n\n\n\n\n \nThanks\n \n\n\nFrom: Jamie Koceniak\n\nSent: Wednesday, October 21, 2015 11:40 AM\nTo: 'Pavel Stehule'\nCc: [email protected]\nSubject: RE: [PERFORM] Recursive query performance issue\n\n\n \nOk\n\n \ndf -h /dev/shm\nFilesystem      Size  Used Avail Use% Mounted on\ntmpfs           406G     0  406G   0% /run/shm\n \nOk I will try lowering it.\n \nFrom: Pavel Stehule [mailto:[email protected]]\n\nSent: Wednesday, October 21, 2015 11:24 AM\n\n\n\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n\n\n\n\n \n\n \n\n \n\n2015-10-21 19:55 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nHi Pavel,\n \nThanks for the reply.\n \n1. The queries aren’t waiting on any locks.\n\nThe query has a recursive join that uses a table with only 80k records and that table is not updated\n often.\n \n2. The I/O load was not high. CPU utilization was very high and load was very high.\nWe have a large effective_cache_size = 512GB (25% of total memory)\n\n\n\n \n\n\nso your server has 2TB RAM? It is not usual server - so this issue can be pretty strange :(\n\n\n\nWhat is size of shared memory? Probably is significantly lower than effective_cache_size? Try to reduce effective cache size to be lower than shared buffers\n\n\nRegards\n\n\nPavel\n\n\n\n \n\n\n\n\n \nThanks,\nJamie\n \nFrom: Pavel Stehule [mailto:[email protected]]\n\nSent: Wednesday, October 21, 2015 12:04 AM\nTo: Jamie Koceniak\nCc: [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n \n\nHi\n\n \n\n2015-10-20 19:34 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n \nQuery Plan\nhttp://explain.depesz.com/s/4s37\n \nNormally, this query takes around 200-300 ms to execute.\nHowever when several queries are run concurrently, query performance drops to 30-60 seconds.\n \n\n\n\n \n\n\nthere can be few reasons:\n\n\n1. locking - are you sure, so your queries don't wait on locks?\n\n\n2. issues with cache stability - is there high IO load? You can try to increase effective_cache_size (or decrease if you have not enough memory)\n\n\nRegards\n\n\nPavel", "msg_date": "Wed, 21 Oct 2015 21:45:20 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "On Wed, Oct 21, 2015 at 2:45 PM, Pavel Stehule <[email protected]> wrote:\n> 2015-10-21 21:32 GMT+02:00 Jamie Koceniak <[email protected]>:\n>>\n>> adama_prod=# SHOW shared_buffers;\n>>\n>> shared_buffers\n>>\n>> ----------------\n>>\n>> 64GB\n>\n>\n> can you try to increase shared buffers to 200GB and decrease effective cache\n> size to 180GB? 
If it is possibly - I am not sure, if this setting is good\n> fro production usage, but the result can be interesting for bottleneck\n> identification.\n\nwe need to see a snapshot from\n*) top\n*) perf top\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 21 Oct 2015 14:49:33 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "On 20-10-15 19:34, Jamie Koceniak wrote:\n>\n> Version:\n>\n> -----------------------------------------------------------------------------------------------\n>\n> PostgreSQL 9.1.14 on x86_64-unknown-linux-gnu, compiled by gcc (Debian \n> 4.7.2-5) 4.7.2, 64-bit\n>\n> Query Plan\n>\n> http://explain.depesz.com/s/4s37\n>\n> Normally, this query takes around 200-300 ms to execute.\n>\n> However when several queries are run concurrently, query performance \n> drops to 30-60 seconds.\n>\nIs the concurrency the cause or the result of the slowdown?\nAre you executing the same query with the same parameters or do the \nparameters differ, perhaps making PostgreSQL\nchoose different queryplan?\n\n\n\n\n\n\n\n\nOn 20-10-15 19:34, Jamie Koceniak\n wrote:\n\n\n\n\n\n\nVersion:\n-----------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on\n x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5)\n 4.7.2, 64-bit\n�\nQuery Plan\nhttp://explain.depesz.com/s/4s37\n�\nNormally, this query takes around 200-300\n ms to execute.\nHowever when several queries are run\n concurrently, query performance drops to 30-60 seconds.\n�\n�\n\n\n Is the concurrency the cause or the result of the slowdown?\n Are you executing the same query with the same parameters or do the\n parameters differ, perhaps making PostgreSQL\n choose different queryplan?", "msg_date": "Thu, 22 Oct 2015 16:48:14 +0200", "msg_from": "vincent elschot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "Hi,\r\n\r\nWe just had the performance problem again today.\r\nHere is some of the top output. 
Unfortunately, we don't have perf top installed.\r\n\r\ntop - 16:22:16 up 29 days, 13:00, 2 users, load average: 164.63, 158.62, 148.52\r\nTasks: 1369 total, 181 running, 1188 sleeping, 0 stopped, 0 zombie\r\n%Cpu(s): 6.2 us, 0.7 sy, 0.0 ni, 93.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\r\nMiB Mem: 2068265 total, 433141 used, 1635124 free, 586 buffers\r\nMiB Swap: 7812 total, 0 used, 7812 free, 412641 cached\r\n\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\r\n 81745 postgres 20 0 65.7g 51m 34m R 101 0.0 0:09.20 postgres: user1 db 0.0.0.2(52307) SELECT\r\n 81782 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.50 postgres: user1 db 0.0.0.3(44630) SELECT\r\n 81797 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.03 postgres: user1 db 0.0.0.6(60752) SELECT\r\n 67103 postgres 20 0 65.7g 81m 56m R 97 0.0 2:01.89 postgres: user1 db 0.0.0.4(46337) SELECT\r\n 82527 postgres 20 0 65.7g 25m 20m R 93 0.0 0:02.35 postgres: user1 db 0.0.0.2(52490) SELECT\r\n 82559 postgres 20 0 65.7g 25m 20m R 93 0.0 0:02.17 postgres: user1 db 0.0.0.2(52496) SELECT\r\n 82728 postgres 20 0 65.7g 80m 76m R 93 0.0 0:00.60 postgres: user1 db 0.0.0.6(60957) SELECT\r\n 65588 postgres 20 0 65.7g 76m 56m R 89 0.0 2:12.27 postgres: user1 db 0.0.0.6(57195) SELECT\r\n 80594 postgres 20 0 65.7g 34m 28m R 89 0.0 0:22.81 postgres: user1 db 0.0.0.2(52071) SELECT\r\n 25176 postgres 20 0 65.7g 74m 57m R 85 0.0 7:24.42 postgres: user1 db 0.0.0.2(39410) SELECT\r\n 82182 postgres 20 0 65.7g 513m 502m R 85 0.0 0:04.85 postgres: user1 db 0.0.0.4(49789) SELECT\r\n 82034 postgres 20 0 65.7g 523m 510m R 81 0.0 0:05.79 postgres: user1 db 0.0.0.3(44683) SELECT\r\n 82439 postgres 20 0 65.7g 262m 258m R 81 0.0 0:02.64 postgres: user1 db 0.0.0.6(60887) SELECT\r\n 82624 postgres 20 0 65.7g 148m 143m R 81 0.0 0:01.20 postgres: user1 db 0.0.0.4(49888) SELECT\r\n 82637 postgres 20 0 65.7g 139m 134m R 81 0.0 0:01.17 postgres: user1 db 0.0.0.3(44805) SELECT\r\n 82669 postgres 20 0 65.7g 119m 114m R 81 0.0 0:00.97 postgres: user1 db 0.0.0.6(60939) SELECT\r\n 82723 postgres 20 0 65.7g 79m 75m R 81 0.0 0:00.56 postgres: user1 db 0.0.0.4(49907) SELECT\r\n 29160 postgres 20 0 65.7g 79m 54m R 77 0.0 6:52.13 postgres: user1 db 0.0.0.6(48802) SELECT\r\n 51095 postgres 20 0 65.7g 81m 57m R 77 0.0 4:01.51 postgres: user1 db 0.0.0.4(42914) SELECT\r\n 81833 postgres 20 0 65.7g 528m 515m R 77 0.0 0:07.23 postgres: user1 db 0.0.0.3(44644) SELECT\r\n 81978 postgres 20 0 65.7g 528m 515m R 77 0.0 0:06.05 postgres: user1 db 0.0.0.2(52364) SELECT\r\n 82099 postgres 20 0 65.7g 523m 510m R 77 0.0 0:05.18 postgres: user1 db 0.0.0.3(44692) SELECT\r\n 82111 postgres 20 0 65.7g 523m 510m R 77 0.0 0:05.14 postgres: user1 db 0.0.0.4(49773) SELECT\r\n 82242 postgres 20 0 65.7g 433m 429m R 77 0.0 0:04.27 postgres: user1 db 0.0.0.2(52428) SELECT\r\n 82292 postgres 20 0 65.7g 407m 402m R 77 0.0 0:04.10 postgres: user1 db 0.0.0.2(52440) SELECT\r\n 82408 postgres 20 0 65.7g 292m 288m R 77 0.0 0:02.98 postgres: user1 db 0.0.0.4(49835) SELECT\r\n 82542 postgres 20 0 65.7g 207m 202m R 77 0.0 0:01.98 postgres: user1 db 0.0.0.4(49868) SELECT\r\n 63638 postgres 20 0 65.7g 80m 56m R 73 0.0 2:30.10 postgres: user1 db 0.0.0.2(48699) SELECT\r\n 71572 postgres 20 0 65.7g 80m 56m R 73 0.0 1:31.13 postgres: user1 db 0.0.0.2(50223) SELECT\r\n 80580 postgres 20 0 65.7g 34m 28m R 73 0.0 0:22.93 postgres: user1 db 0.0.0.2(52065) SELECT\r\n 81650 postgres 20 0 65.8g 622m 555m R 73 0.0 0:08.84 postgres: user1 db 0.0.0.2(52290) SELECT\r\n 81728 postgres 20 0 65.7g 523m 510m R 73 0.0 0:08.28 postgres: user1 db 
0.0.0.4(49684) SELECT\r\n 81942 postgres 20 0 65.7g 528m 515m R 73 0.0 0:06.46 postgres: user1 db 0.0.0.2(52355) SELECT\r\n 81958 postgres 20 0 65.7g 528m 514m R 73 0.0 0:06.48 postgres: user1 db 0.0.0.4(49744) SELECT\r\n 81980 postgres 20 0 65.7g 528m 515m R 73 0.0 0:06.02 postgres: user1 db 0.0.0.3(44671) SELECT\r\n 82007 postgres 20 0 65.7g 523m 510m R 73 0.0 0:06.27 postgres: user1 db 0.0.0.3(44676) SELECT\r\n 82374 postgres 20 0 65.7g 367m 362m R 73 0.0 0:03.48 postgres: user1 db 0.0.0.6(60873) SELECT\r\n 82385 postgres 20 0 65.7g 310m 306m R 73 0.0 0:03.03 postgres: user1 db 0.0.0.6(60876) SELECT\r\n 82520 postgres 20 0 65.7g 220m 215m R 73 0.0 0:02.00 postgres: user1 db 0.0.0.3(44785) SELECT\r\n 82676 postgres 20 0 65.7g 116m 111m R 73 0.0 0:00.90 postgres: user1 db 0.0.0.2(52531) SELECT\r\n 18471 postgres 20 0 65.7g 73m 56m R 69 0.0 8:14.08 postgres: user1 db 0.0.0.6(46144) SELECT\r\n 43890 postgres 20 0 65.7g 76m 56m R 69 0.0 5:04.46 postgres: user1 db 0.0.0.3(36697) SELECT\r\n 46130 postgres 20 0 65.7g 70m 57m R 69 0.0 4:46.56 postgres: user1 db 0.0.0.4(41871) SELECT\r\n 55604 postgres 20 0 65.7g 81m 57m R 69 0.0 3:27.67 postgres: user1 db 0.0.0.3(39292) SELECT\r\n 59139 postgres 20 0 65.7g 81m 57m R 69 0.0 3:01.18 postgres: user1 db 0.0.0.2(47670) SELECT\r\n 63523 postgres 20 0 65.7g 80m 56m R 69 0.0 2:28.04 postgres: user1 db 0.0.0.2(48680) SELECT\r\n 81707 postgres 20 0 65.7g 528m 515m S 69 0.0 0:08.44 postgres: user1 db 0.0.0.6(60737) SELECT\r\n 81830 postgres 20 0 65.7g 523m 510m R 69 0.0 0:07.60 postgres: user1 db 0.0.0.4(49707) SELECT\r\n 81932 postgres 20 0 65.7g 528m 515m R 69 0.0 0:06.65 postgres: user1 db 0.0.0.2(52352) SELECT\r\n 81950 postgres 20 0 65.7g 528m 515m R 69 0.0 0:05.92 postgres: user1 db 0.0.0.6(60783) SELECT\r\n 81973 postgres 20 0 65.7g 522m 510m R 69 0.0 0:06.18 postgres: user1 db 0.0.0.6(60789) SELECT\r\n 82193 postgres 20 0 65.7g 487m 479m R 69 0.0 0:04.61 postgres: user1 db 0.0.0.2(52415) SELECT\r\n 82358 postgres 20 0 65.7g 299m 295m R 69 0.0 0:03.11 postgres: user1 db 0.0.0.2(52453) SELECT\r\n 82372 postgres 20 0 65.7g 318m 313m R 69 0.0 0:03.22 postgres: user1 db 0.0.0.4(49827) SELECT\r\n 82381 postgres 20 0 65.7g 331m 326m R 69 0.0 0:03.30 postgres: user1 db 0.0.0.3(44757) SELECT\r\n 82404 postgres 20 0 65.7g 294m 289m R 69 0.0 0:02.86 postgres: user1 db 0.0.0.3(44761) SELECT\r\n 82415 postgres 20 0 65.7g 270m 266m R 69 0.0 0:02.80 postgres: user1 db 0.0.0.3(44767) SELECT\r\n 82521 postgres 20 0 65.7g 209m 205m R 69 0.0 0:02.00 postgres: user1 db 0.0.0.3(44786) SELECT\r\n 82526 postgres 20 0 65.7g 35m 29m R 69 0.0 0:01.20 postgres: user1 db 0.0.0.6(60906) SELECT\r\n 82550 postgres 20 0 65.7g 188m 184m R 69 0.0 0:01.72 postgres: user1 db 0.0.0.4(49870) SELECT\r\n 82587 postgres 20 0 65.7g 183m 178m R 69 0.0 0:01.64 postgres: user1 db 0.0.0.4(49882) SELECT\r\n 82683 postgres 20 0 65.7g 97m 93m R 69 0.0 0:00.77 postgres: user1 db 0.0.0.4(49899) SELECT\r\n 82685 postgres 20 0 65.7g 103m 99m R 69 0.0 0:00.84 postgres: user1 db 0.0.0.2(52532) SELECT\r\n 82687 postgres 20 0 65.7g 109m 104m R 69 0.0 0:00.85 postgres: user1 db 0.0.0.3(44809) SELECT\r\n 82712 postgres 20 0 65.7g 68m 64m R 69 0.0 0:00.55 postgres: user1 db 0.0.0.3(44814) SELECT\r\n 82715 postgres 20 0 65.7g 75m 70m R 69 0.0 0:00.58 postgres: user1 db 0.0.0.4(49905) SELECT\r\n 19548 postgres 20 0 65.7g 79m 56m R 65 0.0 8:02.44 postgres: user1 db 0.0.0.2(37887) SELECT\r\n 36714 postgres 20 0 65.7g 80m 56m R 65 0.0 5:56.08 postgres: user1 db 0.0.0.3(35177) SELECT\r\n 43599 postgres 20 0 65.7g 
80m  56m R    65  0.0   5:05.03 postgres: user1 db 0.0.0.3(36638) SELECT\r\n\r\n-----Original Message-----\r\nFrom: Merlin Moncure [mailto:[email protected]] \r\nSent: Wednesday, October 21, 2015 12:50 PM\r\nTo: Pavel Stehule\r\nCc: Jamie Koceniak; [email protected]\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\nOn Wed, Oct 21, 2015 at 2:45 PM, Pavel Stehule <[email protected]> wrote:\r\n> 2015-10-21 21:32 GMT+02:00 Jamie Koceniak <[email protected]>:\r\n>>\r\n>> adama_prod=# SHOW shared_buffers;\r\n>>\r\n>> shared_buffers\r\n>>\r\n>> ----------------\r\n>>\r\n>> 64GB\r\n>\r\n>\r\n> can you try to increase shared buffers to 200GB and decrease effective \r\n> cache size to 180GB? If it is possibly - I am not sure, if this \r\n> setting is good fro production usage, but the result can be \r\n> interesting for bottleneck identification.\r\n\r\nwe need to see a snapshot from\r\n*) top\r\n*) perf top\r\n\r\nmerlin\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 23 Oct 2015 17:45:22 +0000", "msg_from": "Jamie Koceniak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "Hi\n\nthis extremely high load looks like a different issue - maybe a spinlock issue\nor a virtual memory issue.\n\nProbably you need some low-level debug tools like perf or dtrace :(\n\nhttp://www.postgresql.org/message-id/[email protected]\n\nHave you upgraded PostgreSQL recently?\n\nA \"perf top\" capture taken while this issue is active would be really helpful.\n\nRegards\n\nPavel\n\n2015-10-23 19:45 GMT+02:00 Jamie Koceniak <[email protected]>:\n\n> Hi,\n>\n> We just had the performance problem again today.\n> Here is some of the top output. 
Unfortunately, we don't have perf top\n> installed.\n>\n> top - 16:22:16 up 29 days, 13:00, 2 users, load average: 164.63, 158.62,\n> 148.52\n> Tasks: 1369 total, 181 running, 1188 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 6.2 us, 0.7 sy, 0.0 ni, 93.1 id, 0.0 wa, 0.0 hi, 0.0 si,\n> 0.0 st\n> MiB Mem: 2068265 total, 433141 used, 1635124 free, 586 buffers\n> MiB Swap: 7812 total, 0 used, 7812 free, 412641 cached\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 81745 postgres 20 0 65.7g 51m 34m R 101 0.0 0:09.20 postgres:\n> user1 db 0.0.0.2(52307) SELECT\n> 81782 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.50 postgres:\n> user1 db 0.0.0.3(44630) SELECT\n> 81797 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.03 postgres:\n> user1 db 0.0.0.6(60752) SELECT\n> 67103 postgres 20 0 65.7g 81m 56m R 97 0.0 2:01.89 postgres:\n> user1 db 0.0.0.4(46337) SELECT\n> 82527 postgres 20 0 65.7g 25m 20m R 93 0.0 0:02.35 postgres:\n> user1 db 0.0.0.2(52490) SELECT\n> 82559 postgres 20 0 65.7g 25m 20m R 93 0.0 0:02.17 postgres:\n> user1 db 0.0.0.2(52496) SELECT\n> 82728 postgres 20 0 65.7g 80m 76m R 93 0.0 0:00.60 postgres:\n> user1 db 0.0.0.6(60957) SELECT\n> 65588 postgres 20 0 65.7g 76m 56m R 89 0.0 2:12.27 postgres:\n> user1 db 0.0.0.6(57195) SELECT\n> 80594 postgres 20 0 65.7g 34m 28m R 89 0.0 0:22.81 postgres:\n> user1 db 0.0.0.2(52071) SELECT\n> 25176 postgres 20 0 65.7g 74m 57m R 85 0.0 7:24.42 postgres:\n> user1 db 0.0.0.2(39410) SELECT\n> 82182 postgres 20 0 65.7g 513m 502m R 85 0.0 0:04.85 postgres:\n> user1 db 0.0.0.4(49789) SELECT\n> 82034 postgres 20 0 65.7g 523m 510m R 81 0.0 0:05.79 postgres:\n> user1 db 0.0.0.3(44683) SELECT\n> 82439 postgres 20 0 65.7g 262m 258m R 81 0.0 0:02.64 postgres:\n> user1 db 0.0.0.6(60887) SELECT\n> 82624 postgres 20 0 65.7g 148m 143m R 81 0.0 0:01.20 postgres:\n> user1 db 0.0.0.4(49888) SELECT\n> 82637 postgres 20 0 65.7g 139m 134m R 81 0.0 0:01.17 postgres:\n> user1 db 0.0.0.3(44805) SELECT\n> 82669 postgres 20 0 65.7g 119m 114m R 81 0.0 0:00.97 postgres:\n> user1 db 0.0.0.6(60939) SELECT\n> 82723 postgres 20 0 65.7g 79m 75m R 81 0.0 0:00.56 postgres:\n> user1 db 0.0.0.4(49907) SELECT\n> 29160 postgres 20 0 65.7g 79m 54m R 77 0.0 6:52.13 postgres:\n> user1 db 0.0.0.6(48802) SELECT\n> 51095 postgres 20 0 65.7g 81m 57m R 77 0.0 4:01.51 postgres:\n> user1 db 0.0.0.4(42914) SELECT\n> 81833 postgres 20 0 65.7g 528m 515m R 77 0.0 0:07.23 postgres:\n> user1 db 0.0.0.3(44644) SELECT\n> 81978 postgres 20 0 65.7g 528m 515m R 77 0.0 0:06.05 postgres:\n> user1 db 0.0.0.2(52364) SELECT\n> 82099 postgres 20 0 65.7g 523m 510m R 77 0.0 0:05.18 postgres:\n> user1 db 0.0.0.3(44692) SELECT\n> 82111 postgres 20 0 65.7g 523m 510m R 77 0.0 0:05.14 postgres:\n> user1 db 0.0.0.4(49773) SELECT\n> 82242 postgres 20 0 65.7g 433m 429m R 77 0.0 0:04.27 postgres:\n> user1 db 0.0.0.2(52428) SELECT\n> 82292 postgres 20 0 65.7g 407m 402m R 77 0.0 0:04.10 postgres:\n> user1 db 0.0.0.2(52440) SELECT\n> 82408 postgres 20 0 65.7g 292m 288m R 77 0.0 0:02.98 postgres:\n> user1 db 0.0.0.4(49835) SELECT\n> 82542 postgres 20 0 65.7g 207m 202m R 77 0.0 0:01.98 postgres:\n> user1 db 0.0.0.4(49868) SELECT\n> 63638 postgres 20 0 65.7g 80m 56m R 73 0.0 2:30.10 postgres:\n> user1 db 0.0.0.2(48699) SELECT\n> 71572 postgres 20 0 65.7g 80m 56m R 73 0.0 1:31.13 postgres:\n> user1 db 0.0.0.2(50223) SELECT\n> 80580 postgres 20 0 65.7g 34m 28m R 73 0.0 0:22.93 postgres:\n> user1 db 0.0.0.2(52065) SELECT\n> 81650 postgres 20 0 65.8g 622m 555m R 73 0.0 0:08.84 postgres:\n> user1 db 0.0.0.2(52290) SELECT\n> 81728 
postgres 20 0 65.7g 523m 510m R 73 0.0 0:08.28 postgres:\n> user1 db 0.0.0.4(49684) SELECT\n> 81942 postgres 20 0 65.7g 528m 515m R 73 0.0 0:06.46 postgres:\n> user1 db 0.0.0.2(52355) SELECT\n> 81958 postgres 20 0 65.7g 528m 514m R 73 0.0 0:06.48 postgres:\n> user1 db 0.0.0.4(49744) SELECT\n> 81980 postgres 20 0 65.7g 528m 515m R 73 0.0 0:06.02 postgres:\n> user1 db 0.0.0.3(44671) SELECT\n> 82007 postgres 20 0 65.7g 523m 510m R 73 0.0 0:06.27 postgres:\n> user1 db 0.0.0.3(44676) SELECT\n> 82374 postgres 20 0 65.7g 367m 362m R 73 0.0 0:03.48 postgres:\n> user1 db 0.0.0.6(60873) SELECT\n> 82385 postgres 20 0 65.7g 310m 306m R 73 0.0 0:03.03 postgres:\n> user1 db 0.0.0.6(60876) SELECT\n> 82520 postgres 20 0 65.7g 220m 215m R 73 0.0 0:02.00 postgres:\n> user1 db 0.0.0.3(44785) SELECT\n> 82676 postgres 20 0 65.7g 116m 111m R 73 0.0 0:00.90 postgres:\n> user1 db 0.0.0.2(52531) SELECT\n> 18471 postgres 20 0 65.7g 73m 56m R 69 0.0 8:14.08 postgres:\n> user1 db 0.0.0.6(46144) SELECT\n> 43890 postgres 20 0 65.7g 76m 56m R 69 0.0 5:04.46 postgres:\n> user1 db 0.0.0.3(36697) SELECT\n> 46130 postgres 20 0 65.7g 70m 57m R 69 0.0 4:46.56 postgres:\n> user1 db 0.0.0.4(41871) SELECT\n> 55604 postgres 20 0 65.7g 81m 57m R 69 0.0 3:27.67 postgres:\n> user1 db 0.0.0.3(39292) SELECT\n> 59139 postgres 20 0 65.7g 81m 57m R 69 0.0 3:01.18 postgres:\n> user1 db 0.0.0.2(47670) SELECT\n> 63523 postgres 20 0 65.7g 80m 56m R 69 0.0 2:28.04 postgres:\n> user1 db 0.0.0.2(48680) SELECT\n> 81707 postgres 20 0 65.7g 528m 515m S 69 0.0 0:08.44 postgres:\n> user1 db 0.0.0.6(60737) SELECT\n> 81830 postgres 20 0 65.7g 523m 510m R 69 0.0 0:07.60 postgres:\n> user1 db 0.0.0.4(49707) SELECT\n> 81932 postgres 20 0 65.7g 528m 515m R 69 0.0 0:06.65 postgres:\n> user1 db 0.0.0.2(52352) SELECT\n> 81950 postgres 20 0 65.7g 528m 515m R 69 0.0 0:05.92 postgres:\n> user1 db 0.0.0.6(60783) SELECT\n> 81973 postgres 20 0 65.7g 522m 510m R 69 0.0 0:06.18 postgres:\n> user1 db 0.0.0.6(60789) SELECT\n> 82193 postgres 20 0 65.7g 487m 479m R 69 0.0 0:04.61 postgres:\n> user1 db 0.0.0.2(52415) SELECT\n> 82358 postgres 20 0 65.7g 299m 295m R 69 0.0 0:03.11 postgres:\n> user1 db 0.0.0.2(52453) SELECT\n> 82372 postgres 20 0 65.7g 318m 313m R 69 0.0 0:03.22 postgres:\n> user1 db 0.0.0.4(49827) SELECT\n> 82381 postgres 20 0 65.7g 331m 326m R 69 0.0 0:03.30 postgres:\n> user1 db 0.0.0.3(44757) SELECT\n> 82404 postgres 20 0 65.7g 294m 289m R 69 0.0 0:02.86 postgres:\n> user1 db 0.0.0.3(44761) SELECT\n> 82415 postgres 20 0 65.7g 270m 266m R 69 0.0 0:02.80 postgres:\n> user1 db 0.0.0.3(44767) SELECT\n> 82521 postgres 20 0 65.7g 209m 205m R 69 0.0 0:02.00 postgres:\n> user1 db 0.0.0.3(44786) SELECT\n> 82526 postgres 20 0 65.7g 35m 29m R 69 0.0 0:01.20 postgres:\n> user1 db 0.0.0.6(60906) SELECT\n> 82550 postgres 20 0 65.7g 188m 184m R 69 0.0 0:01.72 postgres:\n> user1 db 0.0.0.4(49870) SELECT\n> 82587 postgres 20 0 65.7g 183m 178m R 69 0.0 0:01.64 postgres:\n> user1 db 0.0.0.4(49882) SELECT\n> 82683 postgres 20 0 65.7g 97m 93m R 69 0.0 0:00.77 postgres:\n> user1 db 0.0.0.4(49899) SELECT\n> 82685 postgres 20 0 65.7g 103m 99m R 69 0.0 0:00.84 postgres:\n> user1 db 0.0.0.2(52532) SELECT\n> 82687 postgres 20 0 65.7g 109m 104m R 69 0.0 0:00.85 postgres:\n> user1 db 0.0.0.3(44809) SELECT\n> 82712 postgres 20 0 65.7g 68m 64m R 69 0.0 0:00.55 postgres:\n> user1 db 0.0.0.3(44814) SELECT\n> 82715 postgres 20 0 65.7g 75m 70m R 69 0.0 0:00.58 postgres:\n> user1 db 0.0.0.4(49905) SELECT\n> 19548 postgres 20 0 65.7g 79m 56m R 65 0.0 8:02.44 postgres:\n> user1 db 
0.0.0.2(37887) SELECT\n> 36714 postgres 20 0 65.7g 80m 56m R 65 0.0 5:56.08 postgres:\n> user1 db 0.0.0.3(35177) SELECT\n> 43599 postgres 20 0 65.7g 80m 56m R 65 0.0 5:05.03 postgres:\n> user1 db 0.0.0.3(36638) SELECT\n>\n> -----Original Message-----\n> From: Merlin Moncure [mailto:[email protected]]\n> Sent: Wednesday, October 21, 2015 12:50 PM\n> To: Pavel Stehule\n> Cc: Jamie Koceniak; [email protected]\n> Subject: Re: [PERFORM] Recursive query performance issue\n>\n> On Wed, Oct 21, 2015 at 2:45 PM, Pavel Stehule <[email protected]>\n> wrote:\n> > 2015-10-21 21:32 GMT+02:00 Jamie Koceniak <[email protected]>:\n> >>\n> >> adama_prod=# SHOW shared_buffers;\n> >>\n> >> shared_buffers\n> >>\n> >> ----------------\n> >>\n> >> 64GB\n> >\n> >\n> > can you try to increase shared buffers to 200GB and decrease effective\n> > cache size to 180GB? If it is possibly - I am not sure, if this\n> > setting is good fro production usage, but the result can be\n> > interesting for bottleneck identification.\n>\n> we need to see a snapshot from\n> *) top\n> *) perf top\n>\n> merlin\n>\n\nHithis extremely high load looks like different issue - maybe spinlock issue or virtual memory issue. Probably you need some low level debug tools like perf or dtrace :(http://www.postgresql.org/message-id/[email protected] you last PostgreSQL upgrade?result of \"perf top\" when this issue is active is really requested.RegardsPavel2015-10-23 19:45 GMT+02:00 Jamie Koceniak <[email protected]>:Hi,\n\nWe just had the performance problem again today.\nHere is some of the top output. Unfortunately, we don't have perf top installed.\n\ntop - 16:22:16 up 29 days, 13:00,  2 users,  load average: 164.63, 158.62, 148.52\nTasks: 1369 total, 181 running, 1188 sleeping,   0 stopped,   0 zombie\n%Cpu(s):  6.2 us,  0.7 sy,  0.0 ni, 93.1 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st\nMiB Mem:   2068265 total,   433141 used,  1635124 free,      586 buffers\nMiB Swap:     7812 total,        0 used,     7812 free,   412641 cached\n\n   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND\n 81745 postgres  20   0 65.7g  51m  34m R   101  0.0   0:09.20 postgres: user1 db 0.0.0.2(52307) SELECT\n 81782 postgres  20   0 65.7g  51m  34m R   101  0.0   0:08.50 postgres: user1 db 0.0.0.3(44630) SELECT\n 81797 postgres  20   0 65.7g  51m  34m R   101  0.0   0:08.03 postgres: user1 db 0.0.0.6(60752) SELECT\n 67103 postgres  20   0 65.7g  81m  56m R    97  0.0   2:01.89 postgres: user1 db 0.0.0.4(46337) SELECT\n 82527 postgres  20   0 65.7g  25m  20m R    93  0.0   0:02.35 postgres: user1 db 0.0.0.2(52490) SELECT\n 82559 postgres  20   0 65.7g  25m  20m R    93  0.0   0:02.17 postgres: user1 db 0.0.0.2(52496) SELECT\n 82728 postgres  20   0 65.7g  80m  76m R    93  0.0   0:00.60 postgres: user1 db 0.0.0.6(60957) SELECT\n 65588 postgres  20   0 65.7g  76m  56m R    89  0.0   2:12.27 postgres: user1 db 0.0.0.6(57195) SELECT\n 80594 postgres  20   0 65.7g  34m  28m R    89  0.0   0:22.81 postgres: user1 db 0.0.0.2(52071) SELECT\n 25176 postgres  20   0 65.7g  74m  57m R    85  0.0   7:24.42 postgres: user1 db 0.0.0.2(39410) SELECT\n 82182 postgres  20   0 65.7g 513m 502m R    85  0.0   0:04.85 postgres: user1 db 0.0.0.4(49789) SELECT\n 82034 postgres  20   0 65.7g 523m 510m R    81  0.0   0:05.79 postgres: user1 db 0.0.0.3(44683) SELECT\n 82439 postgres  20   0 65.7g 262m 258m R    81  0.0   0:02.64 postgres: user1 db 0.0.0.6(60887) SELECT\n 82624 postgres  20   0 65.7g 148m 143m R    81  0.0   0:01.20 postgres: user1 db 0.0.0.4(49888) SELECT\n 
82637 postgres  20   0 65.7g 139m 134m R    81  0.0   0:01.17 postgres: user1 db 0.0.0.3(44805) SELECT\n 82669 postgres  20   0 65.7g 119m 114m R    81  0.0   0:00.97 postgres: user1 db 0.0.0.6(60939) SELECT\n 82723 postgres  20   0 65.7g  79m  75m R    81  0.0   0:00.56 postgres: user1 db 0.0.0.4(49907) SELECT\n 29160 postgres  20   0 65.7g  79m  54m R    77  0.0   6:52.13 postgres: user1 db 0.0.0.6(48802) SELECT\n 51095 postgres  20   0 65.7g  81m  57m R    77  0.0   4:01.51 postgres: user1 db 0.0.0.4(42914) SELECT\n 81833 postgres  20   0 65.7g 528m 515m R    77  0.0   0:07.23 postgres: user1 db 0.0.0.3(44644) SELECT\n 81978 postgres  20   0 65.7g 528m 515m R    77  0.0   0:06.05 postgres: user1 db 0.0.0.2(52364) SELECT\n 82099 postgres  20   0 65.7g 523m 510m R    77  0.0   0:05.18 postgres: user1 db 0.0.0.3(44692) SELECT\n 82111 postgres  20   0 65.7g 523m 510m R    77  0.0   0:05.14 postgres: user1 db 0.0.0.4(49773) SELECT\n 82242 postgres  20   0 65.7g 433m 429m R    77  0.0   0:04.27 postgres: user1 db 0.0.0.2(52428) SELECT\n 82292 postgres  20   0 65.7g 407m 402m R    77  0.0   0:04.10 postgres: user1 db 0.0.0.2(52440) SELECT\n 82408 postgres  20   0 65.7g 292m 288m R    77  0.0   0:02.98 postgres: user1 db 0.0.0.4(49835) SELECT\n 82542 postgres  20   0 65.7g 207m 202m R    77  0.0   0:01.98 postgres: user1 db 0.0.0.4(49868) SELECT\n 63638 postgres  20   0 65.7g  80m  56m R    73  0.0   2:30.10 postgres: user1 db 0.0.0.2(48699) SELECT\n 71572 postgres  20   0 65.7g  80m  56m R    73  0.0   1:31.13 postgres: user1 db 0.0.0.2(50223) SELECT\n 80580 postgres  20   0 65.7g  34m  28m R    73  0.0   0:22.93 postgres: user1 db 0.0.0.2(52065) SELECT\n 81650 postgres  20   0 65.8g 622m 555m R    73  0.0   0:08.84 postgres: user1 db 0.0.0.2(52290) SELECT\n 81728 postgres  20   0 65.7g 523m 510m R    73  0.0   0:08.28 postgres: user1 db 0.0.0.4(49684) SELECT\n 81942 postgres  20   0 65.7g 528m 515m R    73  0.0   0:06.46 postgres: user1 db 0.0.0.2(52355) SELECT\n 81958 postgres  20   0 65.7g 528m 514m R    73  0.0   0:06.48 postgres: user1 db 0.0.0.4(49744) SELECT\n 81980 postgres  20   0 65.7g 528m 515m R    73  0.0   0:06.02 postgres: user1 db 0.0.0.3(44671) SELECT\n 82007 postgres  20   0 65.7g 523m 510m R    73  0.0   0:06.27 postgres: user1 db 0.0.0.3(44676) SELECT\n 82374 postgres  20   0 65.7g 367m 362m R    73  0.0   0:03.48 postgres: user1 db 0.0.0.6(60873) SELECT\n 82385 postgres  20   0 65.7g 310m 306m R    73  0.0   0:03.03 postgres: user1 db 0.0.0.6(60876) SELECT\n 82520 postgres  20   0 65.7g 220m 215m R    73  0.0   0:02.00 postgres: user1 db 0.0.0.3(44785) SELECT\n 82676 postgres  20   0 65.7g 116m 111m R    73  0.0   0:00.90 postgres: user1 db 0.0.0.2(52531) SELECT\n 18471 postgres  20   0 65.7g  73m  56m R    69  0.0   8:14.08 postgres: user1 db 0.0.0.6(46144) SELECT\n 43890 postgres  20   0 65.7g  76m  56m R    69  0.0   5:04.46 postgres: user1 db 0.0.0.3(36697) SELECT\n 46130 postgres  20   0 65.7g  70m  57m R    69  0.0   4:46.56 postgres: user1 db 0.0.0.4(41871) SELECT\n 55604 postgres  20   0 65.7g  81m  57m R    69  0.0   3:27.67 postgres: user1 db 0.0.0.3(39292) SELECT\n 59139 postgres  20   0 65.7g  81m  57m R    69  0.0   3:01.18 postgres: user1 db 0.0.0.2(47670) SELECT\n 63523 postgres  20   0 65.7g  80m  56m R    69  0.0   2:28.04 postgres: user1 db 0.0.0.2(48680) SELECT\n 81707 postgres  20   0 65.7g 528m 515m S    69  0.0   0:08.44 postgres: user1 db 0.0.0.6(60737) SELECT\n 81830 postgres  20   0 65.7g 523m 510m R    69  0.0   0:07.60 postgres: user1 db 
0.0.0.4(49707) SELECT\n 81932 postgres  20   0 65.7g 528m 515m R    69  0.0   0:06.65 postgres: user1 db 0.0.0.2(52352) SELECT\n 81950 postgres  20   0 65.7g 528m 515m R    69  0.0   0:05.92 postgres: user1 db 0.0.0.6(60783) SELECT\n 81973 postgres  20   0 65.7g 522m 510m R    69  0.0   0:06.18 postgres: user1 db 0.0.0.6(60789) SELECT\n 82193 postgres  20   0 65.7g 487m 479m R    69  0.0   0:04.61 postgres: user1 db 0.0.0.2(52415) SELECT\n 82358 postgres  20   0 65.7g 299m 295m R    69  0.0   0:03.11 postgres: user1 db 0.0.0.2(52453) SELECT\n 82372 postgres  20   0 65.7g 318m 313m R    69  0.0   0:03.22 postgres: user1 db 0.0.0.4(49827) SELECT\n 82381 postgres  20   0 65.7g 331m 326m R    69  0.0   0:03.30 postgres: user1 db 0.0.0.3(44757) SELECT\n 82404 postgres  20   0 65.7g 294m 289m R    69  0.0   0:02.86 postgres: user1 db 0.0.0.3(44761) SELECT\n 82415 postgres  20   0 65.7g 270m 266m R    69  0.0   0:02.80 postgres: user1 db 0.0.0.3(44767) SELECT\n 82521 postgres  20   0 65.7g 209m 205m R    69  0.0   0:02.00 postgres: user1 db 0.0.0.3(44786) SELECT\n 82526 postgres  20   0 65.7g  35m  29m R    69  0.0   0:01.20 postgres: user1 db 0.0.0.6(60906) SELECT\n 82550 postgres  20   0 65.7g 188m 184m R    69  0.0   0:01.72 postgres: user1 db 0.0.0.4(49870) SELECT\n 82587 postgres  20   0 65.7g 183m 178m R    69  0.0   0:01.64 postgres: user1 db 0.0.0.4(49882) SELECT\n 82683 postgres  20   0 65.7g  97m  93m R    69  0.0   0:00.77 postgres: user1 db 0.0.0.4(49899) SELECT\n 82685 postgres  20   0 65.7g 103m  99m R    69  0.0   0:00.84 postgres: user1 db 0.0.0.2(52532) SELECT\n 82687 postgres  20   0 65.7g 109m 104m R    69  0.0   0:00.85 postgres: user1 db 0.0.0.3(44809) SELECT\n 82712 postgres  20   0 65.7g  68m  64m R    69  0.0   0:00.55 postgres: user1 db 0.0.0.3(44814) SELECT\n 82715 postgres  20   0 65.7g  75m  70m R    69  0.0   0:00.58 postgres: user1 db 0.0.0.4(49905) SELECT\n 19548 postgres  20   0 65.7g  79m  56m R    65  0.0   8:02.44 postgres: user1 db 0.0.0.2(37887) SELECT\n 36714 postgres  20   0 65.7g  80m  56m R    65  0.0   5:56.08 postgres: user1 db 0.0.0.3(35177) SELECT\n 43599 postgres  20   0 65.7g  80m  56m R    65  0.0   5:05.03 postgres: user1 db 0.0.0.3(36638) SELECT\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]]\nSent: Wednesday, October 21, 2015 12:50 PM\nTo: Pavel Stehule\nCc: Jamie Koceniak; [email protected]\nSubject: Re: [PERFORM] Recursive query performance issue\n\nOn Wed, Oct 21, 2015 at 2:45 PM, Pavel Stehule <[email protected]> wrote:\n> 2015-10-21 21:32 GMT+02:00 Jamie Koceniak <[email protected]>:\n>>\n>> adama_prod=# SHOW shared_buffers;\n>>\n>> shared_buffers\n>>\n>> ----------------\n>>\n>> 64GB\n>\n>\n> can you try to increase shared buffers to 200GB and decrease effective\n> cache size to 180GB? If it is possibly - I am not sure, if this\n> setting is good fro production usage, but the result can be\n> interesting for bottleneck identification.\n\nwe need to see a snapshot from\n*) top\n*) perf top\n\nmerlin", "msg_date": "Fri, 23 Oct 2015 20:00:10 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "On Fri, Oct 23, 2015 at 12:45 PM, Jamie Koceniak\n<[email protected]> wrote:\n> Hi,\n>\n> We just had the performance problem again today.\n> Here is some of the top output. 
Unfortunately, we don't have perf top installed.\n>\n> top - 16:22:16 up 29 days, 13:00, 2 users, load average: 164.63, 158.62, 148.52\n> Tasks: 1369 total, 181 running, 1188 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 6.2 us, 0.7 sy, 0.0 ni, 93.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n> MiB Mem: 2068265 total, 433141 used, 1635124 free, 586 buffers\n> MiB Swap: 7812 total, 0 used, 7812 free, 412641 cached\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 81745 postgres 20 0 65.7g 51m 34m R 101 0.0 0:09.20 postgres: user1 db 0.0.0.2(52307) SELECT\n> 81782 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.50 postgres: user1 db 0.0.0.3(44630) SELECT\n> 81797 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.03 postgres: user1 db 0.0.0.6(60752) SELECT\n<snip>\n\nok, this rules out iowait.\n\nload is 160+. system is reporting 6.2%user, 93.1%idle, 0 iowait.\nThis is very odd.\n*) how many processors do you have?\n*) Can we have more details about the hardware platform?\n*) Is this system virtualized? If so, what solution?\n\nwe need a perf top and a capture of 'vmstat 1' for context switches\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Oct 2015 10:03:33 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "Had the issue again today.\r\n\r\nHere is vmstat :\r\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\r\n r b swpd free buff cache si so bi bo in cs us sy id wa\r\n24 0 0 1591718656 605656 499370336 0 0 0 371 0 0 7 1 93 0\r\n25 0 0 1591701376 605656 499371936 0 0 0 600 13975 20168 20 1 79 0\r\n26 0 0 1591654784 605656 499372064 0 0 0 5892 12725 14627 20 1 79 0\r\n25 0 0 1591614336 605656 499372128 0 0 0 600 11665 12642 21 1 78 0\r\n27 0 0 1591549952 605656 499372192 0 0 0 408 16939 23387 23 1 76 0\r\n29 0 0 1591675392 605656 499372288 0 0 0 836 15380 22564 23 1 76 0\r\n27 0 0 1591608704 605656 499372352 0 0 0 456 17593 27955 23 1 76 0\r\n34 0 0 1591524608 605656 499372480 0 0 0 5904 18963 30915 23 1 75 0\r\n23 0 0 1591632384 605656 499372576 0 0 0 704 18190 31002 22 1 77 0\r\n25 0 0 1591551360 605656 499372640 0 0 0 944 12532 14095 21 1 78 0\r\n24 0 0 1591613568 605656 499372704 0 0 0 416 11183 12553 20 1 79 0\r\n23 0 0 1591531520 605656 499372768 0 0 0 400 12648 15540 19 1 80 0\r\n22 0 0 1591510528 605656 499372800 0 0 0 6024 14670 21993 19 1 80 0\r\n31 0 0 1591388800 605656 499372896 0 0 0 472 20605 28242 20 1 79 0\r\n\r\nWe have a 120 CPU server :)\r\n\r\nprocessor : 119\r\nvendor_id : GenuineIntel\r\ncpu family : 6\r\nmodel : 62\r\nmodel name : Intel(R) Xeon(R) CPU E7-4880 v2 @ 2.50GHz\r\n\r\n\r\n-----Original Message-----\r\nFrom: Merlin Moncure [mailto:[email protected]] \r\nSent: Monday, October 26, 2015 8:04 AM\r\nTo: Jamie Koceniak\r\nCc: Pavel Stehule; [email protected]\r\nSubject: Re: [PERFORM] Recursive query performance issue\r\n\r\nOn Fri, Oct 23, 2015 at 12:45 PM, Jamie Koceniak <[email protected]> wrote:\r\n> Hi,\r\n>\r\n> We just had the performance problem again today.\r\n> Here is some of the top output. 
Unfortunately, we don't have perf top installed.\r\n>\r\n> top - 16:22:16 up 29 days, 13:00, 2 users, load average: 164.63, 158.62, 148.52\r\n> Tasks: 1369 total, 181 running, 1188 sleeping, 0 stopped, 0 zombie\r\n> %Cpu(s): 6.2 us, 0.7 sy, 0.0 ni, 93.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\r\n> MiB Mem: 2068265 total, 433141 used, 1635124 free, 586 buffers\r\n> MiB Swap: 7812 total, 0 used, 7812 free, 412641 cached\r\n>\r\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\r\n> 81745 postgres 20 0 65.7g 51m 34m R 101 0.0 0:09.20 postgres: user1 db 0.0.0.2(52307) SELECT\r\n> 81782 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.50 postgres: user1 db 0.0.0.3(44630) SELECT\r\n> 81797 postgres 20 0 65.7g 51m 34m R 101 0.0 0:08.03 postgres: user1 db 0.0.0.6(60752) SELECT\r\n<snip>\r\n\r\nok, this rules out iowait.\r\n\r\nload is 160+. system is reporting 6.2%user, 93.1%idle, 0 iowait.\r\nThis is very odd.\r\n*) how many processors do you have?\r\n*) Can we have more details about the hardware platform?\r\n*) Is this system virtualized? If so, what solution?\r\n\r\nwe need a perf top and a capture of 'vmstat 1' for context switches\r\n\r\nmerlin\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 14 Nov 2015 06:58:00 +0000", "msg_from": "Jamie Koceniak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query performance issue" }, { "msg_contents": "On Sat, Nov 14, 2015 at 12:58 AM, Jamie Koceniak\n<[email protected]> wrote:\n> Had the issue again today.\n>\n> Here is vmstat :\n> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 24 0 0 1591718656 605656 499370336 0 0 0 371 0 0 7 1 93 0\n> 25 0 0 1591701376 605656 499371936 0 0 0 600 13975 20168 20 1 79 0\n> 26 0 0 1591654784 605656 499372064 0 0 0 5892 12725 14627 20 1 79 0\n> 25 0 0 1591614336 605656 499372128 0 0 0 600 11665 12642 21 1 78 0\n> 27 0 0 1591549952 605656 499372192 0 0 0 408 16939 23387 23 1 76 0\n> 29 0 0 1591675392 605656 499372288 0 0 0 836 15380 22564 23 1 76 0\n> 27 0 0 1591608704 605656 499372352 0 0 0 456 17593 27955 23 1 76 0\n> 34 0 0 1591524608 605656 499372480 0 0 0 5904 18963 30915 23 1 75 0\n> 23 0 0 1591632384 605656 499372576 0 0 0 704 18190 31002 22 1 77 0\n> 25 0 0 1591551360 605656 499372640 0 0 0 944 12532 14095 21 1 78 0\n> 24 0 0 1591613568 605656 499372704 0 0 0 416 11183 12553 20 1 79 0\n> 23 0 0 1591531520 605656 499372768 0 0 0 400 12648 15540 19 1 80 0\n> 22 0 0 1591510528 605656 499372800 0 0 0 6024 14670 21993 19 1 80 0\n> 31 0 0 1591388800 605656 499372896 0 0 0 472 20605 28242 20 1 79 0\n>\n> We have a 120 CPU server :)\n>\n> processor : 119\n> vendor_id : GenuineIntel\n> cpu family : 6\n> model : 62\n> model name : Intel(R) Xeon(R) CPU E7-4880 v2 @ 2.50GHz\n\nPer the numbers above. this server is very healthy. Something is not\nadding up here: I would really have liked to see a snapshot from 'top'\nand 'perf top' taken at the same time. Via top we could have seen if\nsome of the processors were completely loaded down while some were not\nbeing utilized at all. This would suggest a problem with the operating\nsystem, likely NUMA related.\n\n*) Are you counting hyperthreading to get to the 120 cpu count\n\n*) Is this server virtualized\n\n*) what is the output of:\nlscpu | grep NUMA\n\n*) do you have 'taskset' installed? 
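(a quick way to check is something like 'command -v taskset'; as far as I know it ships with the util-linux package, so it is probably already there on a Debian box) 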
Can we check affinity via:\ntaskset -c -p <pid>\n\nwhere <pid> is the pid of a few randomly sampled postgres processes at work\n\n*) Can you report exact kernel version\n\n*) what is output of:\ncat /sys/kernel/mm/transparent_hugepage/enabled\ncat /sys/kernel/mm/transparent_hugepage/defrag\n\n*) Is installing a newer postgres an option? Configuring highly SMP\nsystems for reliable scaling may require some progressive thinking.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 Nov 2015 08:19:12 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query performance issue" } ]
[ { "msg_contents": "Hi,\n\nWondering if anyone could suggest how we could improve the performance of\nthis type of query?\nThe intensive part is the summing of integer arrays as far as I can see.\nWe're thinking there's not much we can do to improve performance apart from\nthrow more CPU at it... would love to be proven wrong though!\n\n\n*Query:*\n\n explain (analyse,buffers)\n select\n sum(s2.array_a),sum(s2.array_b)\n from mytable s1 left join mytable s2\n on s1.code=s2.code and s1.buyer=s2.seller and s2.seller='XX'\n where s1.buyer='XX'\n group by s1.buyer,s1.code\n;\n\n\n*Depesz Explain Link:*\n\nhttp://explain.depesz.com/s/m3XP\n\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=275573.49..336223.36 rows=2547 width=524) (actual\ntime=1059.340..22946.772 rows=22730 loops=1)\n Buffers: shared hit=113596 read=1020 dirtied=15\n -> Merge Left Join (cost=275573.49..278850.09 rows=113560 width=524)\n(actual time=1058.773..1728.186 rows=240979 loops=1)\n Merge Cond: ((s1.code)::text = (s2.code)::text)\n Join Filter: (s1.buyer = (s2.seller)::bpchar)\n Buffers: shared hit=113596 read=1020 dirtied=15\n -> Index Only Scan using mytable_buyer_idx on mytable s1\n (cost=0.42..1226.06 rows=25465 width=12) (actual time=0.015..35.790\nrows=22730 loops=1)\n Index Cond: (buyer = 'XX'::bpchar)\n Heap Fetches: 3739\n Buffers: shared hit=16805 dirtied=1\n -> Sort (cost=275573.07..275818.33 rows=98106 width=525) (actual\ntime=1058.736..1141.560 rows=231662 loops=1)\n Sort Key: s2.code\n Sort Method: quicksort Memory: 241426kB\n Buffers: shared hit=96791 read=1020 dirtied=14\n -> Bitmap Heap Scan on mytable s2\n (cost=12256.28..267439.07 rows=98106 width=525) (actual\ntime=60.330..325.730 rows=231662 loops=1)\n Recheck Cond: ((seller)::text = 'XX'::text)\n Filter: ((seller)::bpchar = 'XX'::bpchar)\n Buffers: shared hit=96791 read=1020 dirtied=14\n -> Bitmap Index Scan on mytable_seller_idx\n (cost=0.00..12231.75 rows=254844 width=0) (actual time=40.474..40.474\nrows=233244 loops=1)\n Index Cond: ((seller)::text = 'XX'::text)\n Buffers: shared hit=30 read=1020\n Total runtime: 22968.292 ms\n(22 rows)\n\n\n\n*Table size:*\n\n=> select count(*) from mytable;\n count\n--------\n 602669\n(1 row)\n\n\n*Array types:*\n\n# select array_a,array_b from mytable limit 1;\n array_a | array_b\n---------------------------+---------------------------\n {0,0,0,0,0,0,0,0,0,0,0,0} | {0,0,0,0,0,0,0,0,0,0,0,0}\n\n\n*Example schema:*\n\n# \\d mytable\n Table \"public.mytable\"\n Column | Type | Modifiers\n-------------------+-----------------------+------------------------\n buyer | character(2) | not null\n code | character varying(20) | not null\n seller | character varying(50) |\n array_a | integer[] |\n array_b | integer[] |\nIndexes:\n \"mytable_buyer_code_idx\" UNIQUE, btree (buyer, code) CLUSTER\n \"mytable_buyer_idx\" btree (buyer)\n \"mytable_code_idx\" btree (code)\n \"mytable_seller_idx\" btree (seller)\n\n\n*Version:*\n\n> SELECT version() ;\n version\n\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3\n20120306 (Red Hat 4.6.3-2), 64-bit\n(1 row)\n\nThis is running on an AWS RDS instance.\n\nThanks for any pointers\n-- \nDavid\n\nHi,Wondering if anyone could suggest how we could improve the performance of this type of 
query?The intensive part is the summing of integer arrays as far as I can see.We're thinking there's not much we can do to improve performance apart from throw more CPU at it... would love to be proven wrong though!Query:  explain (analyse,buffers)   select   sum(s2.array_a),sum(s2.array_b)  from mytable s1 left join mytable s2  on s1.code=s2.code and s1.buyer=s2.seller and s2.seller='XX'  where s1.buyer='XX'  group by s1.buyer,s1.code;Depesz Explain Link:http://explain.depesz.com/s/m3XP                                                                               QUERY PLAN                                                                               ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ GroupAggregate  (cost=275573.49..336223.36 rows=2547 width=524) (actual time=1059.340..22946.772 rows=22730 loops=1)   Buffers: shared hit=113596 read=1020 dirtied=15   ->  Merge Left Join  (cost=275573.49..278850.09 rows=113560 width=524) (actual time=1058.773..1728.186 rows=240979 loops=1)         Merge Cond: ((s1.code)::text = (s2.code)::text)         Join Filter: (s1.buyer = (s2.seller)::bpchar)         Buffers: shared hit=113596 read=1020 dirtied=15         ->  Index Only Scan using mytable_buyer_idx on mytable s1  (cost=0.42..1226.06 rows=25465 width=12) (actual time=0.015..35.790 rows=22730 loops=1)               Index Cond: (buyer = 'XX'::bpchar)               Heap Fetches: 3739               Buffers: shared hit=16805 dirtied=1         ->  Sort  (cost=275573.07..275818.33 rows=98106 width=525) (actual time=1058.736..1141.560 rows=231662 loops=1)               Sort Key: s2.code               Sort Method: quicksort  Memory: 241426kB               Buffers: shared hit=96791 read=1020 dirtied=14               ->  Bitmap Heap Scan on mytable s2  (cost=12256.28..267439.07 rows=98106 width=525) (actual time=60.330..325.730 rows=231662 loops=1)                     Recheck Cond: ((seller)::text = 'XX'::text)                     Filter: ((seller)::bpchar = 'XX'::bpchar)                     Buffers: shared hit=96791 read=1020 dirtied=14                     ->  Bitmap Index Scan on mytable_seller_idx  (cost=0.00..12231.75 rows=254844 width=0) (actual time=40.474..40.474 rows=233244 loops=1)                           Index Cond: ((seller)::text = 'XX'::text)                           Buffers: shared hit=30 read=1020 Total runtime: 22968.292 ms(22 rows)Table size:=> select count(*) from mytable; count  -------- 602669(1 row)Array types:# select array_a,array_b from mytable limit 1;      array_a      |     array_b      ---------------------------+--------------------------- {0,0,0,0,0,0,0,0,0,0,0,0} | {0,0,0,0,0,0,0,0,0,0,0,0}Example schema:# \\d mytable                        Table \"public.mytable\"      Column       |         Type          |       Modifiers        -------------------+-----------------------+------------------------ buyer             | character(2)          | not null code              | character varying(20) | not null seller            | character varying(50) |  array_a           | integer[]             |  array_b           | integer[]             | Indexes:    \"mytable_buyer_code_idx\" UNIQUE, btree (buyer, code) CLUSTER    \"mytable_buyer_idx\" btree (buyer)    \"mytable_code_idx\" btree (code)    \"mytable_seller_idx\" btree (seller)Version:> SELECT version() ;                                                   version                                            
        -------------------------------------------------------------------------------------------------------------- PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bit(1 row)This is running on an AWS RDS instance.Thanks for any pointers-- David", "msg_date": "Fri, 23 Oct 2015 15:29:17 +0100", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "GroupAggregate and Integer Arrays" }, { "msg_contents": "On Fri, Oct 23, 2015 at 7:29 AM, David Osborne <[email protected]> wrote:\n\n\n> Hi,\n>\n> Wondering if anyone could suggest how we could improve the performance of\n> this type of query?\n> The intensive part is the summing of integer arrays as far as I can see.\n>\n\n\nPostgres does not ship with any 'sum' function which takes array arguments.\n\n> select sum('{1,2,3,4,5,6}'::int[]);\n\nERROR: function sum(integer[]) does not exist\n\nAre you using a user defined function? If so, how did you define it?\n\nCheers,\n\nJeff\n\nOn Fri, Oct 23, 2015 at 7:29 AM, David Osborne <[email protected]> wrote: Hi,Wondering if anyone could suggest how we could improve the performance of this type of query?The intensive part is the summing of integer arrays as far as I can see.Postgres does not ship with any 'sum' function which takes array arguments.> select sum('{1,2,3,4,5,6}'::int[]);ERROR:  function sum(integer[]) does not existAre you using a user defined function?  If so, how did you define it?Cheers,Jeff", "msg_date": "Fri, 23 Oct 2015 09:15:42 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GroupAggregate and Integer Arrays" }, { "msg_contents": "Ah yes sorry:\n\nI think these cover it...\n\nCREATE AGGREGATE sum (\n sfunc = array_add,\n basetype = INTEGER[],\n stype = INTEGER[],\n initcond = '{}'\n );\n\nCREATE OR REPLACE FUNCTION array_add(int[],int[]) RETURNS int[] AS $$\n -- Add two arrays.\n select\n ARRAY (\n SELECT coalesce($1[i],0) + coalesce($2[i],0)\n FROM (\n select generate_series(least(array_lower($1, 1),array_lower($2,\n1)), greatest(array_upper($1, 1),array_upper($2, 1)), 1) AS i\n ) sub\n GROUP BY i\n ORDER BY i\n );\n$$ LANGUAGE sql STRICT IMMUTABLE;\n\n\n\n\nOn 23 October 2015 at 17:15, Jeff Janes <[email protected]> wrote:\n\n> On Fri, Oct 23, 2015 at 7:29 AM, David Osborne <[email protected]> wrote:\n>\n>\n>> Hi,\n>>\n>> Wondering if anyone could suggest how we could improve the performance of\n>> this type of query?\n>> The intensive part is the summing of integer arrays as far as I can see.\n>>\n>\n>\n> Postgres does not ship with any 'sum' function which takes array arguments.\n>\n> > select sum('{1,2,3,4,5,6}'::int[]);\n>\n> ERROR: function sum(integer[]) does not exist\n>\n> Are you using a user defined function? If so, how did you define it?\n>\n> Cheers,\n>\n> Jeff\n>\n\nAh yes sorry:I think these cover it...CREATE AGGREGATE sum (      sfunc = array_add,      basetype = INTEGER[],      stype = INTEGER[],      initcond = '{}'   );   CREATE OR REPLACE FUNCTION array_add(int[],int[]) RETURNS int[] AS $$   -- Add two arrays.   
select      ARRAY (         SELECT coalesce($1[i],0) + coalesce($2[i],0)         FROM (            select generate_series(least(array_lower($1, 1),array_lower($2, 1)), greatest(array_upper($1, 1),array_upper($2, 1)), 1) AS i         ) sub   GROUP BY i   ORDER BY i   );$$ LANGUAGE sql STRICT IMMUTABLE;On 23 October 2015 at 17:15, Jeff Janes <[email protected]> wrote:On Fri, Oct 23, 2015 at 7:29 AM, David Osborne <[email protected]> wrote: Hi,Wondering if anyone could suggest how we could improve the performance of this type of query?The intensive part is the summing of integer arrays as far as I can see.Postgres does not ship with any 'sum' function which takes array arguments.> select sum('{1,2,3,4,5,6}'::int[]);ERROR:  function sum(integer[]) does not existAre you using a user defined function?  If so, how did you define it?Cheers,Jeff", "msg_date": "Fri, 23 Oct 2015 17:26:26 +0100", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GroupAggregate and Integer Arrays" }, { "msg_contents": "On Friday, October 23, 2015, David Osborne <[email protected]> wrote:\n\n> Hi,\n>\n> Wondering if anyone could suggest how we could improve the performance of\n> this type of query?\n> The intensive part is the summing of integer arrays as far as I can see.\n> We're thinking there's not much we can do to improve performance apart\n> from throw more CPU at it... would love to be proven wrong though!\n>\n>\n> *Query:*\n>\n> explain (analyse,buffers)\n> select\n> sum(s2.array_a),sum(s2.array_b)\n> from mytable s1 left join mytable s2\n> on s1.code=s2.code and s1.buyer=s2.seller and s2.seller='XX'\n> where s1.buyer='XX'\n> group by s1.buyer,s1.code\n> ;\n>\n>\n> *Depesz Explain Link:*\n>\n> http://explain.depesz.com/s/m3XP\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=275573.49..336223.36 rows=2547 width=524) (actual\n> time=1059.340..22946.772 rows=22730 loops=1)\n> Buffers: shared hit=113596 read=1020 dirtied=15\n> -> Merge Left Join (cost=275573.49..278850.09 rows=113560 width=524)\n> (actual time=1058.773..1728.186 rows=240979 loops=1)\n> Merge Cond: ((s1.code)::text = (s2.code)::text)\n> Join Filter: (s1.buyer = (s2.seller)::bpchar)\n> Buffers: shared hit=113596 read=1020 dirtied=15\n> -> Index Only Scan using mytable_buyer_idx on mytable s1\n> (cost=0.42..1226.06 rows=25465 width=12) (actual time=0.015..35.790\n> rows=22730 loops=1)\n> Index Cond: (buyer = 'XX'::bpchar)\n> Heap Fetches: 3739\n> Buffers: shared hit=16805 dirtied=1\n> -> Sort (cost=275573.07..275818.33 rows=98106 width=525)\n> (actual time=1058.736..1141.560 rows=231662 loops=1)\n> Sort Key: s2.code\n> Sort Method: quicksort Memory: 241426kB\n> Buffers: shared hit=96791 read=1020 dirtied=14\n> -> Bitmap Heap Scan on mytable s2\n> (cost=12256.28..267439.07 rows=98106 width=525) (actual\n> time=60.330..325.730 rows=231662 loops=1)\n> Recheck Cond: ((seller)::text = 'XX'::text)\n> Filter: ((seller)::bpchar = 'XX'::bpchar)\n> Buffers: shared hit=96791 read=1020 dirtied=14\n> -> Bitmap Index Scan on mytable_seller_idx\n> (cost=0.00..12231.75 rows=254844 width=0) (actual time=40.474..40.474\n> rows=233244 loops=1)\n> Index Cond: ((seller)::text = 'XX'::text)\n> Buffers: shared hit=30 read=1020\n> Total runtime: 22968.292 ms\n> (22 rows)\n>\n>\n>\n> *Table size:*\n>\n> => select count(*) from mytable;\n> count\n> --------\n> 
602669\n> (1 row)\n>\n>\n> *Array types:*\n>\n> # select array_a,array_b from mytable limit 1;\n> array_a | array_b\n> ---------------------------+---------------------------\n> {0,0,0,0,0,0,0,0,0,0,0,0} | {0,0,0,0,0,0,0,0,0,0,0,0}\n>\n>\n> *Example schema:*\n>\n> # \\d mytable\n> Table \"public.mytable\"\n> Column | Type | Modifiers\n> -------------------+-----------------------+------------------------\n> buyer | character(2) | not null\n> code | character varying(20) | not null\n> seller | character varying(50) |\n> array_a | integer[] |\n> array_b | integer[] |\n> Indexes:\n> \"mytable_buyer_code_idx\" UNIQUE, btree (buyer, code) CLUSTER\n> \"mytable_buyer_idx\" btree (buyer)\n> \"mytable_code_idx\" btree (code)\n> \"mytable_seller_idx\" btree (seller)\n>\n>\n> *Version:*\n>\n> > SELECT version() ;\n> version\n>\n>\n> --------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3\n> 20120306 (Red Hat 4.6.3-2), 64-bit\n> (1 row)\n>\n> This is running on an AWS RDS instance.\n>\n> Thanks for any pointers\n> --\n> David\n>\n\nWhat's physical memory and setting of work_mem?\n\nmerlin\n\nOn Friday, October 23, 2015, David Osborne <[email protected]> wrote:Hi,Wondering if anyone could suggest how we could improve the performance of this type of query?The intensive part is the summing of integer arrays as far as I can see.We're thinking there's not much we can do to improve performance apart from throw more CPU at it... would love to be proven wrong though!Query:  explain (analyse,buffers)   select   sum(s2.array_a),sum(s2.array_b)  from mytable s1 left join mytable s2  on s1.code=s2.code and s1.buyer=s2.seller and s2.seller='XX'  where s1.buyer='XX'  group by s1.buyer,s1.code;Depesz Explain Link:http://explain.depesz.com/s/m3XP                                                                               QUERY PLAN                                                                               ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ GroupAggregate  (cost=275573.49..336223.36 rows=2547 width=524) (actual time=1059.340..22946.772 rows=22730 loops=1)   Buffers: shared hit=113596 read=1020 dirtied=15   ->  Merge Left Join  (cost=275573.49..278850.09 rows=113560 width=524) (actual time=1058.773..1728.186 rows=240979 loops=1)         Merge Cond: ((s1.code)::text = (s2.code)::text)         Join Filter: (s1.buyer = (s2.seller)::bpchar)         Buffers: shared hit=113596 read=1020 dirtied=15         ->  Index Only Scan using mytable_buyer_idx on mytable s1  (cost=0.42..1226.06 rows=25465 width=12) (actual time=0.015..35.790 rows=22730 loops=1)               Index Cond: (buyer = 'XX'::bpchar)               Heap Fetches: 3739               Buffers: shared hit=16805 dirtied=1         ->  Sort  (cost=275573.07..275818.33 rows=98106 width=525) (actual time=1058.736..1141.560 rows=231662 loops=1)               Sort Key: s2.code               Sort Method: quicksort  Memory: 241426kB               Buffers: shared hit=96791 read=1020 dirtied=14               ->  Bitmap Heap Scan on mytable s2  (cost=12256.28..267439.07 rows=98106 width=525) (actual time=60.330..325.730 rows=231662 loops=1)                     Recheck Cond: ((seller)::text = 'XX'::text)                     Filter: ((seller)::bpchar = 'XX'::bpchar)                     Buffers: shared hit=96791 
read=1020 dirtied=14                     ->  Bitmap Index Scan on mytable_seller_idx  (cost=0.00..12231.75 rows=254844 width=0) (actual time=40.474..40.474 rows=233244 loops=1)                           Index Cond: ((seller)::text = 'XX'::text)                           Buffers: shared hit=30 read=1020 Total runtime: 22968.292 ms(22 rows)Table size:=> select count(*) from mytable; count  -------- 602669(1 row)Array types:# select array_a,array_b from mytable limit 1;      array_a      |     array_b      ---------------------------+--------------------------- {0,0,0,0,0,0,0,0,0,0,0,0} | {0,0,0,0,0,0,0,0,0,0,0,0}Example schema:# \\d mytable                        Table \"public.mytable\"      Column       |         Type          |       Modifiers        -------------------+-----------------------+------------------------ buyer             | character(2)          | not null code              | character varying(20) | not null seller            | character varying(50) |  array_a           | integer[]             |  array_b           | integer[]             | Indexes:    \"mytable_buyer_code_idx\" UNIQUE, btree (buyer, code) CLUSTER    \"mytable_buyer_idx\" btree (buyer)    \"mytable_code_idx\" btree (code)    \"mytable_seller_idx\" btree (seller)Version:> SELECT version() ;                                                   version                                                    -------------------------------------------------------------------------------------------------------------- PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bit(1 row)This is running on an AWS RDS instance.Thanks for any pointers-- David What's physical memory and setting of work_mem?merlin", "msg_date": "Fri, 23 Oct 2015 12:35:21 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GroupAggregate and Integer Arrays" }, { "msg_contents": "On Fri, Oct 23, 2015 at 9:26 AM, David Osborne <[email protected]> wrote:\n> Ah yes sorry:\n>\n> I think these cover it...\n>\n> CREATE AGGREGATE sum (\n> sfunc = array_add,\n> basetype = INTEGER[],\n> stype = INTEGER[],\n> initcond = '{}'\n> );\n>\n> CREATE OR REPLACE FUNCTION array_add(int[],int[]) RETURNS int[] AS $$\n> -- Add two arrays.\n> select\n> ARRAY (\n> SELECT coalesce($1[i],0) + coalesce($2[i],0)\n> FROM (\n> select generate_series(least(array_lower($1, 1),array_lower($2,\n> 1)), greatest(array_upper($1, 1),array_upper($2, 1)), 1) AS i\n> ) sub\n> GROUP BY i\n> ORDER BY i\n> );\n> $$ LANGUAGE sql STRICT IMMUTABLE;\n\nYou are paying a lot for the convenience of using a sql language\nfunction here. If you want much better performance, you would\nprobably have to rewrite it into C. 
But that would be a drag, and I\nwould try just throwing more CPU at it first.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 24 Oct 2015 12:27:05 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GroupAggregate and Integer Arrays" }, { "msg_contents": "> CREATE OR REPLACE FUNCTION array_add(int[],int[]) RETURNS int[] AS $$\n> -- Add two arrays.\n> select\n> ARRAY (\n> SELECT coalesce($1[i],0) + coalesce($2[i],0)\n> FROM (\n> select generate_series(least(array_lower($1, 1),array_lower($2,\n> 1)), greatest(array_upper($1, 1),array_upper($2, 1)), 1) AS i\n> ) sub\n> GROUP BY i\n> ORDER BY i\n> );\n> $$ LANGUAGE sql STRICT IMMUTABLE;\n\nit seems that both the GROUP and ORDER BY are superfluous and adding some cycles.\n\nregards,\n\nMarc Mamin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 24 Oct 2015 20:23:18 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GroupAggregate and Integer Arrays" }, { "msg_contents": "Physical memory is 61GB at the moment.\n\nwork_mem is 1,249,104kB\n\n\n>>\n> What's physical memory and setting of work_mem?\n>\n> merlin\n>\n\n\n\n-- \nDavid Osborne\nQcode Software Limited\nhttp://www.qcode.co.uk\nT: +44 (0)1463 896484\n\nPhysical memory is 61GB at the moment.work_mem is 1,249,104kBWhat's physical memory and setting of work_mem?merlin \n-- David OsborneQcode Software Limitedhttp://www.qcode.co.uk\nT: +44 (0)1463 896484", "msg_date": "Mon, 26 Oct 2015 17:45:31 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GroupAggregate and Integer Arrays" }, { "msg_contents": "On Mon, Oct 26, 2015 at 12:45 PM, David Osborne <[email protected]> wrote:\n> Physical memory is 61GB at the moment.\n>\n> work_mem is 1,249,104kB\n\nI'm not sure if this query is a candidate because of the function, but\nyou can try progressively cranking work_mem and running explain to see\nwhat it'd take to get a hashaggregate plan.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Oct 2015 13:21:10 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GroupAggregate and Integer Arrays" } ]
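A minimal sketch, following Marc Mamin's remark, of what the transition function could look like with the superfluous GROUP BY dropped (the ORDER BY is kept here out of caution, since ARRAY(...) depends on row order), plus the work_mem experiment Merlin suggests; the SET value is illustrative only and none of this has been tested against the posted schema:

    CREATE OR REPLACE FUNCTION array_add(int[], int[]) RETURNS int[] AS $$
        -- Element-wise sum of two integer arrays; the generated subscripts
        -- are already distinct, so no GROUP BY is needed.
        SELECT ARRAY (
            SELECT coalesce($1[i], 0) + coalesce($2[i], 0)
            FROM generate_series(least(array_lower($1, 1), array_lower($2, 1)),
                                 greatest(array_upper($1, 1), array_upper($2, 1))) AS i
            ORDER BY i
        );
    $$ LANGUAGE sql STRICT IMMUTABLE;

    -- Raise work_mem for the session and re-run EXPLAIN to see whether the
    -- planner will switch from GroupAggregate to HashAggregate.
    SET work_mem = '2GB';   -- illustrative value only
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT sum(s2.array_a), sum(s2.array_b)
    FROM mytable s1
    LEFT JOIN mytable s2
      ON s1.code = s2.code AND s1.buyer = s2.seller AND s2.seller = 'XX'
    WHERE s1.buyer = 'XX'
    GROUP BY s1.buyer, s1.code;

Whether a hashed aggregate actually helps depends on how much of the runtime is the per-row SQL function call rather than the grouping itself, which is why Jeff's suggestion of rewriting the function in C may remain the bigger lever.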
[ { "msg_contents": "Hi all,\n\nWe have a slow query. After analyzing, the planner decision seems to be\ndiscutable : the query is faster when disabling seqscan. See below the two\nquery plan, and an extract from pg_stats.\n\nAny idea about what to change to help the planner ?\n\nAn information which can be useful : the number on distinct value on\norganization_id is very very low, may be the planner does not known that,\nand take the wrong decision.\n\nRegards,\n\nBertrand\n\n# explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n\"external_sync_messages\".\"organization_id\" = 1612 AND\n(\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\n QUERY\nPLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.00..12.39 rows=1 width=0) (actual time=232.212..232.213\nrows=1 loops=1)\n\n -> Seq Scan on external_sync_messages (cost=0.00..79104.69 rows=6385\nwidth=0) (actual time=232.209..232.209 rows=1 loops=1)\n\n Filter: ((handled_by IS NULL) AND (organization_id = 1612) AND\n((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n\n Rows Removed by Filter: 600140\n\n Planning time: 0.490 ms\n\n Execution time: 232.246 ms\n\n(6 rows)\n\n# set enable_seqscan = off;\n\nSET\n\n# explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n\"external_sync_messages\".\"organization_id\" = 1612 AND\n(\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\n\n QUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.42..39.88 rows=1 width=0) (actual time=0.030..0.030 rows=1\nloops=1)\n\n -> Index Scan using index_external_sync_messages_on_organization_id on\nexternal_sync_messages (cost=0.42..251934.05 rows=6385 width=0) (actual\ntime=0.028..0.028 rows=1 loops=1)\n\n Index Cond: (organization_id = 1612)\n\n Filter: ((handled_by IS NULL) AND ((status)::text <> ALL\n('{sent_to_proxy,in_progress,ok}'::text[])))\n\n Planning time: 0.103 ms\n\n Execution time: 0.052 ms\n\n(6 rows)\n\n# SELECT attname, inherited, n_distinct, array_to_string(most_common_vals,\nE'\\n') as most_common_vals FROM pg_stats WHERE tablename =\n'external_sync_messages' and attname IN ('status', 'organization_id',\n'handled_by');\n\n attname | inherited | n_distinct | most_common_vals\n\n-----------------+-----------+------------+------------------\n\n handled_by | f | 3 | 3 +\n\n | | | 236140 +\n\n | | | 54413\n\n organization_id | f | 22 | 1612 +\n\n | | | 287 +\n\n | | | 967 +\n\n | | | 1223 +\n\n | | | 1123 +\n\n | | | 1930 +\n\n | | | 841 +\n\n | | | 1814 +\n\n | | | 711 +\n\n | | | 1513 +\n\n | | | 1794 +\n\n | | | 1246 +\n\n | | | 1673 +\n\n | | | 1552 +\n\n | | | 1747 +\n\n | | | 2611 +\n\n | | | 2217 +\n\n | | | 2448 +\n\n | | | 2133 +\n\n | | | 1861 +\n\n | | | 2616 +\n\n | | | 2796\n\n status | f | 6 | ok +\n\n | | | ignored +\n\n | | | channel_error +\n\n | | | in_progress +\n\n | | | error +\n\n | | | sent_to_proxy\n\n(3 rows)\n\n# select count(*) from external_sync_messages;\n\n count\n\n--------\n\n 992912\n\n(1 row)\n\nHi all,We have a slow query. 
After analyzing, the planner decision seems to be discutable : the query is faster when disabling seqscan. See below the two query plan, and an extract from pg_stats.Any idea about what to change to help the planner ?An information which can be useful : the number on distinct value on organization_id is very very low, may be the planner does not known that, and take the wrong decision.Regards,Bertrand# explain analyze SELECT  1 AS one FROM \"external_sync_messages\"  WHERE \"external_sync_messages\".\"organization_id\" = 1612 AND (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress', 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;                                                                 QUERY PLAN                                                                 -------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.00..12.39 rows=1 width=0) (actual time=232.212..232.213 rows=1 loops=1)   ->  Seq Scan on external_sync_messages  (cost=0.00..79104.69 rows=6385 width=0) (actual time=232.209..232.209 rows=1 loops=1)         Filter: ((handled_by IS NULL) AND (organization_id = 1612) AND ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))         Rows Removed by Filter: 600140 Planning time: 0.490 ms Execution time: 232.246 ms(6 rows)# set enable_seqscan = off;SET# explain analyze SELECT  1 AS one FROM \"external_sync_messages\"  WHERE \"external_sync_messages\".\"organization_id\" = 1612 AND (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress', 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;                                                                                      QUERY PLAN                                                                                      -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.42..39.88 rows=1 width=0) (actual time=0.030..0.030 rows=1 loops=1)   ->  Index Scan using index_external_sync_messages_on_organization_id on external_sync_messages  (cost=0.42..251934.05 rows=6385 width=0) (actual time=0.028..0.028 rows=1 loops=1)         Index Cond: (organization_id = 1612)         Filter: ((handled_by IS NULL) AND ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[]))) Planning time: 0.103 ms Execution time: 0.052 ms(6 rows)# SELECT attname, inherited, n_distinct, array_to_string(most_common_vals, E'\\n') as most_common_vals FROM pg_stats WHERE tablename = 'external_sync_messages' and attname IN ('status', 'organization_id', 'handled_by');     attname     | inherited | n_distinct | most_common_vals -----------------+-----------+------------+------------------ handled_by      | f         |          3 | 3               +                 |           |            | 236140          +                 |           |            | 54413 organization_id | f         |         22 | 1612            +                 |           |            | 287             +                 |           |            | 967             +                 |           |            | 1223            +                 |           |            | 1123            +                 |           |            | 1930            +                 |           |            | 841             +                 |           |            | 1814  
          +                 |           |            | 711             +                 |           |            | 1513            +                 |           |            | 1794            +                 |           |            | 1246            +                 |           |            | 1673            +                 |           |            | 1552            +                 |           |            | 1747            +                 |           |            | 2611            +                 |           |            | 2217            +                 |           |            | 2448            +                 |           |            | 2133            +                 |           |            | 1861            +                 |           |            | 2616            +                 |           |            | 2796 status          | f         |          6 | ok              +                 |           |            | ignored         +                 |           |            | channel_error   +                 |           |            | in_progress     +                 |           |            | error           +                 |           |            | sent_to_proxy(3 rows)\n# select count(*) from external_sync_messages;\n count  \n--------\n 992912\n(1 row)", "msg_date": "Tue, 27 Oct 2015 10:35:27 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Query planner wants to use seq scan" }, { "msg_contents": "On 27.10.2015 12:35, Bertrand Paquet wrote:\n> Hi all,\n>\n> We have a slow query. After analyzing, the planner decision seems to \n> be discutable : the query is faster when disabling seqscan. See below \n> the two query plan, and an extract from pg_stats.\n>\n> Any idea about what to change to help the planner ?\n>\n> An information which can be useful : the number on distinct value on \n> organization_id is very very low, may be the planner does not known \n> that, and take the wrong decision.\n>\n> Regards,\n>\n> Bertrand\n>\n> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" \n> WHERE \"external_sync_messages\".\"organization_id\" = 1612 AND \n> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', \n> 'in_progress', 'ok')) AND \"external_sync_messages\".\"handled_by\" IS \n> NULL LIMIT 1;\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Limit (cost=0.00..12.39 rows=1 width=0) (actual \n> time=232.212..232.213 rows=1 loops=1)\n>\n> -> Seq Scan on external_sync_messages (cost=0.00..79104.69 \n> rows=6385 width=0) (actual time=232.209..232.209 rows=1 loops=1)\n>\n> Filter: ((handled_by IS NULL) AND (organization_id = 1612) \n> AND ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n>\n> Rows Removed by Filter: 600140\n>\n> Planning time: 0.490 ms\n>\n> Execution time: 232.246 ms\n>\n> (6 rows)\n>\n> # set enable_seqscan = off;\n>\n> SET\n>\n> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" \n> WHERE \"external_sync_messages\".\"organization_id\" = 1612 AND \n> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', \n> 'in_progress', 'ok')) AND \"external_sync_messages\".\"handled_by\" IS \n> NULL LIMIT 1;\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Limit 
(cost=0.42..39.88 rows=1 width=0) (actual time=0.030..0.030 \n> rows=1 loops=1)\n>\n> -> Index Scan using \n> index_external_sync_messages_on_organization_id on \n> external_sync_messages (cost=0.42..251934.05 rows=6385 width=0) \n> (actual time=0.028..0.028 rows=1 loops=1)\n>\n> Index Cond: (organization_id = 1612)\n>\n> Filter: ((handled_by IS NULL) AND ((status)::text <> ALL \n> ('{sent_to_proxy,in_progress,ok}'::text[])))\n>\n> Planning time: 0.103 ms\n>\n> Execution time: 0.052 ms\n>\n> (6 rows)\n>\n> # SELECT attname, inherited, \n> n_distinct, array_to_string(most_common_vals, E'\\n') as \n> most_common_vals FROM pg_stats WHERE tablename = \n> 'external_sync_messages' and attname IN ('status', 'organization_id', \n> 'handled_by');\n>\n> attname | inherited | n_distinct | most_common_vals\n>\n> -----------------+-----------+------------+------------------\n>\n> handled_by | f | 3 | 3 +\n>\n> | | | 236140 +\n>\n> | | | 54413\n>\n> organization_id | f | 22 | 1612 +\n>\n> | | | 287 +\n>\n> | | | 967 +\n>\n> | | | 1223 +\n>\n> | | | 1123 +\n>\n> | | | 1930 +\n>\n> | | | 841 +\n>\n> | | | 1814 +\n>\n> | | | 711 +\n>\n> | | | 1513 +\n>\n> | | | 1794 +\n>\n> | | | 1246 +\n>\n> | | | 1673 +\n>\n> | | | 1552 +\n>\n> | | | 1747 +\n>\n> | | | 2611 +\n>\n> | | | 2217 +\n>\n> | | | 2448 +\n>\n> | | | 2133 +\n>\n> | | | 1861 +\n>\n> | | | 2616 +\n>\n> | | | 2796\n>\n> status | f | 6 | ok +\n>\n> | | | ignored +\n>\n> | | | channel_error +\n>\n> | | | in_progress +\n>\n> | | | error +\n>\n> | | | sent_to_proxy\n>\n> (3 rows)\n>\n> # select count(*) from external_sync_messages;\n>\n> count\n>\n> --------\n>\n> 992912\n>\n> (1 row)\n>\n>\nHello, Bertrand!\nMay be statistics on external_sync_messages is wrong? i.e planner give \nus rows=6385 but seq scan give us Rows Removed by Filter: 600140\nMaybe you should recalc it by VACUUM ANALYZE it?\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Oct 2015 14:08:49 +0300", "msg_from": "Alex Ignatov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "Yes, I have run VACUUM ANALYZE, no effect.\n\nBertrand\n\n2015-10-27 12:08 GMT+01:00 Alex Ignatov <[email protected]>:\n\n> On 27.10.2015 12:35, Bertrand Paquet wrote:\n>\n>> Hi all,\n>>\n>> We have a slow query. After analyzing, the planner decision seems to be\n>> discutable : the query is faster when disabling seqscan. 
See below the two\n>> query plan, and an extract from pg_stats.\n>>\n>> Any idea about what to change to help the planner ?\n>>\n>> An information which can be useful : the number on distinct value on\n>> organization_id is very very low, may be the planner does not known that,\n>> and take the wrong decision.\n>>\n>> Regards,\n>>\n>> Bertrand\n>>\n>> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n>> 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>\n>> QUERY PLAN\n>>\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> Limit (cost=0.00..12.39 rows=1 width=0) (actual time=232.212..232.213\n>> rows=1 loops=1)\n>>\n>> -> Seq Scan on external_sync_messages (cost=0.00..79104.69 rows=6385\n>> width=0) (actual time=232.209..232.209 rows=1 loops=1)\n>>\n>> Filter: ((handled_by IS NULL) AND (organization_id = 1612) AND\n>> ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>\n>> Rows Removed by Filter: 600140\n>>\n>> Planning time: 0.490 ms\n>>\n>> Execution time: 232.246 ms\n>>\n>> (6 rows)\n>>\n>> # set enable_seqscan = off;\n>>\n>> SET\n>>\n>> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n>> 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>\n>> QUERY PLAN\n>>\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> Limit (cost=0.42..39.88 rows=1 width=0) (actual time=0.030..0.030\n>> rows=1 loops=1)\n>>\n>> -> Index Scan using index_external_sync_messages_on_organization_id\n>> on external_sync_messages (cost=0.42..251934.05 rows=6385 width=0) (actual\n>> time=0.028..0.028 rows=1 loops=1)\n>>\n>> Index Cond: (organization_id = 1612)\n>>\n>> Filter: ((handled_by IS NULL) AND ((status)::text <> ALL\n>> ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>\n>> Planning time: 0.103 ms\n>>\n>> Execution time: 0.052 ms\n>>\n>> (6 rows)\n>>\n>> # SELECT attname, inherited, n_distinct,\n>> array_to_string(most_common_vals, E'\\n') as most_common_vals FROM pg_stats\n>> WHERE tablename = 'external_sync_messages' and attname IN ('status',\n>> 'organization_id', 'handled_by');\n>>\n>> attname | inherited | n_distinct | most_common_vals\n>>\n>> -----------------+-----------+------------+------------------\n>>\n>> handled_by | f | 3 | 3 +\n>>\n>> | | | 236140 +\n>>\n>> | | | 54413\n>>\n>> organization_id | f | 22 | 1612 +\n>>\n>> | | | 287 +\n>>\n>> | | | 967 +\n>>\n>> | | | 1223 +\n>>\n>> | | | 1123 +\n>>\n>> | | | 1930 +\n>>\n>> | | | 841 +\n>>\n>> | | | 1814 +\n>>\n>> | | | 711 +\n>>\n>> | | | 1513 +\n>>\n>> | | | 1794 +\n>>\n>> | | | 1246 +\n>>\n>> | | | 1673 +\n>>\n>> | | | 1552 +\n>>\n>> | | | 1747 +\n>>\n>> | | | 2611 +\n>>\n>> | | | 2217 +\n>>\n>> | | | 2448 +\n>>\n>> | | | 2133 +\n>>\n>> | | | 1861 +\n>>\n>> | | | 2616 +\n>>\n>> | | | 2796\n>>\n>> status | f | 6 | ok +\n>>\n>> | | | ignored +\n>>\n>> | | | channel_error +\n>>\n>> | | | in_progress +\n>>\n>> | | | error +\n>>\n>> | | | sent_to_proxy\n>>\n>> (3 rows)\n>>\n>> # select count(*) from 
external_sync_messages;\n>>\n>> count\n>>\n>> --------\n>>\n>> 992912\n>>\n>> (1 row)\n>>\n>>\n>> Hello, Bertrand!\n> May be statistics on external_sync_messages is wrong? i.e planner give us\n> rows=6385 but seq scan give us Rows Removed by Filter: 600140\n> Maybe you should recalc it by VACUUM ANALYZE it?\n>\n> --\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\nYes, I have run VACUUM ANALYZE, no effect.Bertrand2015-10-27 12:08 GMT+01:00 Alex Ignatov <[email protected]>:On 27.10.2015 12:35, Bertrand Paquet wrote:\n\r\nHi all,\n\r\nWe have a slow query. After analyzing, the planner decision seems to be discutable : the query is faster when disabling seqscan. See below the two query plan, and an extract from pg_stats.\n\r\nAny idea about what to change to help the planner ?\n\r\nAn information which can be useful : the number on distinct value on organization_id is very very low, may be the planner does not known that, and take the wrong decision.\n\r\nRegards,\n\r\nBertrand\n\r\n# explain analyze SELECT  1 AS one FROM \"external_sync_messages\"  WHERE \"external_sync_messages\".\"organization_id\" = 1612 AND (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress', 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\r\n                              QUERY PLAN\n\r\n--------------------------------------------------------------------------------------------------------------------------------------------\n\r\n Limit  (cost=0.00..12.39 rows=1 width=0) (actual time=232.212..232.213 rows=1 loops=1)\n\r\n   ->  Seq Scan on external_sync_messages  (cost=0.00..79104.69 rows=6385 width=0) (actual time=232.209..232.209 rows=1 loops=1)\n\r\n         Filter: ((handled_by IS NULL) AND (organization_id = 1612) AND ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n\r\n         Rows Removed by Filter: 600140\n\r\n Planning time: 0.490 ms\n\r\n Execution time: 232.246 ms\n\r\n(6 rows)\n\r\n# set enable_seqscan = off;\n\r\nSET\n\r\n# explain analyze SELECT  1 AS one FROM \"external_sync_messages\"  WHERE \"external_sync_messages\".\"organization_id\" = 1612 AND (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress', 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\r\n                                                    QUERY PLAN\n\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\r\n Limit  (cost=0.42..39.88 rows=1 width=0) (actual time=0.030..0.030 rows=1 loops=1)\n\r\n   ->  Index Scan using index_external_sync_messages_on_organization_id on external_sync_messages  (cost=0.42..251934.05 rows=6385 width=0) (actual time=0.028..0.028 rows=1 loops=1)\n\r\n         Index Cond: (organization_id = 1612)\n\r\n         Filter: ((handled_by IS NULL) AND ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n\r\n Planning time: 0.103 ms\n\r\n Execution time: 0.052 ms\n\r\n(6 rows)\n\r\n# SELECT attname, inherited, n_distinct, array_to_string(most_common_vals, E'\\n') as most_common_vals FROM pg_stats WHERE tablename = 'external_sync_messages' and attname IN ('status', 'organization_id', 'handled_by');\n\r\n     attname     | inherited | n_distinct | most_common_vals\n\r\n-----------------+-----------+------------+------------------\n\r\n handled_by      | f         |       3 | 3               +\n\r\n   
              |           |         | 236140          +\n\r\n                 |           |         | 54413\n\r\n organization_id | f         |     22 | 1612            +\n\r\n                 |           |         | 287             +\n\r\n                 |           |         | 967             +\n\r\n                 |           |         | 1223            +\n\r\n                 |           |         | 1123            +\n\r\n                 |           |         | 1930            +\n\r\n                 |           |         | 841             +\n\r\n                 |           |         | 1814            +\n\r\n                 |           |         | 711             +\n\r\n                 |           |         | 1513            +\n\r\n                 |           |         | 1794            +\n\r\n                 |           |         | 1246            +\n\r\n                 |           |         | 1673            +\n\r\n                 |           |         | 1552            +\n\r\n                 |           |         | 1747            +\n\r\n                 |           |         | 2611            +\n\r\n                 |           |         | 2217            +\n\r\n                 |           |         | 2448            +\n\r\n                 |           |         | 2133            +\n\r\n                 |           |         | 1861            +\n\r\n                 |           |         | 2616            +\n\r\n                 |           |         | 2796\n\r\n status          | f         |       6 | ok              +\n\r\n                 |           |         | ignored         +\n\r\n                 |           |         | channel_error   +\n\r\n                 |           |         | in_progress     +\n\r\n                 |           |         | error           +\n\r\n                 |           |         | sent_to_proxy\n\r\n(3 rows)\n\r\n# select count(*) from external_sync_messages;\n\r\n count\n\r\n--------\n\r\n 992912\n\r\n(1 row)\n\n\n\r\nHello, Bertrand!\r\nMay be statistics on external_sync_messages is wrong? i.e planner give us rows=6385 but seq scan give us Rows Removed by Filter: 600140\r\nMaybe you should recalc it by VACUUM ANALYZE it?\n\r\n-- \r\nAlex Ignatov\r\nPostgres Professional: http://www.postgrespro.com\r\nThe Russian Postgres Company", "msg_date": "Tue, 27 Oct 2015 12:10:00 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "On 27.10.2015 14:10, Bertrand Paquet wrote:\n> Yes, I have run VACUUM ANALYZE, no effect.\n>\n> Bertrand\n>\n> 2015-10-27 12:08 GMT+01:00 Alex Ignatov <[email protected] \n> <mailto:[email protected]>>:\n>\n> On 27.10.2015 12:35, Bertrand Paquet wrote:\n>\n> Hi all,\n>\n> We have a slow query. After analyzing, the planner decision\n> seems to be discutable : the query is faster when disabling\n> seqscan. 
See below the two query plan, and an extract from\n> pg_stats.\n>\n> Any idea about what to change to help the planner ?\n>\n> An information which can be useful : the number on distinct\n> value on organization_id is very very low, may be the planner\n> does not known that, and take the wrong decision.\n>\n> Regards,\n>\n> Bertrand\n>\n> # explain analyze SELECT 1 AS one FROM\n> \"external_sync_messages\" WHERE\n> \"external_sync_messages\".\"organization_id\" = 1612 AND\n> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy',\n> 'in_progress', 'ok')) AND\n> \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Limit (cost=0.00..12.39 rows=1 width=0) (actual\n> time=232.212..232.213 rows=1 loops=1)\n>\n> -> Seq Scan on external_sync_messages (cost=0.00..79104.69\n> rows=6385 width=0) (actual time=232.209..232.209 rows=1 loops=1)\n>\n> Filter: ((handled_by IS NULL) AND (organization_id =\n> 1612) AND ((status)::text <> ALL\n> ('{sent_to_proxy,in_progress,ok}'::text[])))\n>\n> Rows Removed by Filter: 600140\n>\n> Planning time: 0.490 ms\n>\n> Execution time: 232.246 ms\n>\n> (6 rows)\n>\n> # set enable_seqscan = off;\n>\n> SET\n>\n> # explain analyze SELECT 1 AS one FROM\n> \"external_sync_messages\" WHERE\n> \"external_sync_messages\".\"organization_id\" = 1612 AND\n> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy',\n> 'in_progress', 'ok')) AND\n> \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Limit (cost=0.42..39.88 rows=1 width=0) (actual\n> time=0.030..0.030 rows=1 loops=1)\n>\n> -> Index Scan using\n> index_external_sync_messages_on_organization_id on\n> external_sync_messages (cost=0.42..251934.05 rows=6385\n> width=0) (actual time=0.028..0.028 rows=1 loops=1)\n>\n> Index Cond: (organization_id = 1612)\n>\n> Filter: ((handled_by IS NULL) AND ((status)::text <>\n> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n>\n> Planning time: 0.103 ms\n>\n> Execution time: 0.052 ms\n>\n> (6 rows)\n>\n> # SELECT attname, inherited, n_distinct,\n> array_to_string(most_common_vals, E'\\n') as most_common_vals\n> FROM pg_stats WHERE tablename = 'external_sync_messages' and\n> attname IN ('status', 'organization_id', 'handled_by');\n>\n> attname | inherited | n_distinct | most_common_vals\n>\n> -----------------+-----------+------------+------------------\n>\n> handled_by | f | 3 | 3 +\n>\n> | | | 236140 +\n>\n> | | | 54413\n>\n> organization_id | f | 22 | 1612 +\n>\n> | | | 287 +\n>\n> | | | 967 +\n>\n> | | | 1223 +\n>\n> | | | 1123 +\n>\n> | | | 1930 +\n>\n> | | | 841 +\n>\n> | | | 1814 +\n>\n> | | | 711 +\n>\n> | | | 1513 +\n>\n> | | | 1794 +\n>\n> | | | 1246 +\n>\n> | | | 1673 +\n>\n> | | | 1552 +\n>\n> | | | 1747 +\n>\n> | | | 2611 +\n>\n> | | | 2217 +\n>\n> | | | 2448 +\n>\n> | | | 2133 +\n>\n> | | | 1861 +\n>\n> | | | 2616 +\n>\n> | | | 2796\n>\n> status | f | 6 | ok +\n>\n> | | | ignored +\n>\n> | | | channel_error +\n>\n> | | | in_progress +\n>\n> | | | error +\n>\n> | | | sent_to_proxy\n>\n> (3 rows)\n>\n> # select count(*) from external_sync_messages;\n>\n> count\n>\n> --------\n>\n> 992912\n>\n> (1 row)\n>\n>\n> Hello, Bertrand!\n> May be statistics on 
external_sync_messages is wrong? i.e planner\n> give us rows=6385 but seq scan give us Rows Removed by Filter: 600140\n> Maybe you should recalc it by VACUUM ANALYZE it?\n>\n> -- \n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\nWhat is the result of\nselect relname,n_live_tup,n_dead_tup, last_vacuum, last_autovacuum, \nlast_analyze, last_autoanalyze from pg_stat_user_tables where \nrelname='external_sync_messages' ?\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n On 27.10.2015 14:10, Bertrand Paquet wrote:\n\nYes, I have run VACUUM ANALYZE, no effect.\n \n\nBertrand\n\n\n2015-10-27 12:08 GMT+01:00 Alex Ignatov\n <[email protected]>:\n\n\nOn 27.10.2015 12:35, Bertrand Paquet\n wrote:\n\n Hi all,\n\n We have a slow query. After analyzing, the planner\n decision seems to be discutable : the query is faster\n when disabling seqscan. See below the two query plan,\n and an extract from pg_stats.\n\n Any idea about what to change to help the planner ?\n\n An information which can be useful : the number on\n distinct value on organization_id is very very low,\n may be the planner does not known that, and take the\n wrong decision.\n\n Regards,\n\n Bertrand\n\n # explain analyze SELECT  1 AS one FROM\n \"external_sync_messages\"  WHERE\n \"external_sync_messages\".\"organization_id\" = 1612 AND\n (\"external_sync_messages\".\"status\" NOT IN\n ('sent_to_proxy', 'in_progress', 'ok')) AND\n \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\n                               QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n\n  Limit  (cost=0.00..12.39 rows=1 width=0) (actual\n time=232.212..232.213 rows=1 loops=1)\n\n    ->  Seq Scan on external_sync_messages \n (cost=0.00..79104.69 rows=6385 width=0) (actual\n time=232.209..232.209 rows=1 loops=1)\n\n          Filter: ((handled_by IS NULL) AND\n (organization_id = 1612) AND ((status)::text <>\n ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n\n          Rows Removed by Filter: 600140\n\n  Planning time: 0.490 ms\n\n  Execution time: 232.246 ms\n\n (6 rows)\n\n # set enable_seqscan = off;\n\n SET\n\n # explain analyze SELECT  1 AS one FROM\n \"external_sync_messages\"  WHERE\n \"external_sync_messages\".\"organization_id\" = 1612 AND\n (\"external_sync_messages\".\"status\" NOT IN\n ('sent_to_proxy', 'in_progress', 'ok')) AND\n \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\n                                                    \n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n  Limit  (cost=0.42..39.88 rows=1 width=0) (actual\n time=0.030..0.030 rows=1 loops=1)\n\n    ->  Index Scan using\n index_external_sync_messages_on_organization_id on\n external_sync_messages  (cost=0.42..251934.05\n rows=6385 width=0) (actual time=0.028..0.028 rows=1\n loops=1)\n\n          Index Cond: (organization_id = 1612)\n\n          Filter: ((handled_by IS NULL) AND\n ((status)::text <> ALL\n ('{sent_to_proxy,in_progress,ok}'::text[])))\n\n  Planning time: 0.103 ms\n\n  Execution time: 0.052 ms\n\n (6 rows)\n\n # SELECT attname, inherited, n_distinct,\n array_to_string(most_common_vals, E'\\n') as\n most_common_vals FROM pg_stats WHERE tablename =\n 
'external_sync_messages' and attname IN ('status',\n 'organization_id', 'handled_by');\n\n      attname     | inherited | n_distinct |\n most_common_vals\n\n-----------------+-----------+------------+------------------\n\n  handled_by      | f         |       3 | 3           \n    +\n\n                  |           |         | 236140       \n   +\n\n                  |           |         | 54413\n\n  organization_id | f         |     22 | 1612         \n   +\n\n                  |           |         | 287         \n    +\n\n                  |           |         | 967         \n    +\n\n                  |           |         | 1223         \n   +\n\n                  |           |         | 1123         \n   +\n\n                  |           |         | 1930         \n   +\n\n                  |           |         | 841         \n    +\n\n                  |           |         | 1814         \n   +\n\n                  |           |         | 711         \n    +\n\n                  |           |         | 1513         \n   +\n\n                  |           |         | 1794         \n   +\n\n                  |           |         | 1246         \n   +\n\n                  |           |         | 1673         \n   +\n\n                  |           |         | 1552         \n   +\n\n                  |           |         | 1747         \n   +\n\n                  |           |         | 2611         \n   +\n\n                  |           |         | 2217         \n   +\n\n                  |           |         | 2448         \n   +\n\n                  |           |         | 2133         \n   +\n\n                  |           |         | 1861         \n   +\n\n                  |           |         | 2616         \n   +\n\n                  |           |         | 2796\n\n  status          | f         |       6 | ok           \n   +\n\n                  |           |         | ignored     \n    +\n\n                  |           |         |\n channel_error   +\n\n                  |           |         | in_progress \n    +\n\n                  |           |         | error       \n    +\n\n                  |           |         | sent_to_proxy\n\n (3 rows)\n\n # select count(*) from external_sync_messages;\n\n  count\n\n --------\n\n  992912\n\n (1 row)\n\n\n\n\n\n Hello, Bertrand!\n May be statistics on external_sync_messages is wrong? 
i.e\n planner give us rows=6385 but seq scan give us Rows Removed\n by Filter: 600140\n Maybe you should recalc it by VACUUM ANALYZE it?\n\n -- \n Alex Ignatov\n Postgres Professional: http://www.postgrespro.com\n The Russian Postgres Company\n\n\n\n\n\n\n What is the result of \n select relname,n_live_tup,n_dead_tup, last_vacuum, last_autovacuum,\n last_analyze, last_autoanalyze from pg_stat_user_tables where\n relname='external_sync_messages' ?\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 27 Oct 2015 14:17:58 +0300", "msg_from": "Alex Ignatov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "relname | n_live_tup | n_dead_tup | last_vacuum\n | last_autovacuum | last_analyze |\n last_autoanalyze\n\n------------------------+------------+------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------\n\n external_sync_messages | 998105 | 11750 | 2015-10-26\n20:15:17.484771+00 | 2015-10-02 15:04:25.944479+00 | 2015-10-26\n20:15:19.465308+00 | 2015-10-22 12:24:26.947616+00\n\n(1 row)\n\n2015-10-27 12:17 GMT+01:00 Alex Ignatov <[email protected]>:\n\n> On 27.10.2015 14:10, Bertrand Paquet wrote:\n>\n> Yes, I have run VACUUM ANALYZE, no effect.\n>\n> Bertrand\n>\n> 2015-10-27 12:08 GMT+01:00 Alex Ignatov <[email protected]>:\n>\n>> On 27.10.2015 12:35, Bertrand Paquet wrote:\n>>\n>>> Hi all,\n>>>\n>>> We have a slow query. After analyzing, the planner decision seems to be\n>>> discutable : the query is faster when disabling seqscan. See below the two\n>>> query plan, and an extract from pg_stats.\n>>>\n>>> Any idea about what to change to help the planner ?\n>>>\n>>> An information which can be useful : the number on distinct value on\n>>> organization_id is very very low, may be the planner does not known that,\n>>> and take the wrong decision.\n>>>\n>>> Regards,\n>>>\n>>> Bertrand\n>>>\n>>> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n>>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>>> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n>>> 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> --------------------------------------------------------------------------------------------------------------------------------------------\n>>>\n>>> Limit (cost=0.00..12.39 rows=1 width=0) (actual time=232.212..232.213\n>>> rows=1 loops=1)\n>>>\n>>> -> Seq Scan on external_sync_messages (cost=0.00..79104.69\n>>> rows=6385 width=0) (actual time=232.209..232.209 rows=1 loops=1)\n>>>\n>>> Filter: ((handled_by IS NULL) AND (organization_id = 1612) AND\n>>> ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>>\n>>> Rows Removed by Filter: 600140\n>>>\n>>> Planning time: 0.490 ms\n>>>\n>>> Execution time: 232.246 ms\n>>>\n>>> (6 rows)\n>>>\n>>> # set enable_seqscan = off;\n>>>\n>>> SET\n>>>\n>>> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n>>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>>> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n>>> 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>\n>>> Limit (cost=0.42..39.88 rows=1 width=0) (actual time=0.030..0.030\n>>> rows=1 loops=1)\n>>>\n>>> -> Index Scan using index_external_sync_messages_on_organization_id\n>>> on external_sync_messages (cost=0.42..251934.05 rows=6385 width=0) (actual\n>>> time=0.028..0.028 rows=1 loops=1)\n>>>\n>>> Index Cond: (organization_id = 1612)\n>>>\n>>> Filter: ((handled_by IS NULL) AND ((status)::text <> ALL\n>>> ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>>\n>>> Planning time: 0.103 ms\n>>>\n>>> Execution time: 0.052 ms\n>>>\n>>> (6 rows)\n>>>\n>>> # SELECT attname, inherited, n_distinct,\n>>> array_to_string(most_common_vals, E'\\n') as most_common_vals FROM pg_stats\n>>> WHERE tablename = 'external_sync_messages' and attname IN ('status',\n>>> 'organization_id', 'handled_by');\n>>>\n>>> attname | inherited | n_distinct | most_common_vals\n>>>\n>>> -----------------+-----------+------------+------------------\n>>>\n>>> handled_by | f | 3 | 3 +\n>>>\n>>> | | | 236140 +\n>>>\n>>> | | | 54413\n>>>\n>>> organization_id | f | 22 | 1612 +\n>>>\n>>> | | | 287 +\n>>>\n>>> | | | 967 +\n>>>\n>>> | | | 1223 +\n>>>\n>>> | | | 1123 +\n>>>\n>>> | | | 1930 +\n>>>\n>>> | | | 841 +\n>>>\n>>> | | | 1814 +\n>>>\n>>> | | | 711 +\n>>>\n>>> | | | 1513 +\n>>>\n>>> | | | 1794 +\n>>>\n>>> | | | 1246 +\n>>>\n>>> | | | 1673 +\n>>>\n>>> | | | 1552 +\n>>>\n>>> | | | 1747 +\n>>>\n>>> | | | 2611 +\n>>>\n>>> | | | 2217 +\n>>>\n>>> | | | 2448 +\n>>>\n>>> | | | 2133 +\n>>>\n>>> | | | 1861 +\n>>>\n>>> | | | 2616 +\n>>>\n>>> | | | 2796\n>>>\n>>> status | f | 6 | ok +\n>>>\n>>> | | | ignored +\n>>>\n>>> | | | channel_error +\n>>>\n>>> | | | in_progress +\n>>>\n>>> | | | error +\n>>>\n>>> | | | sent_to_proxy\n>>>\n>>> (3 rows)\n>>>\n>>> # select count(*) from external_sync_messages;\n>>>\n>>> count\n>>>\n>>> --------\n>>>\n>>> 992912\n>>>\n>>> (1 row)\n>>>\n>>>\n>>> Hello, Bertrand!\n>> May be statistics on external_sync_messages is wrong? 
i.e planner give us\n>> rows=6385 but seq scan give us Rows Removed by Filter: 600140\n>> Maybe you should recalc it by VACUUM ANALYZE it?\n>>\n>> --\n>> Alex Ignatov\n>> Postgres Professional: http://www.postgrespro.com\n>> The Russian Postgres Company\n>>\n>>\n> What is the result of\n> select relname,n_live_tup,n_dead_tup, last_vacuum, last_autovacuum,\n> last_analyze, last_autoanalyze from pg_stat_user_tables where\n> relname='external_sync_messages' ?\n>\n> --\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n\n\n        relname         | n_live_tup | n_dead_tup |          last_vacuum          |        last_autovacuum        |         last_analyze          |       last_autoanalyze        \n------------------------+------------+------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------\n external_sync_messages |     998105 |      11750 | 2015-10-26 20:15:17.484771+00 | 2015-10-02 15:04:25.944479+00 | 2015-10-26 20:15:19.465308+00 | 2015-10-22 12:24:26.947616+00\n(1 row)2015-10-27 12:17 GMT+01:00 Alex Ignatov <[email protected]>:\n\n On 27.10.2015 14:10, Bertrand Paquet wrote:\n\nYes, I have run VACUUM ANALYZE, no effect.\n \n\nBertrand\n\n\n2015-10-27 12:08 GMT+01:00 Alex Ignatov\n <[email protected]>:\n\n\nOn 27.10.2015 12:35, Bertrand Paquet\n wrote:\n\n Hi all,\n\n We have a slow query. After analyzing, the planner\n decision seems to be discutable : the query is faster\n when disabling seqscan. See below the two query plan,\n and an extract from pg_stats.\n\n Any idea about what to change to help the planner ?\n\n An information which can be useful : the number on\n distinct value on organization_id is very very low,\n may be the planner does not known that, and take the\n wrong decision.\n\n Regards,\n\n Bertrand\n\n # explain analyze SELECT  1 AS one FROM\n \"external_sync_messages\"  WHERE\n \"external_sync_messages\".\"organization_id\" = 1612 AND\n (\"external_sync_messages\".\"status\" NOT IN\n ('sent_to_proxy', 'in_progress', 'ok')) AND\n \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\n                               QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n\n  Limit  (cost=0.00..12.39 rows=1 width=0) (actual\n time=232.212..232.213 rows=1 loops=1)\n\n    ->  Seq Scan on external_sync_messages \n (cost=0.00..79104.69 rows=6385 width=0) (actual\n time=232.209..232.209 rows=1 loops=1)\n\n          Filter: ((handled_by IS NULL) AND\n (organization_id = 1612) AND ((status)::text <>\n ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n\n          Rows Removed by Filter: 600140\n\n  Planning time: 0.490 ms\n\n  Execution time: 232.246 ms\n\n (6 rows)\n\n # set enable_seqscan = off;\n\n SET\n\n # explain analyze SELECT  1 AS one FROM\n \"external_sync_messages\"  WHERE\n \"external_sync_messages\".\"organization_id\" = 1612 AND\n (\"external_sync_messages\".\"status\" NOT IN\n ('sent_to_proxy', 'in_progress', 'ok')) AND\n \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n\n                                                    \n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n  Limit  (cost=0.42..39.88 rows=1 width=0) (actual\n time=0.030..0.030 rows=1 
loops=1)\n\n    ->  Index Scan using\n index_external_sync_messages_on_organization_id on\n external_sync_messages  (cost=0.42..251934.05\n rows=6385 width=0) (actual time=0.028..0.028 rows=1\n loops=1)\n\n          Index Cond: (organization_id = 1612)\n\n          Filter: ((handled_by IS NULL) AND\n ((status)::text <> ALL\n ('{sent_to_proxy,in_progress,ok}'::text[])))\n\n  Planning time: 0.103 ms\n\n  Execution time: 0.052 ms\n\n (6 rows)\n\n # SELECT attname, inherited, n_distinct,\n array_to_string(most_common_vals, E'\\n') as\n most_common_vals FROM pg_stats WHERE tablename =\n 'external_sync_messages' and attname IN ('status',\n 'organization_id', 'handled_by');\n\n      attname     | inherited | n_distinct |\n most_common_vals\n\n-----------------+-----------+------------+------------------\n\n  handled_by      | f         |       3 | 3           \n    +\n\n                  |           |         | 236140       \n   +\n\n                  |           |         | 54413\n\n  organization_id | f         |     22 | 1612         \n   +\n\n                  |           |         | 287         \n    +\n\n                  |           |         | 967         \n    +\n\n                  |           |         | 1223         \n   +\n\n                  |           |         | 1123         \n   +\n\n                  |           |         | 1930         \n   +\n\n                  |           |         | 841         \n    +\n\n                  |           |         | 1814         \n   +\n\n                  |           |         | 711         \n    +\n\n                  |           |         | 1513         \n   +\n\n                  |           |         | 1794         \n   +\n\n                  |           |         | 1246         \n   +\n\n                  |           |         | 1673         \n   +\n\n                  |           |         | 1552         \n   +\n\n                  |           |         | 1747         \n   +\n\n                  |           |         | 2611         \n   +\n\n                  |           |         | 2217         \n   +\n\n                  |           |         | 2448         \n   +\n\n                  |           |         | 2133         \n   +\n\n                  |           |         | 1861         \n   +\n\n                  |           |         | 2616         \n   +\n\n                  |           |         | 2796\n\n  status          | f         |       6 | ok           \n   +\n\n                  |           |         | ignored     \n    +\n\n                  |           |         |\n channel_error   +\n\n                  |           |         | in_progress \n    +\n\n                  |           |         | error       \n    +\n\n                  |           |         | sent_to_proxy\n\n (3 rows)\n\n # select count(*) from external_sync_messages;\n\n  count\n\n --------\n\n  992912\n\n (1 row)\n\n\n\n\n\n Hello, Bertrand!\n May be statistics on external_sync_messages is wrong? 
i.e\n planner give us rows=6385 but seq scan give us Rows Removed\n by Filter: 600140\n Maybe you should recalc it by VACUUM ANALYZE it?\n\n -- \n Alex Ignatov\n Postgres Professional: http://www.postgrespro.com\n The Russian Postgres Company\n\n\n\n\n\n\n What is the result of \n select relname,n_live_tup,n_dead_tup, last_vacuum, last_autovacuum,\n last_analyze, last_autoanalyze from pg_stat_user_tables where\n relname='external_sync_messages' ?\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 27 Oct 2015 12:19:54 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "On 27.10.2015 14:19, Bertrand Paquet wrote:\n>\n> relname | n_live_tup | n_dead_tup | \n> last_vacuum | last_autovacuum | last_analyze \n> | last_autoanalyze\n>\n> ------------------------+------------+------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------\n>\n> external_sync_messages | 998105 | 11750 | 2015-10-26 \n> 20:15:17.484771+00 | 2015-10-02 15:04:25.944479+00 | 2015-10-26 \n> 20:15:19.465308+00 | 2015-10-22 12:24:26.947616+00\n>\n> (1 row)\n>\n>\n> 2015-10-27 12:17 GMT+01:00 Alex Ignatov <[email protected] \n> <mailto:[email protected]>>:\n>\n> On 27.10.2015 14:10, Bertrand Paquet wrote:\n>> Yes, I have run VACUUM ANALYZE, no effect.\n>>\n>> Bertrand\n>>\n>> 2015-10-27 12:08 GMT+01:00 Alex Ignatov <[email protected]\n>> <mailto:[email protected]>>:\n>>\n>> On 27.10.2015 12:35, Bertrand Paquet wrote:\n>>\n>> Hi all,\n>>\n>> We have a slow query. After analyzing, the planner\n>> decision seems to be discutable : the query is faster\n>> when disabling seqscan. 
See below the two query plan, and\n>> an extract from pg_stats.\n>>\n>> Any idea about what to change to help the planner ?\n>>\n>> An information which can be useful : the number on\n>> distinct value on organization_id is very very low, may\n>> be the planner does not known that, and take the wrong\n>> decision.\n>>\n>> Regards,\n>>\n>> Bertrand\n>>\n>> # explain analyze SELECT 1 AS one FROM\n>> \"external_sync_messages\" WHERE\n>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>> (\"external_sync_messages\".\"status\" NOT IN\n>> ('sent_to_proxy', 'in_progress', 'ok')) AND\n>> \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> Limit (cost=0.00..12.39 rows=1 width=0) (actual\n>> time=232.212..232.213 rows=1 loops=1)\n>>\n>> -> Seq Scan on external_sync_messages\n>> (cost=0.00..79104.69 rows=6385 width=0) (actual\n>> time=232.209..232.209 rows=1 loops=1)\n>>\n>> Filter: ((handled_by IS NULL) AND\n>> (organization_id = 1612) AND ((status)::text <> ALL\n>> ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>\n>> Rows Removed by Filter: 600140\n>>\n>> Planning time: 0.490 ms\n>>\n>> Execution time: 232.246 ms\n>>\n>> (6 rows)\n>>\n>> # set enable_seqscan = off;\n>>\n>> SET\n>>\n>> # explain analyze SELECT 1 AS one FROM\n>> \"external_sync_messages\" WHERE\n>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>> (\"external_sync_messages\".\"status\" NOT IN\n>> ('sent_to_proxy', 'in_progress', 'ok')) AND\n>> \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> Limit (cost=0.42..39.88 rows=1 width=0) (actual\n>> time=0.030..0.030 rows=1 loops=1)\n>>\n>> -> Index Scan using\n>> index_external_sync_messages_on_organization_id on\n>> external_sync_messages (cost=0.42..251934.05 rows=6385\n>> width=0) (actual time=0.028..0.028 rows=1 loops=1)\n>>\n>> Index Cond: (organization_id = 1612)\n>>\n>> Filter: ((handled_by IS NULL) AND\n>> ((status)::text <> ALL\n>> ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>\n>> Planning time: 0.103 ms\n>>\n>> Execution time: 0.052 ms\n>>\n>> (6 rows)\n>>\n>> # SELECT attname, inherited, n_distinct,\n>> array_to_string(most_common_vals, E'\\n') as\n>> most_common_vals FROM pg_stats WHERE tablename =\n>> 'external_sync_messages' and attname IN ('status',\n>> 'organization_id', 'handled_by');\n>>\n>> attname | inherited | n_distinct | most_common_vals\n>>\n>> -----------------+-----------+------------+------------------\n>>\n>> handled_by | f | 3 | 3 +\n>>\n>> | | | 236140 +\n>>\n>> | | | 54413\n>>\n>> organization_id | f | 22 | 1612 +\n>>\n>> | | | 287 +\n>>\n>> | | | 967 +\n>>\n>> | | | 1223 +\n>>\n>> | | | 1123 +\n>>\n>> | | | 1930 +\n>>\n>> | | | 841 +\n>>\n>> | | | 1814 +\n>>\n>> | | | 711 +\n>>\n>> | | | 1513 +\n>>\n>> | | | 1794 +\n>>\n>> | | | 1246 +\n>>\n>> | | | 1673 +\n>>\n>> | | | 1552 +\n>>\n>> | | | 1747 +\n>>\n>> | | | 2611 +\n>>\n>> | | | 2217 +\n>>\n>> | | | 2448 +\n>>\n>> | | | 2133 +\n>>\n>> | | | 1861 +\n>>\n>> | | | 2616 +\n>>\n>> | | | 2796\n>>\n>> status | f | 6 | ok +\n>>\n>> | | | ignored +\n>>\n>> | | | channel_error +\n>>\n>> | | | in_progress +\n>>\n>> | | | error +\n>>\n>> | | | sent_to_proxy\n>>\n>> (3 rows)\n>>\n>> # 
select count(*) from external_sync_messages;\n>>\n>> count\n>>\n>> --------\n>>\n>> 992912\n>>\n>> (1 row)\n>>\n>>\n>> Hello, Bertrand!\n>> May be statistics on external_sync_messages is wrong? i.e planner give\n>> us rows=6385 but seq scan give us Rows Removed by Filter: 600140\n>> Maybe you should recalc it by VACUUM ANALYZE it?\n>>\n>> -- \n>> Alex Ignatov\n>> Postgres Professional: http://www.postgrespro.com\n>> The Russian Postgres Company\n>>\n>>\n> What is the result of\n> select relname,n_live_tup,n_dead_tup, last_vacuum, last_autovacuum,\n> last_analyze, last_autoanalyze from pg_stat_user_tables where\n> relname='external_sync_messages' ?\n>\n> -- \n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\nWhat is yours random_page_cost parameter in postgres config?\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n", "msg_date": "Tue, 27 Oct 2015 14:30:45 +0300", "msg_from": "Alex Ignatov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner wants to use seq scan" },
  { "msg_contents": "Bertrand Paquet <[email protected]> writes:\n> We have a slow query. After analyzing, the planner decision seems to be\n> discutable : the query is faster when disabling seqscan. 
See below the two\n> query plan, and an extract from pg_stats.\n\n> Any idea about what to change to help the planner ?\n\nNeither one of those plans is very good: you're just hoping that the\nFilter condition will let a tuple through sooner rather than later.\n\nIf you care about the performance of this type of query, I'd consider\ncreating an index on (organization_id, status, handled_by) so that all\nthe conditions can be checked in the index.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Oct 2015 08:03:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "show random_page_cost ;\n\n random_page_cost\n\n------------------\n\n 4\n\n(1 row)\n\n2015-10-27 12:30 GMT+01:00 Alex Ignatov <[email protected]>:\n\n>\n>\n> On 27.10.2015 14:19, Bertrand Paquet wrote:\n>\n> relname | n_live_tup | n_dead_tup | last_vacuum\n> | last_autovacuum | last_analyze |\n> last_autoanalyze\n>\n>\n> ------------------------+------------+------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------\n>\n> external_sync_messages | 998105 | 11750 | 2015-10-26\n> 20:15:17.484771+00 | 2015-10-02 15:04:25.944479+00 | 2015-10-26\n> 20:15:19.465308+00 | 2015-10-22 12:24:26.947616+00\n>\n> (1 row)\n>\n> 2015-10-27 12:17 GMT+01:00 Alex Ignatov <[email protected]>:\n>\n>> On 27.10.2015 14:10, Bertrand Paquet wrote:\n>>\n>> Yes, I have run VACUUM ANALYZE, no effect.\n>>\n>> Bertrand\n>>\n>> 2015-10-27 12:08 GMT+01:00 Alex Ignatov < <[email protected]>\n>> [email protected]>:\n>>\n>>> On 27.10.2015 12:35, Bertrand Paquet wrote:\n>>>\n>>>> Hi all,\n>>>>\n>>>> We have a slow query. After analyzing, the planner decision seems to be\n>>>> discutable : the query is faster when disabling seqscan. 
See below the two\n>>>> query plan, and an extract from pg_stats.\n>>>>\n>>>> Any idea about what to change to help the planner ?\n>>>>\n>>>> An information which can be useful : the number on distinct value on\n>>>> organization_id is very very low, may be the planner does not known that,\n>>>> and take the wrong decision.\n>>>>\n>>>> Regards,\n>>>>\n>>>> Bertrand\n>>>>\n>>>> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n>>>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>>>> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n>>>> 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>>>\n>>>> QUERY PLAN\n>>>>\n>>>>\n>>>> --------------------------------------------------------------------------------------------------------------------------------------------\n>>>>\n>>>> Limit (cost=0.00..12.39 rows=1 width=0) (actual time=232.212..232.213\n>>>> rows=1 loops=1)\n>>>>\n>>>> -> Seq Scan on external_sync_messages (cost=0.00..79104.69\n>>>> rows=6385 width=0) (actual time=232.209..232.209 rows=1 loops=1)\n>>>>\n>>>> Filter: ((handled_by IS NULL) AND (organization_id = 1612) AND\n>>>> ((status)::text <> ALL ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>>>\n>>>> Rows Removed by Filter: 600140\n>>>>\n>>>> Planning time: 0.490 ms\n>>>>\n>>>> Execution time: 232.246 ms\n>>>>\n>>>> (6 rows)\n>>>>\n>>>> # set enable_seqscan = off;\n>>>>\n>>>> SET\n>>>>\n>>>> # explain analyze SELECT 1 AS one FROM \"external_sync_messages\" WHERE\n>>>> \"external_sync_messages\".\"organization_id\" = 1612 AND\n>>>> (\"external_sync_messages\".\"status\" NOT IN ('sent_to_proxy', 'in_progress',\n>>>> 'ok')) AND \"external_sync_messages\".\"handled_by\" IS NULL LIMIT 1;\n>>>>\n>>>> QUERY PLAN\n>>>>\n>>>>\n>>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>\n>>>> Limit (cost=0.42..39.88 rows=1 width=0) (actual time=0.030..0.030\n>>>> rows=1 loops=1)\n>>>>\n>>>> -> Index Scan using index_external_sync_messages_on_organization_id\n>>>> on external_sync_messages (cost=0.42..251934.05 rows=6385 width=0) (actual\n>>>> time=0.028..0.028 rows=1 loops=1)\n>>>>\n>>>> Index Cond: (organization_id = 1612)\n>>>>\n>>>> Filter: ((handled_by IS NULL) AND ((status)::text <> ALL\n>>>> ('{sent_to_proxy,in_progress,ok}'::text[])))\n>>>>\n>>>> Planning time: 0.103 ms\n>>>>\n>>>> Execution time: 0.052 ms\n>>>>\n>>>> (6 rows)\n>>>>\n>>>> # SELECT attname, inherited, n_distinct,\n>>>> array_to_string(most_common_vals, E'\\n') as most_common_vals FROM pg_stats\n>>>> WHERE tablename = 'external_sync_messages' and attname IN ('status',\n>>>> 'organization_id', 'handled_by');\n>>>>\n>>>> attname | inherited | n_distinct | most_common_vals\n>>>>\n>>>> -----------------+-----------+------------+------------------\n>>>>\n>>>> handled_by | f | 3 | 3 +\n>>>>\n>>>> | | | 236140 +\n>>>>\n>>>> | | | 54413\n>>>>\n>>>> organization_id | f | 22 | 1612 +\n>>>>\n>>>> | | | 287 +\n>>>>\n>>>> | | | 967 +\n>>>>\n>>>> | | | 1223 +\n>>>>\n>>>> | | | 1123 +\n>>>>\n>>>> | | | 1930 +\n>>>>\n>>>> | | | 841 +\n>>>>\n>>>> | | | 1814 +\n>>>>\n>>>> | | | 711 +\n>>>>\n>>>> | | | 1513 +\n>>>>\n>>>> | | | 1794 +\n>>>>\n>>>> | | | 1246 +\n>>>>\n>>>> | | | 1673 +\n>>>>\n>>>> | | | 1552 +\n>>>>\n>>>> | | | 1747 +\n>>>>\n>>>> | | | 2611 +\n>>>>\n>>>> | | | 2217 +\n>>>>\n>>>> | | | 2448 +\n>>>>\n>>>> | | | 2133 +\n>>>>\n>>>> | | | 1861 +\n>>>>\n>>>> | | 
| 2616 +\n>>>>\n>>>> | | | 2796\n>>>>\n>>>> status | f | 6 | ok +\n>>>>\n>>>> | | | ignored +\n>>>>\n>>>> | | | channel_error +\n>>>>\n>>>> | | | in_progress +\n>>>>\n>>>> | | | error +\n>>>>\n>>>> | | | sent_to_proxy\n>>>>\n>>>> (3 rows)\n>>>>\n>>>> # select count(*) from external_sync_messages;\n>>>>\n>>>> count\n>>>>\n>>>> --------\n>>>>\n>>>> 992912\n>>>>\n>>>> (1 row)\n>>>>\n>>>>\n>>>> Hello, Bertrand!\n>>> May be statistics on external_sync_messages is wrong? i.e planner give\n>>> us rows=6385 but seq scan give us Rows Removed by Filter: 600140\n>>> Maybe you should recalc it by VACUUM ANALYZE it?\n>>>\n>>> --\n>>> Alex Ignatov\n>>> Postgres Professional: <http://www.postgrespro.com>\n>>> http://www.postgrespro.com\n>>> The Russian Postgres Company\n>>>\n>>>\n>> What is the result of\n>> select relname,n_live_tup,n_dead_tup, last_vacuum, last_autovacuum,\n>> last_analyze, last_autoanalyze from pg_stat_user_tables where\n>> relname='external_sync_messages' ?\n>>\n>> --\n>> Alex Ignatov\n>> Postgres Professional: http://www.postgrespro.com\n>> The Russian Postgres Company\n>>\n>>\n>>\n> What is yours random_page_cost parameter in postgres config?\n>\n> --\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n\n\nshow random_page_cost ;\n random_page_cost \n------------------\n 4\n(1 row)2015-10-27 12:30 GMT+01:00 Alex Ignatov <[email protected]>:\n\n\n\nOn 27.10.2015 14:19, Bertrand Paquet\n wrote:\n\n\n\n        relname         | n_live_tup\n | n_dead_tup |          last_vacuum          |       \n last_autovacuum        |         last_analyze          |    \n   last_autoanalyze        \n------------------------+------------+------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------\n external_sync_messages |     998105\n |      11750 | 2015-10-26 20:15:17.484771+00 | 2015-10-02\n 15:04:25.944479+00 | 2015-10-26 20:15:19.465308+00 |\n 2015-10-22 12:24:26.947616+00\n(1 row)\n\n\n2015-10-27 12:17 GMT+01:00 Alex Ignatov\n <[email protected]>:\n\n\n\n On 27.10.2015 14:10, Bertrand Paquet\n wrote:\n\nYes, I have run VACUUM ANALYZE, no\n effect.\n \n\nBertrand\n\n\n2015-10-27 12:08\n GMT+01:00 Alex Ignatov <[email protected]>:\n\n\nOn 27.10.2015 12:35, Bertrand Paquet\n wrote:\n Hi all,\n\n We have a slow query. After analyzing,\n the planner decision seems to be\n discutable : the query is faster when\n disabling seqscan. 
See below the two\n query plan, and an extract from\n pg_stats.\n\n Any idea about what to change to help\n the planner ?\n\n An information which can be useful : the\n number on distinct value on\n organization_id is very very low, may be\n the planner does not known that, and\n take the wrong decision.\n\n Regards,\n\n Bertrand\n\n # explain analyze SELECT  1 AS one FROM\n \"external_sync_messages\"  WHERE\n \"external_sync_messages\".\"organization_id\"\n = 1612 AND\n (\"external_sync_messages\".\"status\" NOT\n IN ('sent_to_proxy', 'in_progress',\n 'ok')) AND\n \"external_sync_messages\".\"handled_by\" IS\n NULL LIMIT 1;\n\n                               QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n\n  Limit  (cost=0.00..12.39 rows=1\n width=0) (actual time=232.212..232.213\n rows=1 loops=1)\n\n    ->  Seq Scan on\n external_sync_messages \n (cost=0.00..79104.69 rows=6385 width=0)\n (actual time=232.209..232.209 rows=1\n loops=1)\n\n          Filter: ((handled_by IS NULL)\n AND (organization_id = 1612) AND\n ((status)::text <> ALL\n ('{sent_to_proxy,in_progress,ok}'::text[])))\n\n          Rows Removed by Filter: 600140\n\n  Planning time: 0.490 ms\n\n  Execution time: 232.246 ms\n\n (6 rows)\n\n # set enable_seqscan = off;\n\n SET\n\n # explain analyze SELECT  1 AS one FROM\n \"external_sync_messages\"  WHERE\n \"external_sync_messages\".\"organization_id\"\n = 1612 AND\n (\"external_sync_messages\".\"status\" NOT\n IN ('sent_to_proxy', 'in_progress',\n 'ok')) AND\n \"external_sync_messages\".\"handled_by\" IS\n NULL LIMIT 1;\n\n                                        \n             QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n  Limit  (cost=0.42..39.88 rows=1\n width=0) (actual time=0.030..0.030\n rows=1 loops=1)\n\n    ->  Index Scan using\n index_external_sync_messages_on_organization_id\n on external_sync_messages \n (cost=0.42..251934.05 rows=6385 width=0)\n (actual time=0.028..0.028 rows=1\n loops=1)\n\n          Index Cond: (organization_id =\n 1612)\n\n          Filter: ((handled_by IS NULL)\n AND ((status)::text <> ALL\n ('{sent_to_proxy,in_progress,ok}'::text[])))\n\n  Planning time: 0.103 ms\n\n  Execution time: 0.052 ms\n\n (6 rows)\n\n # SELECT attname, inherited, n_distinct,\n array_to_string(most_common_vals, E'\\n')\n as most_common_vals FROM pg_stats WHERE\n tablename = 'external_sync_messages' and\n attname IN ('status', 'organization_id',\n 'handled_by');\n\n      attname     | inherited |\n n_distinct | most_common_vals\n\n-----------------+-----------+------------+------------------\n\n  handled_by      | f         |       3 |\n 3               +\n\n                  |           |         |\n 236140          +\n\n                  |           |         |\n 54413\n\n  organization_id | f         |     22 |\n 1612            +\n\n                  |           |         |\n 287             +\n\n                  |           |         |\n 967             +\n\n                  |           |         |\n 1223            +\n\n                  |           |         |\n 1123            +\n\n                  |           |         |\n 1930            +\n\n                  |           |         |\n 841             +\n\n                  |           |         |\n 1814            +\n\n                  |           
|         |\n 711             +\n\n                  |           |         |\n 1513            +\n\n                  |           |         |\n 1794            +\n\n                  |           |         |\n 1246            +\n\n                  |           |         |\n 1673            +\n\n                  |           |         |\n 1552            +\n\n                  |           |         |\n 1747            +\n\n                  |           |         |\n 2611            +\n\n                  |           |         |\n 2217            +\n\n                  |           |         |\n 2448            +\n\n                  |           |         |\n 2133            +\n\n                  |           |         |\n 1861            +\n\n                  |           |         |\n 2616            +\n\n                  |           |         |\n 2796\n\n  status          | f         |       6 |\n ok              +\n\n                  |           |         |\n ignored         +\n\n                  |           |         |\n channel_error   +\n\n                  |           |         |\n in_progress     +\n\n                  |           |         |\n error           +\n\n                  |           |         |\n sent_to_proxy\n\n (3 rows)\n\n # select count(*) from\n external_sync_messages;\n\n  count\n\n --------\n\n  992912\n\n (1 row)\n\n\n\n\n\n Hello, Bertrand!\n May be statistics on external_sync_messages is\n wrong? i.e planner give us rows=6385 but seq\n scan give us Rows Removed by Filter: 600140\n Maybe you should recalc it by VACUUM ANALYZE\n it?\n\n -- \n Alex Ignatov\n Postgres Professional: http://www.postgrespro.com\n The Russian Postgres Company\n\n\n\n\n\n\n\n\n What is the result of \n select relname,n_live_tup,n_dead_tup, last_vacuum,\n last_autovacuum, last_analyze, last_autoanalyze from\n pg_stat_user_tables where relname='external_sync_messages'\n ?\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n What is yours random_page_cost  parameter in postgres config?\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 27 Oct 2015 14:06:28 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "Hi tom,\n\nI did the test yesterday with an index on the three fields, and with a\npartial index on organization and status and where is null condition on\nhandled.\n\nLe mardi 27 octobre 2015, Tom Lane <[email protected]> a écrit :\n\n> Bertrand Paquet <[email protected]> writes:\n> > We have a slow query. After analyzing, the planner decision seems to be\n> > discutable : the query is faster when disabling seqscan. 
See below the\n> two\n> > query plan, and an extract from pg_stats.\n>\n> > Any idea about what to change to help the planner ?\n>\n> Neither one of those plans is very good: you're just hoping that the\n> Filter condition will let a tuple through sooner rather than later.\n>\n> If you care about the performance of this type of query, I'd consider\n> creating an index on (organization_id, status, handled_by) so that all\n> the conditions can be checked in the index.\n>\n> regards, tom lane\n>\n\nHi tom,I did the test yesterday with an index on the three fields, and with a partial index on organization and status and where is null condition on handled.Le mardi 27 octobre 2015, Tom Lane <[email protected]> a écrit :Bertrand Paquet <[email protected]> writes:\n> We have a slow query. After analyzing, the planner decision seems to be\n> discutable : the query is faster when disabling seqscan. See below the two\n> query plan, and an extract from pg_stats.\n\n> Any idea about what to change to help the planner ?\n\nNeither one of those plans is very good: you're just hoping that the\nFilter condition will let a tuple through sooner rather than later.\n\nIf you care about the performance of this type of query, I'd consider\ncreating an index on (organization_id, status, handled_by) so that all\nthe conditions can be checked in the index.\n\n                        regards, tom lane", "msg_date": "Tue, 27 Oct 2015 14:06:41 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "Hi tom,\n\nI did the test yesterday with an index on the three fields, and with a\npartial index on organization and status and where is null condition on\nhandled. I saw no modification on query plan.\nMay be I forgot to analyze vacuum after. I will retry tonight.\n\nI use a btree index. Is it the good solution, even with the In clause ?\n\nRegards,\n\nBertrand\n\nLe mardi 27 octobre 2015, Tom Lane <[email protected]> a écrit :\n\n> Bertrand Paquet <[email protected] <javascript:;>> writes:\n> > We have a slow query. After analyzing, the planner decision seems to be\n> > discutable : the query is faster when disabling seqscan. See below the\n> two\n> > query plan, and an extract from pg_stats.\n>\n> > Any idea about what to change to help the planner ?\n>\n> Neither one of those plans is very good: you're just hoping that the\n> Filter condition will let a tuple through sooner rather than later.\n>\n> If you care about the performance of this type of query, I'd consider\n> creating an index on (organization_id, status, handled_by) so that all\n> the conditions can be checked in the index.\n>\n> regards, tom lane\n>\n\nHi tom,I did the test yesterday with an index on the three fields, and with a partial index on organization and status and where is null condition on handled. I saw no modification on query plan.May be I forgot to analyze vacuum after. I will retry tonight. I use a btree index. Is it the good solution, even with the In clause ?Regards,BertrandLe mardi 27 octobre 2015, Tom Lane <[email protected]> a écrit :Bertrand Paquet <[email protected]> writes:\n> We have a slow query. After analyzing, the planner decision seems to be\n> discutable : the query is faster when disabling seqscan. 
See below the two\n> query plan, and an extract from pg_stats.\n\n> Any idea about what to change to help the planner ?\n\nNeither one of those plans is very good: you're just hoping that the\nFilter condition will let a tuple through sooner rather than later.\n\nIf you care about the performance of this type of query, I'd consider\ncreating an index on (organization_id, status, handled_by) so that all\nthe conditions can be checked in the index.\n\n                        regards, tom lane", "msg_date": "Tue, 27 Oct 2015 18:33:17 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "So,\n\nTonight, the index on the three field is used, may be my yesterday vacuum\nupdated stats.\n\nThx you for your help.\n\nRegards,\n\nBertrand\n\n\n\n\n2015-10-27 18:33 GMT+01:00 Bertrand Paquet <[email protected]>:\n\n> Hi tom,\n>\n> I did the test yesterday with an index on the three fields, and with a\n> partial index on organization and status and where is null condition on\n> handled. I saw no modification on query plan.\n> May be I forgot to analyze vacuum after. I will retry tonight.\n>\n> I use a btree index. Is it the good solution, even with the In clause ?\n>\n> Regards,\n>\n> Bertrand\n>\n> Le mardi 27 octobre 2015, Tom Lane <[email protected]> a écrit :\n>\n>> Bertrand Paquet <[email protected]> writes:\n>> > We have a slow query. After analyzing, the planner decision seems to be\n>> > discutable : the query is faster when disabling seqscan. See below the\n>> two\n>> > query plan, and an extract from pg_stats.\n>>\n>> > Any idea about what to change to help the planner ?\n>>\n>> Neither one of those plans is very good: you're just hoping that the\n>> Filter condition will let a tuple through sooner rather than later.\n>>\n>> If you care about the performance of this type of query, I'd consider\n>> creating an index on (organization_id, status, handled_by) so that all\n>> the conditions can be checked in the index.\n>>\n>> regards, tom lane\n>>\n>\n\nSo,Tonight, the index on the three field is used, may be my yesterday vacuum updated stats.Thx you for your help.Regards,Bertrand2015-10-27 18:33 GMT+01:00 Bertrand Paquet <[email protected]>:Hi tom,I did the test yesterday with an index on the three fields, and with a partial index on organization and status and where is null condition on handled. I saw no modification on query plan.May be I forgot to analyze vacuum after. I will retry tonight. I use a btree index. Is it the good solution, even with the In clause ?Regards,BertrandLe mardi 27 octobre 2015, Tom Lane <[email protected]> a écrit :Bertrand Paquet <[email protected]> writes:\n> We have a slow query. After analyzing, the planner decision seems to be\n> discutable : the query is faster when disabling seqscan. 
See below the two\n> query plan, and an extract from pg_stats.\n\n> Any idea about what to change to help the planner ?\n\nNeither one of those plans is very good: you're just hoping that the\nFilter condition will let a tuple through sooner rather than later.\n\nIf you care about the performance of this type of query, I'd consider\ncreating an index on (organization_id, status, handled_by) so that all\nthe conditions can be checked in the index.\n\n                        regards, tom lane", "msg_date": "Tue, 27 Oct 2015 21:56:21 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "On 10/27/15 3:56 PM, Bertrand Paquet wrote:\n> Tonight, the index on the three field is used, may be my yesterday\n> vacuum updated stats.\n\nBTW, you can run just ANALYZE, which is *far* faster than a VACUUM on a \nlarge table.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Oct 2015 22:07:38 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner wants to use seq scan" }, { "msg_contents": "On 27.10.2015 23:56, Bertrand Paquet wrote:\n> So,\n>\n> Tonight, the index on the three field is used, may be my yesterday \n> vacuum updated stats.\n>\n> Thx you for your help.\n>\n> Regards,\n>\n> Bertrand\n>\n>\n>\n>\n> 2015-10-27 18:33 GMT+01:00 Bertrand Paquet \n> <[email protected] <mailto:[email protected]>>:\n>\n> Hi tom,\n>\n> I did the test yesterday with an index on the three fields, and\n> with a partial index on organization and status and where is null\n> condition on handled. I saw no modification on query plan.\n> May be I forgot to analyze vacuum after. I will retry tonight.\n>\n> I use a btree index. Is it the good solution, even with the In\n> clause ?\n>\n> Regards,\n>\n> Bertrand\n>\n> Le mardi 27 octobre 2015, Tom Lane <[email protected]\n> <mailto:[email protected]>> a écrit :\n>\n> Bertrand Paquet <[email protected]> writes:\n> > We have a slow query. After analyzing, the planner decision\n> seems to be\n> > discutable : the query is faster when disabling seqscan. See\n> below the two\n> > query plan, and an extract from pg_stats.\n>\n> > Any idea about what to change to help the planner ?\n>\n> Neither one of those plans is very good: you're just hoping\n> that the\n> Filter condition will let a tuple through sooner rather than\n> later.\n>\n> If you care about the performance of this type of query, I'd\n> consider\n> creating an index on (organization_id, status, handled_by) so\n> that all\n> the conditions can be checked in the index.\n>\n> regards, tom lane\n>\n>\nHello Bertrand once again!\nWhat's your status? 
Has the plan changed after deploying the three-field index?\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n", "msg_date": "Thu, 29 Oct 2015 15:27:40 +0300", "msg_from": "Alex Ignatov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner wants to use seq scan" },
  { "msg_contents": "Yes, the three-field index AND vacuum solved the issue.\n\nRegards,\n\nBertrand\n\n2015-10-29 13:27 GMT+01:00 Alex Ignatov <[email protected]>:\n\n>\n>\n> On 27.10.2015 23:56, Bertrand Paquet wrote:\n>\n> So,\n>\n> Tonight, the index on the three field is used, may be my yesterday vacuum\n> updated stats.\n>\n> Thx you for your help.\n>\n> Regards,\n>\n> Bertrand\n>\n>\n> 2015-10-27 18:33 GMT+01:00 Bertrand Paquet <[email protected]>:\n>\n>> Hi tom,\n>>\n>> I did the test yesterday with an index on the three fields, and with a\n>> partial index on organization and status and where is null condition on\n>> handled. I saw no modification on query plan.\n>> May be I forgot to analyze vacuum after. I will retry tonight.\n>>\n>> I use a btree index. Is it the good solution, even with the In clause ?\n>>\n>> Regards,\n>>\n>> Bertrand\n>>\n>> Le mardi 27 octobre 2015, Tom Lane <[email protected]> a écrit :\n>>\n>>> Bertrand Paquet <[email protected]> writes:\n>>> > We have a slow query. After analyzing, the planner decision seems to be\n>>> > discutable : the query is faster when disabling seqscan. See below the\n>>> two\n>>> > query plan, and an extract from pg_stats.\n>>>\n>>> > Any idea about what to change to help the planner ?\n>>>\n>>> Neither one of those plans is very good: you're just hoping that the\n>>> Filter condition will let a tuple through sooner rather than later.\n>>>\n>>> If you care about the performance of this type of query, I'd consider\n>>> creating an index on (organization_id, status, handled_by) so that all\n>>> the conditions can be checked in the index.\n>>>\n>>> regards, tom lane\n>>>\n>>\n> Hello Bertrand once again!\n> What's your status? Does the plan changed after deploying three field\n> index ?\n>\n> --\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n", "msg_date": "Thu, 29 Oct 2015 14:16:02 +0100", "msg_from": "Bertrand Paquet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner wants to use seq scan" } ]
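A minimal SQL sketch of the resolution reached in the thread above, assuming the table, columns, and predicate values quoted in the original report; the index names are illustrative, since the exact statements Bertrand ran were not posted:

-- Composite index suggested by Tom Lane: all three conditions can be
-- checked against the index entries, so very few heap rows need visiting.
CREATE INDEX index_esm_org_status_handled_by
    ON external_sync_messages (organization_id, status, handled_by);

-- Alternative Bertrand mentions trying: a partial index restricted to the
-- unhandled rows the query is actually interested in.
CREATE INDEX index_esm_unhandled
    ON external_sync_messages (organization_id, status)
    WHERE handled_by IS NULL;

-- Refresh planner statistics after creating the index; as Jim Nasby notes,
-- ANALYZE alone is enough and is far cheaper than a VACUUM on a large table.
ANALYZE external_sync_messages;

-- The original query, which should now be answered by an index scan
-- instead of a sequential scan.
SELECT 1 AS one
FROM external_sync_messages
WHERE organization_id = 1612
  AND status NOT IN ('sent_to_proxy', 'in_progress', 'ok')
  AND handled_by IS NULL
LIMIT 1;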
[ { "msg_contents": "I have partitioned a large table in my PG database (6.7 billion rows!) by a date column and in general constraint exclusion works well but only in relatively simple case when the partition key is specified exactly as created in the CHECK constraint. I'm curious if there is a way to get it to work a little more generally though.\n\nFor example my CHECK constraint (see code below) specifying a hard-coded field value works well (#1 and #2). Specifying a function that returns a value even though it is the appropriate type scans all of the partitions (#3) unfortunately. Likewise any join, CTE, or sub-query expression, even for a single row that returns the correct type also results in a scan of all of the partitions. \n\nI was curious if there was a way specifically to get #3 to work as the WHERE predicate in this case is stored as an integer but the table itself is partitioned by the appropriate date type. I believe I could work around this issue with dynamic sql in a function but there are lots of cases of this type of simple conversion and I wanted to avoid the maintenance of creating a function per query.\n\nIt's also slightly surprising that queries that join with the appropriate type (#4 & #5) also cause a full partition scan. Is there a work-around to get constraint_exclusion to work in this case?\n\n</snip>\n-- constraint exclusion tests\n-- generate some data\ncreate schema if not exists ptest;\nset search_path=ptest;\ndrop table if exists ptest.tbl cascade;\ncreate table if not exists tbl as select * from (\nwith a as (\n select\n generate_series('2014-01-01'::date, now(), '1 day'::interval)::date dt\n),\nb as (\n select\n generate_series(1, 1000) i\n)\nselect\n a.dt,\n b.i,\n md5((random()*4+5)::text) str\nfrom\n a cross join b\n) c;\n\n-- create child partitions\ncreate table ptest.tbl_p2014(check (dt >= '2014-01-01'::date and dt < '2015-01-01'::date)) inherits (ptest.tbl);\ncreate table ptest.tbl_p2015(check (dt >= '2015-01-01'::date and dt < '2016-01-01'::date)) inherits (ptest.tbl);\n\n-- populate child partitions\nwith pd as ( delete from only ptest.tbl where dt >= '2014-01-01'::date and dt < '2015-01-01'::date returning *) \ninsert into ptest.tbl_p2014 select * from pd;\nwith pd as ( delete from only ptest.tbl where dt >= '2015-01-01'::date and dt < '2016-01-01'::date returning *) \ninsert into ptest.tbl_p2015 select * from pd;\n\n-- clean parent of any data\ntruncate table only ptest.tbl;\n\n-- create dt field indexes\ncreate index i_tbl_dt on ptest.tbl(dt);\ncreate index i_tbl_dt_p2014 on ptest.tbl_p2014(dt);\ncreate index i_tble_dt_p2015 on ptest.tbl_p2015(dt);\n\n-- vacuum\nvacuum analyze verbose ptest.tbl;\n\n-- verify parent is empty and partitions have some data (estimated)\nselect relname, n_live_tup from pg_stat_user_tables where relname like 'tbl%' and schemaname = 'ptest' order by relname;\n\n-- check that partitions show in parent\n\\d+ ptest.tbl\n\n-- force constraint_exclusion to partition\nset constraint_exclusion = partition;\n\n-- #1: works\nexplain analyze select count(1) from ptest.tbl where dt = '2014-06-01'::date;\n\n-- #2: works\nexplain analyze select count(1) from ptest.tbl where dt = DATE '2014-06-01';\n\n-- #3: full scan (no constraint exclusion)\nexplain analyze select count(1) from ptest.tbl where dt = to_date(201406::text||01::text, 'YYYYMMDD');\n\n-- #4: full scan (no constraint exclusion)\nexplain analyze select count(1) from ptest.tbl where dt = (select '2014-06-01'::date);\n\n-- #5: full scan (no constraint 
exclusion)\nexplain analyze with foo as (select '2014-06-01'::date dt)\nselect count(1) from ptest.tbl inner join foo on (ptest.tbl.dt = foo.dt);\n\n</snip>\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Oct 2015 11:29:04 -0700", "msg_from": "GMail <[email protected]>", "msg_from_op": true, "msg_subject": "Partition Constraint Exclusion Limits" },
  { "msg_contents": "On Tue, Oct 27, 2015 at 2:29 PM, GMail <[email protected]> wrote:\n\n> I have partitioned a large table in my PG database (6.7 billion rows!) by\n> a date column and in general constraint exclusion works well but only in\n> relatively simple case when the partition key is specified exactly as\n> created in the CHECK constraint. I'm curious if there is a way to get it\n> to work a little more generally though.\n>\n> For example my CHECK constraint (see code below) specifying a hard-coded\n> field value works well (#1 and #2). Specifying a function that returns a\n> value even though it is the appropriate type scans all of the partitions\n> (#3) unfortunately. Likewise any join, CTE, or sub-query expression, even\n> for a single row that returns the correct type also results in a scan of\n> all of the partitions.\n>\n> I was curious if there was a way specifically to get #3 to work as the\n> WHERE predicate in this case is stored as an integer but the table itself\n> is partitioned by the appropriate date type. I believe I could work around\n> this issue with dynamic sql in a function but there are lots of cases of\n> this type of simple conversion and I wanted to avoid the maintenance of\n> creating a function per query.\n>\n\nShort answer, no.\n\nThe planner has the responsibility for performing constraint exclusion and\nit only has access to constants during its evaluation. It has no clue what\nkind of transformations a function might do. Various other optimizations\nare indeed possible but are not presently performed.\n\nSo, #3 (to_date(201406::text||01::text, 'YYYYMMDD')) is downright\nimpossible given the present architecture, and likely any future\narchitecture.\n\nWith #4 (explain analyze select count(1) from ptest.tbl where dt = (select\n'2014-06-01'::date);) the re-write module could in theory recognize this\nand re-write it to remove the sub-select. But real life is likely not so\nsimple, otherwise the query writer would simply have done that directly\nthemselves.\n\nIn a partitioning scheme the partitioning data has to be injected into the\nquery explicitly so that it is already in place before the planner receives\nthe query. Anything within the query requiring \"execution\" is handled by\nthe executor and at that point the chance to exclude partitions has come\nand gone.\n\nDavid J.\n", "msg_date": "Tue, 27 Oct 2015 15:03:05 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition Constraint Exclusion Limits" },
  { "msg_contents": "BTW: May be it could be feasible in future to perform partition exclusion\nduring the execution? This would be very neat feature.\n\nRegards, Vitalii Tymchyshyn\n\nTue, 27 Oct 2015 15:03 David G. Johnston <[email protected]> writes:\n\n> On Tue, Oct 27, 2015 at 2:29 PM, GMail <[email protected]> wrote:\n>\n>> I have partitioned a large table in my PG database (6.7 billion rows!) by\n>> a date column and in general constraint exclusion works well but only in\n>> relatively simple case when the partition key is specified exactly as\n>> created in the CHECK constraint. I'm curious if there is a way to get it\n>> to work a little more generally though.\n>>\n>> For example my CHECK constraint (see code below) specifying a hard-coded\n>> field value works well (#1 and #2). Specifying a function that returns a\n>> value even though it is the appropriate type scans all of the partitions\n>> (#3) unfortunately. 
It has no clue what\n> kind of transformations a function might do. Various other optimizations\n> are indeed possible but are not presently performed.\n>\n> ​So, #3 (\n> to_date(201406::text||01::text, 'YYYYMMDD');\n> ​) ​\n> is down-right impossible given the present architecture\n> ​; and likely any future architecture.\n>\n> With #4 (\n> explain analyze select count(1) from ptest.tbl where dt = (select\n> '2014-06-01'::date);\n> ​) ​\n> in theory the re-write module could recognize and re-write this remove the\n> sub-select.\n> ​ But likely real-life is not so simple otherwise the query writer likely\n> would have simply done is directly themself.\n>\n> ​\n> ​\n> ​\n> ​In a partitioning scheme the partitioning data has to be injected into\n> the query explicitly so that it is already in place before the planner\n> receives the query. Anything within the query requiring \"execution\" is\n> handled by the executor and at that point the chance to exclude partitions\n> has come and gone.\n>\n> David J.\n>\n\nBTW: May be it could be feasible in future to perform partition exclusion during the execution? This would be very neat feature.\nRegards, Vitalii Tymchyshyn\nВт, 27 жовт. 2015 15:03 David G. Johnston <[email protected]> пише:On Tue, Oct 27, 2015 at 2:29 PM, GMail <[email protected]> wrote:I have partitioned a large table in my PG database (6.7 billion rows!) by a date column and in general constraint exclusion works well but only in relatively simple case when the partition key is specified exactly as created in the CHECK constraint.  I'm curious if there is a way to get it to work a little more generally though.\n\nFor example my CHECK constraint (see code below) specifying a hard-coded field value works well (#1 and #2).  Specifying a function that returns a value even though it is the appropriate type scans all of the partitions (#3) unfortunately.  Likewise any join, CTE, or sub-query expression, even for a single row that returns the correct type also results in a scan of all of the partitions.\n\nI was curious if there was a way specifically to get #3 to work as the WHERE predicate in this case is stored as an integer but the table itself is partitioned by the appropriate date type.  I believe I could work around this issue with dynamic sql in a function but there are lots of cases of this type of simple conversion and I wanted to avoid the maintenance of creating a function per query.​Short answer, no.The planner has the responsibility for performing constraint exclusion and it only has access to constants during its evaluation.  It has no clue what kind of transformations a function might do.  Various other optimizations are indeed possible but are not presently performed.​So, #3 (to_date(201406::text||01::text, 'YYYYMMDD');​) ​is down-right impossible given the present architecture​; and likely any future architecture.With #4 (explain analyze select count(1) from ptest.tbl where dt = (select '2014-06-01'::date);​) ​in theory the re-write module could recognize and re-write this remove the sub-select.​  But likely real-life is not so simple otherwise the query writer likely would have simply done is directly themself.​​​​In a partitioning scheme the partitioning data has to be injected into the query explicitly so that it is already in place before the planner receives the query.  
Anything within the query requiring \"execution\" is handled by the executor and at that point the chance to exclude partitions has come and gone.David J.", "msg_date": "Tue, 27 Oct 2015 20:33:08 +0000", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition Constraint Exclusion Limits" }, { "msg_contents": "On 10/27/15 3:33 PM, Vitalii Tymchyshyn wrote:\n> BTW: May be it could be feasible in future to perform partition\n> exclusion during the execution? This would be very neat feature.\n\nTrue exclusion? probably not. The problem is you can't completely \nexclude something based on any value that could change during execution.\n\nThere has been some work done on declarative partition specification, \nwhere a given value would be fit to the exact partition it belong in. \nIIRC that's currently stalled though.\n\nOne thing you could try would be to create an index on each partition \nthat would always be empty. IE, if you have a June 2015 partition, you \ncould:\n\nCREATE INDEX ... ON( date_field ) WHERE date_field < '2015-6-1'::date OR \ndate_field >= '2015-7-1'::date;\n\nBecause the WHERE clause will never be true, that index will always be \nempty, which will make probing it very fast. I suspect that might be \nfaster than probing a regular index on the date field, but you should \ntest it.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Oct 2015 21:59:12 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition Constraint Exclusion Limits" } ]
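A minimal sketch of the dynamic-SQL workaround the original poster alludes to, assuming the ptest.tbl layout from this thread; the function name and the integer date parameter are illustrative only, not taken from the thread. Because EXECUTE folds the date into the statement text as a literal, the planner sees a constant and can exclude partitions at plan time:

CREATE OR REPLACE FUNCTION ptest.count_for_day(p_yyyymmdd integer)
RETURNS bigint AS $$
DECLARE
    v_cnt bigint;
BEGIN
    -- %L embeds the date as a quoted literal, so constraint exclusion can
    -- run against a constant rather than a parameter or function call.
    EXECUTE format('SELECT count(1) FROM ptest.tbl WHERE dt = %L::date',
                   to_date(p_yyyymmdd::text, 'YYYYMMDD'))
    INTO v_cnt;
    RETURN v_cnt;
END;
$$ LANGUAGE plpgsql;

The price is one plan per call, which is usually acceptable for reporting-style queries against a partitioned table of this size.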
[ { "msg_contents": "Does PostgreSQL 9.4 have official support on a system with more than 64 legitimate cores? ( 72 Cores , 4 CPUs Intel(R) Xeon(R) CPU E7-8890 )\n\n\nThe work Robert Haas did to fix the CPU locking way back when showed \nsignificant improvements up to 64, but that is as far as I know.\n\nThanks in Advance.\n", "msg_date": "Tue, 27 Oct 2015 22:04:04 -0700", "msg_from": "Javier Muro <[email protected]>", "msg_from_op": true, "msg_subject": "Scalability to more than 64 cores With PG 9.4 and RHEL 7.1 Kernel\n 3.10" }, { "msg_contents": "\nOn 28.10.2015 8:04, Javier Muro wrote:\n> Does PostgreSQL 9.4 have official support on a system with more than 64 \n> legitimate cores? ( 72 Cores , 4 CPUs Intel(R) Xeon(R) CPU E7-8890 )\n>\n>\n>\n> The work Robert Haas did to fix the CPU locking way back when showed\n> significant improvements up to 64, but that is as far as I know.\n>\n>\n> Thanks in Advance.\nHello Javier!\nOur tests show that PG 9.4 scales well up to 60 Intel cores, i.e. \npgbench -S with the DB on tmpfs gave us 700 000 tps. After 60 cores s_lock is \ndominating in CPU usage. 9.5 scales way better.\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Oct 2015 16:46:39 +0300", "msg_from": "Alex Ignatov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability to more than 64 cores With PG 9.4 and RHEL\n 7.1 Kernel 3.10" } ]
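For reference, the kind of read-only test Alex describes is normally driven with pgbench's select-only mode, with the data directory on tmpfs as he notes to keep storage out of the measurement. The scale factor, client counts and duration below are placeholders, since the thread does not give the exact settings:

    pgbench -i -s 1000 bench            # initialize a test database (scale factor is a placeholder)
    pgbench -S -c 64 -j 64 -T 300 bench # select-only run; step -c/-j up toward the core count

Plotting tps against the client count is what exposes the knee Alex reports around 60 cores on 9.4.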
[ { "msg_contents": "This related to a post in the general bugs forum, but I found this forum,\nand\n this seems more appropriate. This is my second attempt to post, I believe\nthe first attempt last week did not work, apologies if I'm duplicating.\n\nhttp://comments.gmane.org/gmane.comp.db.postgresql.bugs/39011\n\nI made have several users encounter performance problems, which all\nseem to come down to this problem: multiplying selectivity estimates can\ncause tuple estimates to grow very small very quickly, once the estimator\ngets to 1 row, the planner may choose plans that are very good ONLY WHEN\nthere is exactly 1 row (maybe even O(N^large)). Unfortunately, these may\nbe the worst plans if the estimate is even slightly off (even just\nreturning\n2 or 3 rows versus 1).\n\nUsing the patch below, I discovered that clamping relation tuple estimates\nto a number as small as 2 seemed to avoid all the catastrophic query\nplans.\n\nIn the scenarios I'm seeing, I have several examples of queries\nthat take >1m to run that should run in <1s. The estimate of 1 row\n(versus thousands actual) leads the planner to tee up several nest loop\njoins\nwhich causes thousands of table scans.\n\nI have been working on a more complete which tracks uniqueness along\nwith selectivity so that optimizer can benefit from knowing when a\nrelation must have 1 (or fewer) tuples, while clamping all other relations\nto 2 rather than 1.\n\ntypedef struct\n{\ndouble selectivity;\nboolean unique;\n} Selectivity;\n\nI am interested in hearing discussion about this problem, and if the\ncommunity\nis open to a patch if I continue pursuing the development.\n\nMatt\n\n\n\nFIRST ARTIFACT\n\nplan with expensive (80s join) and join estimate of 1\nnote the first Nested Loop join and 81s join\n(I gave up trying to post the full explain, because of the 80 char limit)\n\n\"Sort (cost=7000.04..7000.04 rows=1 width=49)\n(actual time=81739.426..81740.023 rows=5091 loops=1)\"\n\" Sort Key: c.ten DESC\"\n\" Sort Method: quicksort Memory: 948kB\"\n\" CTE cte\"\n\" -> Values Scan on \"*VALUES*\" (cost=0.00..0.06 rows=5 width=4)\n(actual time=0.001..0.001 rows=5 loops=1)\"\n\" -> Nested Loop (cost=1.36..6999.97 rows=1 width=49)\n (actual time=0.059..81725.475 rows=5091 loops=1)\"\n\"Planning time: 1.912 ms\"\n\"Execution time: 81740.328 ms\"\n\n\nSECOND ARTIFACT\n\nforce join row estimate to be minimun of 2\nquery completes very quickly\n\n\"Sort (cost=7000.06..7000.06 rows=2 width=49)\n(actual time=84.610..85.192 rows=5142 loops=1)\"\n\" Sort Key: c.ten DESC\"\n\" Sort Method: quicksort Memory: 956kB\"\n\" CTE cte\"\n\" -> Values Scan on \"*VALUES*\" (cost=0.00..0.06 rows=5 width=4)\n(actual time=0.002..0.003 rows=5 loops=1)\"\n\" -> Hash Join (cost=2518.99..6999.98 rows=2 width=49)\n(actual time=17.629..82.886 rows=5142 loops=1)\"\n\n\"Planning time: 2.982 ms\"\n\"Execution time: 85.514 ms\"\n\n\nTHIRD ARTIFACT\n\npatch I used to make experimenting easier w/o recompiling\n\n\nindex 1b61fd9..444703c 100644\n--- a/src/backend/optimizer/path/costsize.c\n+++ b/src/backend/optimizer/path/costsize.c\n@@ -68,6 +68,12 @@\n *-------------------------------------------------------------------------\n */\n\n+\n+\n+/* These parameters are set by GUC */\n+int join_row_estimate_clamp=1;\n+\n+\n #include \"postgres.h\"\n\n #ifdef _MSC_VER\n@@ -175,6 +181,17 @@ clamp_row_est(double nrows)\n }\n\n\n+double\n+clamp_join_row_est(double nrows)\n+{\n+ nrows = clamp_row_est(nrows);\n+ if (nrows >= (double)join_row_estimate_clamp)\n+ return nrows;\n+ return 
(double)join_row_estimate_clamp;\n+}\n+\n+\n+\n /*\n * cost_seqscan\n * Determines and returns the cost of scanning a relation sequentially.\n@@ -3886,7 +3903,7 @@ calc_joinrel_size_estimate(PlannerInfo *root,\n break;\n }\n\n- return clamp_row_est(nrows);\n+ return clamp_join_row_est(nrows);\n }\n\n /*\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 71090f2..fabb8ac 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -2664,6 +2664,16 @@ static struct config_int ConfigureNamesInt[] =\n NULL, NULL, NULL\n },\n\n+ {\n+ {\"join_row_estimate_clamp\", PGC_USERSET, QUERY_TUNING_OTHER,\n+ gettext_noop(\"Set the minimum estimated size of a join result.\"),\n+ NULL\n+ },\n+ &join_row_estimate_clamp,\n+ 1, 1, 10000,\n+ NULL, NULL, NULL\n+ },\n+\n /* End-of-list marker */\n {\n {NULL, 0, 0, NULL, NULL}, NULL, 0, 0, 0, NULL, NULL, NULL\ndiff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h\nindex 25a7303..0161c4b 100644\n--- a/src/include/optimizer/cost.h\n+++ b/src/include/optimizer/cost.h\n@@ -67,8 +67,10 @@ extern bool enable_material;\n extern bool enable_mergejoin;\n extern bool enable_hashjoin;\n extern int constraint_exclusion;\n+extern int join_row_estimate_clamp;\n\n extern double clamp_row_est(double nrows);\n+extern double clamp_join_row_est(double nrows);\n extern double index_pages_fetched(double tuples_fetched, BlockNumber pages,\n double index_pages, PlannerInfo *root);\n extern void cost_seqscan(Path *path, PlannerInfo *root, RelOptInfo\n*baserel,\n\nThis related to a post in the general bugs forum, but I found this forum, and this seems more appropriate.  This is my second attempt to post, I believethe first attempt last week did not work, apologies if I'm duplicating.http://comments.gmane.org/gmane.comp.db.postgresql.bugs/39011I made have several users encounter performance problems, which allseem to come down to this problem: multiplying selectivity estimates can cause tuple estimates to grow very small very quickly, once the estimatorgets to 1 row, the planner may choose plans that are very good ONLY WHEN there is exactly 1 row (maybe even O(N^large)).  Unfortunately, these maybe the worst plans if the estimate is even slightly off (even just returning 2 or 3 rows versus 1).Using the patch below, I discovered that clamping relation tuple estimatesto a number as small as 2 seemed to avoid all the catastrophic query plans.In the scenarios I'm seeing, I have several examples of queriesthat take >1m to run that should run in <1s. 
The estimate of 1 row (versus thousands actual) leads the planner to tee up several nest loop joinswhich causes thousands of table scans.I have been working on a more complete which tracks uniqueness alongwith selectivity so that optimizer can benefit from knowing when a relation must have 1 (or fewer) tuples, while clamping all other relationsto 2 rather than 1.typedef struct{ double selectivity; boolean unique;} Selectivity;I am interested in hearing discussion about this problem, and if the communityis open to a patch if I continue pursuing the development.MattFIRST ARTIFACTplan with expensive (80s join) and join estimate of 1note the first Nested Loop join and 81s join(I gave up trying to post the full explain, because of the 80 char limit)\"Sort  (cost=7000.04..7000.04 rows=1 width=49)  (actual time=81739.426..81740.023 rows=5091 loops=1)\"\"  Sort Key: c.ten DESC\"\"  Sort Method: quicksort  Memory: 948kB\"\"  CTE cte\"\"    ->  Values Scan on \"*VALUES*\"  (cost=0.00..0.06 rows=5 width=4)  (actual time=0.001..0.001 rows=5 loops=1)\"\"  ->  Nested Loop  (cost=1.36..6999.97 rows=1 width=49)                     (actual time=0.059..81725.475 rows=5091 loops=1)\"\"Planning time: 1.912 ms\"\"Execution time: 81740.328 ms\"SECOND ARTIFACTforce join row estimate to be minimun of 2query completes very quickly\"Sort  (cost=7000.06..7000.06 rows=2 width=49)  (actual time=84.610..85.192 rows=5142 loops=1)\"\"  Sort Key: c.ten DESC\"\"  Sort Method: quicksort  Memory: 956kB\"\"  CTE cte\"\"    ->  Values Scan on \"*VALUES*\"  (cost=0.00..0.06 rows=5 width=4)  (actual time=0.002..0.003 rows=5 loops=1)\"\"  ->  Hash Join  (cost=2518.99..6999.98 rows=2 width=49)  (actual time=17.629..82.886 rows=5142 loops=1)\"\"Planning time: 2.982 ms\"\"Execution time: 85.514 ms\"THIRD ARTIFACTpatch I used to make experimenting easier w/o recompilingindex 1b61fd9..444703c 100644--- a/src/backend/optimizer/path/costsize.c+++ b/src/backend/optimizer/path/costsize.c@@ -68,6 +68,12 @@  *-------------------------------------------------------------------------  */ +++/* These parameters are set by GUC */+int                     join_row_estimate_clamp=1;++ #include \"postgres.h\"  #ifdef _MSC_VER@@ -175,6 +181,17 @@ clamp_row_est(double nrows) }  +double +clamp_join_row_est(double nrows)+{+ nrows = clamp_row_est(nrows);+ if (nrows >= (double)join_row_estimate_clamp)+ return nrows;+        return (double)join_row_estimate_clamp;+}+++ /*  * cost_seqscan  *  Determines and returns the cost of scanning a relation sequentially.@@ -3886,7 +3903,7 @@ calc_joinrel_size_estimate(PlannerInfo *root,  break;  } - return clamp_row_est(nrows);+ return clamp_join_row_est(nrows); }  /*diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.cindex 71090f2..fabb8ac 100644--- a/src/backend/utils/misc/guc.c+++ b/src/backend/utils/misc/guc.c@@ -2664,6 +2664,16 @@ static struct config_int ConfigureNamesInt[] =  NULL, NULL, NULL  }, + {+ {\"join_row_estimate_clamp\", PGC_USERSET, QUERY_TUNING_OTHER,+ gettext_noop(\"Set the minimum estimated size of a join result.\"),+                        NULL+ },+ &join_row_estimate_clamp,+ 1, 1, 10000,+ NULL, NULL, NULL+ },+  /* End-of-list marker */  {  {NULL, 0, 0, NULL, NULL}, NULL, 0, 0, 0, NULL, NULL, NULLdiff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.hindex 25a7303..0161c4b 100644--- a/src/include/optimizer/cost.h+++ b/src/include/optimizer/cost.h@@ -67,8 +67,10 @@ extern bool enable_material; extern bool enable_mergejoin; extern bool enable_hashjoin; extern 
int constraint_exclusion;+extern int join_row_estimate_clamp;  extern double clamp_row_est(double nrows);+extern double clamp_join_row_est(double nrows); extern double index_pages_fetched(double tuples_fetched, BlockNumber pages,  double index_pages, PlannerInfo *root); extern void cost_seqscan(Path *path, PlannerInfo *root, RelOptInfo *baserel,", "msg_date": "Thu, 29 Oct 2015 09:52:13 -0700", "msg_from": "Matthew Bellew <[email protected]>", "msg_from_op": true, "msg_subject": "Query optimizer plans with very small selectivity estimates" }, { "msg_contents": "Matthew Bellew <[email protected]> writes:\n> I made have several users encounter performance problems, which all\n> seem to come down to this problem: multiplying selectivity estimates can\n> cause tuple estimates to grow very small very quickly, once the estimator\n> gets to 1 row, the planner may choose plans that are very good ONLY WHEN\n> there is exactly 1 row (maybe even O(N^large)). Unfortunately, these may\n> be the worst plans if the estimate is even slightly off (even just\n> returning 2 or 3 rows versus 1).\n\nYeah, this is a well-known problem. There has been prior discussion along\nthe same lines as you mention (only believe 1-row estimates when it's\nprovably true that there's at most one row), but it hasn't looked like an\neasy change. See the pgsql-hackers archives for previous threads.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Oct 2015 14:24:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimizer plans with very small selectivity estimates" }, { "msg_contents": "On 10/29/2015 11:24 AM, Tom Lane wrote:\n> Matthew Bellew <[email protected]> writes:\n>> I made have several users encounter performance problems, which all\n>> seem to come down to this problem: multiplying selectivity estimates can\n>> cause tuple estimates to grow very small very quickly, once the estimator\n>> gets to 1 row, the planner may choose plans that are very good ONLY WHEN\n>> there is exactly 1 row (maybe even O(N^large)). Unfortunately, these may\n>> be the worst plans if the estimate is even slightly off (even just\n>> returning 2 or 3 rows versus 1).\n> \n> Yeah, this is a well-known problem. There has been prior discussion along\n> the same lines as you mention (only believe 1-row estimates when it's\n> provably true that there's at most one row), but it hasn't looked like an\n> easy change. See the pgsql-hackers archives for previous threads.\n\nAlso see Tomas's correlated stats patch submitted for 9.6.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Oct 2015 17:00:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimizer plans with very small selectivity\n estimates" } ]
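A self-contained illustration (not taken from the thread) of how the estimate collapses: per-clause selectivities are multiplied as if the clauses were independent, so correlated predicates drive the row count far below reality, and a few more of them push it toward the 1-row estimates described above:

CREATE TABLE est_demo AS
    SELECT i % 100 AS a, i % 100 AS b
    FROM generate_series(1, 1000000) AS i;
ANALYZE est_demo;

-- Each clause is estimated at ~1/100 selectivity; multiplied together the
-- planner expects ~100 rows, while 10 000 rows actually match because a and
-- b are perfectly correlated.
EXPLAIN ANALYZE SELECT * FROM est_demo WHERE a = 1 AND b = 1;

The table name est_demo is made up for the example; the same pattern appears with any pair of correlated columns.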
[ { "msg_contents": "I have a table with roughly 300 000 rows, each row containing two large\nstrings - one is news article HTML, the other is news article plaintext.\nThe table has a bigint primary key.\n\nA GIN index is used to do fulltext search on the plaintext part. All I want\nto retrieve when I do fulltext search is values of primary key column.\n\nWith a popular word, the amount of results from fulltext search query can\nbe pretty high - a query can return 23 000 rows and some can more, and will\nreturn more as the database continues to grow.\n\nThe problem I have is that postgres always does Re-check condition step for\nmy request. That query with 23k rows takes 20 seconds to execute, and\nEXPLAIN shows that almost all of that time is spent\nre-checking condition. The second time I run the same query, I get results\nimmediately.\nWhat this means is that every time user does a search for some word no one\nsearched before, he has to wait a very long time, which is unacceptable for\nus.\n\nIs this happening by design, or am I doing something wrong? The way I see\nit, since docs say GIN indexes are lossless, the database should be able to\nfetch just primary key values for matching rows for me.\n\nHere's schema, query, explain and database version:\n\nCREATE TABLE kard_md.fulldata\n(\n id_iu bigint NOT NULL,\n url character varying NOT NULL,\n original text,\n edited text,\n plaintext text,\n date timestamp without time zone,\n CONSTRAINT fulldata_pkey PRIMARY KEY (id_iu)\n);\n\nCREATE INDEX fulldata_plaintext_idx\n ON kard_md.fulldata\n USING gin\n (to_tsvector('russian'::regconfig, plaintext));\n\n\nEXPLAIN (ANALYZE, BUFFERS) select id_iu from kard_md.fulldata where\nto_tsvector('russian',fulldata.plaintext) @@\nplainto_tsquery('russian','москва');\n\n1st run:\nBitmap Heap Scan on fulldata (cost=266.79..39162.57 rows=23069 width=8)\n(actual time=135.727..19499.667 rows=23132 loops=1)\n Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n'''москв'''::tsquery)\n Buffers: shared hit=115 read=13000\n -> Bitmap Index Scan on fulldata_plaintext_idx (cost=0.00..261.02\nrows=23069 width=0) (actual time=104.834..104.834 rows=23132 loops=1)\n Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n'''москв'''::tsquery)\n Buffers: shared hit=3 read=21\nTotal runtime: 19512.479 ms\n\n2nd run:\nBitmap Heap Scan on fulldata (cost=266.79..39162.57 rows=23069 width=8)\n(actual time=25.423..48.649 rows=23132 loops=1)\n Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n'''москв'''::tsquery)\n Buffers: shared hit=13115\n -> Bitmap Index Scan on fulldata_plaintext_idx (cost=0.00..261.02\nrows=23069 width=0) (actual time=18.057..18.057 rows=23132 loops=1)\n Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n'''москв'''::tsquery)\n Buffers: shared hit=24\nTotal runtime: 49.612 ms\n\nselect version()\n'PostgreSQL 9.1.15, compiled by Visual C++ build 1500, 64-bit'\n\nI have a table with roughly 300 000 rows, each row containing two large strings - one is news article HTML, the other is news article plaintext. The table has a bigint primary key.A GIN index is used to do fulltext search on the plaintext part. 
All I want to retrieve when I do fulltext search is values of primary key column.With a popular word, the amount of results from fulltext search query can be pretty high - a query can return 23 000 rows and some can more, and will return more as the database continues to grow.The problem I have is that postgres always does Re-check condition step for my request. That query with 23k rows takes 20 seconds to execute, and EXPLAIN shows that almost all of that time is spentre-checking condition. The second time I run the same query, I get results immediately.What this means is that every time user does a search for some word no one searched before, he has to wait a very long time, which is unacceptable for us.Is this happening by design, or am I doing something wrong? The way I see it, since docs say GIN indexes are lossless, the database should be able to fetch just primary key values for matching rows for me.Here's schema, query, explain and database version:CREATE TABLE kard_md.fulldata(  id_iu bigint NOT NULL,  url character varying NOT NULL,  original text,  edited text,  plaintext text,  date timestamp without time zone,  CONSTRAINT fulldata_pkey PRIMARY KEY (id_iu));CREATE INDEX fulldata_plaintext_idx  ON kard_md.fulldata  USING gin  (to_tsvector('russian'::regconfig, plaintext));EXPLAIN (ANALYZE, BUFFERS) select id_iu from kard_md.fulldata where to_tsvector('russian',fulldata.plaintext) @@ plainto_tsquery('russian','москва');1st run:Bitmap Heap Scan on fulldata  (cost=266.79..39162.57 rows=23069 width=8) (actual time=135.727..19499.667 rows=23132 loops=1)  Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)  Buffers: shared hit=115 read=13000  ->  Bitmap Index Scan on fulldata_plaintext_idx  (cost=0.00..261.02 rows=23069 width=0) (actual time=104.834..104.834 rows=23132 loops=1)        Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)        Buffers: shared hit=3 read=21Total runtime: 19512.479 ms2nd run:Bitmap Heap Scan on fulldata  (cost=266.79..39162.57 rows=23069 width=8) (actual time=25.423..48.649 rows=23132 loops=1)  Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)  Buffers: shared hit=13115  ->  Bitmap Index Scan on fulldata_plaintext_idx  (cost=0.00..261.02 rows=23069 width=0) (actual time=18.057..18.057 rows=23132 loops=1)        Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)        Buffers: shared hit=24Total runtime: 49.612 msselect version()'PostgreSQL 9.1.15, compiled by Visual C++ build 1500, 64-bit'", "msg_date": "Mon, 2 Nov 2015 09:52:33 +0300", "msg_from": "Andrey Osenenko <[email protected]>", "msg_from_op": true, "msg_subject": "GIN index always doing Re-check condition, postgres 9.1" }, { "msg_contents": "On Sun, Nov 1, 2015 at 10:52 PM, Andrey Osenenko\n<[email protected]> wrote:\n> I have a table with roughly 300 000 rows, each row containing two large\n> strings - one is news article HTML, the other is news article plaintext. The\n> table has a bigint primary key.\n>\n> A GIN index is used to do fulltext search on the plaintext part. All I want\n> to retrieve when I do fulltext search is values of primary key column.\n>\n> With a popular word, the amount of results from fulltext search query can be\n> pretty high - a query can return 23 000 rows and some can more, and will\n> return more as the database continues to grow.\n>\n> The problem I have is that postgres always does Re-check condition step for\n> my request. 
That query with 23k rows takes 20 seconds to execute, and\n> EXPLAIN shows that almost all of that time is spent\n> re-checking condition.\n\nExplain does not address the issue of how much time was spent doing\nrechecks. You are misinterpreting something.\n\n> The second time I run the same query, I get results\n> immediately.\n\nThat suggests the time is spent reading data from disk the first time,\nnot spent doing rechecks. Rechecks do not get faster by repeated\nexecution, except indirectly to the extent the data has already been\npulled into memory. But other things get faster due to that, as well.\n\nNow those are not mutually exclusive, as doing a recheck might lead to\nreading toast tables that don't need to get read at all in the absence\nof rechecks. So rechecks can lead to IO problems. But there is no\nevidence that that is the case for you.\n\n>\n> 1st run:\n> Bitmap Heap Scan on fulldata (cost=266.79..39162.57 rows=23069 width=8)\n> (actual time=135.727..19499.667 rows=23132 loops=1)\n> Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n> '''москв'''::tsquery)\n\nThis tells you what condition will be applied to the recheck, in case\na recheck is needed due to bitmap memory overflow. It doesn't tell\nhow many times, if any, that was actually done, or how much time was\nspent doing it.\n\nAs far as I know, there is no way to distinguish a \"lossy index\"\nrecheck form a \"lossy bitmap\" recheck in version 9.1.\n\n> Buffers: shared hit=115 read=13000\n\nThat you needed only 13115 blocks to deliver 23069 tells me that there\nis little if any recheck-driven toast table access going on. That the\nsecond execution was very fast tells me that there is little\nrechecking at all going on, because actual rechecking is CPU\nintensive.\n\nI don't think your problem has anything to do with rechecking. You\nsimply have too much data that is not in memory. You need more\nmemory, or some way to keep your memory pinned with what you need. If\nyou are on a RAID, you could also increase effective_io_concurrency,\nwhich lets the bitmap scan prefetch table blocks.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Nov 2015 23:22:47 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GIN index always doing Re-check condition, postgres 9.1" }, { "msg_contents": "Thank you.\n\nThat's really sad news. 
This means that even though there is an index that\nlets you find rows you want almost immediately, to retrieve primary keys,\nyou still have to do a lot of disk io.\n\nI created a new table that contains only primary key and tsvector value,\nand (at least that's how I'm interpreting it) since there is less data to\nread per row, it returns same results about 2 times as quickly (I restarted\ncomputer to make sure nothing is in memory).\n\nOriginal table:\nBitmap Heap Scan on fulldata (cost=266.79..39162.57 rows=23069 width=8)\n(actual time=113.472..18368.769 rows=23132 loops=1)\n Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n'''москв'''::tsquery)\n Buffers: shared hit=1 read=13114\n -> Bitmap Index Scan on fulldata_plaintext_idx (cost=0.00..261.02\nrows=23069 width=0) (actual time=90.859..90.859 rows=23132 loops=1)\n Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n'''москв'''::tsquery)\n Buffers: shared hit=1 read=23\nTotal runtime: 18425.265 ms\n\nTable with only key and vector:\nBitmap Heap Scan on fts (cost=273.67..27903.61 rows=23441 width=8) (actual\ntime=219.896..10095.159 rows=23132 loops=1)\n Recheck Cond: (vector @@ '''москв'''::tsquery)\n Buffers: shared hit=1 read=10877\n -> Bitmap Index Scan on fts_vector_idx (cost=0.00..267.81 rows=23441\nwidth=0) (actual time=204.631..204.631 rows=23132 loops=1)\n Index Cond: (vector @@ '''москв'''::tsquery)\n Buffers: shared hit=1 read=23\nTotal runtime: 10103.858 ms\n\nIt also looks like if there was a way to create a table with just primary\nkey and add an index to it that indexes data from another table, it would\nwork much, much faster since there would be very little to read from disk\nafter index lookup. But looks like there isn't.\n\nSo am I correct in assumption that as the amount of rows grows, query times\nfor rows that are not in memory (and considering how many of them there\nare, most won't be) will grow linearly?\n\nOn Mon, Nov 2, 2015 at 11:14 AM, Andrey Osenenko <[email protected]>\nwrote:\n\n> Thank you.\n>\n> That's really sad news. 
This means that even though there is an index that\n> lets you find rows you want almost immediately, to retrieve primary keys,\n> you still have to do a lot of disk io.\n>\n> I created a new table that contains only primary key and tsvector value,\n> and (at least that's how I'm interpreting it) since there is less data to\n> read per row, it returns same results about 2 times as quickly (I restarted\n> computer to make sure nothing is in memory).\n>\n> Original table:\n> Bitmap Heap Scan on fulldata (cost=266.79..39162.57 rows=23069 width=8)\n> (actual time=113.472..18368.769 rows=23132 loops=1)\n> Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n> '''москв'''::tsquery)\n> Buffers: shared hit=1 read=13114\n> -> Bitmap Index Scan on fulldata_plaintext_idx (cost=0.00..261.02\n> rows=23069 width=0) (actual time=90.859..90.859 rows=23132 loops=1)\n> Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@\n> '''москв'''::tsquery)\n> Buffers: shared hit=1 read=23\n> Total runtime: 18425.265 ms\n>\n> Table with only key and vector:\n> Bitmap Heap Scan on fts (cost=273.67..27903.61 rows=23441 width=8)\n> (actual time=219.896..10095.159 rows=23132 loops=1)\n> Recheck Cond: (vector @@ '''москв'''::tsquery)\n> Buffers: shared hit=1 read=10877\n> -> Bitmap Index Scan on fts_vector_idx (cost=0.00..267.81 rows=23441\n> width=0) (actual time=204.631..204.631 rows=23132 loops=1)\n> Index Cond: (vector @@ '''москв'''::tsquery)\n> Buffers: shared hit=1 read=23\n> Total runtime: 10103.858 ms\n>\n> It also looks like if there was a way to create a table with just primary\n> key and add an index to it that indexes data from another table, it would\n> work much, much faster since there would be very little to read from disk\n> after index lookup. But looks like there isn't.\n>\n> So am I correct in assumption that as the amount of rows grows, query\n> times for rows that are not in memory (and considering how many of them\n> there are, most won't be) will grow linearly?\n>\n\nThank you.That's really sad news. 
This means that even though \nthere is an index that lets you find rows you want almost immediately, \nto retrieve primary keys, you still have to do a lot of disk io.I\n created a new table that contains only primary key and tsvector value, \nand (at least that's how I'm interpreting it) since there is less data \nto read per row, it returns same results about 2 times as quickly (I \nrestarted computer to make sure nothing is in memory).Original table:Bitmap Heap Scan on fulldata  (cost=266.79..39162.57 rows=23069 width=8) (actual time=113.472..18368.769 rows=23132 loops=1)  Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)  Buffers: shared hit=1 read=13114 \n ->  Bitmap Index Scan on fulldata_plaintext_idx  (cost=0.00..261.02 \nrows=23069 width=0) (actual time=90.859..90.859 rows=23132 loops=1)        Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)        Buffers: shared hit=1 read=23Total runtime: 18425.265 msTable with only key and vector:Bitmap Heap Scan on fts  (cost=273.67..27903.61 rows=23441 width=8) (actual time=219.896..10095.159 rows=23132 loops=1)  Recheck Cond: (vector @@ '''москв'''::tsquery)  Buffers: shared hit=1 read=10877 \n ->  Bitmap Index Scan on fts_vector_idx  (cost=0.00..267.81 \nrows=23441 width=0) (actual time=204.631..204.631 rows=23132 loops=1)        Index Cond: (vector @@ '''москв'''::tsquery)        Buffers: shared hit=1 read=23Total runtime: 10103.858 msIt\n also looks like if there was a way to create a table with just primary \nkey and add an index to it that indexes data from another table, it \nwould work much, much faster since there would be very little to read \nfrom disk after index lookup. But looks like there isn't.So am I\n correct in assumption that as the amount of rows grows, query times for\n rows that are not in memory (and considering how many of them there \nare, most won't be) will grow linearly?On Mon, Nov 2, 2015 at 11:14 AM, Andrey Osenenko <[email protected]> wrote:Thank you.That's really sad news. 
This means that even though there is an index that lets you find rows you want almost immediately, to retrieve primary keys, you still have to do a lot of disk io.I created a new table that contains only primary key and tsvector value, and (at least that's how I'm interpreting it) since there is less data to read per row, it returns same results about 2 times as quickly (I restarted computer to make sure nothing is in memory).Original table:Bitmap Heap Scan on fulldata  (cost=266.79..39162.57 rows=23069 width=8) (actual time=113.472..18368.769 rows=23132 loops=1)  Recheck Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)  Buffers: shared hit=1 read=13114  ->  Bitmap Index Scan on fulldata_plaintext_idx  (cost=0.00..261.02 rows=23069 width=0) (actual time=90.859..90.859 rows=23132 loops=1)        Index Cond: (to_tsvector('russian'::regconfig, plaintext) @@ '''москв'''::tsquery)        Buffers: shared hit=1 read=23Total runtime: 18425.265 msTable with only key and vector:Bitmap Heap Scan on fts  (cost=273.67..27903.61 rows=23441 width=8) (actual time=219.896..10095.159 rows=23132 loops=1)  Recheck Cond: (vector @@ '''москв'''::tsquery)  Buffers: shared hit=1 read=10877  ->  Bitmap Index Scan on fts_vector_idx  (cost=0.00..267.81 rows=23441 width=0) (actual time=204.631..204.631 rows=23132 loops=1)        Index Cond: (vector @@ '''москв'''::tsquery)        Buffers: shared hit=1 read=23Total runtime: 10103.858 msIt also looks like if there was a way to create a table with just primary key and add an index to it that indexes data from another table, it would work much, much faster since there would be very little to read from disk after index lookup. But looks like there isn't.So am I correct in assumption that as the amount of rows grows, query times for rows that are not in memory (and considering how many of them there are, most won't be) will grow linearly?", "msg_date": "Mon, 2 Nov 2015 11:19:22 +0300", "msg_from": "Andrey Osenenko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GIN index always doing Re-check condition, postgres 9.1" }, { "msg_contents": "On 11/2/15 2:19 AM, Andrey Osenenko wrote:\n> It also looks like if there was a way to create a table with just\n> primary key and add an index to it that indexes data from another table,\n> it would work much, much faster since there would be very little to read\n> from disk after index lookup. But looks like there isn't.\n\nThat probably wouldn't help as much as you'd hope, because heap tuples \nin Postgres have a minimum 24 byte overhead. Add in 8 bytes for bigint \nand that's 32 bytes extra per row.\n\nI think what might gain you more is if you moved 9.2 and got index only \nscans. Though, if you're getting lossy results, I don't think that'll \nhelp [1].\n\n> So am I correct in assumption that as the amount of rows grows, query\n> times for rows that are not in memory (and considering how many of them\n> there are, most won't be) will grow linearly?\n\nMaybe, maybe not. Query times for data that has to come from the disk \ncan vary wildly based on what other activity is happening on the IO \nsystem. Ultimately, your IO system can only do so many IOs Per Second.\n\n[1] \nhttps://wiki.postgresql.org/wiki/Index-only_scans#Index-only_scans_and_index-access_methods\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 2 Nov 2015 12:28:18 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GIN index always doing Re-check condition, postgres 9.1" }, { "msg_contents": "On Mon, Nov 2, 2015 at 12:19 AM, Andrey Osenenko\n<[email protected]> wrote:\n>\n> It also looks like if there was a way to create a table with just primary\n> key and add an index to it that indexes data from another table, it would\n> work much, much faster since there would be very little to read from disk\n> after index lookup. But looks like there isn't.\n\nThere is a way to do this, but it is pretty gross.\n\nYou can define function which takes the primary key as input and\nreturns the data to index. Mark the function as immutable, even\nthough it isn't. Something like:\n\nCREATE OR REPLACE FUNCTION public.filler_by_aid(integer)\n RETURNS text\n LANGUAGE sql\n IMMUTABLE\nAS $function$ select filler::text from pgbench_accounts where aid=$1 $function$\n\nThen create a table which has just the primary key, and create a\nfunctional index on that table\n\ncreate table foobar as select aid from pgbench_accounts;\ncreate index on foobar (filler_by_aid(aid));\n\nNow you can query the skinny table by reference to the data in the wide table:\n\nexplain analyze select count(*) from foobar where filler_by_aid(aid)='zebra';\n\nSince you fibbed to PostgreSQL about the functions immutability, it is\npretty easy to get a corrupt index here. Every time the parent is\nupdated, you have to be sure to delete and reinsert the primary key in\nthe corresponding skinny table, otherwise it will not reflect the\nupdated value.\n\nWhat you gain in the skinny table you could very well lose with the\ntriggers needed to maintain it. Not to mention the fragility.\n\n\nIt would be simpler if you could just force the wide data to always be\ntoasted, even if it is not wide enough to trigger the default toast\nthreshold. You could get a skinnier table (although not quite as\nskinny as one with only a single column), without having to do\nunsupported hacks. (I am assuming here, without real evidence other\nthan intuition, that most of your news articles are in fact coming in\nunder the toast threshold).\n\n\n> So am I correct in assumption that as the amount of rows grows, query times\n> for rows that are not in memory (and considering how many of them there are,\n> most won't be) will grow linearly?\n\nYep.\n\nWhat you really need are index only scans. But those are not\nsupported by gin indexes, and given the gin index structure it seems\nunlikely they will ever support index-only scans, at least not in a\nway that would help you.\n\nWhat are you going to do with these 23,000 primary keys once you get\nthem, anyway? Perhaps you can push that analysis into the database\nand gain some efficiencies there.\n\nOr you could change your data structure. If all you are doing is\nsearching for one tsvector token at a time, you could unnest ts_vector\nand store it in a table like (ts_token text, id_iu bigint). 
Then\nbuild a regular btree index on (ts_token, id_iu) and get index-only\nscans (once you upgrade from 9.1 to something newer)\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 2 Nov 2015 14:09:02 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GIN index always doing Re-check condition, postgres 9.1" }, { "msg_contents": "Jeff Janes:\n\nThat is such a beautiful little trick. I made a table with just ids, and a\nquery for it reads almost 10 times less buffers (as reported by explain\nanalyze buffers), and sure enough, after another reboot, query executes\nabout 10 times faster.\n\nI'm not doing anything special with those results. I have a main table\n\"core\" with various information about entries. Some entries have plaintexts\nattached and those are stored in the additional table \"fulldata\".\nfulldata's primary key refers to core's primary key and to do a fulltext\nsearch filtering results using core's other fields I have to retrieve\nprimary keys from fulldata.\nWe tried many different ways to join rows from fulldata and core for that\nquery, and ended up with something along the lines of:\n\nwhere core.id_iu in (with ids as(select id_iu from fulldata where <fulltext\ncondition here>) select * from ids) and <other core conditions here>\n\nIt was just as fast/slow as table joins and subqueries but always used\nfulltext index no matter what planner had in mind.\n\nI'll be sure to play around with fake immutable function, and I think it\nmight be even worth it to add the index to core instead of thin table.\n\nThanks!\n\nJim Nasby:\nWell, it's 32 bytes per row vs only god knows how many bytes per row since\nevery row contains a tsvector value (although since practice shows 10 times\nless buffers read, it's probably somewhere around 320 bytes on average?).\n\nJeff Janes:That is such a beautiful little trick. I made a table with\n just ids, and a query for it reads almost 10 times less buffers (as \nreported by explain analyze buffers), and sure enough, after another \nreboot, query executes about 10 times faster.I'm not doing \nanything special with those results. I have a main table \"core\" with \nvarious information about entries. Some entries have plaintexts attached\n and those are stored in the additional table \"fulldata\". 
fulldata's \nprimary key refers to core's primary key and to do a fulltext search \nfiltering results using core's other fields I have to retrieve primary \nkeys from fulldata.We tried many different ways to join rows from fulldata and core for that query, and ended up with something along the lines of:where core.id_iu in (with ids as(select id_iu from fulldata where <fulltext condition here>) select * from ids) and <other core conditions here>It was just as fast/slow as table joins and subqueries but always used fulltext index no matter what planner had in mind.I'll\n be sure to play around with fake immutable function, and I think it \nmight be even worth it to add the index to core instead of thin table.Thanks!Jim Nasby:Well,\n it's 32 bytes per row vs only god knows how many bytes per row since \nevery row contains a tsvector value (although since practice shows 10 \ntimes less buffers read, it's probably somewhere around 320 bytes on \naverage?).", "msg_date": "Tue, 3 Nov 2015 12:05:02 +0300", "msg_from": "Andrey Osenenko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GIN index always doing Re-check condition, postgres 9.1" } ]
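As a rough sketch of the unnest-based layout Jeff suggests at the end of the thread: it needs a release where unnest(tsvector) exists (9.6 or later, i.e. newer than the 9.1 being discussed), and the table and column names are only assumed to match the thread:

CREATE TABLE kard_md.fts_token (ts_token text NOT NULL, id_iu bigint NOT NULL);

INSERT INTO kard_md.fts_token (ts_token, id_iu)
SELECT t.lexeme, f.id_iu
FROM kard_md.fulldata AS f,
     LATERAL unnest(to_tsvector('russian', f.plaintext)) AS t;

CREATE INDEX ON kard_md.fts_token (ts_token, id_iu);

-- Probe with the stemmed lexeme; the two-column btree allows an
-- index-only scan once the visibility map is populated.
SELECT id_iu FROM kard_md.fts_token WHERE ts_token = 'москв';

Keeping fts_token in sync on inserts, updates and deletes would still need trigger or batch maintenance, which the thread does not go into.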
[ { "msg_contents": "Hello,\n\nI´ve got a table custom_data which essentially contains a number of key/value pairs. This table holds a large number (about 40M) of records and I need the distinct keys and values for some reasons. Selecting those distinct data takes a couple of seconds, so I decided to maintain a separate lookup table for both the key and value data. The lookup tables are maintained by a trigger that reacts on inserts/updates/deletes on the original table. While checking the correctness of my trigger function I noticed that the SQL query in the trigger function is surprisingly slow, taking about 5-6 seconds. When I ran the SQL query outside the trigger function it showed the expected performance and returned in a couple of milliseconds. Though the original table is very large it holds only a small number of distinct key / value values:\n\nSELECT DISTINCT key FROM custom_data;\n>> 12 rows returned\n\nSELECT DISTINCT value FROM custom_data;\n>> 13 rows returned\n\n\nHere are the relveant information (function body of the trigger function reduced to show the behaviour):\n\nPostgreSQL Version:\nPostgreSQL 9.1.13, compiled by Visual C++ build 1500, 64-bit\n\nOS Version:\nWindows 7 64bit\n\nScenario to reproduce the behaviour:\nEMS Solution SQL Manager: SQL Editor used to run SQL commands from an editor\n\nServer configuration:\nname current_setting source\nDateStyle ISO, DMY session\ndefault_text_search_config pg_catalog.german configuration file\neffective_cache_size 8GB configuration file\nlc_messages German_Germany.1252 configuration file\nlc_monetary German_Germany.1252 configuration file\nlc_numeric German_Germany.1252 configuration file\nlc_time German_Germany.1252 configuration file\nlisten_addresses * configuration file\nlog_destination stderr configuration file\nlog_line_prefix %t configuration file\nlog_timezone CET environment variable\nlogging_collector on configuration file\nmax_connections 100 configuration file\nmax_stack_depth 2MB environment variable\nport 5432 configuration file\nshared_buffers 4GB configuration file\nstatement_timeout 0 session\nTimeZone CET environment variable\nwork_mem 64MB configuration file\n\ncustom_data table definition:\nCREATE TABLE public.custom_data (\n custom_data_id SERIAL,\n file_id INTEGER DEFAULT 0 NOT NULL,\n user_id INTEGER DEFAULT 0 NOT NULL,\n \"timestamp\" TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT '1970-01-01 00:00:00'::timestamp without time zone NOT NULL,\n key TEXT DEFAULT ''::text NOT NULL,\n value TEXT DEFAULT ''::text NOT NULL,\n CONSTRAINT pkey_custom_data PRIMARY KEY(custom_data_id),\n) WITHOUT OIDS;\n\nCREATE INDEX idx_custom_data_key ON public.custom_data USING btree (key);\n\nCREATE INDEX idx_custom_data_value ON public.custom_data USING btree (value);\n\nCREATE TRIGGER on_custom_data_changed AFTER INSERT OR UPDATE OR DELETE\nON public.custom_data FOR EACH ROW\nEXECUTE PROCEDURE public.on_change_custom_data();\n\nCREATE OR REPLACE FUNCTION public.on_change_custom_data ()\nRETURNS trigger AS\n$body$\nBEGIN\n IF TG_OP = 'UPDATE' THEN\n RAISE NOTICE 'Check custom data key start : %', timeofday();\n IF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = OLD.key ) THEN\n END IF;\n RAISE NOTICE 'Check custom data key end : %', timeofday();\n END IF;\n RETURN NULL;\nEND;\n$body$\nLANGUAGE 'plpgsql'\nVOLATILE CALLED ON NULL INPUT SECURITY INVOKER COST 100;\n\npostgreSQL log:\nHINWEIS: Check custom data key start : Fri Oct 30 11:56:41.785000 2015 CET << start of IF NOT EXIST (...)\nHINWEIS: Check custom data key end : Fri Oct 
30 11:56:47.145000 2015 CET << end of IF NOT EXISTS (...) : ~5.4 seconds\n\nQuery OK, 1 rows affected (5,367 sec)\n\nSame query run in SQL editor:\nSELECT 1 FROM custom_data WHERE key='key-1'\n1 rows returned (16 ms)\n\nAs you can see there´s a huge runtime difference between the select query used in the trigger function and the one run from the SQL editor.\n\n\n\nGuido Niewerth\n\n25 years inspired synergy\nOCS Optical Control Systems GmbH\nWullener Feld 24\n58454 Witten\nGermany\n\nTel: +49 (0) 2302 95622-0\nFax: +49 (0) 2302 95622-33\nEmail: [email protected]\nWeb: http://www.ocsgmbh.com\n\nHRB 8442 (Bochum) | VAT-ID: DE 124 084 990\nDirectors: Hans Gloeckler, Fatah Najaf, Merdan Sariboga\n\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI´ve got a table custom_data which essentially contains a number of key/value pairs. This table holds a large number (about 40M) of records and I need the distinct keys and values for some reasons. Selecting those distinct data takes a\n couple of seconds, so I decided to maintain a separate lookup table for both the key and value data. The lookup tables are maintained by a trigger that reacts on inserts/updates/deletes on the original table. While checking the correctness of my trigger function\n I noticed that the SQL query in the trigger function is surprisingly slow, taking about 5-6 seconds. When I ran the SQL query outside the trigger function it showed the expected performance and returned in a couple of milliseconds. Though the original table\n is very large it holds only a small number of distinct key / value values:\n \nSELECT DISTINCT key FROM custom_data;\n>> 12 rows returned\n \nSELECT DISTINCT value FROM custom_data;\n>> 13 rows returned\n \n \nHere are the relveant information (function body of the trigger function reduced to show the behaviour):\n \nPostgreSQL Version:\nPostgreSQL 9.1.13, compiled by Visual C++ build 1500, 64-bit\n \nOS Version:\nWindows 7 64bit\n \nScenario to reproduce the behaviour:\nEMS Solution SQL Manager: SQL Editor used to run SQL commands from an editor\n \nServer configuration:\nname                                                   current_setting                               source\nDateStyle                                           ISO, DMY                                           session\ndefault_text_search_config      pg_catalog.german                        configuration file\neffective_cache_size                    8GB                                                      configuration file\nlc_messages                                     German_Germany.1252              configuration file\nlc_monetary                                     German_Germany.1252              configuration file\nlc_numeric                                        German_Germany.1252              configuration file\nlc_time                                                German_Germany.1252              configuration file\nlisten_addresses                             *                                                            configuration file\nlog_destination                               stderr                                                  configuration file\nlog_line_prefix                                %t                                                         configuration file\nlog_timezone                                  CET                                                       environment variable\nlogging_collector                             on                                                         configuration 
file\nmax_connections                           100                                                        configuration file\nmax_stack_depth                          2MB                                                     environment variable\nport                                                      5432                                                     configuration file\nshared_buffers                               4GB                                                      configuration file\nstatement_timeout                       0                                                            session\nTimeZone                                          CET                                                       environment variable\nwork_mem                                       64MB                                                   configuration file\n \ncustom_data table definition:\nCREATE TABLE public.custom_data (\n  custom_data_id SERIAL, \n  file_id INTEGER DEFAULT 0 NOT NULL, \n  user_id INTEGER DEFAULT 0 NOT NULL, \n  \"timestamp\" TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT '1970-01-01 00:00:00'::timestamp without time zone NOT NULL,\n\n  key TEXT DEFAULT ''::text NOT NULL, \n  value TEXT DEFAULT ''::text NOT NULL, \n  CONSTRAINT pkey_custom_data PRIMARY KEY(custom_data_id),\n) WITHOUT OIDS;\n \nCREATE INDEX idx_custom_data_key ON public.custom_data USING btree (key);\n \nCREATE INDEX idx_custom_data_value ON public.custom_data  USING btree (value);\n \nCREATE TRIGGER on_custom_data_changed AFTER INSERT OR UPDATE OR DELETE\n\nON public.custom_data FOR EACH ROW \nEXECUTE PROCEDURE public.on_change_custom_data();\n \nCREATE OR REPLACE FUNCTION public.on_change_custom_data () \nRETURNS trigger AS \n$body$ \nBEGIN \n   IF TG_OP = 'UPDATE' THEN \n      RAISE NOTICE 'Check custom data key start  : %', timeofday(); \n      IF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = OLD.key ) THEN      \n\n      END IF; \n      RAISE NOTICE 'Check custom data key end    : %', timeofday(); \n    END IF; \n    RETURN NULL; \nEND; \n$body$ \nLANGUAGE 'plpgsql' \nVOLATILE CALLED ON NULL INPUT SECURITY INVOKER COST 100;\n \npostgreSQL log:\nHINWEIS:  Check custom data key start  : Fri Oct 30 11:56:41.785000 2015 CET << start of IF NOT EXIST (...)\nHINWEIS:  Check custom data key end    : Fri Oct 30 11:56:47.145000 2015 CET << end of IF NOT EXISTS (...) 
: ~5.4 seconds\n \nQuery OK, 1 rows affected (5,367 sec)\n \nSame query run in SQL editor:\nSELECT 1 FROM custom_data WHERE key='key-1' \n1 rows returned (16 ms)\n \nAs you can see there´s a huge runtime difference between the select query used in the trigger function and the one run from the SQL editor.\n \n \n\n\n\n\n\n\n\nGuido\nNiewerth \n\n25 years inspired synergy\nOCS\nOptical\nControl\nSystems GmbH\nWullener Feld 24\n58454\nWitten\nGermany\n\nTel:\n+49 (0) 2302 95622-0 \nFax: +49 (0) 2302 95622-33\nEmail: [email protected]\nWeb: http://www.ocsgmbh.com\n\nHRB 8442 (Bochum) | VAT-ID: DE 124 084 990\nDirectors: Hans Gloeckler, Fatah Najaf, Merdan Sariboga", "msg_date": "Mon, 2 Nov 2015 10:15:35 +0000", "msg_from": "Guido Niewerth <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query in trigger function" }, { "msg_contents": "Guido Niewerth <[email protected]> writes:\n> As you can see there�s a huge runtime difference between the select query used in the trigger function and the one run from the SQL editor.\n\ncontrib/auto_explain might be useful in seeing what's going on, in\nparticular it would tell us whether or not a different plan is being\nused for the query inside the function.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 02 Nov 2015 09:45:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query in trigger function" }, { "msg_contents": "I needed to set up the trigger function again, so here it is:\n\nCREATE OR REPLACE FUNCTION public.fn_trigger_test ()\nRETURNS trigger AS\n$body$\nDECLARE\n start TIMESTAMP;\nBEGIN\n start := timeofday();\n IF TG_OP = 'UPDATE' THEN\n IF NOT EXISTS( SELECT key FROM custom_data WHERE key = old.key LIMIT 1 ) THEN\n DELETE FROM lookup_custom_data_keys WHERE key = old.key;\n END IF;\n IF NOT EXISTS( SELECT 1 FROM lookup_custom_data_keys WHERE key = new.key LIMIT 1 ) THEN\n INSERT INTO lookup_custom_data_keys (key) VALUES (new.key);\n END IF;\n END IF;\n RAISE NOTICE 'Trigger % ran: %', TG_OP, age( timeofday() ::TIMESTAMP, start );\n RETURN NULL;\nEND;\n$body$\nLANGUAGE 'plpgsql'\nVOLATILE\nCALLED ON NULL INPUT\nSECURITY INVOKER\nCOST 100;\n\nAnd this is the execution plan. 
It looks like it does a slow sequential scan where it´s able to do an index scan:\n\n2015-11-02 17:42:10 CET LOG: duration: 5195.673 ms plan:\n Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1 )\n Result (cost=0.09..0.10 rows=1 width=0) (actual time=5195.667..5195.667 rows=1 loops=1)\n Output: (NOT $0)\n Buffers: shared hit=34 read=351750\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..0.09 rows=1 width=0) (actual time=5195.662..5195.662 rows=0 loops=1)\n Output: (1)\n Buffers: shared hit=34 read=351750\n -> Seq Scan on public.custom_data (cost=0.00..821325.76 rows=9390835 width=0) (actual time=5195.658..5195.658 rows=0 loops=1)\n Output: 1\n Filter: (custom_data.key = $15)\n Buffers: shared hit=34 read=351750\n2015-11-02 17:42:10 CET ZUSAMMENHANG: SQL-Anweisung »SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1 )«\n PL/pgSQL function \"fn_trigger_test\" line 7 at IF\n2015-11-02 17:42:10 CET LOG: duration: 0.014 ms plan:\n Query Text: DELETE FROM lookup_custom_data_keys WHERE key = old.key\n Delete on public.lookup_custom_data_keys (cost=0.00..38.25 rows=1 width=6) (actual time=0.013..0.013 rows=0 loops=1)\n Buffers: shared hit=2\n -> Seq Scan on public.lookup_custom_data_keys (cost=0.00..38.25 rows=1 width=6) (actual time=0.007..0.007 rows=1 loops=1)\n Output: ctid\n Filter: (lookup_custom_data_keys.key = $15)\n Buffers: shared hit=1\n2015-11-02 17:42:10 CET ZUSAMMENHANG: SQL-Anweisung »DELETE FROM lookup_custom_data_keys WHERE key = old.key«\n PL/pgSQL function \"fn_trigger_test\" line 8 at SQL-Anweisung\n2015-11-02 17:42:10 CET LOG: duration: 0.005 ms plan:\n Query Text: SELECT NOT EXISTS( SELECT 1 FROM lookup_custom_data_keys WHERE key = new.key LIMIT 1 )\n Result (cost=38.25..38.26 rows=1 width=0) (actual time=0.004..0.004 rows=1 loops=1)\n Output: (NOT $0)\n Buffers: shared hit=1\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..38.25 rows=1 width=0) (actual time=0.003..0.003 rows=0 loops=1)\n Output: (1)\n Buffers: shared hit=1\n -> Seq Scan on public.lookup_custom_data_keys (cost=0.00..38.25 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)\n Output: 1\n Filter: (lookup_custom_data_keys.key = $17)\n Buffers: shared hit=1\n2015-11-02 17:42:10 CET ZUSAMMENHANG: SQL-Anweisung »SELECT NOT EXISTS( SELECT 1 FROM lookup_custom_data_keys WHERE key = new.key LIMIT 1 )«\n PL/pgSQL function \"fn_trigger_test\" line 10 at IF\n2015-11-02 17:42:10 CET LOG: duration: 0.116 ms plan:\n Query Text: INSERT INTO lookup_custom_data_keys (key) VALUES (new.key)\n Insert on public.lookup_custom_data_keys (cost=0.00..0.01 rows=1 width=0) (actual time=0.115..0.115 rows=0 loops=1)\n Buffers: shared hit=1\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1)\n Output: $17\n2015-11-02 17:42:10 CET ZUSAMMENHANG: SQL-Anweisung »INSERT INTO lookup_custom_data_keys (key) VALUES (new.key)«\n PL/pgSQL function \"fn_trigger_test\" line 11 at SQL-Anweisung\n2015-11-02 17:42:10 CET LOG: duration: 5200.475 ms plan:\n Query Text: UPDATE custom_data SET key= 'key-2' WHERE key = 'key-1'\n Update on public.custom_data (cost=0.00..15.35 rows=1 width=34) (actual time=0.369..0.369 rows=0 loops=1)\n Buffers: shared hit=29\n -> Index Scan using idx_custom_data_key on public.custom_data (cost=0.00..15.35 rows=1 width=34) (actual time=0.088..0.090 rows=1 loops=1)\n Output: custom_data_id, file_id, user_id, \"timestamp\", 'key-2'::text, value, ctid\n Index Cond: (custom_data.key = 'key-1'::text)\n Buffers: shared 
hit=6\n\n\n\nExecution plan of the normal query \"SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key='key-1' LIMIT 1 );\":\n\n2015-11-02 17:44:28 CET LOG: duration: 0.052 ms plan:\n Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key='key-1' LIMIT 1 );\n Result (cost=15.35..15.36 rows=1 width=0) (actual time=0.047..0.047 rows=1 loops=1)\n Output: (NOT $0)\n Buffers: shared hit=6\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..15.35 rows=1 width=0) (actual time=0.045..0.045 rows=0 loops=1)\n Output: (1)\n Buffers: shared hit=6\n -> Index Scan using idx_custom_data_key on public.custom_data (cost=0.00..15.35 rows=1 width=0) (actual time=0.043..0.043 rows=0 loops=1)\n Output: 1\n Index Cond: (custom_data.key = 'key-1'::text)\n Buffers: shared hit=6\n\n\nGuido Niewerth\n\n25 years inspired synergy\nOCS Optical Control Systems GmbH\nWullener Feld 24\n58454 Witten\nGermany\n\nTel: +49 (0) 2302 95622-0\nFax: +49 (0) 2302 95622-33\nEmail: [email protected]\nWeb: http://www.ocsgmbh.com\n\nHRB 8442 (Bochum) | VAT-ID: DE 124 084 990\nDirectors: Hans Gloeckler, Fatah Najaf, Merdan Sariboga\n\n\n\n\n\n\n\n\n\n\n\nI needed to set up the trigger function again, so here it is:\n \nCREATE OR REPLACE FUNCTION public.fn_trigger_test ()\nRETURNS trigger AS\n$body$\nDECLARE\n                start TIMESTAMP;\nBEGIN\n   start := timeofday();\n   IF TG_OP = 'UPDATE' THEN\n      IF NOT EXISTS( SELECT key FROM custom_data WHERE key = old.key LIMIT 1 ) THEN\n                DELETE FROM lookup_custom_data_keys WHERE key = old.key;\n      END IF;\n      IF NOT EXISTS( SELECT 1 FROM lookup_custom_data_keys WHERE key = new.key LIMIT 1 ) THEN\n                INSERT INTO lookup_custom_data_keys (key) VALUES (new.key);\n      END IF;\n   END IF;\n   RAISE NOTICE 'Trigger % ran: %', TG_OP, age( timeofday() ::TIMESTAMP, start );\n   RETURN NULL;\nEND;\n$body$\nLANGUAGE 'plpgsql'\nVOLATILE\nCALLED ON NULL INPUT\nSECURITY INVOKER\nCOST 100;\n \nAnd this is the execution plan. 
It looks like it does a slow sequential scan where it´s able to do an index scan:\n \n2015-11-02 17:42:10 CET LOG:  duration: 5195.673 ms  plan:\n                Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1 )\n                Result  (cost=0.09..0.10 rows=1 width=0) (actual time=5195.667..5195.667 rows=1 loops=1)\n                  Output: (NOT $0)\n                  Buffers: shared hit=34 read=351750\n                  InitPlan 1 (returns $0)\n                    ->  Limit  (cost=0.00..0.09 rows=1 width=0) (actual time=5195.662..5195.662 rows=0 loops=1)\n                          Output: (1)\n                          Buffers: shared hit=34 read=351750\n                          ->  Seq Scan on public.custom_data  (cost=0.00..821325.76 rows=9390835 width=0) (actual time=5195.658..5195.658 rows=0 loops=1)\n                                Output: 1\n                                Filter: (custom_data.key = $15)\n                                Buffers: shared hit=34 read=351750\n2015-11-02 17:42:10 CET ZUSAMMENHANG:  SQL-Anweisung »SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1 )«\n                PL/pgSQL function \"fn_trigger_test\" line 7 at IF\n2015-11-02 17:42:10 CET LOG:  duration: 0.014 ms  plan:\n                Query Text: DELETE FROM lookup_custom_data_keys WHERE key = old.key\n                Delete on public.lookup_custom_data_keys  (cost=0.00..38.25 rows=1 width=6) (actual time=0.013..0.013 rows=0 loops=1)\n                  Buffers: shared hit=2\n                  ->  Seq Scan on public.lookup_custom_data_keys  (cost=0.00..38.25 rows=1 width=6) (actual time=0.007..0.007 rows=1 loops=1)\n                        Output: ctid\n                        Filter: (lookup_custom_data_keys.key = $15)\n                        Buffers: shared hit=1\n2015-11-02 17:42:10 CET ZUSAMMENHANG:  SQL-Anweisung »DELETE FROM lookup_custom_data_keys WHERE key = old.key«\n                PL/pgSQL function \"fn_trigger_test\" line 8 at SQL-Anweisung\n2015-11-02 17:42:10 CET LOG:  duration: 0.005 ms  plan:\n                Query Text: SELECT NOT EXISTS( SELECT 1 FROM lookup_custom_data_keys WHERE key = new.key LIMIT 1 )\n                Result  (cost=38.25..38.26 rows=1 width=0) (actual time=0.004..0.004 rows=1 loops=1)\n                  Output: (NOT $0)\n                  Buffers: shared hit=1\n                  InitPlan 1 (returns $0)\n                    ->  Limit  (cost=0.00..38.25 rows=1 width=0) (actual time=0.003..0.003 rows=0 loops=1)\n                          Output: (1)\n                          Buffers: shared hit=1\n                          ->  Seq Scan on public.lookup_custom_data_keys  (cost=0.00..38.25 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)\n                                Output: 1\n                                Filter: (lookup_custom_data_keys.key = $17)\n                                Buffers: shared hit=1\n2015-11-02 17:42:10 CET ZUSAMMENHANG:  SQL-Anweisung »SELECT NOT EXISTS( SELECT 1 FROM lookup_custom_data_keys WHERE key = new.key LIMIT 1 )«\n                PL/pgSQL function \"fn_trigger_test\" line 10 at IF\n2015-11-02 17:42:10 CET LOG:  duration: 0.116 ms  plan:\n                Query Text: INSERT INTO lookup_custom_data_keys (key) VALUES (new.key)\n                Insert on public.lookup_custom_data_keys  (cost=0.00..0.01 rows=1 width=0) (actual time=0.115..0.115 rows=0 loops=1)\n                  Buffers: shared hit=1\n                  ->  Result  (cost=0.00..0.01 rows=1 width=0) 
(actual time=0.001..0.001 rows=1 loops=1)\n                        Output: $17\n2015-11-02 17:42:10 CET ZUSAMMENHANG:  SQL-Anweisung »INSERT INTO lookup_custom_data_keys (key) VALUES (new.key)«\n                PL/pgSQL function \"fn_trigger_test\" line 11 at SQL-Anweisung\n2015-11-02 17:42:10 CET LOG:  duration: 5200.475 ms  plan:\n                Query Text: UPDATE custom_data SET key= 'key-2' WHERE key = 'key-1'\n                Update on public.custom_data  (cost=0.00..15.35 rows=1 width=34) (actual time=0.369..0.369 rows=0 loops=1)\n                  Buffers: shared hit=29\n                  ->  Index Scan using idx_custom_data_key on public.custom_data  (cost=0.00..15.35 rows=1 width=34) (actual time=0.088..0.090 rows=1 loops=1)\n                        Output: custom_data_id, file_id, user_id, \"timestamp\", 'key-2'::text, value, ctid\n                        Index Cond: (custom_data.key = 'key-1'::text)\n                        Buffers: shared hit=6\n \n \n \nExecution plan of the normal query \"SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key='key-1' LIMIT 1 );\":\n \n2015-11-02 17:44:28 CET LOG:  duration: 0.052 ms  plan:\n                Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key='key-1' LIMIT 1 );\n                Result  (cost=15.35..15.36 rows=1 width=0) (actual time=0.047..0.047 rows=1 loops=1)\n                  Output: (NOT $0)\n                  Buffers: shared hit=6\n                  InitPlan 1 (returns $0)\n                    ->  Limit  (cost=0.00..15.35 rows=1 width=0) (actual time=0.045..0.045 rows=0 loops=1)\n                          Output: (1)\n                          Buffers: shared hit=6\n                          ->  Index Scan using idx_custom_data_key on public.custom_data  (cost=0.00..15.35 rows=1 width=0) (actual time=0.043..0.043 rows=0 loops=1)\n                                Output: 1\n                                Index Cond: (custom_data.key = 'key-1'::text)\n                                Buffers: shared hit=6\n \n\n\n\n\n\n\n\nGuido\nNiewerth \n\n25 years inspired synergy\nOCS\nOptical\nControl\nSystems GmbH\nWullener Feld 24\n58454\nWitten\nGermany\n\nTel:\n+49 (0) 2302 95622-0 \nFax: +49 (0) 2302 95622-33\nEmail: [email protected]\nWeb: http://www.ocsgmbh.com\n\nHRB 8442 (Bochum) | VAT-ID: DE 124 084 990\nDirectors: Hans Gloeckler, Fatah Najaf, Merdan Sariboga", "msg_date": "Mon, 2 Nov 2015 16:54:21 +0000", "msg_from": "Guido Niewerth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query in trigger function" }, { "msg_contents": "Guido Niewerth <[email protected]> writes:\n> And this is the execution plan. It looks like it does a slow sequential scan where it�s able to do an index scan:\n\n> 2015-11-02 17:42:10 CET LOG: duration: 5195.673 ms plan:\n> Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1 )\n> Result (cost=0.09..0.10 rows=1 width=0) (actual time=5195.667..5195.667 rows=1 loops=1)\n> Output: (NOT $0)\n> Buffers: shared hit=34 read=351750\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..0.09 rows=1 width=0) (actual time=5195.662..5195.662 rows=0 loops=1)\n> Output: (1)\n> Buffers: shared hit=34 read=351750\n> -> Seq Scan on public.custom_data (cost=0.00..821325.76 rows=9390835 width=0) (actual time=5195.658..5195.658 rows=0 loops=1)\n> Output: 1\n> Filter: (custom_data.key = $15)\n> Buffers: shared hit=34 read=351750\n\nIt looks like you're getting bit by an inaccurate estimate of what will be\nthe quickest way to satisfy a LIMIT query. 
In this particular situation,\nI'd advise just dropping the LIMIT, as it contributes nothing useful.\n\n(If memory serves, 9.5 will actually ignore constant-LIMIT clauses inside\nEXISTS(), because people keep writing them even though they're useless.\nEarlier releases do not have that code though.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 02 Nov 2015 13:10:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query in trigger function" }, { "msg_contents": "These are the queries I used to get the execution planer use the index scan instead of the sequential scan:\n\nIF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = old.key) => sequential scan\nIF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1) => sequential scan\nIF NOT EXISTS (SELECT max( 1 ) FROM custom_data WHERE key = old.key) => sequential scan\n\nAfter breaking up the code into two statements the execution planer uses the index scan:\n\nresult INTEGER;\nSELECT 1 FROM custom_data where key = old.key INTO result;\nIF result ISNULL THEN\n ...\nEND IF;\n\nTo me it looks like the execution planer does not choose the optimal strategy. Even small changes in the function body make the execution planer use the slow sequential scan.\n\nGuido Niewerth\n\n25 years inspired synergy\nOCS Optical Control Systems GmbH\nWullener Feld 24\n58454 Witten\nGermany\n\nTel: +49 (0) 2302 95622-0\nFax: +49 (0) 2302 95622-33\nEmail: [email protected]\nWeb: http://www.ocsgmbh.com\n\nHRB 8442 (Bochum) | VAT-ID: DE 124 084 990\nDirectors: Hans Gloeckler, Fatah Najaf, Merdan Sariboga\n\n\n\n\n\n\n\n\n\n\n\nThese are the queries I used to get the execution planer use the index scan instead of the sequential scan:\n \nIF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = old.key) => sequential scan\nIF NOT EXISTS (SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1) => sequential scan\nIF NOT EXISTS (SELECT max( 1 ) FROM custom_data WHERE key = old.key) => sequential scan\n \nAfter breaking up the code into two statements the execution planer uses the index scan:\n \nresult INTEGER;\nSELECT 1 FROM custom_data where key = old.key INTO result;\nIF result ISNULL THEN\n   ...\nEND IF;\n \nTo me it looks like the execution planer does not choose the optimal strategy. Even small changes in the function body make the execution planer use the slow sequential scan.\n\n\n\n\n\n\n\nGuido\nNiewerth \n\n25 years inspired synergy\nOCS\nOptical\nControl\nSystems GmbH\nWullener Feld 24\n58454\nWitten\nGermany\n\nTel:\n+49 (0) 2302 95622-0 \nFax: +49 (0) 2302 95622-33\nEmail: [email protected]\nWeb: http://www.ocsgmbh.com\n\nHRB 8442 (Bochum) | VAT-ID: DE 124 084 990\nDirectors: Hans Gloeckler, Fatah Najaf, Merdan Sariboga", "msg_date": "Tue, 3 Nov 2015 10:58:26 +0000", "msg_from": "Guido Niewerth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query in trigger function" } ]
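For anyone landing on this thread later: below is a minimal sketch of the trigger rewritten along the lines of Guido's last message, with the EXISTS(... LIMIT 1) tests replaced by plain SELECT ... INTO — the form that, per that message, got the planner onto idx_custom_data_key. The function name and exact body here are illustrative, not the code that was actually deployed.

CREATE OR REPLACE FUNCTION public.on_change_custom_data_v2()
RETURNS trigger AS
$body$
DECLARE
   hit integer;
BEGIN
   IF TG_OP = 'UPDATE' THEN
      -- If no row still carries the old key, drop it from the lookup table.
      SELECT 1 INTO hit FROM custom_data WHERE key = OLD.key;
      IF hit IS NULL THEN
         DELETE FROM lookup_custom_data_keys WHERE key = OLD.key;
      END IF;
      -- Register the new key if it is not known yet.
      SELECT 1 INTO hit FROM lookup_custom_data_keys WHERE key = NEW.key;
      IF hit IS NULL THEN
         INSERT INTO lookup_custom_data_keys (key) VALUES (NEW.key);
      END IF;
   END IF;
   RETURN NULL;
END;
$body$
LANGUAGE plpgsql;

-- Attached analogously to the trigger shown earlier in the thread (illustrative):
-- CREATE TRIGGER on_custom_data_changed AFTER UPDATE ON public.custom_data
--    FOR EACH ROW EXECUTE PROCEDURE public.on_change_custom_data_v2();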
[ { "msg_contents": "Hi all,\n\nIf I install the PostgreSQL on Linux (Debian),\nHow much the limit of max_connections that PostgreSQL can take?\nHow much the limit of max_prepared_transactions that PostgreSQL can take?\nHow much the limit of max_files_per_process that PostgreSQL can take?\n\n\nRegards,\nFattah\n--\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 2 Nov 2015 17:52:15 +0700", "msg_from": "FattahRozzaq <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL limitation" }, { "msg_contents": "On Mon, Nov 2, 2015 at 7:52 PM, FattahRozzaq <[email protected]> wrote:\n> If I install the PostgreSQL on Linux (Debian),\n> How much the limit of max_connections that PostgreSQL can take?\n> How much the limit of max_prepared_transactions that PostgreSQL can take?\n\nPer definition, those parameters have a max value of 2^23-1. For a\nPostgres instance, a couple of hundred connections is already a lot,\nand you'd want not really much more than the maximum number of\nconnections for max_prepared_transactions.\n\n> How much the limit of max_files_per_process that PostgreSQL can take?\n\nAnd this one has a maximum limit of 2^31-1. This can be helpful on\nsome platforms where kernel allows a process to open more files than\nit can, like FreeBSD.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 2 Nov 2015 21:12:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL limitation" } ]
[ { "msg_contents": "Hi all.\n\nIs the speed of hash operations stands on the performance of CPU?\nBelow you can see part from output of explain analyze command\n\n*Intel(R) Xeon(R) CPU E7520 @ 1.87GHz*\n\" -> Hash (cost=337389.43..337389.43 rows=3224443 width=34)\n(actual time=15046.382..15046.382 rows=3225191 loops=1)\"\n\" Buckets: 524288 Batches: 1 Memory Usage: 207874kB\"\n\n*Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz*\n\n\" -> Hash (cost=340758.94..340758.94 rows=3191894 width=34)\n(actual time=2692.878..2692.878 rows=3192103 loops=1)\"\n\" Buckets: 524288 Batches: 1 Memory Usage: 205742kB\"\n\n*Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz*\n\" -> Hash (cost=337389.43..337389.43 rows=3224443 width=34)\n(actual time=8559.849..8559.849 rows=3225293 loops=1)\"\n\" Buckets: 524288 Batches: 1 Memory Usage: 207881kB\"\n\n*Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz*\n\" -> Hash (cost=356613.23..356613.23 rows=3224623 width=40)\n(actual time=3635.931..3635.931 rows=3224623 loops=1)\"\n\" Buckets: 524288 Batches: 1 Memory Usage: 207838kB\"\n\nThanks.\n\nHi all.Is the speed of hash operations stands on the performance of CPU?Below you can see part from output of explain analyze command\nIntel(R) Xeon(R) CPU           E7520  @ 1.87GHz\"              ->  Hash  (cost=337389.43..337389.43 rows=3224443 width=34) (actual time=15046.382..15046.382 rows=3225191 loops=1)\"\"                    Buckets: 524288  Batches: 1  Memory Usage: 207874kB\"Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz\"              ->  Hash  (cost=340758.94..340758.94 rows=3191894 width=34) (actual time=2692.878..2692.878 rows=3192103 loops=1)\"\"                    Buckets: 524288  Batches: 1  Memory Usage: 205742kB\"\n\nIntel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz\"              ->  Hash  (cost=337389.43..337389.43 rows=3224443 width=34) (actual time=8559.849..8559.849 rows=3225293 loops=1)\"\"                    Buckets: 524288  Batches: 1  Memory Usage: 207881kB\"Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz\"              ->  Hash  (cost=356613.23..356613.23 rows=3224623 width=40) (actual time=3635.931..3635.931 rows=3224623 loops=1)\"\"                    Buckets: 524288  Batches: 1  Memory Usage: 207838kB\"Thanks.", "msg_date": "Thu, 5 Nov 2015 11:11:10 +0200", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": true, "msg_subject": "HASH" }, { "msg_contents": "On Thu, Nov 5, 2015 at 1:11 AM, Artem Tomyuk <[email protected]> wrote:\n> Hi all.\n>\n> Is the speed of hash operations stands on the performance of CPU?\n\nYes, but the variation is probably not as much as the raw timing in\nyour example indicates.\n\n> Below you can see part from output of explain analyze command\n>\n> Intel(R) Xeon(R) CPU E7520 @ 1.87GHz\n>\n> \" -> Hash (cost=337389.43..337389.43 rows=3224443 width=34)\n> (actual time=15046.382..15046.382 rows=3225191 loops=1)\"\n> \" Buckets: 524288 Batches: 1 Memory Usage: 207874kB\"\n\nA lot of that time was probably spent reading the data off of disk so\nthat it could hash it.\n\nYou should turn track_io_timing on, run \"explain (analyze, buffers)\n...\" and then show the entire explain output, or at least also show\nthe entries downstream of the Hash node.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Nov 2015 03:24:24 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HASH" } ]
[ { "msg_contents": "I noticed that querying for\n product_attributes @> '{\"upsell\":[\"true\"]}'\nis much slower than querying for\n product_attributes @> '{\"upsell\": 1}'\n\nIs that expected? I have a gin index on product_attributes. I'm using 9.4.1.\n\nexplain analyze\nselect count(*) from products where product_attributes @>\n'{\"upsell\":[\"true\"]}' and site_id = '1';\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1382.92..1382.93 rows=1 width=0) (actual\ntime=410.498..410.499 rows=1 loops=1)\n -> Bitmap Heap Scan on products (cost=46.94..1382.52 rows=157 width=0)\n(actual time=31.747..363.145 rows=45165 loops=1)\n Recheck Cond: (product_attributes @> '{\"upsell\": [\"true\"]}'::jsonb)\n Filter: (site_id = '1'::text)\n Rows Removed by Filter: 90330\n Heap Blocks: exact=12740\n -> Bitmap Index Scan on products_attributes_idx\n (cost=0.00..46.90 rows=386 width=0) (actual time=29.585..29.585\nrows=135843 loops=1)\n Index Cond: (product_attributes @> '{\"upsell\":\n[\"true\"]}'::jsonb)\n Planning time: 0.851 ms\n Execution time: 410.825 ms\n(10 rows)\n\nTime: 413.172 ms\n\n\n\nexplain analyze\nselect count(*) from products where product_attributes @> '{\"upsell\": 1}'\nand site_id = '1';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1382.92..1382.93 rows=1 width=0) (actual\ntime=0.110..0.111 rows=1 loops=1)\n -> Bitmap Heap Scan on products (cost=46.94..1382.52 rows=157 width=0)\n(actual time=0.107..0.107 rows=0 loops=1)\n Recheck Cond: (product_attributes @> '{\"upsell\": 1}'::jsonb)\n Filter: (site_id = '1'::text)\n -> Bitmap Index Scan on products_attributes_idx\n (cost=0.00..46.90 rows=386 width=0) (actual time=0.105..0.105 rows=0\nloops=1)\n Index Cond: (product_attributes @> '{\"upsell\": 1}'::jsonb)\n Planning time: 0.091 ms\n Execution time: 0.140 ms\n(8 rows)\n\nTime: 1.264 ms\n\nI noticed that querying for    product_attributes @> '{\"upsell\":[\"true\"]}' is much slower than querying for    product_attributes @> '{\"upsell\": 1}'Is that expected? I have a gin index on product_attributes. 
I'm using 9.4.1.explain analyzeselect count(*) from products where product_attributes @> '{\"upsell\":[\"true\"]}' and site_id = '1';                                                                   QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=1382.92..1382.93 rows=1 width=0) (actual time=410.498..410.499 rows=1 loops=1)   ->  Bitmap Heap Scan on products  (cost=46.94..1382.52 rows=157 width=0) (actual time=31.747..363.145 rows=45165 loops=1)         Recheck Cond: (product_attributes @> '{\"upsell\": [\"true\"]}'::jsonb)         Filter: (site_id = '1'::text)         Rows Removed by Filter: 90330         Heap Blocks: exact=12740         ->  Bitmap Index Scan on products_attributes_idx  (cost=0.00..46.90 rows=386 width=0) (actual time=29.585..29.585 rows=135843 loops=1)               Index Cond: (product_attributes @> '{\"upsell\": [\"true\"]}'::jsonb) Planning time: 0.851 ms Execution time: 410.825 ms(10 rows)Time: 413.172 msexplain analyzeselect count(*) from products where product_attributes @> '{\"upsell\": 1}' and site_id = '1';                                                               QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=1382.92..1382.93 rows=1 width=0) (actual time=0.110..0.111 rows=1 loops=1)   ->  Bitmap Heap Scan on products  (cost=46.94..1382.52 rows=157 width=0) (actual time=0.107..0.107 rows=0 loops=1)         Recheck Cond: (product_attributes @> '{\"upsell\": 1}'::jsonb)         Filter: (site_id = '1'::text)         ->  Bitmap Index Scan on products_attributes_idx  (cost=0.00..46.90 rows=386 width=0) (actual time=0.105..0.105 rows=0 loops=1)               Index Cond: (product_attributes @> '{\"upsell\": 1}'::jsonb) Planning time: 0.091 ms Execution time: 0.140 ms(8 rows)Time: 1.264 ms", "msg_date": "Sat, 7 Nov 2015 15:31:18 -0800", "msg_from": "Joe Van Dyk <[email protected]>", "msg_from_op": true, "msg_subject": "querying jsonb for arrays inside a hash" }, { "msg_contents": "Joe Van Dyk <[email protected]> writes:\n> I noticed that querying for\n> product_attributes @> '{\"upsell\":[\"true\"]}'\n> is much slower than querying for\n> product_attributes @> '{\"upsell\": 1}'\n\n> Is that expected?\n\nYour EXPLAIN results say that the first query matched 135843 rows and the\nsecond one none at all, so a significant variation in runtime doesn't seem\nthat surprising to me ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 07 Nov 2015 19:00:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: querying jsonb for arrays inside a hash" }, { "msg_contents": "You're right, brain fart. Nevermind! 
:)\n\nOn Sat, Nov 7, 2015 at 4:00 PM, Tom Lane <[email protected]> wrote:\n\n> Joe Van Dyk <[email protected]> writes:\n> > I noticed that querying for\n> > product_attributes @> '{\"upsell\":[\"true\"]}'\n> > is much slower than querying for\n> > product_attributes @> '{\"upsell\": 1}'\n>\n> > Is that expected?\n>\n> Your EXPLAIN results say that the first query matched 135843 rows and the\n> second one none at all, so a significant variation in runtime doesn't seem\n> that surprising to me ...\n>\n> regards, tom lane\n>\n\nYou're right, brain fart. Nevermind! :)On Sat, Nov 7, 2015 at 4:00 PM, Tom Lane <[email protected]> wrote:Joe Van Dyk <[email protected]> writes:\n> I noticed that querying for\n>    product_attributes @> '{\"upsell\":[\"true\"]}'\n> is much slower than querying for\n>    product_attributes @> '{\"upsell\": 1}'\n\n> Is that expected?\n\nYour EXPLAIN results say that the first query matched 135843 rows and the\nsecond one none at all, so a significant variation in runtime doesn't seem\nthat surprising to me ...\n\n                        regards, tom lane", "msg_date": "Mon, 9 Nov 2015 09:13:48 -0800", "msg_from": "Joe Van Dyk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: querying jsonb for arrays inside a hash" } ]
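A side note for anyone tuning @> lookups like the ones above (this goes beyond what was discussed in the thread): when containment is the only operator needed, a GIN index built with the jsonb_path_ops operator class is usually smaller and faster than the default jsonb_ops one. A sketch against the table from this thread — the index name is made up:

CREATE INDEX products_attributes_path_idx
    ON products USING gin (product_attributes jsonb_path_ops);

SELECT count(*)
FROM products
WHERE product_attributes @> '{"upsell": ["true"]}'
  AND site_id = '1';

jsonb_path_ops indexes support only @>, so the existing index should be kept if other jsonb operators (?, ?|, ?&) are also in use.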
[ { "msg_contents": "We're hoping to get some suggestions as to improving the performance of a 3\ntable join we're carrying out.\n(I've stripped out some schema info to try to keep this post from getting\ntoo convoluted - if something doesn't make sense it maybe I've erroneously\ntaken out something significant)\n\nThe 3 tables and indices are:\n\n\\d branch_purchase_order\n\n Table\n\"public.branch_purchase_order\"\n Column | Type |\n Modifiers\n-------------------+--------------------------------+-----------------------------------------------------------------------\n po_id | integer | not null default\nnextval('branch_purchase_order_po_id_seq'::regclass)\n branch_code | character(2) |\n po_number | character varying(20) |\n supplier | character varying(50) |\n order_date | timestamp(0) without time zone |\n po_state | character varying(10) |\nIndexes:\n \"branch_purchase_order_pkey\" PRIMARY KEY, btree (po_id)\n \"branch_po_unique_order_no_idx\" UNIQUE, btree (branch_code, po_number)\n \"branch_po_no_idx\" btree (po_number)\n \"branch_po_state_idx\" btree (po_state)\nReferenced by:\n TABLE \"branch_purchase_order_products\" CONSTRAINT\n\"branch_purchase_order_products_po_id_fkey\" FOREIGN KEY (po_id) REFERENCES\nbranch_purchase_order(po_id) ON DELETE CASCADE\n\n\n\\d branch_purchase_order_products\n Table \"public.branch_purchase_order_products\"\n Column | Type | Modifiers\n--------------------+--------------------------------+-----------\n po_id | integer |\n product_code | character varying(20) |\n date_received | date |\nIndexes:\n \"branch_purchase_order_product_code_idx\" btree (product_code)\n \"branch_purchase_order_product_po_idx\" btree (po_id)\n \"branch_purchase_order_products_date_received_idx\" btree (date_received)\nForeign-key constraints:\n \"branch_purchase_order_products_po_id_fkey\" FOREIGN KEY (po_id)\nREFERENCES branch_purchase_order(po_id) ON DELETE CASCADE\n\n\\d stocksales_ib\n Table \"public.stocksales_ib\"\n Column | Type | Modifiers\n--------------+--------------------------------+-----------\n row_id | integer |\n branch_code | character(2) |\n product_code | character varying(20) |\n invoice_date | timestamp(0) without time zone |\n qty | integer |\n order_no | character varying(30) |\nIndexes:\n \"ssales_ib_branch_idx\" btree (branch_code)\n \"ssales_ib_invoice_date_date_idx\" btree ((invoice_date::date))\n \"ssales_ib_invoice_date_idx\" btree (invoice_date)\n \"ssales_ib_order_no\" btree (order_no)\n \"ssales_ib_product_idx\" btree (product_code)\n \"ssales_ib_replace_order_no\" btree (replace(order_no::text, ' '::text,\n''::text))\n \"ssales_ib_row_idx\" btree (row_id)\n \"stocksales_ib_branch_code_row_id_idx\" btree (branch_code, row_id)\n \"stocksales_ib_substring_idx\" btree\n(\"substring\"(replace(order_no::text, ' '::text, ''::text), 3, 2))\n\n\nThe join we're using is:\n\nbranch_purchase_order o\njoin branch_purchase_order_products p using(po_id)\njoin stocksales_ib ss on o.supplier=ss.branch_code\nand p.product_code=ss.product_code\nand X\n\nWe have 3 different ways we have to do the final X join condition (we use 3\nsubqueries UNIONed together), but the one causing the issues is:\n\n(o.branch_code || o.po_number = replace(ss.order_no,' ',''))\n\nwhich joins branch_purchase_order to stocksales_ib under the following\ncircumstances:\n\n ss.order_no | o.branch_code | o.po_number\n----------------+---------------+-----------\n AA IN105394 | AA | IN105394\n BB IN105311 | BB | IN105311\n CC IN105311 | CC | IN105311\n DD IN105310 | DD | IN105310\n EE 
IN105310 | EE | IN105310\n\n\nThe entire query (leaving aside the UNION'ed subqueries for readability)\nlooks like this:\n\nselect\npo_id,\nproduct_code,\nsum(qty) as dispatch_qty,\nmax(invoice_date) as dispatch_date,\ncount(invoice_date) as dispatch_count\nfrom (\n\nselect\no.po_id,\np.product_code,\nss.qty,\nss.invoice_date\nfrom\nbranch_purchase_order o\njoin branch_purchase_order_products p using(po_id)\njoin stocksales_ib ss on o.supplier=ss.branch_code\nand p.product_code=ss.product_code\nand (o.branch_code || o.po_number=replace(ss.order_no,' ',''))\nwhere\no.po_state='PLACED'\nand o.supplier='XX'\n\n) x\ngroup by po_id,product_code\n\n\nExplain output:\n\nhttp://explain.depesz.com/s/TzF8h\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=83263.72..83263.73 rows=1 width=24) (actual\ntime=23908.777..23927.461 rows=52500 loops=1)\n Buffers: shared hit=23217993 dirtied=1\n -> Nested Loop (cost=1.29..83263.71 rows=1 width=24) (actual\ntime=0.196..23799.930 rows=53595 loops=1)\n Join Filter: (o.po_id = p.po_id)\n Rows Removed by Join Filter: 23006061\n Buffers: shared hit=23217993 dirtied=1\n -> Nested Loop (cost=0.86..57234.41 rows=3034 width=23) (actual\ntime=0.162..129.508 rows=54259 loops=1)\n Buffers: shared hit=18520\n -> Index Scan using branch_po_state_idx on\nbranch_purchase_order o (cost=0.42..807.12 rows=1672 width=17) (actual\ntime=0.037..4.863 rows=1916 loops=1)\n Index Cond: ((po_state)::text = 'PLACED'::text)\n Filter: ((supplier)::text = 'XX'::text)\n Rows Removed by Filter: 3050\n Buffers: shared hit=2157\n -> Index Scan using ssales_ib_replace_order_no on\nstocksales_ib ss (cost=0.44..33.74 rows=1 width=31) (actual\ntime=0.014..0.044 rows=28 loops=1916)\n Index Cond: (replace((order_no)::text, ' '::text,\n''::text) = ((o.branch_code)::text || (o.po_number)::text))\n Filter: ((o.supplier)::bpchar = branch_code)\n Rows Removed by Filter: 0\n Buffers: shared hit=16363\n -> Index Scan using branch_purchase_order_product_code_idx on\nbranch_purchase_order_products p (cost=0.43..5.45 rows=250 width=12)\n(actual time=0.018..0.335 rows=425 loops=54259)\n Index Cond: ((product_code)::text = (ss.product_code)::text)\n Buffers: shared hit=23199473 dirtied=1\n Total runtime: 23935.995 ms\n(22 rows)\n\n\nSo we can see straight away that the outer Nested loop expects 1 row, and\ngets 53595. This isn't going to help the planner pick the most efficient\nplan I suspect.\n\nI've tried increasing default_statistics_target to the max and re analysing\nall the tables involved but this does not help the estimate.\nI suspect it's due to the join being based on functional result meaning any\nstats are ignored?\n\nWhat has improved runtimes is using a WITH clause to carry out the first\njoin explicitly. 
But although it runs in half the time, the stats are still\nway out and I feel it is maybe just because I'm limiting the planner's\nchoices that it by chance picks a different, quicker, plan.\nIt does a Hash Join and Seq Scan\n\n\nwith bpo as (\nselect\no.branch_code || o.po_number as order_no,\no.po_id,\no.supplier,\no.branch_code,\np.product_code\nfrom branch_purchase_order o\njoin branch_purchase_order_products p using(po_id)\nwhere\no.po_state='PLACED'\nand o.supplier='XX'\n)\nselect\npo_id,\nproduct_code,\nsum(qty) as dispatch_qty,\nmax(invoice_date) as dispatch_date,\ncount(invoice_date) as dispatch_count\nfrom (\n\nselect\no.po_id,\no.product_code,\nss.qty,\nss.invoice_date\nfrom\nbpo o\njoin stocksales_ib ss on o.supplier=ss.branch_code\nand o.product_code=ss.product_code\nand o.order_no=replace(ss.order_no,' ','')\n\n) x\ngroup by po_id,product_code\n\nExplain:\nhttp://explain.depesz.com/s/r7v\n\n\nCan anyone suggest a better approach for improving the plan for this type\nof query?\n\n\n select version();\n version\n\n---------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.10 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3\n20120306 (Red Hat 4.6.3-2), 64-bit\n\n\nRegards,\n-- \nDavid\n\nWe're hoping to get some suggestions as to improving the performance of a 3 table join we're carrying out.(I've stripped out some schema info to try to keep this post from getting too convoluted - if something doesn't make sense it maybe I've erroneously taken out something significant)The 3 tables and indices are: \\d branch_purchase_order                                            Table \"public.branch_purchase_order\"      Column       |              Type              |                               Modifiers                               -------------------+--------------------------------+----------------------------------------------------------------------- po_id             | integer                        | not null default nextval('branch_purchase_order_po_id_seq'::regclass) branch_code       | character(2)                   |  po_number         | character varying(20)          |  supplier          | character varying(50)          |  order_date        | timestamp(0) without time zone |  po_state          | character varying(10)          | Indexes:    \"branch_purchase_order_pkey\" PRIMARY KEY, btree (po_id)    \"branch_po_unique_order_no_idx\" UNIQUE, btree (branch_code, po_number)    \"branch_po_no_idx\" btree (po_number)    \"branch_po_state_idx\" btree (po_state)Referenced by:    TABLE \"branch_purchase_order_products\" CONSTRAINT \"branch_purchase_order_products_po_id_fkey\" FOREIGN KEY (po_id) REFERENCES branch_purchase_order(po_id) ON DELETE CASCADE\\d branch_purchase_order_products          Table \"public.branch_purchase_order_products\"       Column       |              Type              | Modifiers --------------------+--------------------------------+----------- po_id              | integer                        |  product_code       | character varying(20)          |  date_received      | date                           | Indexes:    \"branch_purchase_order_product_code_idx\" btree (product_code)    \"branch_purchase_order_product_po_idx\" btree (po_id)    \"branch_purchase_order_products_date_received_idx\" btree (date_received)Foreign-key constraints:    \"branch_purchase_order_products_po_id_fkey\" FOREIGN KEY (po_id) REFERENCES branch_purchase_order(po_id) ON DELETE CASCADE\\d stocksales_ib    
           Table \"public.stocksales_ib\"    Column    |              Type              | Modifiers --------------+--------------------------------+----------- row_id       | integer                        |  branch_code  | character(2)                   |  product_code | character varying(20)          |  invoice_date | timestamp(0) without time zone |  qty          | integer                        |  order_no     | character varying(30)          | Indexes:    \"ssales_ib_branch_idx\" btree (branch_code)    \"ssales_ib_invoice_date_date_idx\" btree ((invoice_date::date))    \"ssales_ib_invoice_date_idx\" btree (invoice_date)    \"ssales_ib_order_no\" btree (order_no)    \"ssales_ib_product_idx\" btree (product_code)    \"ssales_ib_replace_order_no\" btree (replace(order_no::text, ' '::text, ''::text))    \"ssales_ib_row_idx\" btree (row_id)    \"stocksales_ib_branch_code_row_id_idx\" btree (branch_code, row_id)    \"stocksales_ib_substring_idx\" btree (\"substring\"(replace(order_no::text, ' '::text, ''::text), 3, 2))The join we're using is:branch_purchase_order o join branch_purchase_order_products p using(po_id)join stocksales_ib ss on o.supplier=ss.branch_code and p.product_code=ss.product_codeand XWe have 3 different ways we have to do the final X join condition (we use 3 subqueries UNIONed together), but the one causing the issues is:(o.branch_code || o.po_number = replace(ss.order_no,' ',''))which joins branch_purchase_order to stocksales_ib under the following circumstances:  ss.order_no   | o.branch_code | o.po_number ----------------+---------------+----------- AA IN105394    | AA            | IN105394 BB IN105311    | BB            | IN105311 CC IN105311    | CC            | IN105311 DD IN105310    | DD            | IN105310 EE IN105310    | EE            | IN105310The entire query (leaving aside the UNION'ed subqueries for readability) looks like this:selectpo_id,product_code,sum(qty) as dispatch_qty,max(invoice_date) as dispatch_date,count(invoice_date) as dispatch_countfrom (selecto.po_id,p.product_code,ss.qty,ss.invoice_datefrombranch_purchase_order o join branch_purchase_order_products p using(po_id)join stocksales_ib ss on o.supplier=ss.branch_code and p.product_code=ss.product_code and (o.branch_code || o.po_number=replace(ss.order_no,' ',''))where  o.po_state='PLACED'and o.supplier='XX') x   group by po_id,product_codeExplain output:http://explain.depesz.com/s/TzF8h                                                                                          QUERY PLAN                                                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=83263.72..83263.73 rows=1 width=24) (actual time=23908.777..23927.461 rows=52500 loops=1)   Buffers: shared hit=23217993 dirtied=1   ->  Nested Loop  (cost=1.29..83263.71 rows=1 width=24) (actual time=0.196..23799.930 rows=53595 loops=1)         Join Filter: (o.po_id = p.po_id)         Rows Removed by Join Filter: 23006061         Buffers: shared hit=23217993 dirtied=1         ->  Nested Loop  (cost=0.86..57234.41 rows=3034 width=23) (actual time=0.162..129.508 rows=54259 loops=1)               Buffers: shared hit=18520               ->  Index Scan using branch_po_state_idx on branch_purchase_order o  (cost=0.42..807.12 rows=1672 width=17) (actual time=0.037..4.863 rows=1916 loops=1)                     Index Cond: 
((po_state)::text = 'PLACED'::text)                     Filter: ((supplier)::text = 'XX'::text)                     Rows Removed by Filter: 3050                     Buffers: shared hit=2157               ->  Index Scan using ssales_ib_replace_order_no on stocksales_ib ss  (cost=0.44..33.74 rows=1 width=31) (actual time=0.014..0.044 rows=28 loops=1916)                     Index Cond: (replace((order_no)::text, ' '::text, ''::text) = ((o.branch_code)::text || (o.po_number)::text))                     Filter: ((o.supplier)::bpchar = branch_code)                     Rows Removed by Filter: 0                     Buffers: shared hit=16363         ->  Index Scan using branch_purchase_order_product_code_idx on branch_purchase_order_products p  (cost=0.43..5.45 rows=250 width=12) (actual time=0.018..0.335 rows=425 loops=54259)               Index Cond: ((product_code)::text = (ss.product_code)::text)               Buffers: shared hit=23199473 dirtied=1 Total runtime: 23935.995 ms(22 rows)So we can see straight away that the outer Nested loop expects 1 row, and gets 53595. This isn't going to help the planner pick the most efficient plan I suspect.I've tried increasing default_statistics_target to the max and re analysing all the tables involved but this does not help the estimate.I suspect it's due to the join being based on functional result meaning any stats are ignored?What has improved runtimes is using a WITH clause to carry out the first join explicitly. But although it runs in half the time, the stats are still way out and I feel it is maybe just because I'm limiting the planner's choices that it by chance picks a different, quicker, plan.It does a Hash Join and Seq Scanwith bpo as (select o.branch_code || o.po_number as order_no,o.po_id,o.supplier,o.branch_code,p.product_codefrom branch_purchase_order o join branch_purchase_order_products p using(po_id)where o.po_state='PLACED'and o.supplier='XX')selectpo_id,product_code,sum(qty) as dispatch_qty,max(invoice_date) as dispatch_date,count(invoice_date) as dispatch_countfrom (selecto.po_id,o.product_code,ss.qty,ss.invoice_datefrombpo o join stocksales_ib ss on o.supplier=ss.branch_code and o.product_code=ss.product_code and o.order_no=replace(ss.order_no,' ','')) x   group by po_id,product_codeExplain:http://explain.depesz.com/s/r7vCan anyone suggest a better approach for improving the plan for this type of query? select version();                                                    version                                                    --------------------------------------------------------------------------------------------------------------- PostgreSQL 9.3.10 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bitRegards,-- David", "msg_date": "Tue, 10 Nov 2015 12:13:21 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Slow 3 Table Join with v bad row estimate" }, { "msg_contents": "David Osborne <[email protected]> writes:\n> We have 3 different ways we have to do the final X join condition (we use 3\n> subqueries UNIONed together), but the one causing the issues is:\n\n> (o.branch_code || o.po_number = replace(ss.order_no,' ',''))\n\n> ... So we can see straight away that the outer Nested loop expects 1 row, and\n> gets 53595. 
This isn't going to help the planner pick the most efficient\n> plan I suspect.\n\n> I've tried increasing default_statistics_target to the max and re analysing\n> all the tables involved but this does not help the estimate.\n> I suspect it's due to the join being based on functional result meaning any\n> stats are ignored?\n\nYeah, the planner is not nearly smart enough to draw any useful\nconclusions about the selectivity of that clause from standard statistics.\nWhat you might try doing is creating functional indexes on the two\nsubexpressions:\n\ncreate index on branch_purchase_order ((branch_code || po_number));\ncreate index on stocksales_ib (replace(order_no,' ',''));\n\n(actually it looks like you've already got the latter one) and then\nre-ANALYZING. I'm not necessarily expecting that the planner will\nactually choose to use these indexes in its plan; but their existence\nwill prompt ANALYZE to gather stats about the expression results,\nand that should at least let the planner draw more-accurate conclusions\nabout the selectivity of the equality constraint.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Nov 2015 10:03:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow 3 Table Join with v bad row estimate" }, { "msg_contents": "Thanks very much Tom.\n\nDoesn't seem to quite do the trick. I created both those indexes (or the\nmissing one at least)\nThen I ran analyse on stocksales_ib and branch_purchase_order.\nI checked there were stats held in pg_stats for both indexes, which there\nwere.\nBut the query plan still predicts 1 row and comes up with the same plan.\n\nI also tried setting default_statistics_target to 10000 and reran analyse\non both tables with the same results.\n\nIn addition, also no change if I change the query to have the join ss.order_\nno=o.branch_code || ' ' || o.po_number and create an index on (branch_code\n|| ' ' || o.po_number)\n\nAm I right in thinking my workaround with the WITH clause is in no way\nguaranteed to continue to perform better than the current query if I rolled\nthat out?\n\n\n\nOn 10 November 2015 at 15:03, Tom Lane <[email protected]> wrote:\n\n>\n> Yeah, the planner is not nearly smart enough to draw any useful\n> conclusions about the selectivity of that clause from standard statistics.\n> What you might try doing is creating functional indexes on the two\n> subexpressions:\n>\n> create index on branch_purchase_order ((branch_code || po_number));\n> create index on stocksales_ib (replace(order_no,' ',''));\n>\n> (actually it looks like you've already got the latter one) and then\n> re-ANALYZING. I'm not necessarily expecting that the planner will\n> actually choose to use these indexes in its plan; but their existence\n> will prompt ANALYZE to gather stats about the expression results,\n> and that should at least let the planner draw more-accurate conclusions\n> about the selectivity of the equality constraint.\n>\n> regards, tom lane\n>\n\nThanks very much Tom.Doesn't seem to quite do the trick. 
I created both those indexes (or the missing one at least)Then I ran analyse on stocksales_ib and branch_purchase_order.I checked there were stats held in pg_stats for both indexes, which there were.But the query plan still predicts 1 row and comes up with the same plan.I also tried setting default_statistics_target to 10000 and reran analyse on both tables with the same results.In addition, also no change if I change the query to have the join ss.order_no=o.branch_code || ' ' || o.po_number and create an index on  (branch_code || ' ' || o.po_number)Am I right in thinking my workaround with the WITH clause is in no way guaranteed to continue to perform better than the current query if I rolled that out?On 10 November 2015 at 15:03, Tom Lane <[email protected]> wrote:\nYeah, the planner is not nearly smart enough to draw any useful\nconclusions about the selectivity of that clause from standard statistics.\nWhat you might try doing is creating functional indexes on the two\nsubexpressions:\n\ncreate index on branch_purchase_order ((branch_code || po_number));\ncreate index on stocksales_ib (replace(order_no,' ',''));\n\n(actually it looks like you've already got the latter one) and then\nre-ANALYZING.  I'm not necessarily expecting that the planner will\nactually choose to use these indexes in its plan; but their existence\nwill prompt ANALYZE to gather stats about the expression results,\nand that should at least let the planner draw more-accurate conclusions\nabout the selectivity of the equality constraint.\n\n                        regards, tom lane", "msg_date": "Tue, 10 Nov 2015 16:38:37 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow 3 Table Join with v bad row estimate" }, { "msg_contents": "David Osborne <[email protected]> writes:\n> Doesn't seem to quite do the trick. I created both those indexes (or the\n> missing one at least)\n> Then I ran analyse on stocksales_ib and branch_purchase_order.\n> I checked there were stats held in pg_stats for both indexes, which there\n> were.\n> But the query plan still predicts 1 row and comes up with the same plan.\n\nMeh. In that case, likely the explanation is that the various conditions\nin your query are highly correlated, and the planner is underestimating\nthe number of rows that will satisfy them because it doesn't know about\nthe correlation.\n\nBut taking a step back, it seems like the core problem in your explain\noutput is here:\n\n>> -> Nested Loop (cost=1.29..83263.71 rows=1 width=24) (actual time=0.196..23799.930 rows=53595 loops=1)\n>> Join Filter: (o.po_id = p.po_id)\n>> Rows Removed by Join Filter: 23006061\n>> Buffers: shared hit=23217993 dirtied=1\n\nThat's an awful lot of rows being formed by the join only to be rejected.\nYou should try creating an index on\nbranch_purchase_order_products(po_id, product_code)\nso that the po_id condition could be enforced at the inner indexscan\ninstead of the join.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Nov 2015 12:05:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow 3 Table Join with v bad row estimate" }, { "msg_contents": "Ok - wow.\n\nAdding that index, I get the same estimate of 1 row, but a runtime of\n~450ms.\nA 23000ms improvement.\n\nhttp://explain.depesz.com/s/TzF8h\n\nThis is great. 
So as a general rule of thumb, if I see a Join Filter\nremoving an excessive number of rows, I can check if that condition can be\nadded to an index from the same table which is already being scanned.\n\nThanks for this!\n\nOn 10 November 2015 at 17:05, Tom Lane <[email protected]> wrote:\n\n>\n> But taking a step back, it seems like the core problem in your explain\n> output is here:\n>\n> >> -> Nested Loop (cost=1.29..83263.71 rows=1 width=24) (actual\n> time=0.196..23799.930 rows=53595 loops=1)\n> >> Join Filter: (o.po_id = p.po_id)\n> >> Rows Removed by Join Filter: 23006061\n> >> Buffers: shared hit=23217993 dirtied=1\n>\n> That's an awful lot of rows being formed by the join only to be rejected.\n> You should try creating an index on\n> branch_purchase_order_products(po_id, product_code)\n> so that the po_id condition could be enforced at the inner indexscan\n> instead of the join.\n>\n>\n>\n\nOk - wow.Adding that index, I get the same estimate of 1 row, but a runtime of ~450ms.A 23000ms improvement.http://explain.depesz.com/s/TzF8hThis is great. So as a general rule of thumb, if I see a Join Filter removing an excessive number of rows, I can check if that condition can be added to an index from the same table which is already being scanned.Thanks for this!On 10 November 2015 at 17:05, Tom Lane <[email protected]> wrote:\nBut taking a step back, it seems like the core problem in your explain\noutput is here:\n\n>>    ->  Nested Loop  (cost=1.29..83263.71 rows=1 width=24) (actual time=0.196..23799.930 rows=53595 loops=1)\n>>          Join Filter: (o.po_id = p.po_id)\n>>          Rows Removed by Join Filter: 23006061\n>>          Buffers: shared hit=23217993 dirtied=1\n\nThat's an awful lot of rows being formed by the join only to be rejected.\nYou should try creating an index on\nbranch_purchase_order_products(po_id, product_code)\nso that the po_id condition could be enforced at the inner indexscan\ninstead of the join.", "msg_date": "Tue, 10 Nov 2015 17:31:52 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow 3 Table Join with v bad row estimate" }, { "msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of David Osborne\r\nSent: Tuesday, November 10, 2015 12:32 PM\r\nTo: Tom Lane <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Slow 3 Table Join with v bad row estimate\r\n\r\nOk - wow.\r\n\r\nAdding that index, I get the same estimate of 1 row, but a runtime of ~450ms.\r\nA 23000ms improvement.\r\n\r\nhttp://explain.depesz.com/s/TzF8h\r\n\r\nThis is great. So as a general rule of thumb, if I see a Join Filter removing an excessive number of rows, I can check if that condition can be added to an index from the same table which is already being scanned.\r\n\r\nThanks for this!\r\n\r\nDavid,\r\nI believe the plan you are posting is the old plan.\r\nCould you please post explain analyze with the index that Tom suggested?\r\n\r\nRegards,\r\nIgor Neyman\r\n\n\n\n\n\n\n\n\n\n \n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of David Osborne\nSent: Tuesday, November 10, 2015 12:32 PM\nTo: Tom Lane <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Slow 3 Table Join with v bad row estimate\n \n\n\nOk - wow.\n\n\n \n\n\nAdding that index, I get the same estimate of 1 row, but a runtime of ~450ms.\r\nA 23000ms improvement.\n\n\n \n\n\nhttp://explain.depesz.com/s/TzF8h\n\n\n \n\n\nThis is great. 
So as a general rule of thumb, if I see a Join Filter removing an excessive number of rows, I can check if that condition can be added to an index from the same table which is already being scanned.\n\n\n \n\n\nThanks for this!\n\n\n \nDavid,\nI believe the plan you are posting is the old plan.\nCould you please post explain analyze with the index that Tom suggested?\n \nRegards,\nIgor Neyman", "msg_date": "Tue, 10 Nov 2015 18:38:30 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow 3 Table Join with v bad row estimate" }, { "msg_contents": "Sorry Igor - yes wrong plan.\n\nHere's the new one ...\n(running a wee bit slower this morning - still 20x faster that before\nhowever)\n\nhttp://explain.depesz.com/s/64YM\n\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=70661.35..70661.36 rows=1 width=24) (actual\ntime=1305.098..1326.956 rows=52624 loops=1)\n Buffers: shared hit=232615 read=3871 dirtied=387\n -> Nested Loop (cost=1.29..70661.34 rows=1 width=24) (actual\ntime=6.307..1242.567 rows=53725 loops=1)\n Buffers: shared hit=232615 read=3871 dirtied=387\n -> Index Scan using branch_po_state_idx on branch_purchase_order\no (cost=0.42..822.22 rows=1768 width=17) (actual time=0.042..6.001\nrows=1861 loops=1)\n Index Cond: ((po_state)::text = 'PLACED'::text)\n Filter: ((supplier)::text = 'XX'::text)\n Rows Removed by Filter: 3016\n Buffers: shared hit=2218\n -> Nested Loop (cost=0.87..39.49 rows=1 width=36) (actual\ntime=0.151..0.651 rows=29 loops=1861)\n Buffers: shared hit=230397 read=3871 dirtied=387\n -> Index Scan using ssales_ib_replace_order_no on\nstocksales_ib ss (cost=0.44..33.59 rows=1 width=31) (actual\ntime=0.093..0.401 rows=29 loops=1861)\n Index Cond: (replace((order_no)::text, ' '::text,\n''::text) = ((o.branch_code)::text || (o.po_number)::text))\n Filter: ((o.supplier)::bpchar = branch_code)\n Buffers: shared hit=13225 read=2994\n -> Index Only Scan using\nbranch_purchase_order_products_po_id_product_code_idx on\nbranch_purchase_order_products p (cost=0.43..5.90 rows=1 width=12) (actual\ntime=0.006..0.007 rows=1 loops=54396)\n Index Cond: ((po_id = o.po_id) AND (product_code =\n(ss.product_code)::text))\n Heap Fetches: 54475\n Buffers: shared hit=217172 read=877 dirtied=387\n Total runtime: 1336.253 ms\n(20 rows)\n\nSorry Igor - yes wrong plan.Here's the new one ...(running a wee bit slower this morning - still 20x faster that before however)http://explain.depesz.com/s/64YM                                                                                                    QUERY PLAN                                                                                                     -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=70661.35..70661.36 rows=1 width=24) (actual time=1305.098..1326.956 rows=52624 loops=1)   Buffers: shared hit=232615 read=3871 dirtied=387   ->  Nested Loop  (cost=1.29..70661.34 rows=1 width=24) (actual time=6.307..1242.567 rows=53725 loops=1)         Buffers: shared hit=232615 read=3871 dirtied=387         ->  Index Scan using branch_po_state_idx on branch_purchase_order o  (cost=0.42..822.22 rows=1768 width=17) (actual 
time=0.042..6.001 rows=1861 loops=1)               Index Cond: ((po_state)::text = 'PLACED'::text)               Filter: ((supplier)::text = 'XX'::text)               Rows Removed by Filter: 3016               Buffers: shared hit=2218         ->  Nested Loop  (cost=0.87..39.49 rows=1 width=36) (actual time=0.151..0.651 rows=29 loops=1861)               Buffers: shared hit=230397 read=3871 dirtied=387               ->  Index Scan using ssales_ib_replace_order_no on stocksales_ib ss  (cost=0.44..33.59 rows=1 width=31) (actual time=0.093..0.401 rows=29 loops=1861)                     Index Cond: (replace((order_no)::text, ' '::text, ''::text) = ((o.branch_code)::text || (o.po_number)::text))                     Filter: ((o.supplier)::bpchar = branch_code)                     Buffers: shared hit=13225 read=2994               ->  Index Only Scan using branch_purchase_order_products_po_id_product_code_idx on branch_purchase_order_products p  (cost=0.43..5.90 rows=1 width=12) (actual time=0.006..0.007 rows=1 loops=54396)                     Index Cond: ((po_id = o.po_id) AND (product_code = (ss.product_code)::text))                     Heap Fetches: 54475                     Buffers: shared hit=217172 read=877 dirtied=387 Total runtime: 1336.253 ms(20 rows)", "msg_date": "Wed, 11 Nov 2015 10:04:41 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow 3 Table Join with v bad row estimate" } ]
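A minimal sketch of the fix discussed in this thread, for anyone reproducing it. The column list comes from Tom's suggestion and the index name from the Index Only Scan in the follow-up plan; the exact DDL below is an assumption, since the thread never shows the CREATE INDEX statement itself.

    -- Composite index so the po_id condition is enforced at the inner index
    -- scan instead of in the Join Filter (assumed form, names from the thread):
    CREATE INDEX branch_purchase_order_products_po_id_product_code_idx
        ON branch_purchase_order_products (po_id, product_code);

    -- Re-run the query under EXPLAIN (ANALYZE, BUFFERS): the large
    -- "Rows Removed by Join Filter" count should disappear in favour of the
    -- Index Only Scan shown in the follow-up plan.

The follow-up plan also reports Heap Fetches: 54475; a VACUUM of branch_purchase_order_products would set visibility-map bits and let the index-only scan skip most of those heap visits.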
[ { "msg_contents": "Hi,\n\nWe using Postgres 9.3.10 on Amazon RDS and running into some strange\nbehavior that has been tough to track down and debug (partially due to the\nlimited admin access from RDS).\n\nWe're running a read-only query that normally takes ~10-15 min., but also\nruns concurrently with several other intensive queries (these queries\nthemselves, finish).\n\nOn one particular day, this query hung for many hours and even while we\nkilled pids for running queries and any locks granted, the query would\nnever return. Also no hints of blocking processes. After some digging\nthrough some I/O metrics, we didn't see any memory issues or unusual spikes\nthat would lead us to believe that we're running low on resources.\n\nThere is 1 caveat, however: there was a different schema that contained a\nday-old copy of data that isn't normally present when the hang started to\noccur. However, since these are completely different schema namespaces\nwith no crossovers in the queries themselves, I don't see how this is\nrelevant.\n\n\n 1) We ended up doing a full reboot of the RDS instance and ran the query\nagain, this time, no other queries are running off of a fresh boot-up (no\ncompeting locks or transactions). The query continued to hang.\n\n 2) We then ran pg_dump to snapshot the current data and did a full\npg_restore (after dropping all schemas) of an older dataset where we knew\nthis query would run successfully. As expected, the query ran fine.\n\n 3) We then dropped all schemas again and pg_restored the previous dataset\nthat was causing the query to hang, and then to my surprise, the query ran\njust fine. No hangs.\n\nWe thought this might be possibly due to some internal vacuuming, but this\nis unlikely since there are no real concurrent reads or updates happening.\nAuto-vacuum is also on with default settings.\n\nWhat is the most confusing part in all of this is why a DROP SCHEMA CASCADE\nand a fresh pg_restore would somehow fix the problem. Even a fresh reboot\ndidn't fix it.\n\nAny ideas??\n\nHi,We using Postgres 9.3.10 on Amazon RDS and running into some strange behavior that has been tough to track down and debug (partially due to the limited admin access from RDS).We're running a read-only query that normally takes ~10-15 min., but also runs concurrently with several other intensive queries (these queries themselves, finish).  On one particular day, this query hung for many hours and even while we killed pids for running queries and any locks granted, the query would never return.  Also no hints of blocking processes.  After some digging through some I/O metrics, we didn't see any memory issues or unusual spikes that would lead us to believe that we're running low on resources.There is 1 caveat, however:  there was a different schema that contained a day-old copy of data that isn't normally present when the hang started to occur.  However, since these are completely different schema namespaces with no crossovers in the queries themselves, I don't see how this is relevant. 1) We ended up doing a full reboot of the RDS instance and ran the query again, this time, no other queries are running off of a fresh boot-up (no competing locks or transactions).  The query continued to hang. 2) We then ran pg_dump to snapshot the current data and did a full pg_restore (after dropping all schemas) of an older dataset where we knew this query would run successfully.  As expected, the query ran fine. 
3) We then dropped all schemas again and pg_restored the previous dataset that was causing the query to hang, and then to my surprise, the query ran just fine.  No hangs.  We thought this might be possibly due to some internal vacuuming, but this is unlikely since there are no real concurrent reads or updates happening.  Auto-vacuum is also on with default settings.What is the most confusing part in all of this is why a DROP SCHEMA CASCADE and a fresh pg_restore would somehow fix the problem.  Even a fresh reboot didn't fix it.Any ideas??", "msg_date": "Tue, 10 Nov 2015 16:42:33 -0500", "msg_from": "Jason Jho <[email protected]>", "msg_from_op": true, "msg_subject": "Hanging query on a fresh restart" }, { "msg_contents": "On 11/10/15 3:42 PM, Jason Jho wrote:\n> On one particular day, this query hung for many hours and even while we\n> killed pids for running queries and any locks granted, the query would\n> never return. Also no hints of blocking processes. After some digging\n> through some I/O metrics, we didn't see any memory issues or unusual\n> spikes that would lead us to believe that we're running low on resources.\n\nDid IO stats indicate IO was happening? Did you see a pegged CPU running \nthe query?\n\n> There is 1 caveat, however: there was a different schema that contained\n> a day-old copy of data that isn't normally present when the hang started\n> to occur. However, since these are completely different schema\n> namespaces with no crossovers in the queries themselves, I don't see how\n> this is relevant.\n\nIf search_path wasn't what you thought it was you could have easily been \nrunning against the wrong set of tables.\n\n> We thought this might be possibly due to some internal vacuuming, but\n> this is unlikely since there are no real concurrent reads or updates\n> happening. Auto-vacuum is also on with default settings.\n\nThere are other reasons why autovacuum could kick in, notably to prevent \ntransaction ID wraparound.\n\n> What is the most confusing part in all of this is why a DROP SCHEMA\n> CASCADE and a fresh pg_restore would somehow fix the problem. Even a\n> fresh reboot didn't fix it.\n\nWithout more info we're stuck guessing. You might try submitting a \nticket with amazon, especially if you can reproduce this.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 Nov 2015 15:40:42 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hanging query on a fresh restart" }, { "msg_contents": "On Friday, November 13, 2015 3:41 PM, Jim Nasby <[email protected]> wrote:\n> On 11/10/15 3:42 PM, Jason Jho wrote:\n\n>> We using Postgres 9.3.10\n\n>> What is the most confusing part in all of this is why a DROP\n>> SCHEMA CASCADE and a fresh pg_restore would somehow fix the\n>> problem. Even a fresh reboot didn't fix it.\n>\n> Without more info we're stuck guessing. You might try submitting\n> a ticket with amazon, especially if you can reproduce this.\n\nThere have been occasional reports of corrupted indexes causing\nendless loops which could cause these symptoms if one core was\npegged at 100% during the incident. 
There are many possible causes\nfor such corruption -- see:\n\nhttp://rhaas.blogspot.com/2012/03/why-is-my-database-corrupted.html\n\nThat said, there was a long-standing bug in btree index page\ndeletion (which could only happen during vacuum or autovacuum)\nwhich was fixed in 9.4:\n\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=efada2b8e920adfdf7418862e939925d2acd1b89\n\nIt was pretty hard to hit, and normally wouldn't cause these\nsymptoms, but if there was a \"perfect storm\" of events before the\nproblem self-corrected, I think it might possibly lead to this. If\nwe could somehow confirm that this old bug was the cause, it might\njustify pushing this patch back into older branches. As the commit\nmessage said:\n\n| This bug is old, all supported versions are affected, but this patch is too\n| big to back-patch (and changes the WAL record formats of related records).\n| We have not heard any reports of the bug from users, so clearly it's not\n| easy to bump into. Maybe backpatch later, after this has had some field\n| testing.\n\nDid you make a filesystem-level copy of the data directory? If so,\nthe first step in checking this theory would be to restore a copy\nand reindex all indexes used by the problem query to see if that\nfixes it. If it does, close examination of the corrupted index\nmight provide clues about how the corruption occurred.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 14 Nov 2015 16:07:26 +0000 (UTC)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hanging query on a fresh restart" } ]
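For anyone wanting to run the check Kevin outlines, a rough sketch; the object names are placeholders because the thread never identifies the tables or indexes involved, and it should only be run against a restored copy of the data directory, never the live RDS instance.

    -- 1. On the restored copy, EXPLAIN VERBOSE the problem query and note
    --    every "Index Scan using <index>" node, or list the candidates:
    SELECT indexrelid::regclass AS index_name
    FROM   pg_index
    WHERE  indrelid = 'some_schema.some_table'::regclass;  -- placeholder table

    -- 2. Rebuild those indexes and retry the query:
    REINDEX INDEX some_schema.some_index;                  -- placeholder index

If the query completes after the rebuild, the original index files are worth keeping for inspection, as Kevin suggests.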
[ { "msg_contents": "We're using postgres 9.2.5. Every couple of days we'll have a query get\ncancelled and it is always at the start of one of our custom procedures.\nThe query is typically part of a php loop that runs in less than a second,\nhowever when the issue occurs, that pass through the loop takes multiple\nseconds (~3-4) before the query cancels. The loop continues to run after\nthe issue occurs, so the php script itself is not terminating. We also see\nit in non-loop based updates and selects, so it's not tied to that\nparticular script. It's possible that some of the selects are actually\ncancelled by the user, but the inserts have no interaction with the users.\nThe updates are triggered by the user, but there's no way for them to\ncancel it short of killing their client, and the timeframe for them to do\nthat would (normally) be minuscule.\n\nAre we doing anything weird with this procedure? Is there anything more I\ncan do to get more info as to why/how the cancellation is happening or why\nthe function would slow down seemingly randomly?\n\n\n\nERROR: canceling statement due to user request\nCONTEXT: PL/pgSQL function chooselast(character varying,character varying)\nline 1 at IF\n SQL statement \"INSERT INTO partition_2015 VALUES (NEW.*)\"\n PL/pgSQL function table1_insert_trigger() line 4 at SQL statement\nSTATEMENT: INSERT into table1 (create_time,cusid,last1) Values\n('NOW','8175','ROBERT'')\n\n\nERROR: canceling statement due to user request\nCONTEXT: PL/pgSQL function chooselast(character varying,character varying)\nline 1 at IF\nSTATEMENT: SELECT * FROM table2 WHERE (cusid = 2521) AND\nLOWER(chooselast(last1,last2)) LIKE LOWER('87092%')\n\nERROR: canceling statement due to user request\nCONTEXT: PL/pgSQL function chooselast(character varying,character varying)\nline 1 at IF\nSTATEMENT: update table1 set status='DELETE' where id=200498919\n\npartition_2015 (on table 1) has one index that references chooselast:\n\"pik_last_2015\" btree (cusid, lower(chooselast(last1, last2)::text))\n\n\nI'm not sure why an update on table1 that does not change last1 or last2\nwould touch the index, so why would we even call the chooselast procedure?\n\n\ntable2 has no indexes that reference chooselast, but is also partitioned\n(by year as well).\n\n\ndb01=# select * from pg_proc where proname='chooselast';\n-[ RECORD 1\n]---+----------------------------------------------------------------------------------------------------------------\nproname | chooselast\npronamespace | 2200\nproowner | 10\nprolang | 12599\nprocost | 100\nprorows | 0\nprovariadic | 0\nprotransform | -\nproisagg | f\nproiswindow | f\nprosecdef | f\nproleakproof | f\nproisstrict | f\nproretset | f\nprovolatile | i\npronargs | 2\npronargdefaults | 0\nprorettype | 1043\nproargtypes | 1043 1043\nproallargtypes |\nproargmodes |\nproargnames |\nproargdefaults |\nprosrc | DECLARE t text; BEGIN IF (character_length($1) > 0)\nTHEN t = $1; ELSE t = $2; END IF; RETURN t; END;\nprobin |\nproconfig |\nproacl |\n\n\nWe're using postgres 9.2.5. Every couple of\ndays we'll have a query get cancelled and it is always at the start of one of\nour custom procedures. The query is typically part of a php loop that runs in\nless than a second, however when the issue occurs, that pass through the loop\ntakes multiple seconds (~3-4) before the query cancels. The loop continues to run after the issue\noccurs, so the php script itself is not terminating. 
We also see it in non-loop\nbased updates and selects, so it's not tied to that particular script. It's\npossible that some of the selects are actually cancelled by the user, but the\ninserts have no interaction with the users. The updates are triggered by the\nuser, but there's no way for them to cancel it short of killing their client,\nand the timeframe for them to do that would (normally) be minuscule.\nAre we doing anything weird with\nthis procedure? Is there anything more I can do to get more info as to why/how\nthe cancellation is happening or why the function would slow down seemingly randomly? \n \nERROR:  canceling statement due\nto user request\nCONTEXT:  PL/pgSQL function chooselast(character varying,character\nvarying) line 1 at IF\n        SQL statement \"INSERT INTO\npartition_2015 VALUES (NEW.*)\"\n        PL/pgSQL function table1_insert_trigger()\nline 4 at SQL statement\nSTATEMENT:  INSERT into table1 (create_time,cusid,last1) Values\n('NOW','8175','ROBERT'')\nERROR:  canceling statement due\nto user request\nCONTEXT:  PL/pgSQL function chooselast(character varying,character\nvarying) line 1 at IF\nSTATEMENT:  SELECT * FROM table2 WHERE   (cusid = 2521) AND\nLOWER(chooselast(last1,last2)) LIKE LOWER('87092%')\nERROR:  canceling statement due\nto user request\nCONTEXT:  PL/pgSQL function chooselast(character varying,character\nvarying) line 1 at IF\nSTATEMENT:  update table1 set status='DELETE' where id=200498919\npartition_2015 (on table 1) has one\nindex that references chooselast: \"pik_last_2015\" btree (cusid,\nlower(chooselast(last1, last2)::text))\nI'm not sure why an update on table1\nthat does not change last1 or last2 would touch the index, so why would we even\ncall the chooselast procedure?\ntable2 has no indexes that reference\nchooselast, but is also partitioned (by year as well).\ndb01=# select * from pg_proc where proname='chooselast';\n-[ RECORD 1 ]---+----------------------------------------------------------------------------------------------------------------\nproname         | chooselast\npronamespace    | 2200\nproowner        | 10\nprolang         | 12599\nprocost         | 100\nprorows         | 0\nprovariadic     | 0\nprotransform    | -\nproisagg        | f\nproiswindow     | f\nprosecdef       | f\nproleakproof    | f\nproisstrict     | f\nproretset       | f\nprovolatile     | i\npronargs        | 2\npronargdefaults | 0\nprorettype      | 1043\nproargtypes     | 1043 1043\nproallargtypes  |\nproargmodes     |\nproargnames     |\nproargdefaults  |\nprosrc          |  DECLARE t\ntext;  BEGIN  IF  (character_length($1) > 0) THEN  t =\n$1; ELSE  t = $2;   END IF;  RETURN t; END;\nprobin          |\nproconfig       |\nproacl          |", "msg_date": "Wed, 11 Nov 2015 11:57:18 -0600", "msg_from": "Skarsol <[email protected]>", "msg_from_op": true, "msg_subject": "Queries getting canceled inside a proc that seems to slow down\n randomly" }, { "msg_contents": "On 11/11/15 11:57 AM, Skarsol wrote:\n> Are we doing anything weird with this procedure? 
Is there anything more\n> I can do to get more info as to why/how the cancellation is happening or\n> why the function would slow down seemingly randomly?\n>\n> ERROR: canceling statement due to user request\n> CONTEXT: PL/pgSQL function chooselast(character varying,character\n> varying) line 1 at IF\n> SQL statement \"INSERT INTO partition_2015 VALUES (NEW.*)\"\n> PL/pgSQL function table1_insert_trigger() line 4 at SQL statement\n> STATEMENT: INSERT into table1 (create_time,cusid,last1) Values\n> ('NOW','8175','ROBERT'')\n\nThe error tells you what's causing this; it's a client-side interrupt. \nActually, looking at the code, you might get the same request if someone \nsent a signal to the relevant backend, either at the OS level or via \npg_cancel_backend(). You can test that and see what error you get. A \nstatement_timeout expiration would give you a different error.\n\nAs for the hang, maybe someone is ALTERing or replacing the function?\n\nBTW, you could write that function in the SQL language, which might \nallow the optimizer to inline it. Even if it couldn't, you'd probably \nstill see a performance gain from not firing up plpgsql on every row. \nThough, if you didn't allow empty strings in last1, you could also just \nreplace that whole function with coalesce(). I see the function is \nmarked IMMUTABLE, which is good.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 Nov 2015 15:50:42 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries getting canceled inside a proc that seems to\n slow down randomly" }, { "msg_contents": "On Fri, Nov 13, 2015 at 3:50 PM, Jim Nasby <[email protected]> wrote:\n\n> On 11/11/15 11:57 AM, Skarsol wrote:\n>\n>> Are we doing anything weird with this procedure? Is there anything more\n>> I can do to get more info as to why/how the cancellation is happening or\n>> why the function would slow down seemingly randomly?\n>>\n>> ERROR: canceling statement due to user request\n>> CONTEXT: PL/pgSQL function chooselast(character varying,character\n>> varying) line 1 at IF\n>> SQL statement \"INSERT INTO partition_2015 VALUES (NEW.*)\"\n>> PL/pgSQL function table1_insert_trigger() line 4 at SQL statement\n>> STATEMENT: INSERT into table1 (create_time,cusid,last1) Values\n>> ('NOW','8175','ROBERT'')\n>>\n>\n> The error tells you what's causing this; it's a client-side interrupt.\n> Actually, looking at the code, you might get the same request if someone\n> sent a signal to the relevant backend, either at the OS level or via\n> pg_cancel_backend(). You can test that and see what error you get. A\n> statement_timeout expiration would give you a different error.\n>\n> As for the hang, maybe someone is ALTERing or replacing the function?\n>\n> BTW, you could write that function in the SQL language, which might allow\n> the optimizer to inline it. Even if it couldn't, you'd probably still see a\n> performance gain from not firing up plpgsql on every row. Though, if you\n> didn't allow empty strings in last1, you could also just replace that whole\n> function with coalesce(). 
I see the function is marked IMMUTABLE, which is\n> good.\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\n> Experts in Analytics, Data Architecture and PostgreSQL\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n\nNo one is ALTERing or replacing the function (I'm the only person that\nwould). Recently we had an automated process attempted to load a file\ncontaining one record (which usually takes under a second) and it failed\nbecause of this issue (the insert was cancelled due to user request at the\nIF in the chooselast function).\n\nSimilarly, but unrelated to the functions, we had a PHP script running as a\nheadless daemon process get this error (ERROR: canceling statement due to\nuser request) while running 'SELECT * FROM connection WHERE id=109'.\nEveryone was at lunch, so there's no way a user could have cancelled it.\nThese scripts run for months at a time with no issues, so it's not timeout\nrelated. It's not a long running or complicated query:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Seq Scan on connection (cost=0.00..7.83 rows=1 width=60) (actual\ntime=0.031..0.033 rows=1 loops=1)\n Filter: (id = 109)\n Rows Removed by Filter: 306\n Total runtime: 0.047 ms\n(4 rows)\n\nThis is going through pgbouncer and the related log entries are:\n\npostgres:\n5018 2016-01-11 12:23:28.143 CST:ERROR: canceling statement due to user\nrequest\n5018 2016-01-11 12:23:28.143 CST:STATEMENT: SELECT * FROM connection WHERE\nid=109\n\npgbouncer (S-0x17fd780 entries):\n2016-01-11 12:21:46.600 32905 LOG S-0x17fd780: edi01/[email protected]:6432\nclosing because: server idle timeout (age=612)\n2016-01-11 12:23:28.142 32905 LOG S-0x17fd780: edi01/[email protected]:6432\nnew connection to server\n2016-01-11 12:23:28.143 32905 LOG S-0x17fd780: edi01/[email protected]:6432\nclosing because: sent cancel req (age=0)\n2016-01-11 12:26:31.510 32905 LOG S-0x17fd780: edi01/[email protected]:6432\nnew connection to server\n\nOn Fri, Nov 13, 2015 at 3:50 PM, Jim Nasby <[email protected]> wrote:On 11/11/15 11:57 AM, Skarsol wrote:\n\nAre we doing anything weird with this procedure? Is there anything more\nI can do to get more info as to why/how the cancellation is happening or\nwhy the function would slow down seemingly randomly?\n\nERROR:  canceling statement due to user request\nCONTEXT:  PL/pgSQL function chooselast(character varying,character\nvarying) line 1 at IF\n         SQL statement \"INSERT INTO partition_2015 VALUES (NEW.*)\"\n         PL/pgSQL function table1_insert_trigger() line 4 at SQL statement\nSTATEMENT:  INSERT into table1 (create_time,cusid,last1) Values\n('NOW','8175','ROBERT'')\n\n\nThe error tells you what's causing this; it's a client-side interrupt. Actually, looking at the code, you might get the same request if someone sent a signal to the relevant backend, either at the OS level or via pg_cancel_backend(). You can test that and see what error you get. A statement_timeout expiration would give you a different error.\n\nAs for the hang, maybe someone is ALTERing or replacing the function?\n\nBTW, you could write that function in the SQL language, which might allow the optimizer to inline it. Even if it couldn't, you'd probably still see a performance gain from not firing up plpgsql on every row. Though, if you didn't allow empty strings in last1, you could also just replace that whole function with coalesce(). 
I see the function is marked IMMUTABLE, which is good.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\nNo one is ALTERing or replacing the function (I'm the only person that would). Recently we had an automated process attempted to load a file containing one record (which usually takes under a second) and it failed because of this issue (the insert was cancelled due to user request at the IF in the chooselast function). Similarly, but unrelated to the functions, we had a PHP script running as a headless daemon process get this error (ERROR:  canceling statement due to user request) while running 'SELECT * FROM connection WHERE id=109'. Everyone was at lunch, so there's no way a user could have cancelled it. These scripts run for months at a time with no issues, so it's not timeout related. It's not a long running or complicated query:                                             QUERY PLAN----------------------------------------------------------------------------------------------------- Seq Scan on connection  (cost=0.00..7.83 rows=1 width=60) (actual time=0.031..0.033 rows=1 loops=1)   Filter: (id = 109)   Rows Removed by Filter: 306 Total runtime: 0.047 ms(4 rows)This is going through pgbouncer and the related log entries are:postgres:5018 2016-01-11 12:23:28.143 CST:ERROR:  canceling statement due to user request5018 2016-01-11 12:23:28.143 CST:STATEMENT:  SELECT * FROM connection WHERE id=109pgbouncer (S-0x17fd780 entries):2016-01-11 12:21:46.600 32905 LOG S-0x17fd780: edi01/[email protected]:6432 closing because: server idle timeout (age=612)2016-01-11 12:23:28.142 32905 LOG S-0x17fd780: edi01/[email protected]:6432 new connection to server2016-01-11 12:23:28.143 32905 LOG S-0x17fd780: edi01/[email protected]:6432 closing because: sent cancel req (age=0)2016-01-11 12:26:31.510 32905 LOG S-0x17fd780: edi01/[email protected]:6432 new connection to server", "msg_date": "Mon, 11 Jan 2016 13:32:21 -0600", "msg_from": "Skarsol <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries getting canceled inside a proc that seems to\n slow down randomly" }, { "msg_contents": "On 1/11/16 1:32 PM, Skarsol wrote:\n> pgbouncer (S-0x17fd780 entries):\n> 2016-01-11 12:21:46.600 32905 LOG S-0x17fd780:\n> edi01/[email protected]:6432 <http://[email protected]:6432> closing\n> because: server idle timeout (age=612)\n> 2016-01-11 12:23:28.142 32905 LOG S-0x17fd780:\n> edi01/[email protected]:6432 <http://[email protected]:6432> new\n> connection to server\n> 2016-01-11 12:23:28.143 32905 LOG S-0x17fd780:\n> edi01/[email protected]:6432 <http://[email protected]:6432> closing\n> because: sent cancel req (age=0)\n> 2016-01-11 12:26:31.510 32905 LOG S-0x17fd780:\n> edi01/[email protected]:6432 <http://[email protected]:6432> new\n> connection to server\n\nThis makes me think there's a race condition in pgBouncer, or that their \nlogging is just confusing.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Jan 2016 13:39:42 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries getting canceled inside a proc that seems to\n slow down randomly" }, { "msg_contents": "On Mon, Jan 11, 2016 at 1:39 PM, Jim Nasby <[email protected]> wrote:\n\n> On 1/11/16 1:32 PM, Skarsol wrote:\n>\n>> pgbouncer (S-0x17fd780 entries):\n>> 2016-01-11 12:21:46.600 32905 LOG S-0x17fd780:\n>> edi01/[email protected]:6432 <http://[email protected]:6432> closing\n>> because: server idle timeout (age=612)\n>> 2016-01-11 12:23:28.142 32905 LOG S-0x17fd780:\n>> edi01/[email protected]:6432 <http://[email protected]:6432> new\n>> connection to server\n>> 2016-01-11 12:23:28.143 32905 LOG S-0x17fd780:\n>> edi01/[email protected]:6432 <http://[email protected]:6432> closing\n>> because: sent cancel req (age=0)\n>> 2016-01-11 12:26:31.510 32905 LOG S-0x17fd780:\n>> edi01/[email protected]:6432 <http://[email protected]:6432> new\n>> connection to server\n>>\n>\n> This makes me think there's a race condition in pgBouncer, or that their\n> logging is just confusing.\n>\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\n> Experts in Analytics, Data Architecture and PostgreSQL\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n\nOkay, I'll go visit the pgbouncer guys and see if they can shed any light.\nThanks.\n\nOn Mon, Jan 11, 2016 at 1:39 PM, Jim Nasby <[email protected]> wrote:On 1/11/16 1:32 PM, Skarsol wrote:\n\npgbouncer (S-0x17fd780 entries):\n2016-01-11 12:21:46.600 32905 LOG S-0x17fd780:\nedi01/[email protected]:6432 <http://[email protected]:6432> closing\nbecause: server idle timeout (age=612)\n2016-01-11 12:23:28.142 32905 LOG S-0x17fd780:\nedi01/[email protected]:6432 <http://[email protected]:6432> new\nconnection to server\n2016-01-11 12:23:28.143 32905 LOG S-0x17fd780:\nedi01/[email protected]:6432 <http://[email protected]:6432> closing\nbecause: sent cancel req (age=0)\n2016-01-11 12:26:31.510 32905 LOG S-0x17fd780:\nedi01/[email protected]:6432 <http://[email protected]:6432> new\nconnection to server\n\n\nThis makes me think there's a race condition in pgBouncer, or that their logging is just confusing.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.comOkay, I'll go visit the pgbouncer guys and see if they can shed any light. Thanks.", "msg_date": "Mon, 11 Jan 2016 13:41:19 -0600", "msg_from": "Skarsol <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries getting canceled inside a proc that seems to\n slow down randomly" } ]
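Returning to Jim's earlier suggestion about the function itself: a sketch of chooselast rewritten in the SQL language, untested against the original schema. Because the function is used in the pik_last_2015 expression index, the replacement must return exactly the same value as the plpgsql version for every input, otherwise the index would have to be rebuilt.

    -- Same "first non-empty string" behaviour as the plpgsql version:
    -- NULL or '' in the first argument falls through to the second argument.
    CREATE OR REPLACE FUNCTION chooselast(character varying, character varying)
    RETURNS character varying AS $$
        SELECT COALESCE(NULLIF($1, ''), $2);
    $$ LANGUAGE sql IMMUTABLE;

A SQL-language body can often be inlined by the planner, and even when it is not, it avoids starting up plpgsql for every row, which was Jim's point. It has no bearing on the cancel requests themselves, which the pgbouncer log excerpt appears to tie to the pooler rather than to any interactive user.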
[ { "msg_contents": "Postgresql version 9.4.4.\nI'm having an issue. The query never ends:\n\ndelete from bb_gamelist_league;\n\nNo WHERE clause used. There are approx. 227000 rows in that table.\n\nHere is the table itself:\nCREATE TABLE bb_gamelist_league (\n id SERIAL NOT NULL ,\n bb_league_id INTEGER NOT NULL ,\n day_number INTEGER,\n date BIGINT ,\n team_id1 INTEGER ,\n team_id2 INTEGER ,\n score1 SMALLINT ,\n score2 SMALLINT ,\n attended_people INTEGER ,\n is_play_off BOOL ,\n play_off_code VARCHAR(5),\n game_status BOOL ,\n is_finished BOOL ,\n was_taken_by_gameserv BOOL,\n taken_by_coordinator_status BOOL,\n seed TIMESTAMP,\n managerA_watching BOOL,\n managerB_watching BOOL,\n day_period VARCHAR(10),\n group_number VARCHAR(30),\nPRIMARY KEY(id) ,\n FOREIGN KEY(bb_league_id) REFERENCES bb_league(id),\n FOREIGN KEY (team_id1) REFERENCES bb_team_info(id),\n FOREIGN KEY (team_id2) REFERENCES bb_team_info(id));\n\nThere are some indexes on that table:\n public | bb_gamelist_league | bb_gamelist_league_fkindex1 |\n | CREATE INDEX bb_gamelist_league_fkindex1 ON bb_gamelist_league USING\nbtree (bb_league_id)\n public | bb_gamelist_league | bb_gamelist_league_pkey |\n | CREATE UNIQUE INDEX bb_gamelist_league_pkey ON bb_gamelist_league USING\nbtree (id)\n\nAlso explain gives the following result:\n explain delete from bb_gamelist_league;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Delete on bb_gamelist_league (cost=0.00..6954.63 rows=281363 width=6)\n -> Seq Scan on bb_gamelist_league (cost=0.00..6954.63 rows=281363\nwidth=6)\n(2 rows)\n\nExplain analyze never ends (because the query itself is never ending).\n\nI checked the locks: there are no locks on tables.\n\nThe CPU is fast enough but \"top\" command on linux shows 100% load for\npostgres process.\nCould you help to resolve the issue?\n\nPostgresql version 9.4.4.I'm having an issue. The query never ends:delete from bb_gamelist_league;No WHERE clause used. There are approx. 
227000 rows in that table.Here is the table itself:CREATE TABLE bb_gamelist_league (  id SERIAL  NOT NULL ,  bb_league_id INTEGER   NOT NULL ,  day_number INTEGER,  date BIGINT ,  team_id1 INTEGER    ,  team_id2 INTEGER    ,  score1 SMALLINT    ,  score2 SMALLINT    ,  attended_people INTEGER    ,  is_play_off BOOL    ,  play_off_code VARCHAR(5),  game_status BOOL    ,  is_finished BOOL  ,  was_taken_by_gameserv BOOL,  taken_by_coordinator_status BOOL,  seed TIMESTAMP,  managerA_watching BOOL,  managerB_watching BOOL,  day_period VARCHAR(10),  group_number VARCHAR(30),PRIMARY KEY(id)  ,  FOREIGN KEY(bb_league_id) REFERENCES bb_league(id),  FOREIGN KEY (team_id1) REFERENCES bb_team_info(id),  FOREIGN KEY (team_id2) REFERENCES bb_team_info(id));There are some indexes on that table: public     | bb_gamelist_league | bb_gamelist_league_fkindex1 |            | CREATE INDEX bb_gamelist_league_fkindex1 ON bb_gamelist_league USING btree (bb_league_id) public     | bb_gamelist_league | bb_gamelist_league_pkey     |            | CREATE UNIQUE INDEX bb_gamelist_league_pkey ON bb_gamelist_league USING btree (id)Also explain gives the following result: explain delete from bb_gamelist_league;                                   QUERY PLAN-------------------------------------------------------------------------------- Delete on bb_gamelist_league  (cost=0.00..6954.63 rows=281363 width=6)   ->  Seq Scan on bb_gamelist_league  (cost=0.00..6954.63 rows=281363 width=6)(2 rows)Explain analyze never ends (because the query itself is never ending).I checked the locks: there are no locks on tables. The CPU is fast enough but \"top\" command on linux shows 100% load for postgres process.Could you help to resolve the issue?", "msg_date": "Thu, 12 Nov 2015 01:09:36 +0600", "msg_from": "Massalin Yerzhan <[email protected]>", "msg_from_op": true, "msg_subject": "Simple delete query is taking too long (never ends)" }, { "msg_contents": "Massalin Yerzhan <[email protected]> writes:\n> I'm having an issue. The query never ends:\n> delete from bb_gamelist_league;\n\n9 times out of 10, the answer to this type of problem is that you have\nsome table referencing this one by a foreign key, and the referencing\ncolumn is not indexed. PG doesn't require such an index, but lack of\none will mean that retail checks or deletions of referencing rows are\nreally slow.\n\nIf you're not sure which table is the problem, try doing an EXPLAIN\nANALYZE of a DELETE that will only remove a few rows. You should\nsee some time blamed on a trigger associated with the FK constraint.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Nov 2015 14:33:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" }, { "msg_contents": "On Wed, Nov 11, 2015 at 1:33 PM, Tom Lane <[email protected]> wrote:\n> Massalin Yerzhan <[email protected]> writes:\n>> I'm having an issue. The query never ends:\n>> delete from bb_gamelist_league;\n>\n> 9 times out of 10, the answer to this type of problem is that you have\n> some table referencing this one by a foreign key, and the referencing\n> column is not indexed. 
PG doesn't require such an index, but lack of\n> one will mean that retail checks or deletions of referencing rows are\n> really slow.\n>\n> If you're not sure which table is the problem, try doing an EXPLAIN\n> ANALYZE of a DELETE that will only remove a few rows. You should\n> see some time blamed on a trigger associated with the FK constraint.\n\nYou've answered this question (with the same answer) what feels like a\ngazillion times. I guess the underlying problem is that EXPLAIN is,\nuh, not explaining things very well.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Nov 2015 09:12:51 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Wed, Nov 11, 2015 at 1:33 PM, Tom Lane <[email protected]> wrote:\n>> If you're not sure which table is the problem, try doing an EXPLAIN\n>> ANALYZE of a DELETE that will only remove a few rows. You should\n>> see some time blamed on a trigger associated with the FK constraint.\n\n> You've answered this question (with the same answer) what feels like a\n> gazillion times. I guess the underlying problem is that EXPLAIN is,\n> uh, not explaining things very well.\n\nIn principle, a plain EXPLAIN could list the triggers that would\npotentially be fired by the query, but I'm afraid that wouldn't help\nmuch. The actual performance problem is down inside the trigger,\nwhich is an opaque black box so far as EXPLAIN can possibly know.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Nov 2015 10:36:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" }, { "msg_contents": "On Thu, Nov 12, 2015 at 7:12 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Wed, Nov 11, 2015 at 1:33 PM, Tom Lane <[email protected]> wrote:\n> > Massalin Yerzhan <[email protected]> writes:\n> >> I'm having an issue. The query never ends:\n> >> delete from bb_gamelist_league;\n> >\n> > 9 times out of 10, the answer to this type of problem is that you have\n> > some table referencing this one by a foreign key, and the referencing\n> > column is not indexed. PG doesn't require such an index, but lack of\n> > one will mean that retail checks or deletions of referencing rows are\n> > really slow.\n> >\n> > If you're not sure which table is the problem, try doing an EXPLAIN\n> > ANALYZE of a DELETE that will only remove a few rows. You should\n> > see some time blamed on a trigger associated with the FK constraint.\n>\n> You've answered this question (with the same answer) what feels like a\n> gazillion times. 
I guess the underlying problem is that EXPLAIN is,\n> uh, not explaining things very well.\n>\n\nWhat about a warning on creation?\n\ndb=> create table foo(i integer primary key);\ndb=> create table bar(j integer primary key, i integer);\ndb=> alter table bar add constraint fk_bar foreign key(i) references foo(i);\nWARNING: fk_bar: column bar(i) has no index, deletions on table foo may be\nslow.\n\n\nIt might save some fraction of these questions.\n\nCraig\n\n\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nOn Thu, Nov 12, 2015 at 7:12 AM, Merlin Moncure <[email protected]> wrote:On Wed, Nov 11, 2015 at 1:33 PM, Tom Lane <[email protected]> wrote:\n> Massalin Yerzhan <[email protected]> writes:\n>> I'm having an issue. The query never ends:\n>> delete from bb_gamelist_league;\n>\n> 9 times out of 10, the answer to this type of problem is that you have\n> some table referencing this one by a foreign key, and the referencing\n> column is not indexed.  PG doesn't require such an index, but lack of\n> one will mean that retail checks or deletions of referencing rows are\n> really slow.\n>\n> If you're not sure which table is the problem, try doing an EXPLAIN\n> ANALYZE of a DELETE that will only remove a few rows.  You should\n> see some time blamed on a trigger associated with the FK constraint.\n\nYou've answered this question (with the same answer) what feels like a\ngazillion times.  I guess the underlying problem is that EXPLAIN is,\nuh, not explaining things very well.What about a warning on creation?db=> create table foo(i integer primary key);db=> create table bar(j integer primary key, i integer);db=> alter table bar add constraint fk_bar foreign key(i) references foo(i);WARNING: fk_bar: column bar(i) has no index, deletions on table foo may be slow.It might save some fraction of these questions.Craig \n\nmerlin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Thu, 12 Nov 2015 07:48:50 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" }, { "msg_contents": "On Thu, Nov 12, 2015 at 9:48 AM, Craig James <[email protected]> wrote:\n>\n> On Thu, Nov 12, 2015 at 7:12 AM, Merlin Moncure <[email protected]> wrote:\n>>\n>> On Wed, Nov 11, 2015 at 1:33 PM, Tom Lane <[email protected]> wrote:\n>> > Massalin Yerzhan <[email protected]> writes:\n>> >> I'm having an issue. The query never ends:\n>> >> delete from bb_gamelist_league;\n>> >\n>> > 9 times out of 10, the answer to this type of problem is that you have\n>> > some table referencing this one by a foreign key, and the referencing\n>> > column is not indexed. PG doesn't require such an index, but lack of\n>> > one will mean that retail checks or deletions of referencing rows are\n>> > really slow.\n>> >\n>> > If you're not sure which table is the problem, try doing an EXPLAIN\n>> > ANALYZE of a DELETE that will only remove a few rows. 
You should\n>> > see some time blamed on a trigger associated with the FK constraint.\n>>\n>> You've answered this question (with the same answer) what feels like a\n>> gazillion times. I guess the underlying problem is that EXPLAIN is,\n>> uh, not explaining things very well.\n>\n>\n> What about a warning on creation?\n>\n> db=> create table foo(i integer primary key);\n> db=> create table bar(j integer primary key, i integer);\n> db=> alter table bar add constraint fk_bar foreign key(i) references foo(i);\n> WARNING: fk_bar: column bar(i) has no index, deletions on table foo may be\n> slow.\n>\n> It might save some fraction of these questions.\n\nMaybe, but I wonder if this would cause pg_restore to bleat warnings\nwhen restoring. I was hoping that explain could report potential\nirregularities, but Tom's comments seem to suggest difficulties there.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Nov 2015 16:07:29 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Thu, Nov 12, 2015 at 9:48 AM, Craig James <[email protected]> wrote:\n>> What about a warning on creation?\n>> \n>> db=> create table foo(i integer primary key);\n>> db=> create table bar(j integer primary key, i integer);\n>> db=> alter table bar add constraint fk_bar foreign key(i) references foo(i);\n>> WARNING: fk_bar: column bar(i) has no index, deletions on table foo may be\n>> slow.\n>> \n>> It might save some fraction of these questions.\n\n> Maybe, but I wonder if this would cause pg_restore to bleat warnings\n> when restoring.\n\nWe could probably teach pg_dump to put index definitions before FKs, if it\ndoesn't already. But I'm suspicious of this sort of \"training wheels\"\nwarning --- we've had roughly similar messages in the past and removed\nthem because too many people complained about them.\n\nWorth noting in particular is that there would be no way to avoid the\nwarning unless you split out the FK declaration to a separate \"alter table\nadd constraint\" step. pg_dump does that anyway, but manual schema\ndefinitions likely would look more like\n\ncreate table foo(i integer primary key);\ncreate table bar(j integer primary key, i integer references foo);\ncreate index on bar(i);\n\nwhich would provoke the warning. 
I fear a warning like that would have\na very short life expectancy.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Nov 2015 17:26:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" }, { "msg_contents": "On Thu, Nov 12, 2015 at 4:26 PM, Tom Lane <[email protected]> wrote:\n> Merlin Moncure <[email protected]> writes:\n>> On Thu, Nov 12, 2015 at 9:48 AM, Craig James <[email protected]> wrote:\n>>> What about a warning on creation?\n>>>\n>>> db=> create table foo(i integer primary key);\n>>> db=> create table bar(j integer primary key, i integer);\n>>> db=> alter table bar add constraint fk_bar foreign key(i) references foo(i);\n>>> WARNING: fk_bar: column bar(i) has no index, deletions on table foo may be\n>>> slow.\n>>>\n>>> It might save some fraction of these questions.\n>\n>> Maybe, but I wonder if this would cause pg_restore to bleat warnings\n>> when restoring.\n>\n> We could probably teach pg_dump to put index definitions before FKs, if it\n> doesn't already. But I'm suspicious of this sort of \"training wheels\"\n> warning --- we've had roughly similar messages in the past and removed\n> them because too many people complained about them.\n\nFor posterity, indexes are the last step -- and I think that's a good\nway to do things. As to the broader point, I agree. Warnings should\nbe reserved for things that are demonstrably dubious, and there are\njust too many situations where that doesn't apply for an unindexed\nforeign constraint. Oh well.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 Nov 2015 08:15:06 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" }, { "msg_contents": "On Fri, Nov 13, 2015 at 7:15 AM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Nov 12, 2015 at 4:26 PM, Tom Lane <[email protected]> wrote:\n>> Merlin Moncure <[email protected]> writes:\n>>> On Thu, Nov 12, 2015 at 9:48 AM, Craig James <[email protected]> wrote:\n>>>> What about a warning on creation?\n>>>>\n>>>> db=> create table foo(i integer primary key);\n>>>> db=> create table bar(j integer primary key, i integer);\n>>>> db=> alter table bar add constraint fk_bar foreign key(i) references foo(i);\n>>>> WARNING: fk_bar: column bar(i) has no index, deletions on table foo may be\n>>>> slow.\n>>>>\n>>>> It might save some fraction of these questions.\n>>\n>>> Maybe, but I wonder if this would cause pg_restore to bleat warnings\n>>> when restoring.\n>>\n>> We could probably teach pg_dump to put index definitions before FKs, if it\n>> doesn't already. But I'm suspicious of this sort of \"training wheels\"\n>> warning --- we've had roughly similar messages in the past and removed\n>> them because too many people complained about them.\n>\n> For posterity, indexes are the last step -- and I think that's a good\n> way to do things. As to the broader point, I agree. Warnings should\n> be reserved for things that are demonstrably dubious, and there are\n> just too many situations where that doesn't apply for an unindexed\n> foreign constraint. 
Oh well.\n\nIf they were implemented as a notice that would be different. Much\nlike the notice you get about index creation on PK / Unique constraint\ncreation. But I'm not 100% sure it's a good idea.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 Nov 2015 08:51:30 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple delete query is taking too long (never ends)" } ]
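A short sketch of the diagnostic Tom recommends at the top of the thread, using the poster's table; the referencing table and column in the last statement are placeholders, since the thread never identifies which table has the unindexed foreign key pointing at bb_gamelist_league.

    -- Delete a single row inside a transaction and look for lines such as
    --   Trigger for constraint <fk_name>: time=... calls=...
    -- in the output; the trigger with the large time names the offending FK.
    BEGIN;
    EXPLAIN ANALYZE
    DELETE FROM bb_gamelist_league WHERE id = 1;   -- any one existing id
    ROLLBACK;

    -- Then index the referencing column of that table (placeholder names):
    CREATE INDEX ON referencing_table (bb_gamelist_league_id);

With that index in place, the full DELETE should run in a reasonable time rather than spinning indefinitely.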
[ { "msg_contents": "Hello,\n\nI stumbled over this answer: http://stackoverflow.com/a/9717125/330315 and this sounded quite strange to me. \n\nSo I ran this on my Windows laptop with Postgres 9.4.5, 64bit and indeed now()::date is much faster than current_date:\n\n explain analyze\n select current_date\n from generate_series (1, 1000000);\n\n Function Scan on generate_series (cost=0.00..6.00 rows=1000 width=0) (actual time=243.878..1451.839 rows=1000000 loops=1)\n Planning time: 0.047 ms\n Execution time: 1517.881 ms\n\nAnd:\n\n explain analyze\n select now()::date\n from generate_series (1, 1000000);\n\n Function Scan on generate_series (cost=0.00..6.00 rows=1000 width=0) (actual time=244.491..785.819 rows=1000000 loops=1)\n Planning time: 0.037 ms\n Execution time: 826.612 ms\n\nRunning this on a CentOS 6.6. test server (Postgres 9.4.1, 64bit), there is still a difference, but not as big as on Windows:\n\n explain analyze\n select current_date\n from generate_series (1, 1000000);\n\n Function Scan on generate_series (cost=0.00..15.00 rows=1000 width=0) (actual time=233.599..793.032 rows=1000000 loops=1)\n Planning time: 0.087 ms\n Execution time: 850.198 ms\n\nAnd\n\n explain analyze\n select now()::date\n from generate_series (1, 1000000);\n\n Function Scan on generate_series (cost=0.00..15.00 rows=1000 width=0) (actual time=198.385..570.171 rows=1000000 loops=1)\n Planning time: 0.074 ms\n Execution time: 623.211 ms\n\nAny ideas? \n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Nov 2015 09:49:18 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": true, "msg_subject": "Why is now()::date so much faster than current_date" }, { "msg_contents": "On 17 November 2015 at 21:49, Thomas Kellerer <[email protected]> wrote:\n\n> Hello,\n>\n> I stumbled over this answer: http://stackoverflow.com/a/9717125/330315\n> and this sounded quite strange to me.\n>\n> So I ran this on my Windows laptop with Postgres 9.4.5, 64bit and indeed\n> now()::date is much faster than current_date:\n>\n> explain analyze\n> select current_date\n> from generate_series (1, 1000000);\n>\n> Function Scan on generate_series (cost=0.00..6.00 rows=1000 width=0)\n> (actual time=243.878..1451.839 rows=1000000 loops=1)\n> Planning time: 0.047 ms\n> Execution time: 1517.881 ms\n>\n> And:\n>\n> explain analyze\n> select now()::date\n> from generate_series (1, 1000000);\n>\n> Function Scan on generate_series (cost=0.00..6.00 rows=1000 width=0)\n> (actual time=244.491..785.819 rows=1000000 loops=1)\n> Planning time: 0.037 ms\n> Execution time: 826.612 ms\n>\n>\n>\nThe key to this is in the EXPLAIN VERBOSE output:\n\npostgres=# explain verbose select current_date;\n QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0)\n Output: ('now'::cstring)::date\n(2 rows)\n\nYou can see that the implementation of current_date requires using the\ndate_in() function as well as the date_out() function. date_in() parses the\n'now' string, then the resulting date is converted back into a date string\nwith date_out(). 
Using now()::date does not have to parse any date\nstrings, it just needs to call date_out() to give the final output.\n\nThe reason for this is likely best explained by the comment in gram.y:\n\n/*\n* Translate as \"'now'::text::date\".\n*\n* We cannot use \"'now'::date\" because coerce_type() will\n* immediately reduce that to a constant representing\n* today's date. We need to delay the conversion until\n* runtime, else the wrong things will happen when\n* CURRENT_DATE is used in a column default value or rule.\n*\n* This could be simplified if we had a way to generate\n* an expression tree representing runtime application\n* of type-input conversion functions. (As of PG 7.3\n* that is actually possible, but not clear that we want\n* to rely on it.)\n*\n* The token location is attached to the run-time\n* typecast, not to the Const, for the convenience of\n* pg_stat_statements (which doesn't want these constructs\n* to appear to be replaceable constants).\n*/\n\n--\n David Rowley http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\n PostgreSQL Development, 24x7 Support, Training & Services\n\nOn 17 November 2015 at 21:49, Thomas Kellerer <[email protected]> wrote:Hello,\n\nI stumbled over this answer: http://stackoverflow.com/a/9717125/330315 and this sounded quite strange to me.\n\nSo I ran this on my Windows laptop with Postgres 9.4.5, 64bit and indeed now()::date is much faster than current_date:\n\n  explain analyze\n  select current_date\n  from   generate_series (1, 1000000);\n\n  Function Scan on generate_series  (cost=0.00..6.00 rows=1000 width=0) (actual time=243.878..1451.839 rows=1000000 loops=1)\n  Planning time: 0.047 ms\n  Execution time: 1517.881 ms\n\nAnd:\n\n  explain analyze\n  select now()::date\n  from   generate_series (1, 1000000);\n\n  Function Scan on generate_series  (cost=0.00..6.00 rows=1000 width=0) (actual time=244.491..785.819 rows=1000000 loops=1)\n  Planning time: 0.037 ms\n  Execution time: 826.612 ms\nThe key to this is in the EXPLAIN VERBOSE output:postgres=# explain verbose select current_date;                QUERY PLAN------------------------------------------ Result  (cost=0.00..0.01 rows=1 width=0)   Output: ('now'::cstring)::date(2 rows) You can see that the implementation of current_date requires using the date_in() function as well as the date_out() function. date_in() parses the 'now' string, then the resulting date is converted back into a date string with date_out().  Using now()::date does not have to parse any date strings, it just needs to call date_out() to give the final output.The reason for this is likely best explained by the comment in gram.y: /* * Translate as \"'now'::text::date\". * * We cannot use \"'now'::date\" because coerce_type() will * immediately reduce that to a constant representing * today's date.  We need to delay the conversion until * runtime, else the wrong things will happen when * CURRENT_DATE is used in a column default value or rule. * * This could be simplified if we had a way to generate * an expression tree representing runtime application * of type-input conversion functions.  (As of PG 7.3 * that is actually possible, but not clear that we want * to rely on it.) * * The token location is attached to the run-time * typecast, not to the Const, for the convenience of * pg_stat_statements (which doesn't want these constructs * to appear to be replaceable constants). 
*/-- David Rowley                   http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 18 Nov 2015 00:03:57 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is now()::date so much faster than current_date" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On 17 November 2015 at 21:49, Thomas Kellerer <[email protected]> wrote:\n>> So I ran this on my Windows laptop with Postgres 9.4.5, 64bit and indeed\n>> now()::date is much faster than current_date:\n\n> You can see that the implementation of current_date requires using the\n> date_in() function as well as the date_out() function. date_in() parses the\n> 'now' string, then the resulting date is converted back into a date string\n> with date_out(). Using now()::date does not have to parse any date\n> strings, it just needs to call date_out() to give the final output.\n\nActually, in the context of EXPLAIN ANALYZE, date_out() will never be\ninvoked at all --- EXPLAIN just throws away the query output without\nbothering to transform it to text first. So what we're really comparing\nis timestamptz_date(now()) versus date_in('now'). The useful work ends\nup being exactly the same in either code path, but date_in has to expend\nadditional cycles on parsing the string and recognizing that it means\nDTK_NOW.\n\n> The reason for this is likely best explained by the comment in gram.y:\n\nThat bit of gram.y is just an old bad decision though, along with similar\nchoices for some other SQL special functions. Quite aside from any\nefficiency issues, doing things this way makes it impossible to\nreverse-list a call of CURRENT_DATE as CURRENT_DATE, which we really\nought to do if we pretend to be a SQL-compliant RDBMS. And it's just\nugly at a code level too: the raw grammar is not the place to encode\nimplementation decisions like these.\n\nI've had it on my to-do list for awhile to replace those things with\nsome new expression node type, but haven't got round to it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Nov 2015 10:26:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is now()::date so much faster than current_date" } ]
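The difference is also visible without the million-row loop by comparing the expression trees. The first commented output line below is quoted from David's reply above; the second is what I would expect for the cast form on the same version, shown as an illustration rather than a verified capture.

    EXPLAIN VERBOSE SELECT current_date;
    --  Output: ('now'::cstring)::date    (parses the string 'now' at run time)

    EXPLAIN VERBOSE SELECT now()::date;
    --  Output: (now())::date             (no date-string parsing, just the cast)

Either way the gap only matters when the expression is evaluated once per row, as in the generate_series tests at the top of the thread.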
[ { "msg_contents": "I have a query that produce a different query plan if I put a trim in one\nof the columns in the order by.\n\n\nWhen i put the trim in any column it use hashaggregate and took 3 seconds\nagainst 30 when not.\n\n\nIs wear because the columns is clean not need to be trimmed, I have check\nthat.\n\n\nThe problem is that I can't change the query because is generate by the\nmondrian.\n\n\nI do research and found in postgres list that I need to crank work_mem up\nhigh but don't work for me.\n\n\n\nMy postgresql.conf\n\n\n# Add settings for extensions here\n\n\ndefault_statistics_target = 50 # pgtune wizard 2014-06-04\n\n\nmaintenance_work_mem = 1GB # pgtune wizard 2014-06-04\n\n\nconstraint_exclusion = on # pgtune wizard 2014-06-04\n\n\ncheckpoint_completion_target = 0.9 # pgtune wizard 2014-06-04\n\n\neffective_cache_size = 44GB # pgtune wizard 2014-06-04\n\n\nwork_mem = 1536MB # pgtune wizard 2014-06-04\n\n\n#work_mem = 16GB # I have try this but don't work\n\n\nwal_buffers = 32MB # pgtune wizard 2014-06-04\n\n\ncheckpoint_segments = 16 # pgtune wizard 2014-06-04\n\n\nshared_buffers = 15GB # pgtune wizard 2014-06-04\n\n\nmax_connections = 20 # pgtune wizard 2014-06-04\n\n\n\n\n___________________________________________________\n\n\n\n\nQuery with trim\n\n\nSELECT \"dim_cliente\".\"tipocliente\" AS \"c0\",\n\n\n \"dim_cliente\".\"a1_ibge\" AS \"c1\",\n\n\n \"dim_cliente\".\"a1_cod\" AS \"c2\",\n\n\n \"dim_cliente\".\"a1_nome\" AS \"c3\",\n\n\n \"dim_vendedor\".\"a3_nome\" AS \"c4\"\n\n\n FROM \"public\".\"dim_cliente\" AS \"dim_cliente\",\n\n\n \"public\".\"fato_ventas_productos\" AS \"fato_ventas_productos\",\n\n\n \"public\".\"dim_vendedor\" AS \"dim_vendedor\"\n\n\n WHERE \"fato_ventas_productos\".\"key_cliente\" = \"dim_cliente\".\"key_cliente\"\n\n\n AND \"fato_ventas_productos\".\"key_vendedor\" =\n\"dim_vendedor\".\"key_vendedor\"\n\n\n GROUP\n\n\n BY \"dim_cliente\".\"tipocliente\" ,\n\n\n \"dim_cliente\".\"a1_ibge\",\n\n\n \"dim_cliente\".\"a1_cod\",\n\n\n \"dim_cliente\".\"a1_nome\",\n\n\n \"dim_vendedor\".\"a3_nome\"\n\n\n ORDER\n\n\n BY trim(\"dim_cliente\".\"tipocliente\") ASC NULLS LAST,\n\n\n \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST, -- the same result if I put\nthe trim here\n\n\n \"dim_cliente\".\"a1_cod\" ASC NULLS LAST, -- or here\n\n\n \"dim_cliente\".\"a1_nome\" ASC NULLS LAST; -- or here\n\n\n-- this query took 3845.895 ms\n\n\n\n\n___________________________________________________\n\n\n\n\nQuery Plan when using trim\n\n\n\n QUERY PLAN\n\n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Sort (cost=193101.41..195369.80 rows=907357 width=129) (actual\ntime=3828.176..3831.261 rows=43615 loops=1)\n\n\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome,\n(btrim((dim_cliente.tipocliente)::text))\n\n\n Sort Key: (btrim((dim_cliente.tipocliente)::text)), dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome\n\n\n Sort Method: quicksort Memory: 13121kB\n\n\n -> HashAggregate (cost=91970.52..103312.49 rows=907357 width=129)\n(actual time=2462.690..2496.729 rows=43615 loops=1)\n\n\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome,\nbtrim((dim_cliente.tipocliente)::text)\n\n\n -> Hash Join (cost=856.30..80628.56 rows=907357 width=129)\n(actual time=29.524..1533.880 rows=907357 loops=1)\n\n\n 
Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n\n\n Hash Cond: (fato_ventas_productos.key_vendedor =\ndim_vendedor.key_vendedor)\n\n\n -> Hash Join (cost=830.02..68126.13 rows=907357 width=86)\n(actual time=28.746..1183.691 rows=907357 loops=1)\n\n\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, fato_ventas_productos.key_vendedor\n\n\n Hash Cond: (fato_ventas_productos.key_cliente =\ndim_cliente.key_cliente)\n\n\n -> Seq Scan on public.fato_ventas_productos\n (cost=0.00..46880.57 rows=907357 width=16) (actual time=0.004..699.779\nrows=907357 loops=1)\n\n\n Output: fato_ventas_productos.key_cliente,\nfato_ventas_productos.key_vendedor\n\n\n -> Hash (cost=618.90..618.90 rows=16890 width=86)\n(actual time=28.699..28.699 rows=16890 loops=1)\n\n\n Output: dim_cliente.tipocliente,\ndim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome,\ndim_cliente.key_cliente\n\n\n Buckets: 2048 Batches: 1 Memory Usage: 1980kB\n\n\n -> Seq Scan on public.dim_cliente\n (cost=0.00..618.90 rows=16890 width=86) (actual time=0.008..16.537\nrows=16890 loops=1)\n\n\n Output: dim_cliente.tipocliente,\ndim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome,\ndim_cliente.key_cliente\n\n\n -> Hash (cost=18.90..18.90 rows=590 width=59) (actual\ntime=0.747..0.747 rows=590 loops=1)\n\n\n Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor\n\n\n Buckets: 1024 Batches: 1 Memory Usage: 56kB\n\n\n -> Seq Scan on public.dim_vendedor (cost=0.00..18.90\nrows=590 width=59) (actual time=0.026..0.423 rows=590 loops=1)\n\n\n Output: dim_vendedor.a3_nome,\ndim_vendedor.key_vendedor\n\n\n Total runtime: 3845.895 ms\n\n\n(25 filas)\n\n\n\n___________________________________________________\n\n\nQuery without trim\n\nSELECT \"dim_cliente\".\"tipocliente\" AS \"c0\",\n\n \"dim_cliente\".\"a1_ibge\" AS \"c1\",\n\n \"dim_cliente\".\"a1_cod\" AS \"c2\",\n\n \"dim_cliente\".\"a1_nome\" AS \"c3\",\n\n \"dim_vendedor\".\"a3_nome\" AS \"c4\"\n\n FROM \"public\".\"dim_cliente\" AS \"dim_cliente\",\n\n \"public\".\"fato_ventas_productos\" AS \"fato_ventas_productos\",\n\n \"public\".\"dim_vendedor\" AS \"dim_vendedor\"\n\n WHERE \"fato_ventas_productos\".\"key_cliente\" = \"dim_cliente\".\"key_cliente\"\n\n AND \"fato_ventas_productos\".\"key_vendedor\" =\n\"dim_vendedor\".\"key_vendedor\"\n\n GROUP\n\n BY \"dim_cliente\".\"tipocliente\" ,\n\n \"dim_cliente\".\"a1_ibge\",\n\n \"dim_cliente\".\"a1_cod\",\n\n \"dim_cliente\".\"a1_nome\",\n\n \"dim_vendedor\".\"a3_nome\"\n\n ORDER\n\n BY \"dim_cliente\".\"tipocliente\" ASC NULLS LAST,\n\n \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST,\n\n \"dim_cliente\".\"a1_cod\" ASC NULLS LAST,\n\n \"dim_cliente\".\"a1_nome\" ASC NULLS LAST;\n\n-- this query took 37249.268 ms\n\n\n___________________________________________________\n\n\nQuery Plan when not using trim\n\n\nQUERY PLAN\n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Group (cost=170417.48..184027.84 rows=907357 width=129) (actual\ntime=36649.329..37235.158 rows=43615 loops=1)\n\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n\n -> Sort (cost=170417.48..172685.88 rows=907357 width=129) (actual\ntime=36649.315..36786.760 rows=907357 loops=1)\n\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, 
dim_cliente.a1_nome, dim_vendedor.a3_nome\n\n Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n\n Sort Method: quicksort Memory: 265592kB\n\n -> Hash Join (cost=856.30..80628.56 rows=907357 width=129)\n(actual time=26.719..1593.693 rows=907357 loops=1)\n\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n\n Hash Cond: (fato_ventas_productos.key_vendedor =\ndim_vendedor.key_vendedor)\n\n -> Hash Join (cost=830.02..68126.13 rows=907357 width=86)\n(actual time=25.980..1203.775 rows=907357 loops=1)\n\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\ndim_cliente.a1_cod, dim_cliente.a1_nome, fato_ventas_productos.key_vendedor\n\n Hash Cond: (fato_ventas_productos.key_cliente =\ndim_cliente.key_cliente)\n\n -> Seq Scan on public.fato_ventas_productos\n (cost=0.00..46880.57 rows=907357 width=16) (actual time=0.004..680.283\nrows=907357 loops=1)\n\n Output: fato_ventas_productos.key_cliente,\nfato_ventas_productos.key_vendedor\n\n -> Hash (cost=618.90..618.90 rows=16890 width=86)\n(actual time=25.931..25.931 rows=16890 loops=1)\n\n Output: dim_cliente.tipocliente,\ndim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome,\ndim_cliente.key_cliente\n\n Buckets: 2048 Batches: 1 Memory Usage: 1980kB\n\n -> Seq Scan on public.dim_cliente\n (cost=0.00..618.90 rows=16890 width=86) (actual time=0.005..13.736\nrows=16890 loops=1)\n\n Output: dim_cliente.tipocliente,\ndim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome,\ndim_cliente.key_cliente\n\n -> Hash (cost=18.90..18.90 rows=590 width=59) (actual\ntime=0.715..0.715 rows=590 loops=1)\n\n Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor\n\n Buckets: 1024 Batches: 1 Memory Usage: 56kB\n\n -> Seq Scan on public.dim_vendedor (cost=0.00..18.90\nrows=590 width=59) (actual time=0.024..0.405 rows=590 loops=1)\n\n Output: dim_vendedor.a3_nome,\ndim_vendedor.key_vendedor\n\n Total runtime: 37249.268 ms\n\n(25 filas)\n\n\n___________________________________________________\n\n\nIs anything that I can do to solve this problem, is that a bug or a config\nproblem?\n\n\nHere the link with a dump of the tables\n\nhttps://drive.google.com/file/d/0Bwupj61i9BtWZ1NiVXltaWc0dnM/view?usp=sharing\n\n\nI appreciate your help\n\nI have a query that produce a different query plan if I put a trim in one of the columns in the order by.When i put the trim in any column it use hashaggregate and took 3 seconds against 30 when not.Is wear because the columns is clean not need to be trimmed, I have check that.The problem is that I can't change the query because is generate by the mondrian.I do research and found in postgres list that I need to crank work_mem up high but don't work for me.My postgresql.conf# Add settings for extensions heredefault_statistics_target = 50 # pgtune wizard 2014-06-04maintenance_work_mem = 1GB # pgtune wizard 2014-06-04constraint_exclusion = on # pgtune wizard 2014-06-04checkpoint_completion_target = 0.9 # pgtune wizard 2014-06-04effective_cache_size = 44GB # pgtune wizard 2014-06-04work_mem = 1536MB # pgtune wizard 2014-06-04#work_mem = 16GB # I have try this but don't workwal_buffers = 32MB # pgtune wizard 2014-06-04checkpoint_segments = 16 # pgtune wizard 2014-06-04shared_buffers = 15GB # pgtune wizard 2014-06-04max_connections = 20 # pgtune wizard 2014-06-04___________________________________________________Query with trimSELECT \"dim_cliente\".\"tipocliente\" AS \"c0\",        
\"dim_cliente\".\"a1_ibge\" AS \"c1\",        \"dim_cliente\".\"a1_cod\" AS \"c2\",        \"dim_cliente\".\"a1_nome\" AS \"c3\",        \"dim_vendedor\".\"a3_nome\" AS \"c4\"   FROM \"public\".\"dim_cliente\" AS \"dim_cliente\",        \"public\".\"fato_ventas_productos\" AS \"fato_ventas_productos\",        \"public\".\"dim_vendedor\" AS \"dim_vendedor\"  WHERE \"fato_ventas_productos\".\"key_cliente\" = \"dim_cliente\".\"key_cliente\"    AND \"fato_ventas_productos\".\"key_vendedor\" = \"dim_vendedor\".\"key_vendedor\"  GROUP     BY \"dim_cliente\".\"tipocliente\" ,        \"dim_cliente\".\"a1_ibge\",        \"dim_cliente\".\"a1_cod\",        \"dim_cliente\".\"a1_nome\",        \"dim_vendedor\".\"a3_nome\"  ORDER     BY trim(\"dim_cliente\".\"tipocliente\") ASC NULLS LAST,        \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST, -- the same result if I put the trim here       \"dim_cliente\".\"a1_cod\" ASC NULLS LAST, -- or here       \"dim_cliente\".\"a1_nome\" ASC NULLS LAST; -- or here-- this query took 3845.895 ms___________________________________________________Query Plan when using trim                                                                             QUERY PLAN                                                                              --------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=193101.41..195369.80 rows=907357 width=129) (actual time=3828.176..3831.261 rows=43615 loops=1)   Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, (btrim((dim_cliente.tipocliente)::text))   Sort Key: (btrim((dim_cliente.tipocliente)::text)), dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome   Sort Method: quicksort  Memory: 13121kB   ->  HashAggregate  (cost=91970.52..103312.49 rows=907357 width=129) (actual time=2462.690..2496.729 rows=43615 loops=1)         Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, btrim((dim_cliente.tipocliente)::text)         ->  Hash Join  (cost=856.30..80628.56 rows=907357 width=129) (actual time=29.524..1533.880 rows=907357 loops=1)               Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome               Hash Cond: (fato_ventas_productos.key_vendedor = dim_vendedor.key_vendedor)               ->  Hash Join  (cost=830.02..68126.13 rows=907357 width=86) (actual time=28.746..1183.691 rows=907357 loops=1)                     Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, fato_ventas_productos.key_vendedor                     Hash Cond: (fato_ventas_productos.key_cliente = dim_cliente.key_cliente)                     ->  Seq Scan on public.fato_ventas_productos  (cost=0.00..46880.57 rows=907357 width=16) (actual time=0.004..699.779 rows=907357 loops=1)                           Output: fato_ventas_productos.key_cliente, fato_ventas_productos.key_vendedor                     ->  Hash  (cost=618.90..618.90 rows=16890 width=86) (actual time=28.699..28.699 rows=16890 loops=1)                           Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente                           Buckets: 2048  Batches: 1  Memory Usage: 1980kB                           ->  Seq Scan on public.dim_cliente  (cost=0.00..618.90 rows=16890 
width=86) (actual time=0.008..16.537 rows=16890 loops=1)                                 Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente               ->  Hash  (cost=18.90..18.90 rows=590 width=59) (actual time=0.747..0.747 rows=590 loops=1)                     Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor                     Buckets: 1024  Batches: 1  Memory Usage: 56kB                     ->  Seq Scan on public.dim_vendedor  (cost=0.00..18.90 rows=590 width=59) (actual time=0.026..0.423 rows=590 loops=1)                           Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor Total runtime: 3845.895 ms(25 filas)___________________________________________________Query without trimSELECT \"dim_cliente\".\"tipocliente\" AS \"c0\",        \"dim_cliente\".\"a1_ibge\" AS \"c1\",        \"dim_cliente\".\"a1_cod\" AS \"c2\",        \"dim_cliente\".\"a1_nome\" AS \"c3\",        \"dim_vendedor\".\"a3_nome\" AS \"c4\"   FROM \"public\".\"dim_cliente\" AS \"dim_cliente\",        \"public\".\"fato_ventas_productos\" AS \"fato_ventas_productos\",        \"public\".\"dim_vendedor\" AS \"dim_vendedor\"  WHERE \"fato_ventas_productos\".\"key_cliente\" = \"dim_cliente\".\"key_cliente\"    AND \"fato_ventas_productos\".\"key_vendedor\" = \"dim_vendedor\".\"key_vendedor\"  GROUP     BY \"dim_cliente\".\"tipocliente\" ,        \"dim_cliente\".\"a1_ibge\",        \"dim_cliente\".\"a1_cod\",        \"dim_cliente\".\"a1_nome\",        \"dim_vendedor\".\"a3_nome\"  ORDER     BY \"dim_cliente\".\"tipocliente\" ASC NULLS LAST,        \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST,        \"dim_cliente\".\"a1_cod\" ASC NULLS LAST,        \"dim_cliente\".\"a1_nome\" ASC NULLS LAST;-- this query took 37249.268 ms___________________________________________________Query Plan when not using trim                                                                          QUERY PLAN                                                                           --------------------------------------------------------------------------------------------------------------------------------------------------------------- Group  (cost=170417.48..184027.84 rows=907357 width=129) (actual time=36649.329..37235.158 rows=43615 loops=1)   Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome   ->  Sort  (cost=170417.48..172685.88 rows=907357 width=129) (actual time=36649.315..36786.760 rows=907357 loops=1)         Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome         Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome         Sort Method: quicksort  Memory: 265592kB         ->  Hash Join  (cost=856.30..80628.56 rows=907357 width=129) (actual time=26.719..1593.693 rows=907357 loops=1)               Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome               Hash Cond: (fato_ventas_productos.key_vendedor = dim_vendedor.key_vendedor)               ->  Hash Join  (cost=830.02..68126.13 rows=907357 width=86) (actual time=25.980..1203.775 rows=907357 loops=1)                     Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, fato_ventas_productos.key_vendedor                     Hash Cond: (fato_ventas_productos.key_cliente = dim_cliente.key_cliente)  
                   ->  Seq Scan on public.fato_ventas_productos  (cost=0.00..46880.57 rows=907357 width=16) (actual time=0.004..680.283 rows=907357 loops=1)                           Output: fato_ventas_productos.key_cliente, fato_ventas_productos.key_vendedor                     ->  Hash  (cost=618.90..618.90 rows=16890 width=86) (actual time=25.931..25.931 rows=16890 loops=1)                           Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente                           Buckets: 2048  Batches: 1  Memory Usage: 1980kB                           ->  Seq Scan on public.dim_cliente  (cost=0.00..618.90 rows=16890 width=86) (actual time=0.005..13.736 rows=16890 loops=1)                                 Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente               ->  Hash  (cost=18.90..18.90 rows=590 width=59) (actual time=0.715..0.715 rows=590 loops=1)                     Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor                     Buckets: 1024  Batches: 1  Memory Usage: 56kB                     ->  Seq Scan on public.dim_vendedor  (cost=0.00..18.90 rows=590 width=59) (actual time=0.024..0.405 rows=590 loops=1)                           Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor Total runtime: 37249.268 ms(25 filas)___________________________________________________Is anything that I can do to solve this problem, is that a bug or a config problem?Here the link with a dump of the tables https://drive.google.com/file/d/0Bwupj61i9BtWZ1NiVXltaWc0dnM/view?usp=sharingI appreciate your help", "msg_date": "Wed, 25 Nov 2015 11:15:31 -0300", "msg_from": "Blas Pico <[email protected]>", "msg_from_op": true, "msg_subject": "Query that took a lot of time in Postgresql when not using trim in\n order by" }, { "msg_contents": "On 25.11.2015 17:15, Blas Pico wrote:\n>\n> I have a query that produce a different query plan if I put a trim in \n> one of the columns in the order by.\n>\n>\n> When i put the trim in any column it use hashaggregate and took 3 \n> seconds against 30 when not.\n>\n>\n> Is wear because the columns is clean not need to be trimmed, I have \n> check that.\n>\n>\n> The problem is that I can't change the query because is generate by \n> the mondrian.\n>\n>\n> I do research and found in postgres list that I need to crank work_mem \n> up high but don't work for me.\n>\n>\n>\n> My postgresql.conf\n>\n>\n> # Add settings for extensions here\n>\n>\n> default_statistics_target = 50 # pgtune wizard 2014-06-04\n>\n>\n> maintenance_work_mem = 1GB # pgtune wizard 2014-06-04\n>\n>\n> constraint_exclusion = on # pgtune wizard 2014-06-04\n>\n>\n> checkpoint_completion_target = 0.9 # pgtune wizard 2014-06-04\n>\n>\n> effective_cache_size = 44GB # pgtune wizard 2014-06-04\n>\n>\n> work_mem = 1536MB # pgtune wizard 2014-06-04\n>\n>\n> #work_mem = 16GB # I have try this but don't work\n>\n>\n> wal_buffers = 32MB # pgtune wizard 2014-06-04\n>\n>\n> checkpoint_segments = 16 # pgtune wizard 2014-06-04\n>\n>\n> shared_buffers = 15GB # pgtune wizard 2014-06-04\n>\n>\n> max_connections = 20 # pgtune wizard 2014-06-04\n>\n>\n>\n>\n> ___________________________________________________\n>\n>\n>\n>\n> Query with trim\n>\n>\n> SELECT \"dim_cliente\".\"tipocliente\" AS \"c0\",\n>\n>\n> \"dim_cliente\".\"a1_ibge\" AS \"c1\",\n>\n>\n> \"dim_cliente\".\"a1_cod\" AS \"c2\",\n>\n>\n> \"dim_cliente\".\"a1_nome\" AS \"c3\",\n>\n>\n> 
\"dim_vendedor\".\"a3_nome\" AS \"c4\"\n>\n>\n> FROM \"public\".\"dim_cliente\" AS \"dim_cliente\",\n>\n>\n> \"public\".\"fato_ventas_productos\" AS \"fato_ventas_productos\",\n>\n>\n> \"public\".\"dim_vendedor\" AS \"dim_vendedor\"\n>\n>\n> WHERE \"fato_ventas_productos\".\"key_cliente\" = \n> \"dim_cliente\".\"key_cliente\"\n>\n>\n> AND \"fato_ventas_productos\".\"key_vendedor\" = \n> \"dim_vendedor\".\"key_vendedor\"\n>\n>\n> GROUP\n>\n>\n> BY \"dim_cliente\".\"tipocliente\" ,\n>\n>\n> \"dim_cliente\".\"a1_ibge\",\n>\n>\n> \"dim_cliente\".\"a1_cod\",\n>\n>\n> \"dim_cliente\".\"a1_nome\",\n>\n>\n> \"dim_vendedor\".\"a3_nome\"\n>\n>\n> ORDER\n>\n>\n> BY trim(\"dim_cliente\".\"tipocliente\") ASC NULLS LAST,\n>\n>\n> \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST, -- the same result if I put \n> the trim here\n>\n>\n> \"dim_cliente\".\"a1_cod\" ASC NULLS LAST, -- or here\n>\n>\n> \"dim_cliente\".\"a1_nome\" ASC NULLS LAST; -- or here\n>\n>\n> -- this query took 3845.895 ms\n>\n>\n>\n>\n> ___________________________________________________\n>\n>\n>\n>\n> Query Plan when using trim\n>\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n>\n> Sort (cost=193101.41..195369.80 rows=907357 width=129) (actual \n> time=3828.176..3831.261 rows=43615 loops=1)\n>\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, \n> (btrim((dim_cliente.tipocliente)::text))\n>\n>\n> Sort Key: (btrim((dim_cliente.tipocliente)::text)), \n> dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome\n>\n>\n> Sort Method: quicksort Memory: 13121kB\n>\n>\n> -> HashAggregate (cost=91970.52..103312.49 rows=907357 width=129) \n> (actual time=2462.690..2496.729 rows=43615 loops=1)\n>\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, \n> btrim((dim_cliente.tipocliente)::text)\n>\n>\n> -> Hash Join (cost=856.30..80628.56 rows=907357 width=129) \n> (actual time=29.524..1533.880 rows=907357 loops=1)\n>\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n>\n>\n> Hash Cond: (fato_ventas_productos.key_vendedor = \n> dim_vendedor.key_vendedor)\n>\n>\n> -> Hash Join (cost=830.02..68126.13 rows=907357 width=86) (actual \n> time=28.746..1183.691 rows=907357 loops=1)\n>\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, \n> fato_ventas_productos.key_vendedor\n>\n>\n> Hash Cond: (fato_ventas_productos.key_cliente = \n> dim_cliente.key_cliente)\n>\n>\n> -> Seq Scan on public.fato_ventas_productos \n> (cost=0.00..46880.57 rows=907357 width=16) (actual \n> time=0.004..699.779 rows=907357 loops=1)\n>\n>\n> Output: fato_ventas_productos.key_cliente, \n> fato_ventas_productos.key_vendedor\n>\n>\n> -> Hash (cost=618.90..618.90 rows=16890 width=86) (actual \n> time=28.699..28.699 rows=16890 loops=1)\n>\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente\n>\n>\n> Buckets: 2048 Batches: 1 Memory Usage: 1980kB\n>\n>\n> -> Seq Scan on public.dim_cliente (cost=0.00..618.90 \n> rows=16890 width=86) (actual time=0.008..16.537 rows=16890 loops=1)\n>\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, 
dim_cliente.key_cliente\n>\n>\n> -> Hash (cost=18.90..18.90 rows=590 width=59) (actual \n> time=0.747..0.747 rows=590 loops=1)\n>\n>\n> Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor\n>\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 56kB\n>\n>\n> -> Seq Scan on public.dim_vendedor (cost=0.00..18.90 rows=590 \n> width=59) (actual time=0.026..0.423 rows=590 loops=1)\n>\n>\n> Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor\n>\n>\n> Total runtime: 3845.895 ms\n>\n>\n> (25 filas)\n>\n>\n>\n> ___________________________________________________\n>\n>\n> Query without trim\n>\n> SELECT \"dim_cliente\".\"tipocliente\" AS \"c0\",\n>\n> \"dim_cliente\".\"a1_ibge\" AS \"c1\",\n>\n> \"dim_cliente\".\"a1_cod\" AS \"c2\",\n>\n> \"dim_cliente\".\"a1_nome\" AS \"c3\",\n>\n> \"dim_vendedor\".\"a3_nome\" AS \"c4\"\n>\n> FROM \"public\".\"dim_cliente\" AS \"dim_cliente\",\n>\n> \"public\".\"fato_ventas_productos\" AS \"fato_ventas_productos\",\n>\n> \"public\".\"dim_vendedor\" AS \"dim_vendedor\"\n>\n> WHERE \"fato_ventas_productos\".\"key_cliente\" = \n> \"dim_cliente\".\"key_cliente\"\n>\n> AND \"fato_ventas_productos\".\"key_vendedor\" = \n> \"dim_vendedor\".\"key_vendedor\"\n>\n> GROUP\n>\n> BY \"dim_cliente\".\"tipocliente\" ,\n>\n> \"dim_cliente\".\"a1_ibge\",\n>\n> \"dim_cliente\".\"a1_cod\",\n>\n> \"dim_cliente\".\"a1_nome\",\n>\n> \"dim_vendedor\".\"a3_nome\"\n>\n> ORDER\n>\n> BY \"dim_cliente\".\"tipocliente\" ASC NULLS LAST,\n>\n> \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST,\n>\n> \"dim_cliente\".\"a1_cod\" ASC NULLS LAST,\n>\n> \"dim_cliente\".\"a1_nome\" ASC NULLS LAST;\n>\n> -- this query took 37249.268 ms\n>\n>\n> ___________________________________________________\n>\n>\n> Query Plan when not using trim\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Group (cost=170417.48..184027.84 rows=907357 width=129) (actual \n> time=36649.329..37235.158 rows=43615 loops=1)\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n>\n> -> Sort (cost=170417.48..172685.88 rows=907357 width=129) (actual \n> time=36649.315..36786.760 rows=907357 loops=1)\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n>\n> Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n>\n> Sort Method: quicksort Memory: 265592kB\n>\n> -> Hash Join (cost=856.30..80628.56 rows=907357 width=129) \n> (actual time=26.719..1593.693 rows=907357 loops=1)\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n>\n> Hash Cond: (fato_ventas_productos.key_vendedor = \n> dim_vendedor.key_vendedor)\n>\n> -> Hash Join (cost=830.02..68126.13 rows=907357 width=86) (actual \n> time=25.980..1203.775 rows=907357 loops=1)\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, \n> fato_ventas_productos.key_vendedor\n>\n> Hash Cond: (fato_ventas_productos.key_cliente = \n> dim_cliente.key_cliente)\n>\n> -> Seq Scan on public.fato_ventas_productos \n> (cost=0.00..46880.57 rows=907357 width=16) (actual \n> time=0.004..680.283 rows=907357 loops=1)\n>\n> Output: fato_ventas_productos.key_cliente, \n> fato_ventas_productos.key_vendedor\n>\n> -> Hash (cost=618.90..618.90 
rows=16890 width=86) (actual \n> time=25.931..25.931 rows=16890 loops=1)\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente\n>\n> Buckets: 2048 Batches: 1 Memory Usage: 1980kB\n>\n> -> Seq Scan on public.dim_cliente (cost=0.00..618.90 \n> rows=16890 width=86) (actual time=0.005..13.736 rows=16890 loops=1)\n>\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, \n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente\n>\n> -> Hash (cost=18.90..18.90 rows=590 width=59) (actual \n> time=0.715..0.715 rows=590 loops=1)\n>\n> Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 56kB\n>\n> -> Seq Scan on public.dim_vendedor (cost=0.00..18.90 rows=590 \n> width=59) (actual time=0.024..0.405 rows=590 loops=1)\n>\n> Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor\n>\n> Total runtime: 37249.268 ms\n>\n> (25 filas)\n>\n>\n> ___________________________________________________\n>\n>\n> Is anything that I can do to solve this problem, is that a bug or a \n> config problem?\n>\n>\n> Here the link with a dump of the tables\n>\n> https://drive.google.com/file/d/0Bwupj61i9BtWZ1NiVXltaWc0dnM/view?usp=sharing\n>\n>\n> I appreciate your help\n>\nHello!\nWhat is your Postgres version?\nDo you have correct statistics on this tables?\nPlease show yours execution plans with buffers i.e. explain \n(analyze,buffers) ...\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n On 25.11.2015 17:15, Blas Pico wrote:\n\n\nI have a query\n that produce a different query plan if I put a trim in one\n of the columns in the order by.\n\n\nWhen i put the\n trim in any column it use hashaggregate and took 3 seconds\n against 30 when not.\n\n\nIs wear because\n the columns is clean not need to be trimmed, I have check\n that.\n\n\nThe problem is\n that I can't change the query because is generate by the\n mondrian.\n\n\nI do research\n and found in postgres list that I need to crank work_mem\n up high but don't work for me.\n\n\n\n\nMy\n postgresql.conf\n\n\n# Add settings\n for extensions here\n\n\ndefault_statistics_target\n = 50 # pgtune wizard 2014-06-04\n\n\nmaintenance_work_mem\n = 1GB # pgtune wizard 2014-06-04\n\n\nconstraint_exclusion\n = on # pgtune wizard 2014-06-04\n\n\ncheckpoint_completion_target\n = 0.9 # pgtune wizard 2014-06-04\n\n\neffective_cache_size\n = 44GB # pgtune wizard 2014-06-04\n\n\nwork_mem =\n 1536MB # pgtune wizard 2014-06-04\n\n\n#work_mem = 16GB\n # I have try this but don't work\n\n\nwal_buffers =\n 32MB # pgtune wizard 2014-06-04\n\n\ncheckpoint_segments\n = 16 # pgtune wizard 2014-06-04\n\n\nshared_buffers =\n 15GB # pgtune wizard 2014-06-04\n\n\nmax_connections\n = 20 # pgtune wizard 2014-06-04\n\n\n\n\n\n\n___________________________________________________\n\n\n\n\n\n\nQuery with trim\n\n\nSELECT\n \"dim_cliente\".\"tipocliente\" AS \"c0\", \n\n\n     \n  \"dim_cliente\".\"a1_ibge\" AS \"c1\", \n\n\n     \n  \"dim_cliente\".\"a1_cod\" AS \"c2\", \n\n\n     \n  \"dim_cliente\".\"a1_nome\" AS \"c3\", \n\n\n     \n  \"dim_vendedor\".\"a3_nome\" AS \"c4\" \n\n\n  FROM\n \"public\".\"dim_cliente\" AS \"dim_cliente\", \n\n\n     \n  \"public\".\"fato_ventas_productos\" AS\n \"fato_ventas_productos\", \n\n\n     \n  \"public\".\"dim_vendedor\" AS \"dim_vendedor\" \n\n\n WHERE\n \"fato_ventas_productos\".\"key_cliente\" =\n \"dim_cliente\".\"key_cliente\" \n\n\n   AND\n 
\"fato_ventas_productos\".\"key_vendedor\" =\n \"dim_vendedor\".\"key_vendedor\" \n\n\n GROUP \n\n\n    BY\n \"dim_cliente\".\"tipocliente\" , \n\n\n     \n  \"dim_cliente\".\"a1_ibge\", \n\n\n     \n  \"dim_cliente\".\"a1_cod\", \n\n\n     \n  \"dim_cliente\".\"a1_nome\", \n\n\n     \n  \"dim_vendedor\".\"a3_nome\" \n\n\n ORDER \n\n\n    BY\n trim(\"dim_cliente\".\"tipocliente\") ASC NULLS LAST, \n\n\n     \n  \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST, -- the same\n result if I put the trim here\n\n\n     \n  \"dim_cliente\".\"a1_cod\" ASC NULLS LAST, -- or here\n\n\n     \n  \"dim_cliente\".\"a1_nome\" ASC NULLS LAST; -- or here\n\n\n-- this query\n took 3845.895 ms\n\n\n\n\n\n\n___________________________________________________\n\n\n\n\n\n\nQuery Plan when\n using trim\n\n\n               \n                                                          \n    QUERY PLAN                                            \n                                  \n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Sort\n  (cost=193101.41..195369.80 rows=907357 width=129) (actual\n time=3828.176..3831.261 rows=43615 loops=1)\n\n\n   Output:\n dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n dim_vendedor.a3_nome,\n (btrim((dim_cliente.tipocliente)::text))\n\n\n   Sort Key:\n (btrim((dim_cliente.tipocliente)::text)),\n dim_cliente.a1_ibge, dim_cliente.a1_cod,\n dim_cliente.a1_nome\n\n\n   Sort Method:\n quicksort  Memory: 13121kB\n\n\n   ->\n  HashAggregate  (cost=91970.52..103312.49 rows=907357\n width=129) (actual time=2462.690..2496.729 rows=43615\n loops=1)\n\n\n         Output:\n dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n dim_vendedor.a3_nome,\n btrim((dim_cliente.tipocliente)::text)\n\n\n         ->\n  Hash Join  (cost=856.30..80628.56 rows=907357 width=129)\n (actual time=29.524..1533.880 rows=907357 loops=1)\n\n\n             \n  Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n dim_vendedor.a3_nome\n\n\n             \n  Hash Cond: (fato_ventas_productos.key_vendedor =\n dim_vendedor.key_vendedor)\n\n\n             \n  ->  Hash Join  (cost=830.02..68126.13 rows=907357\n width=86) (actual time=28.746..1183.691 rows=907357\n loops=1)\n\n\n               \n      Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n fato_ventas_productos.key_vendedor\n\n\n               \n      Hash Cond: (fato_ventas_productos.key_cliente =\n dim_cliente.key_cliente)\n\n\n               \n      ->  Seq Scan on public.fato_ventas_productos\n  (cost=0.00..46880.57 rows=907357 width=16) (actual\n time=0.004..699.779 rows=907357 loops=1)\n\n\n               \n            Output: fato_ventas_productos.key_cliente,\n fato_ventas_productos.key_vendedor\n\n\n               \n      ->  Hash  (cost=618.90..618.90 rows=16890\n width=86) (actual time=28.699..28.699 rows=16890 loops=1)\n\n\n               \n            Output: dim_cliente.tipocliente,\n dim_cliente.a1_ibge, dim_cliente.a1_cod,\n dim_cliente.a1_nome, dim_cliente.key_cliente\n\n\n               \n            Buckets: 2048  Batches: 1  Memory Usage: 1980kB\n\n\n               \n            ->  Seq Scan on public.dim_cliente\n  (cost=0.00..618.90 rows=16890 width=86) (actual\n time=0.008..16.537 rows=16890 loops=1)\n\n\n               \n                  Output: 
dim_cliente.tipocliente,\n dim_cliente.a1_ibge, dim_cliente.a1_cod,\n dim_cliente.a1_nome, dim_cliente.key_cliente\n\n\n             \n  ->  Hash  (cost=18.90..18.90 rows=590 width=59)\n (actual time=0.747..0.747 rows=590 loops=1)\n\n\n               \n      Output: dim_vendedor.a3_nome,\n dim_vendedor.key_vendedor\n\n\n               \n      Buckets: 1024  Batches: 1  Memory Usage: 56kB\n\n\n               \n      ->  Seq Scan on public.dim_vendedor\n  (cost=0.00..18.90 rows=590 width=59) (actual\n time=0.026..0.423 rows=590 loops=1)\n\n\n               \n            Output: dim_vendedor.a3_nome,\n dim_vendedor.key_vendedor\n\n\n Total runtime:\n 3845.895 ms\n\n\n(25 filas)\n\n\n\n\n___________________________________________________\n\n\nQuery without\n trim\nSELECT\n \"dim_cliente\".\"tipocliente\" AS \"c0\", \n     \n  \"dim_cliente\".\"a1_ibge\" AS \"c1\", \n     \n  \"dim_cliente\".\"a1_cod\" AS \"c2\", \n     \n  \"dim_cliente\".\"a1_nome\" AS \"c3\", \n     \n  \"dim_vendedor\".\"a3_nome\" AS \"c4\" \n  FROM\n \"public\".\"dim_cliente\" AS \"dim_cliente\", \n     \n  \"public\".\"fato_ventas_productos\" AS\n \"fato_ventas_productos\", \n     \n  \"public\".\"dim_vendedor\" AS \"dim_vendedor\" \n WHERE\n \"fato_ventas_productos\".\"key_cliente\" =\n \"dim_cliente\".\"key_cliente\" \n   AND\n \"fato_ventas_productos\".\"key_vendedor\" =\n \"dim_vendedor\".\"key_vendedor\" \n GROUP \n    BY\n \"dim_cliente\".\"tipocliente\" , \n     \n  \"dim_cliente\".\"a1_ibge\", \n     \n  \"dim_cliente\".\"a1_cod\", \n     \n  \"dim_cliente\".\"a1_nome\", \n     \n  \"dim_vendedor\".\"a3_nome\" \n ORDER \n    BY\n \"dim_cliente\".\"tipocliente\" ASC NULLS LAST, \n     \n  \"dim_cliente\".\"a1_ibge\" ASC NULLS LAST, \n     \n  \"dim_cliente\".\"a1_cod\" ASC NULLS LAST, \n     \n  \"dim_cliente\".\"a1_nome\" ASC NULLS LAST;\n-- this query\n took 37249.268 ms\n\n\n___________________________________________________\n\n\nQuery Plan when\n not using trim\n               \n                                                          \n QUERY PLAN                                                \n                           \n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Group\n  (cost=170417.48..184027.84 rows=907357 width=129) (actual\n time=36649.329..37235.158 rows=43615 loops=1)\n   Output:\n dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n dim_vendedor.a3_nome\n   ->  Sort\n  (cost=170417.48..172685.88 rows=907357 width=129) (actual\n time=36649.315..36786.760 rows=907357 loops=1)\n         Output:\n dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n dim_vendedor.a3_nome\n         Sort\n Key: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n dim_vendedor.a3_nome\n         Sort\n Method: quicksort  Memory: 265592kB\n         ->\n  Hash Join  (cost=856.30..80628.56 rows=907357 width=129)\n (actual time=26.719..1593.693 rows=907357 loops=1)\n             \n  Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n dim_vendedor.a3_nome\n             \n  Hash Cond: (fato_ventas_productos.key_vendedor =\n dim_vendedor.key_vendedor)\n             \n  ->  Hash Join  (cost=830.02..68126.13 rows=907357\n width=86) (actual time=25.980..1203.775 rows=907357\n loops=1)\n               \n      Output: dim_cliente.tipocliente, 
dim_cliente.a1_ibge,\n dim_cliente.a1_cod, dim_cliente.a1_nome,\n fato_ventas_productos.key_vendedor\n               \n      Hash Cond: (fato_ventas_productos.key_cliente =\n dim_cliente.key_cliente)\n               \n      ->  Seq Scan on public.fato_ventas_productos\n  (cost=0.00..46880.57 rows=907357 width=16) (actual\n time=0.004..680.283 rows=907357 loops=1)\n               \n            Output: fato_ventas_productos.key_cliente,\n fato_ventas_productos.key_vendedor\n               \n      ->  Hash  (cost=618.90..618.90 rows=16890\n width=86) (actual time=25.931..25.931 rows=16890 loops=1)\n               \n            Output: dim_cliente.tipocliente,\n dim_cliente.a1_ibge, dim_cliente.a1_cod,\n dim_cliente.a1_nome, dim_cliente.key_cliente\n               \n            Buckets: 2048  Batches: 1  Memory Usage: 1980kB\n               \n            ->  Seq Scan on public.dim_cliente\n  (cost=0.00..618.90 rows=16890 width=86) (actual\n time=0.005..13.736 rows=16890 loops=1)\n               \n                  Output: dim_cliente.tipocliente,\n dim_cliente.a1_ibge, dim_cliente.a1_cod,\n dim_cliente.a1_nome, dim_cliente.key_cliente\n             \n  ->  Hash  (cost=18.90..18.90 rows=590 width=59)\n (actual time=0.715..0.715 rows=590 loops=1)\n               \n      Output: dim_vendedor.a3_nome,\n dim_vendedor.key_vendedor\n               \n      Buckets: 1024  Batches: 1  Memory Usage: 56kB\n               \n      ->  Seq Scan on public.dim_vendedor\n  (cost=0.00..18.90 rows=590 width=59) (actual\n time=0.024..0.405 rows=590 loops=1)\n               \n            Output: dim_vendedor.a3_nome,\n dim_vendedor.key_vendedor\n Total runtime:\n 37249.268 ms\n(25 filas)\n\n\n___________________________________________________\n\n\nIs anything that\n I can do to solve this problem, is that a bug or a config\n problem?\n\n\nHere the link\n with a dump of the tables \nhttps://drive.google.com/file/d/0Bwupj61i9BtWZ1NiVXltaWc0dnM/view?usp=sharing\n\n\nI appreciate\n your help\n\n\n Hello!\n What is your Postgres version?\n Do you have correct statistics on this tables?\n Please show  yours execution plans with buffers i.e. explain\n (analyze,buffers) ...\n\n-- \nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 25 Nov 2015 19:22:28 +0300", "msg_from": "Alex Ignatov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query that took a lot of time in Postgresql when not\n using trim in order by" }, { "msg_contents": "> What is your Postgres version?\n> Do you have correct statistics on this tables?\n> Please show yours execution plans with buffers i.e. 
explain (analyze,buffers) ...\n> \n\n\nFast:\n\n Sort (cost=193101.41..195369.80 rows=907357 width=129) (actual time=3828.176..3831.261 rows=43615 loops=1)\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, (btrim((dim_cliente.tipocliente)::text))\n Sort Key: (btrim((dim_cliente.tipocliente)::text)), dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome\n Sort Method: quicksort Memory: 13121kB\n -> HashAggregate (cost=91970.52..103312.49 rows=907357 width=129) (actual time=2462.690..2496.729 rows=43615 loops=1)\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, btrim((dim_cliente.tipocliente)::text)\n -> Hash Join (cost=856.30..80628.56 rows=907357 width=129) (actual time=29.524..1533.880 rows=907357 loops=1)\n\n\nSlow:\n\n Group (cost=170417.48..184027.84 rows=907357 width=129) (actual time=36649.329..37235.158 rows=43615 loops=1)\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n -> Sort (cost=170417.48..172685.88 rows=907357 width=129) (actual time=36649.315..36786.760 rows=907357 loops=1)\n Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n Sort Method: quicksort Memory: 265592kB\n -> Hash Join (cost=856.30..80628.56 rows=907357 width=129) (actual time=26.719..1593.693 rows=907357 loops=1)\n\n\nThe difference is in the top of plans.\nAs we see, hashjoin time is practically the same. \nBut fast plan uses hashagg first and only 43k rows require sorting.\nSlow plan dominated by sorting 900k rows.\n\nI wonder if increasing cpu_tuple_cost will help.\nAs cost difference between two plans is negligible now.\n \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Nov 2015 19:35:15 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query that took a lot of time in Postgresql when not using trim\n in order by" }, { "msg_contents": "My database version is 9.3 but I have test with 9.4 too with the same\nresult, and I have test changing that parameter without success.\nI want to know what does have to do the trim with the different query plans?\n\n2015-11-25 13:35 GMT-03:00 Evgeniy Shishkin <[email protected]>:\n\n> > What is your Postgres version?\n> > Do you have correct statistics on this tables?\n> > Please show yours execution plans with buffers i.e. 
explain\n> (analyze,buffers) ...\n> >\n>\n>\n> Fast:\n>\n> Sort (cost=193101.41..195369.80 rows=907357 width=129) (actual\n> time=3828.176..3831.261 rows=43615 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome,\n> (btrim((dim_cliente.tipocliente)::text))\n> Sort Key: (btrim((dim_cliente.tipocliente)::text)),\n> dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome\n> Sort Method: quicksort Memory: 13121kB\n> -> HashAggregate (cost=91970.52..103312.49 rows=907357 width=129)\n> (actual time=2462.690..2496.729 rows=43615 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome,\n> btrim((dim_cliente.tipocliente)::text)\n> -> Hash Join (cost=856.30..80628.56 rows=907357 width=129)\n> (actual time=29.524..1533.880 rows=907357 loops=1)\n>\n>\n> Slow:\n>\n> Group (cost=170417.48..184027.84 rows=907357 width=129) (actual\n> time=36649.329..37235.158 rows=43615 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n> -> Sort (cost=170417.48..172685.88 rows=907357 width=129) (actual\n> time=36649.315..36786.760 rows=907357 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n> Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge,\n> dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n> Sort Method: quicksort Memory: 265592kB\n> -> Hash Join (cost=856.30..80628.56 rows=907357 width=129)\n> (actual time=26.719..1593.693 rows=907357 loops=1)\n>\n>\n> The difference is in the top of plans.\n> As we see, hashjoin time is practically the same.\n> But fast plan uses hashagg first and only 43k rows require sorting.\n> Slow plan dominated by sorting 900k rows.\n>\n> I wonder if increasing cpu_tuple_cost will help.\n> As cost difference between two plans is negligible now.\n>\n\nMy database version is 9.3 but I have test with 9.4 too with the same result, and I have test changing that parameter without success.I want to know what does have to do the trim with the different query plans?2015-11-25 13:35 GMT-03:00 Evgeniy Shishkin <[email protected]>:> What is your Postgres version?\n> Do you have correct statistics on this tables?\n> Please show  yours execution plans with buffers i.e. 
explain (analyze,buffers) ...\n>\n\n\nFast:\n\n Sort  (cost=193101.41..195369.80 rows=907357 width=129) (actual time=3828.176..3831.261 rows=43615 loops=1)\n   Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, (btrim((dim_cliente.tipocliente)::text))\n   Sort Key: (btrim((dim_cliente.tipocliente)::text)), dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome\n   Sort Method: quicksort  Memory: 13121kB\n   ->  HashAggregate  (cost=91970.52..103312.49 rows=907357 width=129) (actual time=2462.690..2496.729 rows=43615 loops=1)\n         Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, btrim((dim_cliente.tipocliente)::text)\n         ->  Hash Join  (cost=856.30..80628.56 rows=907357 width=129) (actual time=29.524..1533.880 rows=907357 loops=1)\n\n\nSlow:\n\n Group  (cost=170417.48..184027.84 rows=907357 width=129) (actual time=36649.329..37235.158 rows=43615 loops=1)\n   Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n   ->  Sort  (cost=170417.48..172685.88 rows=907357 width=129) (actual time=36649.315..36786.760 rows=907357 loops=1)\n         Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n         Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n         Sort Method: quicksort  Memory: 265592kB\n         ->  Hash Join  (cost=856.30..80628.56 rows=907357 width=129) (actual time=26.719..1593.693 rows=907357 loops=1)\n\n\nThe difference is in the top of plans.\nAs we see, hashjoin time is practically the same.\nBut fast plan uses hashagg first and only 43k rows require sorting.\nSlow plan dominated by sorting 900k rows.\n\nI wonder if increasing cpu_tuple_cost will help.\nAs cost difference between two plans is negligible now.", "msg_date": "Wed, 25 Nov 2015 15:01:07 -0300", "msg_from": "Blas Pico <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query that took a lot of time in Postgresql when not\n using trim in order by" }, { "msg_contents": "On 2015-11-25 19:35:15 +0300, Evgeniy Shishkin wrote:\n> Fast:\n> \n> Sort (cost=193101.41..195369.80 rows=907357 width=129) (actual time=3828.176..3831.261 rows=43615 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, (btrim((dim_cliente.tipocliente)::text))\n> Sort Key: (btrim((dim_cliente.tipocliente)::text)), dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome\n> Sort Method: quicksort Memory: 13121kB\n> -> HashAggregate (cost=91970.52..103312.49 rows=907357 width=129) (actual time=2462.690..2496.729 rows=43615 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome, btrim((dim_cliente.tipocliente)::text)\n> -> Hash Join (cost=856.30..80628.56 rows=907357 width=129) (actual time=29.524..1533.880 rows=907357 loops=1)\n> \n> \n> Slow:\n> \n> Group (cost=170417.48..184027.84 rows=907357 width=129) (actual time=36649.329..37235.158 rows=43615 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n> -> Sort (cost=170417.48..172685.88 rows=907357 width=129) (actual time=36649.315..36786.760 rows=907357 loops=1)\n> Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, 
dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n> Sort Key: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_vendedor.a3_nome\n> Sort Method: quicksort Memory: 265592kB\n> -> Hash Join (cost=856.30..80628.56 rows=907357 width=129) (actual time=26.719..1593.693 rows=907357 loops=1)\n> \n> \n> The difference is in the top of plans.\n> As we see, hashjoin time is practically the same. \n> But fast plan uses hashagg first and only 43k rows require sorting.\n> Slow plan dominated by sorting 900k rows.\n> \n> I wonder if increasing cpu_tuple_cost will help.\n> As cost difference between two plans is negligible now.\n\nSeems plausible. Also I'm wondering what CPU this is: 36 seconds for an\nin-memory sort of 900k rows seems slow to me. I tested this on my PC at\nhome (1.8 GHz Core2 Dual, so a rather old and slow box) and I could sort\n1E6 rows of 128 random bytes in 5.6 seconds. Even if I kept the first 96\nbytes constant (so only the last 32 were random), it took only 21\nseconds. Either this CPU is really slow or the data is heavily skewed -\nis it possible that all dimensions except dim_vendedor.a3_nome have only\none or very few values? In that case changing the sort order might help.\n\n\thp\n\n\n-- \n _ | Peter J. Holzer | I want to forget all about both belts and\n|_|_) | | suspenders; instead, I want to buy pants \n| | | [email protected] | that actually fit.\n__/ | http://www.hjp.at/ | -- http://noncombatant.org/", "msg_date": "Sun, 29 Nov 2015 14:23:23 +0100", "msg_from": "\"Peter J. Holzer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query that took a lot of time in Postgresql when not\n using trim in order by" }, { "msg_contents": "\"Peter J. Holzer\" <[email protected]> writes:\n> Seems plausible. Also I'm wondering what CPU this is: 36 seconds for an\n> in-memory sort of 900k rows seems slow to me.\n\nI'm wondering if it's textual data in some locale whose strcoll() behavior\nis exceptionally slow.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 29 Nov 2015 12:58:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query that took a lot of time in Postgresql when not using trim\n in order by" } ]
[ { "msg_contents": "Hey all,\n\nI have an attachment table in my database which stores a file in a bytea\ncolumn, the file name, and the size of the file.\n\nSchema:\nCREATE TABLE attachment\n(\n attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),\n attachment_name character varying NOT NULL,\n attachment_bytes_size integer NOT NULL,\n attachment_bytes bytea NOT NULL,\n CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)\n);\n\nI do lookups on this table based on the md5 of the attachment_bytes column,\nso I added an index:\nCREATE INDEX idx_attachment_bytes_md5 ON attachment\n((md5(attachment_bytes)::uuid));\n\nQueries like this are sped up by the index no problem:\nSELECT attachment_id\nFROM attachment\nWHERE md5(attachment_bytes)::uuid = 'b2ab855ece13a72a398096dfb6c832aa';\n\nBut if I wanted to return the md5 value, it seems to be totally unable to\nuse an index only scan:\nSELECT md5(attachment_bytes)::uuid\nFROM attachment;\n\nHey all,I have an attachment table in my database which stores a file in a bytea column, the file name, and the size of the file.Schema: CREATE TABLE attachment(  attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),  attachment_name character varying NOT NULL,  attachment_bytes_size integer NOT NULL,  attachment_bytes bytea NOT NULL,  CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id));I do lookups on this table based on the md5 of the attachment_bytes column, so I added an index: CREATE INDEX idx_attachment_bytes_md5 ON attachment ((md5(attachment_bytes)::uuid));Queries like this are sped up by the index no problem:SELECT attachment_idFROM attachmentWHERE md5(attachment_bytes)::uuid = 'b2ab855ece13a72a398096dfb6c832aa';But if I wanted to return the md5 value, it seems to be totally unable to use an index only scan:SELECT md5(attachment_bytes)::uuidFROM attachment;", "msg_date": "Wed, 25 Nov 2015 19:25:30 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "No index only scan on md5 index" }, { "msg_contents": "On Wednesday, November 25, 2015, Adam Brusselback <[email protected]>\nwrote:\n\n> Hey all,\n>\n> I have an attachment table in my database which stores a file in a bytea\n> column, the file name, and the size of the file.\n>\n> Schema:\n> CREATE TABLE attachment\n> (\n> attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),\n> attachment_name character varying NOT NULL,\n> attachment_bytes_size integer NOT NULL,\n> attachment_bytes bytea NOT NULL,\n> CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)\n> );\n>\n> I do lookups on this table based on the md5 of the attachment_bytes\n> column, so I added an index:\n> CREATE INDEX idx_attachment_bytes_md5 ON attachment\n> ((md5(attachment_bytes)::uuid));\n>\n> Queries like this are sped up by the index no problem:\n> SELECT attachment_id\n> FROM attachment\n> WHERE md5(attachment_bytes)::uuid = 'b2ab855ece13a72a398096dfb6c832aa';\n>\n> But if I wanted to return the md5 value, it seems to be totally unable to\n> use an index only scan:\n> SELECT md5(attachment_bytes)::uuid\n> FROM attachment;\n>\n>\nOk.\n\nAny reason not to add the uuid column to the table?\n\nAFAIK The system is designed to return data from the heap, not an index.\nWhile it possibly can in some instances if you need to return data you\nshould store it directly in the table.\n\nDavid J.\n\nOn Wednesday, November 25, 2015, Adam Brusselback <[email protected]> wrote:Hey all,I have an attachment table in my database which stores a file in a bytea column, the file name, and the size of 
the file.Schema: CREATE TABLE attachment(  attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),  attachment_name character varying NOT NULL,  attachment_bytes_size integer NOT NULL,  attachment_bytes bytea NOT NULL,  CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id));I do lookups on this table based on the md5 of the attachment_bytes column, so I added an index: CREATE INDEX idx_attachment_bytes_md5 ON attachment ((md5(attachment_bytes)::uuid));Queries like this are sped up by the index no problem:SELECT attachment_idFROM attachmentWHERE md5(attachment_bytes)::uuid = 'b2ab855ece13a72a398096dfb6c832aa';But if I wanted to return the md5 value, it seems to be totally unable to use an index only scan:SELECT md5(attachment_bytes)::uuidFROM attachment;Ok.Any reason not to add the uuid column to the table?AFAIK The system is designed to return data from the heap, not an index.  While it possibly can in some instances if you need to return data you should store it directly in the table.David J.", "msg_date": "Wed, 25 Nov 2015 17:55:11 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No index only scan on md5 index" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n> CREATE TABLE attachment\n> (\n> attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),\n> attachment_name character varying NOT NULL,\n> attachment_bytes_size integer NOT NULL,\n> attachment_bytes bytea NOT NULL,\n> CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)\n> );\n> CREATE INDEX idx_attachment_bytes_md5 ON attachment\n> ((md5(attachment_bytes)::uuid));\n\n> But if I wanted to return the md5 value, it seems to be totally unable to\n> use an index only scan:\n> SELECT md5(attachment_bytes)::uuid\n> FROM attachment;\n\nNope, sorry, you're out of luck on that, because the check for whether an\nindex-only scan is feasible checks whether all the variables used in the\nquery are available from the index. (Testing whether an index expression\ncould match everything asked for would greatly slow down planning, whether\nor not the index turned out to be applicable, so we don't try. 
I have\nsome rough ideas about making that better, but don't hold your breath.)\n\nIIRC, it does actually get it right in terms of constructing the\nfinished plan, if you can get past the index-only-scan-is-feasible test.\nSo some people have done something like this to avoid recalculations of\nexpensive functions:\n\ncreate table ff(f1 float8);\ncreate index on ff(sin(f1), f1);\nselect sin(f1) from ff; -- can generate IOS and not re-evaluate sin()\n\nBut if I'm right in guessing that attachment_bytes can be large,\nthat's not going to be a workable hack for your case.\n\nProbably the only thing that's going to work for you is to store\nmd5(attachment_bytes) in its own plain column (you can use a trigger\nto compute it for you), and then build a regular index on that,\nand query for that column not the md5() expression.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Nov 2015 20:01:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No index only scan on md5 index" }, { "msg_contents": "Main reason I was hoping to not do that, is the value that would be stored\nin that column is dependent on what is stored in the attachment_bytes\ncolumn, so to be 100% sure it's correct, you'd need that column controlled\nby a trigger, disallowing any explicit inserts or updates to the value.\nWas having a hard time finding info on this type of thing online though, so\nI was unsure if Postgres was working as intended, or if I had made a\nmistake somehow.\n\nIf you do know, what are the instances it is able to return data directly\nfrom an index instead of having to go to heap?\n\nOn Wed, Nov 25, 2015 at 7:55 PM, David G. 
Johnston <\[email protected]> wrote:\n\n> On Wednesday, November 25, 2015, Adam Brusselback <\n> [email protected]> wrote:\n>\n>> Hey all,\n>>\n>> I have an attachment table in my database which stores a file in a bytea\n>> column, the file name, and the size of the file.\n>>\n>> Schema:\n>> CREATE TABLE attachment\n>> (\n>> attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),\n>> attachment_name character varying NOT NULL,\n>> attachment_bytes_size integer NOT NULL,\n>> attachment_bytes bytea NOT NULL,\n>> CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)\n>> );\n>>\n>> I do lookups on this table based on the md5 of the attachment_bytes\n>> column, so I added an index:\n>> CREATE INDEX idx_attachment_bytes_md5 ON attachment\n>> ((md5(attachment_bytes)::uuid));\n>>\n>> Queries like this are sped up by the index no problem:\n>> SELECT attachment_id\n>> FROM attachment\n>> WHERE md5(attachment_bytes)::uuid = 'b2ab855ece13a72a398096dfb6c832aa';\n>>\n>> But if I wanted to return the md5 value, it seems to be totally unable to\n>> use an index only scan:\n>> SELECT md5(attachment_bytes)::uuid\n>> FROM attachment;\n>>\n>>\n> Ok.\n>\n> Any reason not to add the uuid column to the table?\n>\n> AFAIK The system is designed to return data from the heap, not an index.\n> While it possibly can in some instances if you need to return data you\n> should store it directly in the table.\n>\n> David J.\n>\n\nMain reason I was hoping to not do that, is the value that would be stored in that column is dependent on what is stored in the attachment_bytes column, so to be 100% sure it's correct, you'd need that column controlled by a trigger, disallowing any explicit inserts or updates to the value.  Was having a hard time finding info on this type of thing online though, so I was unsure if Postgres was working as intended, or if I had made a mistake somehow.If you do know, what are the instances it is able to return data directly from an index instead of having to go to heap?On Wed, Nov 25, 2015 at 7:55 PM, David G. Johnston <[email protected]> wrote:On Wednesday, November 25, 2015, Adam Brusselback <[email protected]> wrote:Hey all,I have an attachment table in my database which stores a file in a bytea column, the file name, and the size of the file.Schema: CREATE TABLE attachment(  attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),  attachment_name character varying NOT NULL,  attachment_bytes_size integer NOT NULL,  attachment_bytes bytea NOT NULL,  CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id));I do lookups on this table based on the md5 of the attachment_bytes column, so I added an index: CREATE INDEX idx_attachment_bytes_md5 ON attachment ((md5(attachment_bytes)::uuid));Queries like this are sped up by the index no problem:SELECT attachment_idFROM attachmentWHERE md5(attachment_bytes)::uuid = 'b2ab855ece13a72a398096dfb6c832aa';But if I wanted to return the md5 value, it seems to be totally unable to use an index only scan:SELECT md5(attachment_bytes)::uuidFROM attachment;Ok.Any reason not to add the uuid column to the table?AFAIK The system is designed to return data from the heap, not an index.  
While it possibly can in some instances if you need to return data you should store it directly in the table.David J.", "msg_date": "Wed, 25 Nov 2015 20:06:10 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: No index only scan on md5 index" }, { "msg_contents": "I appreciate the response Tom, and you are correct that the workaround\nwould not work in my case.\n\nSo no index expressions can return the their value without recomputing\nwithout that work around? I learn something new every day it seems.\nThank you for the alternate method.\n\n-Adam\n\nOn Wed, Nov 25, 2015 at 8:01 PM, Tom Lane <[email protected]> wrote:\n\n> Adam Brusselback <[email protected]> writes:\n> > CREATE TABLE attachment\n> > (\n> > attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),\n> > attachment_name character varying NOT NULL,\n> > attachment_bytes_size integer NOT NULL,\n> > attachment_bytes bytea NOT NULL,\n> > CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)\n> > );\n> > CREATE INDEX idx_attachment_bytes_md5 ON attachment\n> > ((md5(attachment_bytes)::uuid));\n>\n> > But if I wanted to return the md5 value, it seems to be totally unable to\n> > use an index only scan:\n> > SELECT md5(attachment_bytes)::uuid\n> > FROM attachment;\n>\n> Nope, sorry, you're out of luck on that, because the check for whether an\n> index-only scan is feasible checks whether all the variables used in the\n> query are available from the index. (Testing whether an index expression\n> could match everything asked for would greatly slow down planning, whether\n> or not the index turned out to be applicable, so we don't try. I have\n> some rough ideas about making that better, but don't hold your breath.)\n>\n> IIRC, it does actually get it right in terms of constructing the\n> finished plan, if you can get past the index-only-scan-is-feasible test.\n> So some people have done something like this to avoid recalculations of\n> expensive functions:\n>\n> create table ff(f1 float8);\n> create index on ff(sin(f1), f1);\n> select sin(f1) from ff; -- can generate IOS and not re-evaluate sin()\n>\n> But if I'm right in guessing that attachment_bytes can be large,\n> that's not going to be a workable hack for your case.\n>\n> Probably the only thing that's going to work for you is to store\n> md5(attachment_bytes) in its own plain column (you can use a trigger\n> to compute it for you), and then build a regular index on that,\n> and query for that column not the md5() expression.\n>\n> regards, tom lane\n>\n\nI appreciate the response Tom, and you are correct that the workaround would not work in my case.So no index expressions can return the their value without recomputing without that work around?  I learn something new every day it seems. 
Thank you for the alternate method.-AdamOn Wed, Nov 25, 2015 at 8:01 PM, Tom Lane <[email protected]> wrote:Adam Brusselback <[email protected]> writes:\n> CREATE TABLE attachment\n> (\n>   attachment_id uuid NOT NULL DEFAULT gen_random_uuid(),\n>   attachment_name character varying NOT NULL,\n>   attachment_bytes_size integer NOT NULL,\n>   attachment_bytes bytea NOT NULL,\n>   CONSTRAINT attachment_pkey PRIMARY KEY (attachment_id)\n> );\n> CREATE INDEX idx_attachment_bytes_md5 ON attachment\n> ((md5(attachment_bytes)::uuid));\n\n> But if I wanted to return the md5 value, it seems to be totally unable to\n> use an index only scan:\n> SELECT md5(attachment_bytes)::uuid\n> FROM attachment;\n\nNope, sorry, you're out of luck on that, because the check for whether an\nindex-only scan is feasible checks whether all the variables used in the\nquery are available from the index.  (Testing whether an index expression\ncould match everything asked for would greatly slow down planning, whether\nor not the index turned out to be applicable, so we don't try.  I have\nsome rough ideas about making that better, but don't hold your breath.)\n\nIIRC, it does actually get it right in terms of constructing the\nfinished plan, if you can get past the index-only-scan-is-feasible test.\nSo some people have done something like this to avoid recalculations of\nexpensive functions:\n\ncreate table ff(f1 float8);\ncreate index on ff(sin(f1), f1);\nselect sin(f1) from ff;  -- can generate IOS and not re-evaluate sin()\n\nBut if I'm right in guessing that attachment_bytes can be large,\nthat's not going to be a workable hack for your case.\n\nProbably the only thing that's going to work for you is to store\nmd5(attachment_bytes) in its own plain column (you can use a trigger\nto compute it for you), and then build a regular index on that,\nand query for that column not the md5() expression.\n\n                        regards, tom lane", "msg_date": "Wed, 25 Nov 2015 20:16:40 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: No index only scan on md5 index" }, { "msg_contents": "Adam Brusselback wrote:\r\n> I appreciate the response Tom, and you are correct that the workaround would not work in my case.\r\n> \r\n> So no index expressions can return the their value without recomputing without that work around? I\r\n> learn something new every day it seems.\r\n> Thank you for the alternate method.\r\n\r\nNo, what Tom said is that the check whether an \"index only scan\" was feasible\r\nor not does not consider expressions.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Nov 2015 08:20:19 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No index only scan on md5 index" } ]
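A minimal sketch of the trigger-maintained md5 column Tom Lane suggests above, using the attachment table from the start of the thread; the column, function, trigger and index names below are illustrative rather than taken from the thread:

    ALTER TABLE attachment ADD COLUMN attachment_md5 uuid;

    CREATE OR REPLACE FUNCTION attachment_set_md5() RETURNS trigger AS $$
    BEGIN
        -- keep the stored hash in sync with the bytea payload
        NEW.attachment_md5 := md5(NEW.attachment_bytes)::uuid;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER attachment_md5_trg
        BEFORE INSERT OR UPDATE OF attachment_bytes ON attachment
        FOR EACH ROW EXECUTE PROCEDURE attachment_set_md5();

    -- backfill existing rows, then index the plain column
    UPDATE attachment SET attachment_md5 = md5(attachment_bytes)::uuid;
    CREATE INDEX idx_attachment_md5 ON attachment (attachment_md5);

    -- the original expression index then becomes redundant
    -- DROP INDEX idx_attachment_bytes_md5;

    SELECT attachment_md5
    FROM attachment
    WHERE attachment_md5 = 'b2ab855ece13a72a398096dfb6c832aa';

Queries that both filter on and return attachment_md5 can then use an index-only scan once the table's pages are marked all-visible.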
[ { "msg_contents": "Hi All,\n\nUsing pg 9.4.5 I'm looking at a table set up by a 3rd party application and trying to figure out why a particular index is being chosen over another for updates/deletes.\n\n From what I can see the reason is that plans using either index have the same exactly the same cost. So rather I'm asking if there's something glaringly obvious I'm missing, or is there anything I can to to get better estimates.\n\nThe table is as follows and has ~ 50M rows, ~ 4.5GB in size:\n\nCREATE TABLE tickets.seats\n(\n recnum serial NOT NULL,\n show numeric(8,0) NOT NULL,\n type numeric(4,0) NOT NULL,\n block character varying(8) NOT NULL,\n \"row\" numeric(14,0) NOT NULL,\n seat numeric(8,0) NOT NULL,\n flag character varying(15) NOT NULL,\n transno numeric(8,0) NOT NULL,\n best numeric(4,0) NOT NULL,\n \"user\" character varying(15) NOT NULL,\n \"time\" numeric(10,0) NOT NULL,\n date date NOT NULL,\n date_reserved timestamp NOT NULL\n);\n\nIndexes:\n \"seats_index01\" PRIMARY KEY, btree (show, type, best, block, \"row\", seat) // (1094 MB)\n \"seats_index00\" UNIQUE, btree (recnum) // (2423 MB)\n \"seats_index02\" UNIQUE, btree (show, type, best, block, flag, \"row\", seat, recnum) // (2908 MB)\n\ndefault_statistics target is 100, and the following columns are non-default:\n\nattname | attstattarget\n--------+---------------\nshow | 1000\ntype | 1000\nblock | 2000\nrow | 1000\nseat | 1000\nflag | 1000\nbest | 1000\n\nIncreasing these further appears to make no noticeable difference. (pg_stats here for these columns here: http://pastebin.com/2WQQec7N)\n\nAn example query below shows that in some cases the seats_index02 index is being chosen:\n\n# analyze verbose seats;\nINFO: analyzing \"tickets.seats\"\nINFO: \"seats\": scanned 593409 of 593409 pages, containing 50926456 live rows and 349030 dead rows; 600000 rows in sample, 50926456 estimated total rows\n\n# begin;\nBEGIN\n# explain analyze delete from seats where (\"show\" = 58919 AND \"type\" = 1 AND \"best\" = 10 AND \"block\" = 'GMA' AND \"row\" =26 AND \"seat\" = 15);\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nDelete on seats (cost=0.56..4.59 rows=1 width=6) (actual time=0.480..0.480 rows=0 loops=1)\n-> Index Scan using seats_index02 on seats (cost=0.56..4.59 rows=1 width=6) (actual time=0.452..0.453 rows=1 loops=1)\nIndex Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = 10::numeric) AND ((block)::text = 'GMA'::text) AND (\"row\" = 26::numeric) AND (seat = 15::numeric))\nPlanning time: 2.172 ms\nExecution time: 0.531 ms\n(5 rows)\n\nBut from my naive standpoint, seats_index01 is a better candidate:\n\n# abort; begin;\nROLLBACK\nBEGIN\n\n# update pg_index set indisvalid = false where indexrelid = 'seats_index02'::regclass;\n# explain analyze delete from seats where (\"show\" = 58919 AND \"type\" = 1 AND \"best\" = 10 AND \"block\" = 'GMA' AND \"row\" =26 AND \"seat\" = 15);\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nDelete on seats (cost=0.56..4.59 rows=1 width=6) (actual time=0.103..0.103 rows=0 loops=1)\n-> Index Scan using seats_index01 on seats (cost=0.56..4.59 rows=1 width=6) (actual time=0.078..0.080 rows=1 loops=1)\nIndex Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = 
10::numeric) AND ((block)::text = 'GMA'::text) AND (\"row\" = 26::numeric) AND (seat = 15::numeric))\nPlanning time: 0.535 ms\nExecution time: 0.146 ms\n(5 rows)\n\n\nIn this instance, the time difference is not huge, however in some seemingly random cases where there are a lot of rows with only the \"seat\" column differing the choice of seats_index02 is much larger ~ 70ms vs 0.something ms with seats_index01\n\nI suspect some of the seemingly random cases could be where there's been an update, followed by a delete since the last analyze, despite auto analyze running fairly frequently.\n\nAny suggestions appreciated.\n\nThanks\nGlyn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Nov 2015 16:11:41 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Index scan cost calculation" }, { "msg_contents": "----- Original Message -----\n\n> From: Glyn Astill <[email protected]>\n> To: Pgsql-performance <[email protected]>\n> Sent: Thursday, 26 November 2015, 16:11\n> Subject: [PERFORM] Index scan cost calculation\n> \n> Hi All,\n> \n> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and \n> trying to figure out why a particular index is being chosen over another for \n> updates/deletes.\n> \n> From what I can see the reason is that plans using either index have the same \n> exactly the same cost. So rather I'm asking if there's something \n> glaringly obvious I'm missing, or is there anything I can to to get better \n> estimates.\n> \n> The table is as follows and has ~ 50M rows, ~ 4.5GB in size:\n> \n> CREATE TABLE tickets.seats\n> (\n> recnum serial NOT NULL,\n> show numeric(8,0) NOT NULL,\n> type numeric(4,0) NOT NULL,\n> block character varying(8) NOT NULL,\n> \"row\" numeric(14,0) NOT NULL,\n> seat numeric(8,0) NOT NULL,\n> flag character varying(15) NOT NULL,\n> transno numeric(8,0) NOT NULL,\n> best numeric(4,0) NOT NULL,\n> \"user\" character varying(15) NOT NULL,\n> \"time\" numeric(10,0) NOT NULL,\n> date date NOT NULL,\n> date_reserved timestamp NOT NULL\n> );\n> \n> Indexes:\n> \"seats_index01\" PRIMARY KEY, btree (show, type, best, block, \n> \"row\", seat) // (1094 MB)\n> \"seats_index00\" UNIQUE, btree (recnum) \n> // (2423 MB)\n> \"seats_index02\" UNIQUE, btree (show, type, best, block, flag, \n> \"row\", seat, recnum) // (2908 MB)\n\n> \n\n\n^^ If those first two sizes look wrong, it's because they are; they should be the other way around.\n\n> default_statistics target is 100, and the following columns are non-default:\n> \n> attname | attstattarget\n> --------+---------------\n> show | 1000\n> type | 1000\n> block | 2000\n> row | 1000\n> seat | 1000\n> flag | 1000\n> best | 1000\n> \n> Increasing these further appears to make no noticeable difference. 
(pg_stats \n> here for these columns here: http://pastebin.com/2WQQec7N)\n> \n> An example query below shows that in some cases the seats_index02 index is being \n> chosen:\n> \n> # analyze verbose seats;\n> INFO: analyzing \"tickets.seats\"\n> INFO: \"seats\": scanned 593409 of 593409 pages, containing 50926456 \n> live rows and 349030 dead rows; 600000 rows in sample, 50926456 estimated total \n> rows\n> \n> # begin;\n> BEGIN\n> # explain analyze delete from seats where (\"show\" = 58919 AND \n> \"type\" = 1 AND \"best\" = 10 AND \"block\" = \n> 'GMA' AND \"row\" =26 AND \"seat\" = 15);\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Delete on seats (cost=0.56..4.59 rows=1 width=6) (actual time=0.480..0.480 \n> rows=0 loops=1)\n> -> Index Scan using seats_index02 on seats (cost=0.56..4.59 rows=1 width=6) \n> (actual time=0.452..0.453 rows=1 loops=1)\n> Index Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = \n> 10::numeric) AND ((block)::text = 'GMA'::text) AND (\"row\" = \n> 26::numeric) AND (seat = 15::numeric))\n> Planning time: 2.172 ms\n> Execution time: 0.531 ms\n> (5 rows)\n> \n> But from my naive standpoint, seats_index01 is a better candidate:\n> \n> # abort; begin;\n> ROLLBACK\n> BEGIN\n> \n> # update pg_index set indisvalid = false where indexrelid = \n> 'seats_index02'::regclass;\n> # explain analyze delete from seats where (\"show\" = 58919 AND \n> \"type\" = 1 AND \"best\" = 10 AND \"block\" = \n> 'GMA' AND \"row\" =26 AND \"seat\" = 15);\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Delete on seats (cost=0.56..4.59 rows=1 width=6) (actual time=0.103..0.103 \n> rows=0 loops=1)\n> -> Index Scan using seats_index01 on seats (cost=0.56..4.59 rows=1 width=6) \n> (actual time=0.078..0.080 rows=1 loops=1)\n> Index Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = \n> 10::numeric) AND ((block)::text = 'GMA'::text) AND (\"row\" = \n> 26::numeric) AND (seat = 15::numeric))\n> Planning time: 0.535 ms\n> Execution time: 0.146 ms\n> (5 rows)\n> \n> \n> In this instance, the time difference is not huge, however in some seemingly \n> random cases where there are a lot of rows with only the \"seat\" column \n> differing the choice of seats_index02 is much larger ~ 70ms vs 0.something ms \n> with seats_index01\n> \n> I suspect some of the seemingly random cases could be where there's been an \n> update, followed by a delete since the last analyze, despite auto analyze \n> running fairly frequently.\n> \n> Any suggestions appreciated.\n> \n> Thanks\n> Glyn\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Nov 2015 16:34:35 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "Glyn Astill <[email protected]> writes:\n> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and trying to figure out why a particular index is being 
chosen over another for updates/deletes.\n> From what I can see the reason is that plans using either index have the same exactly the same cost. So rather I'm asking if there's something glaringly obvious I'm missing, or is there anything I can to to get better estimates.\n\nI think what's happening is that it's estimating that exactly one index\ntuple needs to be visited in both cases, so that the cost estimates come\nout the same. That's correct in the one case but overly optimistic in the\nother; the misestimate likely is a consequence of the index columns being\ninterdependent. For instance, if \"type\" can be predicted from the other\ncolumns then specifying it isn't really adding anything to the query\nselectivity, but the planner won't know that. We can conclude from the\nresults you've shown that the planner thinks that show+type+best+block\nis sufficient to uniquely determine a table entry, which implies that\nat least some of those columns are strongly correlated with row+seat.\n\nThe problem will probably go away by itself as your table grows, but\nif you don't want to wait, you might want to reflect on which of the index\ncolumns might be (partially?) functionally dependent on the other columns,\nand whether you could redesign the key structure to avoid that.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Nov 2015 11:44:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "----- Original Message -----\n\n> From: Tom Lane <[email protected]>\n> To: Glyn Astill <[email protected]>\n> Cc: Pgsql-performance <[email protected]>\n> Sent: Thursday, 26 November 2015, 16:44\n> Subject: Re: [PERFORM] Index scan cost calculation\n> \n> Glyn Astill <[email protected]> writes:\n>> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application \n> and trying to figure out why a particular index is being chosen over another for \n> updates/deletes.\n>> From what I can see the reason is that plans using either index have the \n> same exactly the same cost. So rather I'm asking if there's something \n> glaringly obvious I'm missing, or is there anything I can to to get better \n> estimates.\n> \n> I think what's happening is that it's estimating that exactly one index\n> tuple needs to be visited in both cases, so that the cost estimates come\n> out the same. That's correct in the one case but overly optimistic in the\n> other; the misestimate likely is a consequence of the index columns being\n> interdependent. For instance, if \"type\" can be predicted from the \n> other\n> columns then specifying it isn't really adding anything to the query\n> selectivity, but the planner won't know that. We can conclude from the\n> results you've shown that the planner thinks that show+type+best+block\n> is sufficient to uniquely determine a table entry, which implies that\n> at least some of those columns are strongly correlated with row+seat.\n> \n> The problem will probably go away by itself as your table grows, but\n> if you don't want to wait, you might want to reflect on which of the index\n> columns might be (partially?) 
functionally dependent on the other columns,\n> and whether you could redesign the key structure to avoid that.\n\n\nMany thanks for the explanation, is such a functional dependency assumed purely based optimistically on statistics gathered by analyze? My (ignorant) thinking was that those sorts of decisions would only be made from keys or constraints on the table.\n\n\nThere's no way to determine a particular seat+row combination from show+type+best+block or vice versa.\n\nWe need show+type+best+block+row+seat to identify an individual row, but approximately 90% of the table has just a space \" \" for the value of \"block\", and zeros for both \"best\" and \"row\", and for each of those you could say any show+type would almost certainly have row+seat combinations of 0+1, 0+2 and so on.\n\n\nUnfortunately it's an unnormalized legacy structure that I can't really change.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Nov 2015 17:50:55 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "Glyn Astill <[email protected]> writes:\n>> From: Tom Lane <[email protected]>\n>> The problem will probably go away by itself as your table grows, but\n>> if you don't want to wait, you might want to reflect on which of the index\n>> columns might be (partially?) functionally dependent on the other columns,\n>> and whether you could redesign the key structure to avoid that.\n\n> Many thanks for the explanation, is such a functional dependency assumed purely based optimistically on statistics gathered by analyze?\n\nWell, the point is that the selectivities associated with the individual\nWHERE clauses are assumed independent, which allows us to just multiply\nthem together. If they're not really independent then multiplication\ngives a combined selectivity that's too small. But without cross-column\nstatistics there's not much we can do to get a better estimate.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Nov 2015 13:26:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "On Thu, Nov 26, 2015 at 8:11 AM, Glyn Astill <[email protected]> wrote:\n> Hi All,\n>\n> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and trying to figure out why a particular index is being chosen over another for updates/deletes.\n>\n> From what I can see the reason is that plans using either index have the same exactly the same cost. 
So rather I'm asking if there's something glaringly obvious I'm missing, or is there anything I can to to get better estimates.\n>\n> The table is as follows and has ~ 50M rows, ~ 4.5GB in size:\n>\n> CREATE TABLE tickets.seats\n> (\n> recnum serial NOT NULL,\n> show numeric(8,0) NOT NULL,\n> type numeric(4,0) NOT NULL,\n> block character varying(8) NOT NULL,\n> \"row\" numeric(14,0) NOT NULL,\n> seat numeric(8,0) NOT NULL,\n> flag character varying(15) NOT NULL,\n> transno numeric(8,0) NOT NULL,\n> best numeric(4,0) NOT NULL,\n> \"user\" character varying(15) NOT NULL,\n> \"time\" numeric(10,0) NOT NULL,\n> date date NOT NULL,\n> date_reserved timestamp NOT NULL\n> );\n>\n> Indexes:\n> \"seats_index01\" PRIMARY KEY, btree (show, type, best, block, \"row\", seat) // (1094 MB)\n> \"seats_index00\" UNIQUE, btree (recnum) // (2423 MB)\n> \"seats_index02\" UNIQUE, btree (show, type, best, block, flag, \"row\", seat, recnum) // (2908 MB)\n\n\nWhy does the index seats_index02 exist in the first place? It looks\nlike an index designed for the benefit of a single query. In which\ncase, could flag column be moved up front? That should prevent it\nfrom looking falsely enticing.\n\nA column named \"flag\" is not usually the type of thing you expect to\nsee a range query on, so moving it leftward in the index should not be\na problem.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 28 Nov 2015 11:25:47 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "\n\n> From: Jeff Janes <[email protected]>\n> To: Glyn Astill <[email protected]>\n> Cc: Pgsql-performance <[email protected]>\n> Sent: Saturday, 28 November 2015, 19:25\n> Subject: Re: [PERFORM] Index scan cost calculation\n> \n> \n> Why does the index seats_index02 exist in the first place? It looks\n> like an index designed for the benefit of a single query. In which\n> case, could flag column be moved up front? That should prevent it\n> from looking falsely enticing.\n> \n> A column named \"flag\" is not usually the type of thing you expect to\n> see a range query on, so moving it leftward in the index should not be\n> a problem.\n> \n\n\nUnfortunately it's not possible to move flag left in this scenario.\n\nAs you say it's an issue that would not really exist in normal SQL access. The main issue is the way it's required for ordering; The index in question is used by a legacy language that accesses records sequentially as if they were direct from isam files it used historically via a driver. In some cases it steps through records on a particular show+type until a flag changes and carries on unless particular values are seen.\n\n\nIf I create the index show+best+block+row+seat then the planner appears to favour that, and all is well. Despite the startup cost estimate being the same, and total cost being 0.01 higher. This is something I fail to understand fully.\n\nTom stated the index choice is due to a selectivity underestimate. I think this may be because there is actually a correlation between \"best\"+\"block\" and \"type\", but from Toms reply my understanding was that total selectivity for the query is calculated as the product of the individual selectivities in the where clause. Are particular equality clauses ever excluded from the calculation as a result of available indexes or otherwise? 
Or is it just likely that the selection of the new index is just by chance?\n\n\nEither way I my understanding here is definitely lacking.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Nov 2015 14:03:21 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "On Mon, Nov 30, 2015 at 6:03 AM, Glyn Astill <[email protected]> wrote:\n>\n>\n>\n>\n> If I create the index show+best+block+row+seat then the planner appears to favour that, and all is well. Despite the startup cost estimate being the same, and total cost being 0.01 higher. This is something I fail to understand fully.\n\nI think usually Index scans that are estimated to be within 1% of each\nother are considered to be identical. Which one gets chosen then\ndepends on what order they are considered in, which I think is in\nimplementation dependent detail. Usually it is the most recently\ncreated one, which would explain why you got the plan switch with the\nnew index.\n\n\n> Tom stated the index choice is due to a selectivity underestimate. I think this may be because there is actually a correlation between \"best\"+\"block\" and \"type\", but from Toms reply my understanding was that total selectivity for the query is calculated as the product of the individual selectivities in the where clause.\n\nI think the problem here is not with total query selectivity estimate,\nbut rather selectivity estimates of the indexes.\n\nIt thinks the combination of (show, type, best, block) is enough to\nget down to a single row. One index adds \"flag\" to that (which is not\nuseful to the query) and the other adds \"row\" to that, which is useful\nbut the planner doesn't think it is because once you are down to a\nsingle tuple additional selectivity doesn't help.\n\n\n> Are particular equality clauses ever excluded from the calculation as a result of available indexes or otherwise?\n\nClauses that can't be used in an \"indexable\" way are excluded from the\nindex selectivity, but not from the total query selectivity.\n\n> Or is it just likely that the selection of the new index is just by chance?\n\nBingo.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Nov 2015 15:03:23 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": ">\n>Clauses that can't be used in an \"indexable\" way are excluded from the\n>index selectivity, but not from the total query selectivity.\n>\n>> Or is it just likely that the selection of the new index is just by chance?\n>\n>Bingo.\n>\n\n\nGot it, thanks! Very much appreciated.\n\n\nGlyn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 1 Dec 2015 11:22:12 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "On 11/30/15 5:03 PM, Jeff Janes wrote:\n> It thinks the combination of (show, type, best, block) is enough to\n> get down to a single row. 
One index adds \"flag\" to that (which is not\n> useful to the query) and the other adds \"row\" to that, which is useful\n> but the planner doesn't think it is because once you are down to a\n> single tuple additional selectivity doesn't help.\n\nIt occurs to me that maybe you could force this behavior by building an \nindex on a row() instead of on the individual fields. IE:\n\nCREATE INDEX ... ON( row(show, type, best, block, row) )\n\nYou would then have to query based on that:\n\nWHERE row(show, type, best, block, row) = row( 'Trans Siberian \nOrchestra', 'Music', true, 1, 1 )\n\nYou mentioned legacy code which presumably you can't modify to do that, \nbut maybe there's a way to trick the planner into it with a view...\n\nCREATE VIEW AS\nSELECT r.show, r.type, r..., etc, etc\n FROM ( SELECT *, row(show, type, best, block, row) AS r FROM table ) a\n;\n\nWhen you stick a where clause on that there's a chance it'd get turned \ninto WHERE row() = row()... but now that I see it I'm probably being \nover optimistic about that. You could probably force the issue with an \nON SELECT ON table DO INSTEAD rule, but IIRC those aren't supported.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Dec 2015 16:32:43 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index scan cost calculation" }, { "msg_contents": "\n> From: Jim Nasby <[email protected]>\n>To: Jeff Janes <[email protected]>; Glyn Astill <[email protected]> \n>Cc: Pgsql-performance <[email protected]>\n>Sent: Wednesday, 2 December 2015, 22:32\n>Subject: Re: [PERFORM] Index scan cost calculation\n> \n>\n>On 11/30/15 5:03 PM, Jeff Janes wrote:\n>> It thinks the combination of (show, type, best, block) is enough to\n>> get down to a single row. One index adds \"flag\" to that (which is not\n>> useful to the query) and the other adds \"row\" to that, which is useful\n>> but the planner doesn't think it is because once you are down to a\n>> single tuple additional selectivity doesn't help.\n>\n>It occurs to me that maybe you could force this behavior by building an \n>index on a row() instead of on the individual fields. IE:\n>\n>CREATE INDEX ... ON( row(show, type, best, block, row) )\n>\n>You would then have to query based on that:\n>\n>WHERE row(show, type, best, block, row) = row( 'Trans Siberian \n>Orchestra', 'Music', true, 1, 1 )\n>\n>You mentioned legacy code which presumably you can't modify to do that, \n>but maybe there's a way to trick the planner into it with a view...\n>\n>CREATE VIEW AS\n>SELECT r.show, r.type, r..., etc, etc\n> FROM ( SELECT *, row(show, type, best, block, row) AS r FROM table ) a\n>;\n>\n>When you stick a where clause on that there's a chance it'd get turned \n>into WHERE row() = row()... but now that I see it I'm probably being \n>over optimistic about that. 
You could probably force the issue with an \n>ON SELECT ON table DO INSTEAD rule, but IIRC those aren't supported.\n\n\nThanks, interesting idea, but no cigar.\n\nFor the moment just ensuring the seats_index01 is the last index created seems to suffice, fragile though it is.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Dec 2015 13:06:57 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index scan cost calculation" } ]
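A short sketch of the workaround Glyn settles on above: add an index that drops the low-selectivity-value "type" column (or simply recreate the preferred index last so it wins the cost tie), and verify the plan inside a throwaway transaction the same way the thread does. The index name seats_index03 is illustrative:

    BEGIN;

    CREATE INDEX seats_index03 ON tickets.seats (show, best, block, "row", seat);

    EXPLAIN ANALYZE
    DELETE FROM seats
    WHERE show = 58919 AND type = 1 AND best = 10
      AND block = 'GMA' AND "row" = 26 AND seat = 15;

    ROLLBACK;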
[ { "msg_contents": "Hello,\n\nWe're in the process of designing the database for a new service, and \nsome of our tables are going to be using UUID primary key columns.\n\nWe're trying to decide between:\n\n* UUIDv1 (timestamp/MAC uuid) and\n\n* UUIDv4 (random uuid)\n\nAnd the separate but related choice between\n\n* Generating the UUIDs client-side with the Python uuid library \n(https://docs.python.org/2/library/uuid.html) or\n\n* Letting PostgreSQL handle uuid creation with the uuid-ossp extension \n(http://www.postgresql.org/docs/9.4/static/uuid-ossp.html)\n\nIn terms of insert and indexing/retrieval performance, is there one \nclearly superior approach? If not, could somebody speak to the \nperformance tradeoffs of different approaches?\n\nThere seem to be sources online (e.g. \nhttps://blog.starkandwayne.com/2015/05/23/uuid-primary-keys-in-postgresql/ \nhttp://rob.conery.io/2014/05/29/a-better-id-generator-for-postgresql/) \nthat claim that UUIDv4 (random) will lead to damaging keyspace \nfragmentation and using UUIDv1 will avoid this.\n\nIs that true? If so, does that mean UUIDv1 must always be generated on \nthe same machine (with same MAC address) in order to benefit from the \nbetter clustering of UUIDs? What about uuid_generate_v1mc() in \nuuid-ossp? Not exposing a server's real MAC address seems like a good \nsecurity feature, but does it compromise the clustering properties of \nUUIDv1?\n\nThanks in advance,\nBrendan McCollam\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 4 Dec 2015 13:34:29 -0600", "msg_from": "Brendan McCollam <[email protected]>", "msg_from_op": true, "msg_subject": "ossp-uuid: Performance considerations for different UUID approaches?" } ]
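Since the question above weighs uuid-ossp generation against client-side generation, here is a minimal sketch of the two server-side defaults being compared, so they can be benchmarked side by side; the table names t_v1mc and t_v4 are placeholders, uuid_generate_v1mc() comes from uuid-ossp and gen_random_uuid() from pgcrypto:

    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    -- v1 variant that substitutes a random multicast MAC for the host's real one
    CREATE TABLE t_v1mc (
        id      uuid PRIMARY KEY DEFAULT uuid_generate_v1mc(),
        payload text
    );

    -- fully random v4
    CREATE TABLE t_v4 (
        id      uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        payload text
    );

Loading a few million rows into each and comparing insert time and primary-key index size gives a concrete answer for the hardware and workload in question.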
[ { "msg_contents": "Just trying to figure something out.\n\n9.3.4, CentOS6.5\n256GB Ram\n\nMaintenance_work_mem = 125GB\nEffective_Cache = 65GB\n\nI have 2 indexes running, started at the same time, they are not small and\none will take 7 hours to complete.\n\nI see almost zero disk access, very minor, not what I want to see when I\nhave an index that will be as large as it is! , but whatever.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n\n18059 postgres 20 0 5119m 2.4g 20m R 100.0 0.9 12:31.05 postmaster\n\n\n18091 postgres 20 0 9794m 6.9g 20m R 100.0 2.7 11:41.61 postmaster\nWhy are 2 index using different amounts of resident mem, when they have the\nkeys to the castle, and why are they not taking more?\n\nI've tried this with as low as 4GB of maintenance_work_mem and the numbers\nin TOP stay the same.\n\nWork_mem is 7GB.\n\nWhat am I not understanding missing?\n\nThanks\nTory\n\nJust trying to figure something out.9.3.4, CentOS6.5256GB RamMaintenance_work_mem = 125GBEffective_Cache = 65GBI have 2 indexes running, started at the same time, they are not small and one will take 7 hours to complete.I see almost zero disk access, very minor, not what I want to see when I have an index that will be as large as it is! , but whatever.\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND   \n18059 postgres  20   0 5119m 2.4g  20m R 100.0  0.9  12:31.05 postmaster                                               \n18091 postgres  20   0 9794m 6.9g  20m R 100.0  2.7  11:41.61 postmaster Why are 2 index using different amounts of resident mem, when they have the keys to the castle, and why are they not taking more?I've tried this with as low as 4GB of maintenance_work_mem and the numbers in TOP stay the same.Work_mem is 7GB.What am I not understanding missing?ThanksTory", "msg_date": "Mon, 7 Dec 2015 22:36:56 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Index Creation, why is it not consuming more memory." }, { "msg_contents": "On Mon, Dec 7, 2015 at 10:36 PM, Tory M Blue <[email protected]> wrote:\n> What am I not understanding missing?\n\nYes. There is a hard limit on the number of tuples than can be sorted\nin memory prior to PostgreSQL 9.4. It's also the case that very large\nwork_mem or maintenance_work_mem settings are unlikely to help unless\nthey result in a fully internal sort.\n\nThere is evidence that the heap that tuple sorting uses benefits from\n*lower* settings. Sometimes as low as 64MB.\n\nWe're working to make this better in 9.6.\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 7 Dec 2015 22:40:21 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Index Creation, why is it not consuming more memory." } ]
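One way to act on Peter's reply without touching postgresql.conf is to set maintenance_work_mem per session for each trial build and let trace_sort report whether the sort stayed internal or spilled to disk (this assumes the server was built with TRACE_SORT, which is the default). The table and index names are placeholders, and 64MB is just one of several values worth trying:

    SET maintenance_work_mem = '64MB';
    SET client_min_messages = log;
    SET trace_sort = on;    -- logs sort behaviour for this session

    CREATE INDEX CONCURRENTLY idx_trial ON some_big_table (some_column);

Repeating the build with a few different maintenance_work_mem values usually shows quickly whether more memory is actually buying anything on 9.3.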
[ { "msg_contents": "9.3.4 CentOS 256Gb system\n\n total_checkpoints | minutes_between_checkpoints\n\n-------------------+-----------------------------\n\n 109943 | 0.0274886580556895\n\n\nI've just bumped then to 150.\n\n\n# - Checkpoints -\n\ncheckpoint_segments = 150\n\ncheckpoint_timeout = 5min\n\ncheckpoint_completion_target = 0.9\n\ncheckpoint_warning = 3600s\n\n\nWAL buffers are 32MB\n\nVery intense work loads during the eve, much lighter during the day. I need\nthe help when I attempt to shove a ton of data in the DB\n\n\nAny suggestions?\n\n\nTory\n\n9.3.4 CentOS 256Gb system\n total_checkpoints | minutes_between_checkpoints \n-------------------+-----------------------------\n            109943 |          0.0274886580556895I've just bumped then to 150.# - Checkpoints -checkpoint_segments = 150             checkpoint_timeout = 5min              checkpoint_completion_target = 0.9     checkpoint_warning = 3600s WAL buffers are 32MBVery intense work loads during the eve, much lighter during the day. I need the help when I attempt to shove a ton of data in the DBAny suggestions?Tory", "msg_date": "Thu, 10 Dec 2015 01:12:19 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "checkpoints, proper config" }, { "msg_contents": "On 12/10/2015 01:12 AM, Tory M Blue wrote:\n\n> checkpoint_timeout = 5min\n>\n> checkpoint_completion_target = 0.9\n>\n\nThe above is your problem. Make checkpoint_timeout = 1h . Also, \nconsidering turning synchronous_commit off.\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Dec 2015 09:20:59 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "On Thu, Dec 10, 2015 at 9:20 AM, Joshua D. Drake <[email protected]>\nwrote:\n\n> On 12/10/2015 01:12 AM, Tory M Blue wrote:\n>\n> checkpoint_timeout = 5min\n>>\n>> checkpoint_completion_target = 0.9\n>>\n>>\n> The above is your problem. Make checkpoint_timeout = 1h . Also,\n> considering turning synchronous_commit off.\n>\n> JD\n>\n\nThiis valid regardless of the workload? Seems that I would be storing a ton\nof data and writing it once an hour, so would have potential perf hits on\nthe hour. I guess I'm not too up to date on the checkpoint configuration.\n\nMy settings on this particular DB\n\nfsync = off\n\n#synchronous_commit = on\nThanks\nTory\n\nOn Thu, Dec 10, 2015 at 9:20 AM, Joshua D. Drake <[email protected]> wrote:On 12/10/2015 01:12 AM, Tory M Blue wrote:\n\n\ncheckpoint_timeout = 5min\n\ncheckpoint_completion_target = 0.9\n\n\n\nThe above is your problem. Make checkpoint_timeout = 1h . Also, considering turning synchronous_commit off.\n\nJDThiis valid regardless of the workload? Seems that I would be storing a ton of data and writing it once an hour, so would have potential perf hits on the hour. I guess I'm not too  up to date on the checkpoint configuration. 
My settings on this particular DB\nfsync = off#synchronous_commit = on          ThanksTory", "msg_date": "Thu, 10 Dec 2015 10:35:22 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "On 12/10/2015 10:35 AM, Tory M Blue wrote:\n\n>\n> Thiis valid regardless of the workload?\n\nYes.\n\n\n> Seems that I would be storing a\n> ton of data and writing it once an hour, so would have potential perf\n> hits on the hour. I guess I'm not too up to date on the checkpoint\n> configuration.\n\nNo, that isn't how it works.\n\nhttp://www.postgresql.org/docs/9.4/static/wal-configuration.html\n\n>\n> My settings on this particular DB\n>\n> fsync = off\n\nThis will cause data corruption in the event of improper shutdown.\n\n>\n> #synchronous_commit = on\n>\n\nI would turn that off and turn fsync back on.\n\nJD\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Dec 2015 12:00:38 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "On Thu, Dec 10, 2015 at 12:00 PM, Joshua D. Drake <[email protected]>\nwrote:\n\n> On 12/10/2015 10:35 AM, Tory M Blue wrote:\n>\n>\n>> Thiis valid regardless of the workload?\n>>\n>\n> Yes.\n>\n>\n> Seems that I would be storing a\n>> ton of data and writing it once an hour, so would have potential perf\n>> hits on the hour. I guess I'm not too up to date on the checkpoint\n>> configuration.\n>>\n>\n> No, that isn't how it works.\n>\n> http://www.postgresql.org/docs/9.4/static/wal-configuration.html\n>\n>\nThanks will give this a read and get my self up to snuff..\n\n\n>\n>> My settings on this particular DB\n>>\n>> fsync = off\n>>\n>\n> This will cause data corruption in the event of improper shutdown.\n>\n>\n>> #synchronous_commit = on\n>>\n>>\n> I would turn that off and turn fsync back on.\n>\n>\nsynchronous is commented out, is it on by default?\n\nThis is a slony slave node, so I'm not too worried about this particular\nhost losing it's data, thus fsync is off,\n\nthanks again sir\n\nTory\n\nOn Thu, Dec 10, 2015 at 12:00 PM, Joshua D. Drake <[email protected]> wrote:On 12/10/2015 10:35 AM, Tory M Blue wrote:\n\n\n\nThiis valid regardless of the workload?\n\n\nYes.\n\n\n\nSeems that I would be storing a\nton of data and writing it once an hour, so would have potential perf\nhits on the hour. I guess I'm not too  up to date on the checkpoint\nconfiguration.\n\n\nNo, that isn't how it works.\n\nhttp://www.postgresql.org/docs/9.4/static/wal-configuration.html\nThanks will give this a read and get my self up to snuff.. \n\n\nMy settings on this particular DB\n\nfsync = off\n\n\nThis will cause data corruption in the event of improper shutdown.\n\n\n\n#synchronous_commit = on\n\n\n\nI would turn that off and turn fsync back on.synchronous is commented out, is it on by default? 
This is a slony slave node, so I'm not too worried about this particular host losing it's data, thus fsync is off,thanks again sirTory", "msg_date": "Thu, 10 Dec 2015 12:58:17 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "On 12/10/2015 12:58 PM, Tory M Blue wrote:\n\n> synchronous is commented out, is it on by default?\n\nYes it is on by default.\n\n>\n> This is a slony slave node, so I'm not too worried about this particular\n> host losing it's data, thus fsync is off,\n>\n> thanks again sir\n>\n> Tory\n>\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Dec 2015 13:14:02 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "On 12/10/2015 06:20 PM, Joshua D. Drake wrote:\n> On 12/10/2015 01:12 AM, Tory M Blue wrote:\n>\n>> checkpoint_timeout = 5min\n>>\n>> checkpoint_completion_target = 0.9\n>>\n>\n> The above is your problem. Make checkpoint_timeout = 1h . Also,\n> considering turning synchronous_commit off.\n\nI doubt that. The report mentioned that the checkpoints happen 0.027... \nminutes apart (assuming the minutes_between_checkpoints is computed in a \nsane way). That's way below 5 minutes, so the checkpoints have to be \ntriggered by something else - probably by running out of segments, but \nwe don't know the value before Tory increased it to 150.\n\nAlso, recommending synchronous_commit=off is a bit silly, because not \nonly it introduces data loss issues, but it'll likely cause even more \nfrequent checkpoints.\n\nTory, please enable logging of checkpoints (log_checkpoints=on). Also, I \ndon't think it makes much sense to set\n\n (checkpoint_warning > checkpoint_timeout)\n\nas it kinda defeats the whole purpose of the warning.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Dec 2015 23:40:18 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "Tomas Vondra wrote:\n\n> Also, I don't think it makes much sense to set\n> \n> (checkpoint_warning > checkpoint_timeout)\n> \n> as it kinda defeats the whole purpose of the warning.\n\nI agree, but actually, what is the sense of checkpoint_warning? 
I think\nit was useful back when we didn't have log_checkpoints, but now that we\nhave detailed checkpoint logging I think it's pretty much useless noise.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Dec 2015 19:45:41 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "\n\nOn 12/10/2015 11:45 PM, Alvaro Herrera wrote:\n> Tomas Vondra wrote:\n>\n>> Also, I don't think it makes much sense to set\n>>\n>> (checkpoint_warning > checkpoint_timeout)\n>>\n>> as it kinda defeats the whole purpose of the warning.\n>\n> I agree, but actually, what is the sense of checkpoint_warning? I think\n> it was useful back when we didn't have log_checkpoints, but now that we\n> have detailed checkpoint logging I think it's pretty much useless noise.\n>\n\nNot entirely. The WARNING only triggers when you get below the 30s (or \nwhatever value is set in the config) and explicitly warns you about \ndoing checkpoints too often. log_checkpoints=on logs all checkpoints and \nyou have to do further analysis on the data (and it's just LOG).\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Dec 2015 02:20:29 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "On 12/10/15 2:58 PM, Tory M Blue wrote:\n> This is a slony slave node, so I'm not too worried about this particular\n> host losing it's data, thus fsync is off,\n\nThe Amazon RDS team actually benchmarked fsync=off vs sync commit off \nand discovered that you get better performance turning sync commit off \nand leaving fsync alone in some cases. In other cases the difference \nisn't enough to be worth it.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Dec 2015 17:45:40 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" }, { "msg_contents": "On 12/10/15 7:20 PM, Tomas Vondra wrote:\n>> I agree, but actually, what is the sense of checkpoint_warning? I think\n>> it was useful back when we didn't have log_checkpoints, but now that we\n>> have detailed checkpoint logging I think it's pretty much useless noise.\n>>\n>\n> Not entirely. The WARNING only triggers when you get below the 30s (or\n> whatever value is set in the config) and explicitly warns you about\n> doing checkpoints too often. log_checkpoints=on logs all checkpoints and\n> you have to do further analysis on the data (and it's just LOG).\n\nAgree, though I also find it pretty useless to set it significantly less \nthan checkpoint_timeout in almost all cases. 
If you want ~5 minutes \nbetween checkpoints checkpoint_timeout=30 seconds is way too low to be \nuseful. We should really change the default.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Dec 2015 17:47:36 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints, proper config" } ]
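Pulling the thread's suggestions together into one postgresql.conf sketch for the 9.3 box above; the numbers are starting points to be tuned against the log, not prescriptions:

    checkpoint_segments = 150            # raise further if the log still shows segment-driven checkpoints
    checkpoint_timeout = 15min           # 5min is clearly too short here; some in the thread go as high as 1h
    checkpoint_completion_target = 0.9
    checkpoint_warning = 30s             # keep it well below checkpoint_timeout so the warning means something
    log_checkpoints = on                 # per Tomas's suggestion

With log_checkpoints on, checkpoints reported as starting because of "xlog" rather than "time" are the ones triggered by running out of WAL segments, which is what the 0.027-minute interval in the original report points at.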
[ { "msg_contents": "Guys I m trying to upgrade postgresql 9.3 -> 9.4\r\n\r\nOS VERSION : CentOS release 6.3 (Final) (2.6.32-279.el6.x86_64.x86_64)\r\n\r\n\r\nWhen I run the upgrade script /usr/pgsql-9.4/bin/pg_upgrade\r\n\r\nIt gives me this error in the log file.\r\n\r\n\r\npg_restore: creating VIEW tables\r\npg_restore: [archiver (db)] Error while PROCESSING TOC:\r\npg_restore: [archiver (db)] Error from TOC entry 1206; 1259 10190710 VIEW tables postgres\r\npg_restore: [archiver (db)] could not execute query: ERROR: column pg_class.reltoastidxid does not exist\r\nLINE 19: ELSE ( SELECT \"pg_class\".\"reltoastidxid\"\r\n ^\r\n Command was:\r\n-- For binary upgrade, must preserve pg_type oid\r\nSELECT binary_upgrade.set_next_pg_type_oid('10190712'::pg_catalog.oid);\r\n\r\n\r\n\r\nPlease let me know what is going on here and what I can do to get this fixed\r\n\r\nThanks\r\nPrabhjot\r\n\r\n\r\n\r\n\n\n\n\n\n\nGuys I m trying to upgrade postgresql 9.3 ->  9.4 \n\n\nOS VERSION : CentOS release 6.3 (Final) (2.6.32-279.el6.x86_64.x86_64)\n\n\n\n\nWhen I run the upgrade script  /usr/pgsql-9.4/bin/pg_upgrade\n\n\nIt gives me this error  in the log file. \n\n\n\n\n\npg_restore: creating VIEW tables\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 1206; 1259 10190710 VIEW tables postgres\npg_restore: [archiver (db)] could not execute query: ERROR:  column pg_class.reltoastidxid does not exist\nLINE 19:             ELSE ( SELECT \"pg_class\".\"reltoastidxid\"\n                                   ^\n    Command was:\n-- For binary upgrade, must preserve pg_type oid\nSELECT binary_upgrade.set_next_pg_type_oid('10190712'::pg_catalog.oid);\n\n\n\n\n\n\n\nPlease let me know what is going on here and what I can do to get this fixed\n\n\nThanks\nPrabhjot", "msg_date": "Thu, 10 Dec 2015 20:35:23 +0000", "msg_from": "\"Sheena, Prabhjot\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql upgrade from 9.3 to 9.4 error" } ]
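The error above comes from restoring a user-defined view named "tables" whose definition reads pg_class.reltoastidxid, a catalog column that no longer exists in 9.4. A hedged sketch for locating such views on the old 9.3 cluster so they can be dropped or recreated before re-running pg_upgrade:

    SELECT schemaname, viewname
    FROM pg_views
    WHERE definition LIKE '%reltoastidxid%';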
[ { "msg_contents": "Hi Folks,\n\nI am a newbie to this mailing list. Tried searching the forum but didn't\nfind something similar to the problem I am facing. \n\nBackground:\nI have a Rails app with Postgres db. For certain reports, I have to join\nmultiple tables. However, certain join queries are dog slow and I am\nwondering if I am missing any index. \n\nProblem:\nI have the following tables:\ncompanies \n\n // belongs to a company\nad_accounts: (id, company_id, other fields)\n\n // belongs to an ad account and can have many custom tag groups and tags\nfb_ad_campaigns (id, ad_account_id, other fields)\n\n // belongs to a campaign and can have many custom tag groups and tags\nfb_ad_sets: (id, ad_account_id, fb_ad_campaign_id, other fields)\n\n // belongs to an ad set and can have many custom tag groups and tags\nfb_ad_groups (id, ad_account_id, fb_ad_campaign_id, fb_ad_set_id, other\nfields)\n\n // belongs to a company and can have many campaigns/adsets/ads\ncustom_tag_groups: (id, company_id, other fields)\n\n// belongs to a tag group and can have many campaigns/adsets/ads\ncustom_tags (id, company_id, custom_tag_group_id, other fields)\n\n// association between tags and campaigns\ncustom_tags_fb_ad_campaigns : (custom_tag_id, fb_ad_campaign_id)\n\n// association between tags and ad sets\ncustom_tags_fb_ad_sets: (custom_tag_id, fb_ad_set_id)\n\n// association between tags and ad groups\ncustom_tags_fb_ad_groups: (custom_tag_id, fb_ad_group_id)\n\n// stores performance data of each campaign\nfb_ad_campaign_reports : (id, ad_account_id, fb_ad_campaign_id,\nstart_timestamp, other fields)\n\n// stores performance data of each adset\nfb_ad_set_reports : (id, ad_account_id, fb_ad_campaign_id, fb_ad_set_id, \nstart_timestamp, other fields)\n\n// stores performance data of each adgroup\nfb_ad_group_reports : (id, ad_account_id, fb_ad_campaign_id, fb_ad_set_id,\nfb_ad_group_id, start_timestamp, other fields)\n\nNow a query where we ask give me all custom tags performance data group by a\nparticular custom tag group where ad sets are one of them in a big array\nthen the join query is really slow (takes over 132 seconds!):\nSELECT \"custom_tags\".\"custom_tag_group_id\", custom_tags.name as name,\nsum(impressions) as total_impressions, sum(clicks) as total_clicks,\nsum(reach) as total_reach, sum(spend_rupees) as total_spend, sum(spend) as\nspend_account, sum(actions_link_click) as website_clicks,\nsum(actions_mobile_app_install) as mobile_app_install,\nsum(actions_app_custom_event) as app_custom_event FROM \"custom_tags\" INNER\nJOIN \"custom_tags_fb_ad_groups\" ON\n\"custom_tags_fb_ad_groups\".\"custom_tag_id\" = \"custom_tags\".\"id\" INNER JOIN\n\"fb_ad_groups\" ON \"fb_ad_groups\".\"id\" =\n\"custom_tags_fb_ad_groups\".\"fb_ad_group_id\" INNER JOIN \"fb_ad_group_reports\"\nON \"fb_ad_group_reports\".\"fb_ad_group_id\" = \"fb_ad_groups\".\"id\" WHERE\n\"fb_ad_group_reports\".\"fb_ad_group_id\" IN (SELECT \"fb_ad_groups\".\"id\" FROM\n\"fb_ad_groups\" WHERE \"fb_ad_groups\".\"id\" IN (SELECT \"fb_ad_groups\".\"id\" FROM\n\"fb_ad_groups\" INNER JOIN \"custom_tags_fb_ad_groups\" ON \"fb_ad_groups\".\"id\"\n= \"custom_tags_fb_ad_groups\".\"fb_ad_group_id\" INNER JOIN \"custom_tags\" ON\n\"custom_tags_fb_ad_groups\".\"custom_tag_id\" = \"custom_tags\".\"id\" WHERE\n\"custom_tags\".\"custom_tag_group_id\" = 235 AND \"fb_ad_groups\".\"ad_account_id\"\n= 29) AND \"fb_ad_groups\".\"fb_ad_set_id\" IN (166302, 39917, 78123, 194477,\n145218, 177579, 89732, 177674, 135464, 34510, 167214, 193144, 
173264,\n168117, 140425, 169398, 146744, 170529, 183603, 173473, 88015, 117229,\n50056, 135148, 116862, 169811, 57620, 159177, 57850, 177677, 127638, 187933,\n167885, 73687, 191732, 135058, 186625, 156565, 135150, 164615, 67089,\n146341, 168521, 183634, 138699, 165182, 156675, 134834, 169774, 152209,\n67048, 146801, 75084, 165749, 70322, 152206, 143700, 139109, 169659, 117316,\n134473, 152105, 94276, 163671, 162461, 41502, 75087, 45153, 184190, 185004,\n95300, 160507, 189382, 135461, 169085, 117485, 166205, 57572, 132230,\n187890, 44333, 185096, 45154, 149021, 177116, 177580, 160509, 89730, 50007,\n161811, 164414, 166207, 148633, 166786, 78189, 185745, 169564, 185682,\n60898, 165107, 165180, 190722, 175737, 89694, 140764, 148253, 173610,\n139106, 162788, 174475, 168806, 135151, 166542, 185826, 173472, 56723,\n167017, 177354, 191734, 185618, 169808, 95798, 135640, 143698, 145272,\n57616, 126137, 171387, 146690, 185843, 169396, 140427, 35003, 44844, 179589,\n50061, 67091, 167015, 162021, 159175, 185842, 169292, 152208, 34909, 161913,\n152107, 165337, 67057, 159277, 134998, 166545, 166784, 60895, 135462, 97173,\n162133, 191615, 78188, 117226, 191735, 50069, 191617, 185007, 140496,\n134835, 173263, 162018, 121551, 174474, 67058, 95797, 148737, 140326,\n162791, 34547, 140323, 136350, 34752, 97177, 160609, 57625, 140590, 172303,\n175740, 34932, 67086, 136222, 135063, 179591, 117561, 139140, 194476,\n175738, 34492, 161345, 136468, 177151, 94274, 166923, 34977, 162240, 167216,\n169563, 121073, 117478, 138086, 135231, 165104, 34755, 148635, 117313,\n67051, 127197, 127478, 34826, 162132, 132229, 191616, 164928, 116859,\n171386, 117318, 185038, 148302, 173475, 50082, 148250, 148252, 84923,\n170990, 176730, 182258, 162243, 161910, 34904, 166671, 148734, 163570,\n165336, 126134, 148736, 67055, 45148, 173613, 190906, 34589, 89733, 161243,\n184975, 152207, 145271, 117314, 140251, 194499, 190721, 185620, 169562,\n170987, 169087, 139143, 140253, 177675, 140147, 177759, 143540, 169293,\n183635, 185034, 116911, 185033, 170989, 165105, 170908, 185063, 177121,\n75085, 132263, 185720, 164415, 185037, 45119, 177154, 68771, 135573, 159174,\n169807, 166785, 161808, 145273, 147682, 34601, 120501, 35093, 140116,\n169812, 70323, 175902, 191644, 175981, 117483, 185744, 186290, 134870,\n179546, 161246, 134832, 121549, 164929, 183651, 178933, 176063, 162241,\n146803, 78121, 34936, 120500, 135062, 187934, 151896, 166300, 34564, 185810,\n117315, 190652, 135641, 117279, 158630, 159278, 95299, 166920, 34570,\n175654, 134863, 95275, 140150, 68769, 127633, 95274, 140591, 60897, 167887,\n185098, 116861, 132419, 117559, 133865, 173265, 126601, 39693, 162350,\n177153, 168630, 127578, 158398, 136219, 152382, 126605, 34310, 167217,\n169658, 169294, 135059, 140372, 134864, 148303, 95778, 173612, 84920,\n177357, 135229, 174488, 185586, 166762, 193145, 167016, 132265, 88016,\n169809, 126132, 165825, 166763, 39698, 138810, 177256, 134866, 95800,\n146691, 149284, 174486, 173611, 169295, 151788, 151789, 34573, 178642,\n185619, 162353, 178701, 190903, 151895, 117484, 169773, 117479, 160153,\n164617, 138764, 133970, 127474, 132418, 182257, 174487, 178511, 89731,\n121072, 50040, 134750, 165258, 165905, 133923, 173694, 117477, 185683,\n136220, 166206, 141678, 185660, 133922, 177578, 185066, 139141, 185662,\n183602, 134751, 158397, 78412, 84925, 177117, 156566, 185064, 146339,\n126609, 44291, 140250, 166702, 160611, 190651, 132421, 156672, 167886,\n117227, 140426, 168120, 126607, 45179, 136348, 168119, 70324, 145270, 34804,\n158628, 134865, 
185697, 116864, 75083, 132228, 173474, 97172, 138765, 34578,\n166299, 178643, 179549, 161244, 50077, 135149, 165339, 185035, 185766,\n167884, 193147, 175651, 34701, 160612, 190905, 67087, 138809, 34696, 162242,\n95777, 185097, 127573, 163569, 177676, 185008, 34502, 34648, 78124, 145140,\n135574, 121075, 183587, 78190, 175903, 194501, 35000, 177581, 43464, 135644,\n94275, 166670, 95273, 160051, 116907, 168804, 172381, 95801, 135645, 132267,\n166382, 127477, 171389, 117480, 175739, 151786, 165183, 49992, 44335, 60785,\n169660, 165827, 184987, 179547, 161912, 169395, 148354, 138647, 34817,\n34685, 162789, 165907, 60845, 171421, 191645, 165826, 145220, 117481, 67088,\n97174, 164926, 146802, 95799, 34822, 168631, 68773, 34673, 191618, 173691,\n67056, 165338, 50039, 127574, 161245, 138085, 158627, 186289, 184985,\n141679, 162351, 145073, 152384, 117317, 168118, 127576, 165906, 116910,\n138915, 94277, 169810, 148356, 127577, 170906, 117277, 166208, 176062,\n178491, 140374, 34442, 173692, 185661, 161347, 165070, 121547, 50023,\n185067, 156673, 134942, 177254, 121074, 171418, 70319, 186309, 133921,\n127476, 165261, 169086, 178639, 126602, 175979, 185856, 172301, 179588,\n117231, 148357, 34759, 174473, 184974, 177355, 140765, 126606, 140149,\n170907, 162131, 149019, 117558, 185858, 121552, 190904, 170675, 178935,\n67053, 135463, 68770, 138762, 146692, 127479, 160510, 117281, 169194, 78122,\n34486, 173690, 127634, 133918, 175652, 78414, 177356, 146743, 75086, 185696,\n185065, 140148, 140589, 49977, 185036, 165747, 176728, 34813, 145143,\n140117, 175653, 126135, 165748, 117230, 163669, 173608, 177253, 97176,\n143696, 146340, 138698, 166765, 116863, 134747, 94278, 185585, 178506,\n138811, 149285, 148355, 34997, 67054, 168805, 194502, 187931, 186623,\n160508, 126136, 147683, 117228, 136349, 152381, 95776, 185640, 88014,\n143538, 89695, 34773, 135230, 135061, 166544, 121548, 39854, 173262, 166543,\n138648, 169565, 166384, 140115, 140495, 190653, 165746, 139107, 185722,\n160610, 173609, 78416, 126139, 89697, 132262, 136466, 34948, 57578, 34761,\n34503, 84921, 158629, 170530, 164618, 183650, 34744, 95773, 165106, 135643,\n127575, 185811, 178936, 39956, 34611, 185681, 183588, 191646, 183619,\n135642, 117280, 185003, 189380, 39779, 136221, 57874, 127196, 135001, 64480,\n146290, 190654, 151894, 41511, 135575, 127636, 121550, 145219, 140324,\n152104, 135060, 171388, 60776, 134748, 149283, 177761, 170909, 127475,\n149020, 194500, 170603, 39929, 34323, 185641, 172302, 136465, 185698, 89698,\n43759, 166381, 140428, 117278, 164616, 134882, 175978, 165824, 182260,\n140762, 178702, 120505, 165259, 156564, 39765, 148251, 162459, 160053,\n160155, 132231, 127198, 173695, 160052, 156563, 184745, 57878, 67090,\n143697, 185068, 186310, 159279, 177758, 168629, 161346, 67052, 78413,\n185094, 140494, 148304, 177120, 44278, 187932, 49836, 84922, 158396, 126138,\n189381, 95774, 162352, 44816, 185584, 70320, 97175, 146741, 169197, 159276,\n184973, 127159, 34988, 34556, 126133, 146287, 67047, 120504, 174472, 126140,\n70321, 39804, 194478, 165260, 151897, 132266, 49963, 56721, 127637, 166922,\n139108, 146689, 190720, 117557, 67050, 166701, 162020, 146342, 134946,\n178510, 134833, 49971, 176729, 172380, 117282, 57877, 179590, 186624,\n145141, 147680, 140114, 177255, 146742, 168807, 135000, 163672, 166787,\n184187, 185827, 133920, 133971, 160154, 186311, 135460, 167215, 140325,\n136351, 34434, 178638, 134999, 193146, 190719, 179548, 95775, 166301, 34683,\n116909, 143537, 191733, 116912, 172378, 169084, 167014, 44853, 
178486,\n176064, 162460, 166921, 135570, 178934, 159176, 138812, 163567, 172379,\n183618, 165071, 175904, 59468, 34897, 165181, 40055, 117482, 158399, 166383,\n75082, 44319, 183017, 78415, 120503, 78187, 146288, 34966, 178490, 40048,\n178507, 84924, 39923, 34317, 149022, 185639, 68772, 187891, 194475, 185857,\n34426, 34970, 127158, 177760, 162790, 163568, 163670, 143539, 43741, 39848,\n116908, 148634, 132417, 126604, 133972, 120502, 75223, 152383, 126608,\n94273, 148735, 185767, 162130, 147681, 88013, 165904, 171419, 139142,\n185765, 134949, 117560, 135572, 132264, 168524, 34594, 161348, 34583,\n168522, 136467, 191647, 187889, 133973, 117556, 170988, 89734, 134474,\n34559, 185093, 164927, 145072, 126603, 67049, 145139, 184986, 140373,\n185746, 171420, 39799, 182259, 169397, 168632, 135459, 176065, 45130,\n148305, 140763, 134944, 186288, 34694, 169195, 132422, 174489, 34607,\n132420, 140371, 185721, 127635, 135228, 44708, 89735, 168523, 138914,\n152106, 160050, 60624, 151787, 185005, 133864, 185095, 140592, 185006,\n145221, 161911, 49774, 178487, 95276, 95802, 176727, 169661, 34944, 135571,\n169196, 175980, 160156, 78411, 34493, 44437, 133919, 166764, 34922, 60846,\n34914, 34920, 148632, 162458, 34953, 34363, 173693, 140493, 149286, 175905,\n170674, 116860, 57873, 138763, 172300, 166703, 140252, 146804, 57628,\n183018, 156674, 50114, 177152, 170602, 162019, 146289, 161809)) AND\n(start_timestamp >= '2015-10-31 18:30:00' and start_timestamp < '2015-11-30\n18:30:00') AND \"custom_tags\".\"company_id\" = 12 AND\n\"fb_ad_groups\".\"ad_account_id\" = 29 AND \"custom_tags\".\"custom_tag_group_id\"\n= 235 GROUP BY \"custom_tags\".\"custom_tag_group_id\", custom_tags.name\n\nI used Explain analyse to see where the bottleneck are and here is the\nresult: http://explain.depesz.com/s/opM7\n\nAm I missing some index that will speed up the computation?\n\nRelevant indices that I have:\nfb_ad_groups: \n(ad_account_id, fb_ad_set_id)\n(ad_account_id, fb_ad_campaign_id)\n(fb_ad_set_id)\n(fb_ad_campaign_id)\n\nfb_ad_group_reports:\n(ad_account_id, fb_ad_campaign_id, start_timestamp)\n(ad_account_id, fb_ad_group_id, start_timestamp)\n(ad_account_id, fb_ad_set_id, start_timestamp)\n(ad_account_id, start_timestamp)\n(fb_ad_set_id, start_timestamp)\n(fb_ad_group_id, start_timestamp)\n(fb_ad_campaign_id, start_timestamp)\n\ncustom_tags:\n(custom_tag_group_id)\n(company_id, custom_tag_group_id)\n\ncustom_tags_fb_ad_groups:\n(custom_tag_id, fb_ad_group_id)\n\n\nThanks and apologies for the long post!\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Advise-needed-for-a-join-query-with-a-where-conditional-tp5877097.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Dec 2015 21:38:22 -0700 (MST)", "msg_from": "ankur_adwyze <[email protected]>", "msg_from_op": true, "msg_subject": "Advise needed for a join query with a where conditional" }, { "msg_contents": "Note that the inner query is fast (takes only about 0.5 second):\n\nSELECT \"fb_ad_groups\".\"id\" FROM \"fb_ad_groups\" WHERE \"fb_ad_groups\".\"id\" IN\n(SELECT \"fb_ad_groups\".\"id\" FROM \"fb_ad_groups\" INNER JOIN\n\"custom_tags_fb_ad_groups\" ON \"fb_ad_groups\".\"id\" =\n\"custom_tags_fb_ad_groups\".\"fb_ad_group_id\" INNER JOIN \"custom_tags\" ON\n\"custom_tags_fb_ad_groups\".\"custom_tag_id\" = \"custom_tags\".\"id\" 
WHERE\n\"custom_tags\".\"custom_tag_group_id\" = 235 AND \"fb_ad_groups\".\"ad_account_id\"\n= 29) AND \"fb_ad_groups\".\"fb_ad_set_id\" IN (166302, 39917, 78123, 194477,\n145218, 177579, 89732, 177674, 135464, 34510, 167214, 193144, 173264,\n168117, 140425, 169398, 146744, 170529, 183603, 173473, 88015, 117229,\n50056, 135148, 116862, 169811, 57620, 159177, 57850, 177677, 127638, 187933,\n167885, 73687, 191732, 135058, 186625, 156565, 135150, 164615, 67089,\n146341, 168521, 183634, 138699, 165182, 156675, 134834, 169774, 152209,\n67048, 146801, 75084, 165749, 70322, 152206, 143700, 139109, 169659, 117316,\n134473, 152105, 94276, 163671, 162461, 41502, 75087, 45153, 184190, 185004,\n95300, 160507, 189382, 135461, 169085, 117485, 166205, 57572, 132230,\n187890, 44333, 185096, 45154, 149021, 177116, 177580, 160509, 89730, 50007,\n161811, 164414, 166207, 148633, 166786, 78189, 185745, 169564, 185682,\n60898, 165107, 165180, 190722, 175737, 89694, 140764, 148253, 173610,\n139106, 162788, 174475, 168806, 135151, 166542, 185826, 173472, 56723,\n167017, 177354, 191734, 185618, 169808, 95798, 135640, 143698, 145272,\n57616, 126137, 171387, 146690, 185843, 169396, 140427, 35003, 44844, 179589,\n50061, 67091, 167015, 162021, 159175, 185842, 169292, 152208, 34909, 161913,\n152107, 165337, 67057, 159277, 134998, 166545, 166784, 60895, 135462, 97173,\n162133, 191615, 78188, 117226, 191735, 50069, 191617, 185007, 140496,\n134835, 173263, 162018, 121551, 174474, 67058, 95797, 148737, 140326,\n162791, 34547, 140323, 136350, 34752, 97177, 160609, 57625, 140590, 172303,\n175740, 34932, 67086, 136222, 135063, 179591, 117561, 139140, 194476,\n175738, 34492, 161345, 136468, 177151, 94274, 166923, 34977, 162240, 167216,\n169563, 121073, 117478, 138086, 135231, 165104, 34755, 148635, 117313,\n67051, 127197, 127478, 34826, 162132, 132229, 191616, 164928, 116859,\n171386, 117318, 185038, 148302, 173475, 50082, 148250, 148252, 84923,\n170990, 176730, 182258, 162243, 161910, 34904, 166671, 148734, 163570,\n165336, 126134, 148736, 67055, 45148, 173613, 190906, 34589, 89733, 161243,\n184975, 152207, 145271, 117314, 140251, 194499, 190721, 185620, 169562,\n170987, 169087, 139143, 140253, 177675, 140147, 177759, 143540, 169293,\n183635, 185034, 116911, 185033, 170989, 165105, 170908, 185063, 177121,\n75085, 132263, 185720, 164415, 185037, 45119, 177154, 68771, 135573, 159174,\n169807, 166785, 161808, 145273, 147682, 34601, 120501, 35093, 140116,\n169812, 70323, 175902, 191644, 175981, 117483, 185744, 186290, 134870,\n179546, 161246, 134832, 121549, 164929, 183651, 178933, 176063, 162241,\n146803, 78121, 34936, 120500, 135062, 187934, 151896, 166300, 34564, 185810,\n117315, 190652, 135641, 117279, 158630, 159278, 95299, 166920, 34570,\n175654, 134863, 95275, 140150, 68769, 127633, 95274, 140591, 60897, 167887,\n185098, 116861, 132419, 117559, 133865, 173265, 126601, 39693, 162350,\n177153, 168630, 127578, 158398, 136219, 152382, 126605, 34310, 167217,\n169658, 169294, 135059, 140372, 134864, 148303, 95778, 173612, 84920,\n177357, 135229, 174488, 185586, 166762, 193145, 167016, 132265, 88016,\n169809, 126132, 165825, 166763, 39698, 138810, 177256, 134866, 95800,\n146691, 149284, 174486, 173611, 169295, 151788, 151789, 34573, 178642,\n185619, 162353, 178701, 190903, 151895, 117484, 169773, 117479, 160153,\n164617, 138764, 133970, 127474, 132418, 182257, 174487, 178511, 89731,\n121072, 50040, 134750, 165258, 165905, 133923, 173694, 117477, 185683,\n136220, 166206, 141678, 185660, 133922, 177578, 185066, 139141, 
185662,\n183602, 134751, 158397, 78412, 84925, 177117, 156566, 185064, 146339,\n126609, 44291, 140250, 166702, 160611, 190651, 132421, 156672, 167886,\n117227, 140426, 168120, 126607, 45179, 136348, 168119, 70324, 145270, 34804,\n158628, 134865, 185697, 116864, 75083, 132228, 173474, 97172, 138765, 34578,\n166299, 178643, 179549, 161244, 50077, 135149, 165339, 185035, 185766,\n167884, 193147, 175651, 34701, 160612, 190905, 67087, 138809, 34696, 162242,\n95777, 185097, 127573, 163569, 177676, 185008, 34502, 34648, 78124, 145140,\n135574, 121075, 183587, 78190, 175903, 194501, 35000, 177581, 43464, 135644,\n94275, 166670, 95273, 160051, 116907, 168804, 172381, 95801, 135645, 132267,\n166382, 127477, 171389, 117480, 175739, 151786, 165183, 49992, 44335, 60785,\n169660, 165827, 184987, 179547, 161912, 169395, 148354, 138647, 34817,\n34685, 162789, 165907, 60845, 171421, 191645, 165826, 145220, 117481, 67088,\n97174, 164926, 146802, 95799, 34822, 168631, 68773, 34673, 191618, 173691,\n67056, 165338, 50039, 127574, 161245, 138085, 158627, 186289, 184985,\n141679, 162351, 145073, 152384, 117317, 168118, 127576, 165906, 116910,\n138915, 94277, 169810, 148356, 127577, 170906, 117277, 166208, 176062,\n178491, 140374, 34442, 173692, 185661, 161347, 165070, 121547, 50023,\n185067, 156673, 134942, 177254, 121074, 171418, 70319, 186309, 133921,\n127476, 165261, 169086, 178639, 126602, 175979, 185856, 172301, 179588,\n117231, 148357, 34759, 174473, 184974, 177355, 140765, 126606, 140149,\n170907, 162131, 149019, 117558, 185858, 121552, 190904, 170675, 178935,\n67053, 135463, 68770, 138762, 146692, 127479, 160510, 117281, 169194, 78122,\n34486, 173690, 127634, 133918, 175652, 78414, 177356, 146743, 75086, 185696,\n185065, 140148, 140589, 49977, 185036, 165747, 176728, 34813, 145143,\n140117, 175653, 126135, 165748, 117230, 163669, 173608, 177253, 97176,\n143696, 146340, 138698, 166765, 116863, 134747, 94278, 185585, 178506,\n138811, 149285, 148355, 34997, 67054, 168805, 194502, 187931, 186623,\n160508, 126136, 147683, 117228, 136349, 152381, 95776, 185640, 88014,\n143538, 89695, 34773, 135230, 135061, 166544, 121548, 39854, 173262, 166543,\n138648, 169565, 166384, 140115, 140495, 190653, 165746, 139107, 185722,\n160610, 173609, 78416, 126139, 89697, 132262, 136466, 34948, 57578, 34761,\n34503, 84921, 158629, 170530, 164618, 183650, 34744, 95773, 165106, 135643,\n127575, 185811, 178936, 39956, 34611, 185681, 183588, 191646, 183619,\n135642, 117280, 185003, 189380, 39779, 136221, 57874, 127196, 135001, 64480,\n146290, 190654, 151894, 41511, 135575, 127636, 121550, 145219, 140324,\n152104, 135060, 171388, 60776, 134748, 149283, 177761, 170909, 127475,\n149020, 194500, 170603, 39929, 34323, 185641, 172302, 136465, 185698, 89698,\n43759, 166381, 140428, 117278, 164616, 134882, 175978, 165824, 182260,\n140762, 178702, 120505, 165259, 156564, 39765, 148251, 162459, 160053,\n160155, 132231, 127198, 173695, 160052, 156563, 184745, 57878, 67090,\n143697, 185068, 186310, 159279, 177758, 168629, 161346, 67052, 78413,\n185094, 140494, 148304, 177120, 44278, 187932, 49836, 84922, 158396, 126138,\n189381, 95774, 162352, 44816, 185584, 70320, 97175, 146741, 169197, 159276,\n184973, 127159, 34988, 34556, 126133, 146287, 67047, 120504, 174472, 126140,\n70321, 39804, 194478, 165260, 151897, 132266, 49963, 56721, 127637, 166922,\n139108, 146689, 190720, 117557, 67050, 166701, 162020, 146342, 134946,\n178510, 134833, 49971, 176729, 172380, 117282, 57877, 179590, 186624,\n145141, 147680, 140114, 177255, 146742, 168807, 135000, 
163672, 166787,\n184187, 185827, 133920, 133971, 160154, 186311, 135460, 167215, 140325,\n136351, 34434, 178638, 134999, 193146, 190719, 179548, 95775, 166301, 34683,\n116909, 143537, 191733, 116912, 172378, 169084, 167014, 44853, 178486,\n176064, 162460, 166921, 135570, 178934, 159176, 138812, 163567, 172379,\n183618, 165071, 175904, 59468, 34897, 165181, 40055, 117482, 158399, 166383,\n75082, 44319, 183017, 78415, 120503, 78187, 146288, 34966, 178490, 40048,\n178507, 84924, 39923, 34317, 149022, 185639, 68772, 187891, 194475, 185857,\n34426, 34970, 127158, 177760, 162790, 163568, 163670, 143539, 43741, 39848,\n116908, 148634, 132417, 126604, 133972, 120502, 75223, 152383, 126608,\n94273, 148735, 185767, 162130, 147681, 88013, 165904, 171419, 139142,\n185765, 134949, 117560, 135572, 132264, 168524, 34594, 161348, 34583,\n168522, 136467, 191647, 187889, 133973, 117556, 170988, 89734, 134474,\n34559, 185093, 164927, 145072, 126603, 67049, 145139, 184986, 140373,\n185746, 171420, 39799, 182259, 169397, 168632, 135459, 176065, 45130,\n148305, 140763, 134944, 186288, 34694, 169195, 132422, 174489, 34607,\n132420, 140371, 185721, 127635, 135228, 44708, 89735, 168523, 138914,\n152106, 160050, 60624, 151787, 185005, 133864, 185095, 140592, 185006,\n145221, 161911, 49774, 178487, 95276, 95802, 176727, 169661, 34944, 135571,\n169196, 175980, 160156, 78411, 34493, 44437, 133919, 166764, 34922, 60846,\n34914, 34920, 148632, 162458, 34953, 34363, 173693, 140493, 149286, 175905,\n170674, 116860, 57873, 138763, 172300, 166703, 140252, 146804, 57628,\n183018, 156674, 50114, 177152, 170602, 162019, 146289, 161809\n\nHence, we can simplify the previous long query into this:\n\nSELECT \"custom_tags\".\"custom_tag_group_id\", custom_tags.name as name,\nsum(impressions) as total_impressions, sum(clicks) as total_clicks,\nsum(reach) as total_reach, sum(spend_rupees) as total_spend, sum(spend) as\nspend_account, sum(actions_link_click) as website_clicks,\nsum(actions_mobile_app_install) as mobile_app_install,\nsum(actions_app_custom_event) as app_custom_event FROM \"custom_tags\" INNER\nJOIN \"custom_tags_fb_ad_groups\" ON\n\"custom_tags_fb_ad_groups\".\"custom_tag_id\" = \"custom_tags\".\"id\" INNER JOIN\n\"fb_ad_groups\" ON \"fb_ad_groups\".\"id\" =\n\"custom_tags_fb_ad_groups\".\"fb_ad_group_id\" INNER JOIN \"fb_ad_group_reports\"\nON \"fb_ad_group_reports\".\"fb_ad_group_id\" = \"fb_ad_groups\".\"id\" WHERE\n\"fb_ad_group_reports\".\"fb_ad_group_id\" IN (<array of 1777 fb_ad_group_ids>)\nAND (start_timestamp >= '2015-10-31 18:30:00' and start_timestamp <\n'2015-11-30 18:30:00') AND \"custom_tags\".\"company_id\" = 12 AND\n\"fb_ad_groups\".\"ad_account_id\" = 29 AND \"custom_tags\".\"custom_tag_group_id\"\n= 235 GROUP BY \"custom_tags\".\"custom_tag_group_id\", custom_tags.name\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Advise-needed-for-a-join-query-with-a-where-conditional-tp5877097p5877098.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Dec 2015 22:04:52 -0700 (MST)", "msg_from": "ankur_adwyze <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Advise needed for a join query with a where conditional" }, { "msg_contents": "On 12/10/15 10:38 PM, ankur_adwyze wrote:\n> I have a Rails app with Postgres db. 
For certain reports, I have to join\n> multiple tables. However, certain join queries are dog slow and I am\n> wondering if I am missing any index.\n\nMy guess is that the planner is coming up with a bad estimate. Please \npost the output of EXPLAIN ANALYZE, preferably via \nhttp://explain.depesz.com/\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Dec 2015 17:50:24 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advise needed for a join query with a where conditional" }, { "msg_contents": "On Thu, Dec 10, 2015 at 8:38 PM, ankur_adwyze <[email protected]> wrote:\n> Hi Folks,\n>\n> I am a newbie to this mailing list. Tried searching the forum but didn't\n> find something similar to the problem I am facing.\n>\n> Background:\n> I have a Rails app with Postgres db. For certain reports, I have to join\n> multiple tables. However, certain join queries are dog slow and I am\n> wondering if I am missing any index.\n\nAre you vacuuming and analyzing your database appropriately? What\nnon-default config settings do you have.\n\nSomething certainly seems suspicious about custom_tags_fb_ad_groups\nand its index.\n\n\n-> Index Only Scan using custom_tags_fb_ad_groups_index on\ncustom_tags_fb_ad_groups custom_tags_fb_ad_groups_1\n(cost=0.42..1728.30 rows=1 width=8) (actual time=1.352..3.815 rows=1\nloops=32934)\n Index Cond: (fb_ad_group_id = fb_ad_group_reports.fb_ad_group_id)\n Heap Fetches: 32934\n\nDoing a single-value look up into an index should have an estimated\ncost of around 9, unless you did something screwy with your cost_*\nparameter settings. Why does it think it is 1728.30 instead? Is the\nindex insanely bloated? And it actually is slow to do those look ups,\nwhich is where almost all of your time is going.\n\nAnd, why isn't it just using a hash join on that table, since you are\nreading so much of it?\n\nI'd do a VACUUM FULL of that table, then a regular VACUUM on it (or\nthe entire database), then ANALYZE it (or your entire database), and\nsee if that took care of the problem.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 12 Dec 2015 08:58:51 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advise needed for a join query with a where conditional" } ]
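As a concrete sketch of the maintenance steps suggested in the last reply (the table name is taken from the thread; VACUUM FULL takes an ACCESS EXCLUSIVE lock, so it is assumed here that it runs during a quiet window):

-- Rewrite the possibly-bloated table and its indexes, then refresh the planner statistics.
VACUUM FULL custom_tags_fb_ad_groups;
VACUUM ANALYZE custom_tags_fb_ad_groups;

-- Optional: compare table + index size before and after the rewrite.
SELECT pg_size_pretty(pg_total_relation_size('custom_tags_fb_ad_groups'));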
[ { "msg_contents": "Hello,\n\nI would like to know how row estimation is calculed by explain ?\nIn my execution plan, this estimation is extremely wrong (267 instead of\n198000)\nI reproduced this estimation error in this simple case :\n\ndrop table if exists t1;\ndrop table if exists t2;\ndrop table if exists t3;\ndrop table if exists t4;\n\ncreate table t1 as select generate_Series(1,300000) as c1;\ncreate table t2 as select generate_Series(1,400) as c1;\ncreate table t3 as select generate_Series(1,200000)%100 as\nc1,generate_Series(1,200000) as c2;\ncreate table t4 as select generate_Series(1,200000) as c1;\n\nalter table t1 add PRIMARY KEY (c1);\nalter table t2 add PRIMARY KEY (c1);\nalter table t3 add PRIMARY KEY (c1,c2);\ncreate index on t3 (c1);\ncreate index on t3 (c2);\nalter table t4 add PRIMARY KEY (c1);\n\nanalyze t1;\nanalyze t2;\nanalyze t3;\nanalyze t4;\n\nEXPLAIN (analyze on, buffers on, verbose on)\nselect\n*\nfrom\nt1 t1\ninner join t2 on t1.c1=t2.c1\ninner join t3 on t2.c1=t3.c1\ninner join t4 on t3.c2=t4.c1\n\nExplain plan :\nhttp://explain.depesz.com/s/wZ3v\n\nI think this error may be problematic because planner will choose nested\nloop instead of hash joins for ultimate join. Can you help me to improve\nthis row estimation ?\n\nThank you for answering\n\nBest Regards,\n<http://www.psih.fr/>PSIH Décisionnel en santé\nMathieu VINCENT\nData Analyst\nPMSIpilot - 61 rue Sully - 69006 Lyon - France\n\nHello,I would like to know how row estimation is calculed by explain ?In my execution plan, this estimation is extremely wrong (267 instead of 198000)I reproduced this estimation error in this simple case :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select generate_Series(1,200000)%100 as c1,generate_Series(1,200000) as c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze t1;analyze t2;analyze t3;analyze t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Explain plan :http://explain.depesz.com/s/wZ3vI think this error may be problematic because planner will choose nested loop instead of hash joins for ultimate join. Can you help me to improve this row estimation ? 
Thank you for answeringBest Regards,PSIH Décisionnel en santéMathieu VINCENT Data AnalystPMSIpilot - 61 rue Sully - 69006 Lyon - France", "msg_date": "Fri, 11 Dec 2015 09:53:37 +0100", "msg_from": "Mathieu VINCENT <[email protected]>", "msg_from_op": true, "msg_subject": "Estimation row error" }, { "msg_contents": "Sorry, I forget to precise Postgresql version\n\n'PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-11), 64-bit'\n\n\nBR\n\nMathieu VINCENT\n\n\n\n2015-12-11 9:53 GMT+01:00 Mathieu VINCENT <[email protected]>:\n\n> Hello,\n>\n> I would like to know how row estimation is calculed by explain ?\n> In my execution plan, this estimation is extremely wrong (267 instead of\n> 198000)\n> I reproduced this estimation error in this simple case :\n>\n> drop table if exists t1;\n> drop table if exists t2;\n> drop table if exists t3;\n> drop table if exists t4;\n>\n> create table t1 as select generate_Series(1,300000) as c1;\n> create table t2 as select generate_Series(1,400) as c1;\n> create table t3 as select generate_Series(1,200000)%100 as\n> c1,generate_Series(1,200000) as c2;\n> create table t4 as select generate_Series(1,200000) as c1;\n>\n> alter table t1 add PRIMARY KEY (c1);\n> alter table t2 add PRIMARY KEY (c1);\n> alter table t3 add PRIMARY KEY (c1,c2);\n> create index on t3 (c1);\n> create index on t3 (c2);\n> alter table t4 add PRIMARY KEY (c1);\n>\n> analyze t1;\n> analyze t2;\n> analyze t3;\n> analyze t4;\n>\n> EXPLAIN (analyze on, buffers on, verbose on)\n> select\n> *\n> from\n> t1 t1\n> inner join t2 on t1.c1=t2.c1\n> inner join t3 on t2.c1=t3.c1\n> inner join t4 on t3.c2=t4.c1\n>\n> Explain plan :\n> http://explain.depesz.com/s/wZ3v\n>\n> I think this error may be problematic because planner will choose nested\n> loop instead of hash joins for ultimate join. Can you help me to improve\n> this row estimation ?\n>\n> Thank you for answering\n>\n> Best Regards,\n> <http://www.psih.fr/>PSIH Décisionnel en santé\n> Mathieu VINCENT\n> Data Analyst\n> PMSIpilot - 61 rue Sully - 69006 Lyon - France\n>\n\nSorry, I forget to precise Postgresql version'PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit'BRMathieu VINCENT\n2015-12-11 9:53 GMT+01:00 Mathieu VINCENT <[email protected]>:Hello,I would like to know how row estimation is calculed by explain ?In my execution plan, this estimation is extremely wrong (267 instead of 198000)I reproduced this estimation error in this simple case :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select generate_Series(1,200000)%100 as c1,generate_Series(1,200000) as c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze t1;analyze t2;analyze t3;analyze t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Explain plan :http://explain.depesz.com/s/wZ3vI think this error may be problematic because planner will choose nested loop instead of hash joins for ultimate join. Can you help me to improve this row estimation ? 
Thank you for answeringBest Regards,PSIH Décisionnel en santéMathieu VINCENT Data AnalystPMSIpilot - 61 rue Sully - 69006 Lyon - France", "msg_date": "Fri, 11 Dec 2015 12:35:53 +0100", "msg_from": "Mathieu VINCENT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Hello,\n\nNo one to help me to understand this bad estimation rows ?\n\nMathieu VINCENT\n\n2015-12-11 12:35 GMT+01:00 Mathieu VINCENT <[email protected]>:\n\n> Sorry, I forget to precise Postgresql version\n>\n> 'PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-11), 64-bit'\n>\n>\n> BR\n>\n> Mathieu VINCENT\n>\n>\n>\n> 2015-12-11 9:53 GMT+01:00 Mathieu VINCENT <[email protected]>:\n>\n>> Hello,\n>>\n>> I would like to know how row estimation is calculed by explain ?\n>> In my execution plan, this estimation is extremely wrong (267 instead of\n>> 198000)\n>> I reproduced this estimation error in this simple case :\n>>\n>> drop table if exists t1;\n>> drop table if exists t2;\n>> drop table if exists t3;\n>> drop table if exists t4;\n>>\n>> create table t1 as select generate_Series(1,300000) as c1;\n>> create table t2 as select generate_Series(1,400) as c1;\n>> create table t3 as select generate_Series(1,200000)%100 as\n>> c1,generate_Series(1,200000) as c2;\n>> create table t4 as select generate_Series(1,200000) as c1;\n>>\n>> alter table t1 add PRIMARY KEY (c1);\n>> alter table t2 add PRIMARY KEY (c1);\n>> alter table t3 add PRIMARY KEY (c1,c2);\n>> create index on t3 (c1);\n>> create index on t3 (c2);\n>> alter table t4 add PRIMARY KEY (c1);\n>>\n>> analyze t1;\n>> analyze t2;\n>> analyze t3;\n>> analyze t4;\n>>\n>> EXPLAIN (analyze on, buffers on, verbose on)\n>> select\n>> *\n>> from\n>> t1 t1\n>> inner join t2 on t1.c1=t2.c1\n>> inner join t3 on t2.c1=t3.c1\n>> inner join t4 on t3.c2=t4.c1\n>>\n>> Explain plan :\n>> http://explain.depesz.com/s/wZ3v\n>>\n>> I think this error may be problematic because planner will choose nested\n>> loop instead of hash joins for ultimate join. 
Can you help me to improve\n>> this row estimation ?\n>>\n>> Thank you for answering\n>>\n>> Best Regards,\n>> <http://www.psih.fr/>PSIH Décisionnel en santé\n>> Mathieu VINCENT\n>> Data Analyst\n>> PMSIpilot - 61 rue Sully - 69006 Lyon - France\n>>\n>\n>\n\nHello,No one to help me to understand this bad estimation rows ?Mathieu VINCENT\n2015-12-11 12:35 GMT+01:00 Mathieu VINCENT <[email protected]>:Sorry, I forget to precise Postgresql version'PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit'BRMathieu VINCENT\n2015-12-11 9:53 GMT+01:00 Mathieu VINCENT <[email protected]>:Hello,I would like to know how row estimation is calculed by explain ?In my execution plan, this estimation is extremely wrong (267 instead of 198000)I reproduced this estimation error in this simple case :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select generate_Series(1,200000)%100 as c1,generate_Series(1,200000) as c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze t1;analyze t2;analyze t3;analyze t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Explain plan :http://explain.depesz.com/s/wZ3vI think this error may be problematic because planner will choose nested loop instead of hash joins for ultimate join. Can you help me to improve this row estimation ? Thank you for answeringBest Regards,PSIH Décisionnel en santéMathieu VINCENT Data AnalystPMSIpilot - 61 rue Sully - 69006 Lyon - France", "msg_date": "Tue, 15 Dec 2015 09:05:38 +0100", "msg_from": "Mathieu VINCENT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n> Hello,\n> \n> No one to help me to understand this bad estimation rows ?\n\nWell,\n\non a rather beefy machine, I'm getting quite a different plan:\nhttp://explain.depesz.com/s/3y5r\n\nWhich may be related to this setting:\nperftest=# show default_statistics_target ;\n default_statistics_target\n---------------------------\n 1000\n(1 Zeile)\n\n\nI guess the wrong row assumption (which I get as well!) 
is caused by the\ngiven correlation of t3.c1 and t3.c2 (which the planner doesn't \"see\").\n\nTomas Vondra has written a nice blog post, covering that topic as well:\nhttp://blog.pgaddict.com/posts/common-issues-with-planner-statistics\n\nAFAIK, 9.5 has received some improvements in that field, but I didn't\ntry that yet.\n\nBest regards,\n\nNick\n\n> \n> Mathieu VINCENT\n> \n> 2015-12-11 12:35 GMT+01:00 Mathieu VINCENT\n> <[email protected] <mailto:[email protected]>>:\n> \n> Sorry, I forget to precise Postgresql version\n> \n> 'PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n> 4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit'\n> \n> \n> BR\n> \n> Mathieu VINCENT\n> \n> \t\n> \n> \n> 2015-12-11 9:53 GMT+01:00 Mathieu VINCENT\n> <[email protected] <mailto:[email protected]>>:\n> \n> Hello,\n> \n> I would like to know how row estimation is calculed by explain ?\n> In my execution plan, this estimation is extremely wrong (267\n> instead of 198000)\n> I reproduced this estimation error in this simple case :\n> \n> drop table if exists t1;\n> drop table if exists t2;\n> drop table if exists t3;\n> drop table if exists t4;\n> \n> create table t1 as select generate_Series(1,300000) as c1; \n> create table t2 as select generate_Series(1,400) as c1; \n> create table t3 as select generate_Series(1,200000)%100 as\n> c1,generate_Series(1,200000) as c2;\n> create table t4 as select generate_Series(1,200000) as c1;\n> \n> alter table t1 add PRIMARY KEY (c1);\n> alter table t2 add PRIMARY KEY (c1);\n> alter table t3 add PRIMARY KEY (c1,c2);\n> create index on t3 (c1);\n> create index on t3 (c2);\n> alter table t4 add PRIMARY KEY (c1);\n> \n> analyze t1;\n> analyze t2;\n> analyze t3;\n> analyze t4;\n> \n> EXPLAIN (analyze on, buffers on, verbose on)\n> select \n> *\n> from \n> t1 t1\n> inner join t2 on t1.c1=t2.c1\n> inner join t3 on t2.c1=t3.c1\n> inner join t4 on t3.c2=t4.c1\n> \n> Explain plan :\n> http://explain.depesz.com/s/wZ3v\n> \n> I think this error may be problematic because planner will\n> choose nested loop instead of hash joins for ultimate join. Can\n> you help me to improve this row estimation ? \n> \n> Thank you for answering\n> \n> Best Regards,\n> <http://www.psih.fr/>\tPSIH Décisionnel en santé\n> Mathieu VINCENT \n> Data Analyst\n> PMSIpilot - 61 rue Sully - 69006 Lyon - France\n> \n> \n> \n\n\n-- \nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel: +49 911/991-4665\nMobil: +49 172/8853339\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Dec 2015 10:25:30 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Gunnar Nick Bluth <[email protected]> wrote:\n\n> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n> > Hello,\n> > \n> > No one to help me to understand this bad estimation rows ?\n> \n> Well,\n> \n> on a rather beefy machine, I'm getting quite a different plan:\n> http://explain.depesz.com/s/3y5r\n\nyou are using 9.5, right? Got the same plan with 9.5.\n\nBtw.: Hi Gunnar ;-)\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. 
N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Dec 2015 10:49:38 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> Gunnar Nick Bluth <[email protected]> wrote:\n> \n>> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> Hello,\n>>>\n>>> No one to help me to understand this bad estimation rows ?\n>>\n>> Well,\n>>\n>> on a rather beefy machine, I'm getting quite a different plan:\n>> http://explain.depesz.com/s/3y5r\n> \n> you are using 9.5, right? Got the same plan with 9.5.\n\nNope...:\n version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nSo much for those correlation improvements then ;-/\n\n\n> Btw.: Hi Gunnar ;-)\n\nHi :)\n\n-- \nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel: +49 911/991-4665\nMobil: +49 172/8853339\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Dec 2015 11:21:57 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "thks Gunnar,\n\nI removed the correlation between t3.c1 and t3.c2 in this sql script :\n\ndrop table if exists t1;\ndrop table if exists t2;\ndrop table if exists t3;\ndrop table if exists t4;\n\ncreate table t1 as select generate_Series(1,300000) as c1;\ncreate table t2 as select generate_Series(1,400) as c1;\ncreate table t3 as select floor(random()*100+1) as c1, c2 from\ngenerate_Series(1,200000) c2;\ncreate table t4 as select generate_Series(1,200000) as c1;\n\nalter table t1 add PRIMARY KEY (c1);\nalter table t2 add PRIMARY KEY (c1);\nalter table t3 add PRIMARY KEY (c1,c2);\ncreate index on t3 (c1);\ncreate index on t3 (c2);\nalter table t4 add PRIMARY KEY (c1);\n\nanalyze verbose t1;\nanalyze verbose t2;\nanalyze verbose t3;\nanalyze verbose t4;\n\nEXPLAIN (analyze on, buffers on, verbose on)\nselect\n*\nfrom\nt1 t1\ninner join t2 on t1.c1=t2.c1\ninner join t3 on t2.c1=t3.c1\ninner join t4 on t3.c2=t4.c1\n\nNow, the estimate is good : http://explain.depesz.com/s/gCX\n\nHave a good day\n\nMathieu VINCENT\n\n2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <\[email protected]>:\n\n> Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> > Gunnar Nick Bluth <[email protected]> wrote:\n> >\n> >> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n> >>> Hello,\n> >>>\n> >>> No one to help me to understand this bad estimation rows ?\n> >>\n> >> Well,\n> >>\n> >> on a rather beefy machine, I'm getting quite a different plan:\n> >> http://explain.depesz.com/s/3y5r\n> >\n> > you are using 9.5, right? 
Got the same plan with 9.5.\n>\n> Nope...:\n> version\n>\n>\n> ------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>\n> So much for those correlation improvements then ;-/\n>\n>\n> > Btw.: Hi Gunnar ;-)\n>\n> Hi :)\n>\n> --\n> Gunnar \"Nick\" Bluth\n> DBA ELSTER\n>\n> Tel: +49 911/991-4665\n> Mobil: +49 172/8853339\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nthks Gunnar,I removed the correlation between t3.c1 and t3.c2 in this sql script :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select floor(random()*100+1) as c1, c2 from generate_Series(1,200000) c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;analyze verbose t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Now, the estimate is good : http://explain.depesz.com/s/gCXHave a good dayMathieu VINCENT\n2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <[email protected]>:Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> Gunnar Nick Bluth <[email protected]> wrote:\n>\n>> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> Hello,\n>>>\n>>> No one to help me to understand this bad estimation rows ?\n>>\n>> Well,\n>>\n>> on a rather beefy machine, I'm getting quite a different plan:\n>> http://explain.depesz.com/s/3y5r\n>\n> you are using 9.5, right? 
Got the same plan with 9.5.\n\nNope...:\n                                                  version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nSo much for those correlation improvements then ;-/\n\n\n> Btw.: Hi Gunnar ;-)\n\nHi :)\n\n--\nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel:   +49 911/991-4665\nMobil: +49 172/8853339\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 17 Dec 2015 10:10:42 +0100", "msg_from": "Mathieu VINCENT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Thank you both for the help!\nhappy holidays\n\n2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]>:\n\n> thks Gunnar,\n>\n> I removed the correlation between t3.c1 and t3.c2 in this sql script :\n>\n> drop table if exists t1;\n> drop table if exists t2;\n> drop table if exists t3;\n> drop table if exists t4;\n>\n> create table t1 as select generate_Series(1,300000) as c1;\n> create table t2 as select generate_Series(1,400) as c1;\n> create table t3 as select floor(random()*100+1) as c1, c2 from\n> generate_Series(1,200000) c2;\n> create table t4 as select generate_Series(1,200000) as c1;\n>\n> alter table t1 add PRIMARY KEY (c1);\n> alter table t2 add PRIMARY KEY (c1);\n> alter table t3 add PRIMARY KEY (c1,c2);\n> create index on t3 (c1);\n> create index on t3 (c2);\n> alter table t4 add PRIMARY KEY (c1);\n>\n> analyze verbose t1;\n> analyze verbose t2;\n> analyze verbose t3;\n> analyze verbose t4;\n>\n> EXPLAIN (analyze on, buffers on, verbose on)\n> select\n> *\n> from\n> t1 t1\n> inner join t2 on t1.c1=t2.c1\n> inner join t3 on t2.c1=t3.c1\n> inner join t4 on t3.c2=t4.c1\n>\n> Now, the estimate is good : http://explain.depesz.com/s/gCX\n>\n> Have a good day\n>\n> Mathieu VINCENT\n>\n> 2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <\n> [email protected]>:\n>\n>> Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n>> > Gunnar Nick Bluth <[email protected]> wrote:\n>> >\n>> >> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>> >>> Hello,\n>> >>>\n>> >>> No one to help me to understand this bad estimation rows ?\n>> >>\n>> >> Well,\n>> >>\n>> >> on a rather beefy machine, I'm getting quite a different plan:\n>> >> http://explain.depesz.com/s/3y5r\n>> >\n>> > you are using 9.5, right? 
Got the same plan with 9.5.\n>>\n>> Nope...:\n>> version\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------\n>> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>>\n>> So much for those correlation improvements then ;-/\n>>\n>>\n>> > Btw.: Hi Gunnar ;-)\n>>\n>> Hi :)\n>>\n>> --\n>> Gunnar \"Nick\" Bluth\n>> DBA ELSTER\n>>\n>> Tel: +49 911/991-4665\n>> Mobil: +49 172/8853339\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\nThank you both for the help!happy holidays2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]>:thks Gunnar,I removed the correlation between t3.c1 and t3.c2 in this sql script :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select floor(random()*100+1) as c1, c2 from generate_Series(1,200000) c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;analyze verbose t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Now, the estimate is good : http://explain.depesz.com/s/gCXHave a good dayMathieu VINCENT\n2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <[email protected]>:Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> Gunnar Nick Bluth <[email protected]> wrote:\n>\n>> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> Hello,\n>>>\n>>> No one to help me to understand this bad estimation rows ?\n>>\n>> Well,\n>>\n>> on a rather beefy machine, I'm getting quite a different plan:\n>> http://explain.depesz.com/s/3y5r\n>\n> you are using 9.5, right? 
Got the same plan with 9.5.\n\nNope...:\n                                                  version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nSo much for those correlation improvements then ;-/\n\n\n> Btw.: Hi Gunnar ;-)\n\nHi :)\n\n--\nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel:   +49 911/991-4665\nMobil: +49 172/8853339\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 17 Dec 2015 10:14:44 +0100", "msg_from": "Matteo Grolla <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Here, another issue with row estimate.\nAnd, in this example, there is not correlation beetween columns in a same\ntable.\n\ndrop table if exists t1;\ndrop table if exists t2;\ndrop table if exists t3;\n\ncreate table t1 as select generate_Series(1,200000) as c1;\ncreate table t2 as select generate_Series(1,200000)%100 as c1;\ncreate table t3 as select generate_Series(1,1500)%750 as c1;\n\nalter table t1 add PRIMARY KEY (c1);\ncreate index on t2 (c1);\ncreate index on t3 (c1);\n\nanalyze verbose t1;\nanalyze verbose t2;\nanalyze verbose t3;\n\nEXPLAIN (analyze on, buffers on, verbose on)\nselect\n*\nfrom\nt1 t1\ninner join t2 on t1.c1=t2.c1\ninner join t3 on t2.c1=t3.c1\nthe explain plan : http://explain.depesz.com/s/YVw\nDo you understand how postgresql calculate the row estimate ?\n\nBR\nMathieu VINCENT\n\n2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:\n\n> Thank you both for the help!\n> happy holidays\n>\n> 2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]>\n> :\n>\n>> thks Gunnar,\n>>\n>> I removed the correlation between t3.c1 and t3.c2 in this sql script :\n>>\n>> drop table if exists t1;\n>> drop table if exists t2;\n>> drop table if exists t3;\n>> drop table if exists t4;\n>>\n>> create table t1 as select generate_Series(1,300000) as c1;\n>> create table t2 as select generate_Series(1,400) as c1;\n>> create table t3 as select floor(random()*100+1) as c1, c2 from\n>> generate_Series(1,200000) c2;\n>> create table t4 as select generate_Series(1,200000) as c1;\n>>\n>> alter table t1 add PRIMARY KEY (c1);\n>> alter table t2 add PRIMARY KEY (c1);\n>> alter table t3 add PRIMARY KEY (c1,c2);\n>> create index on t3 (c1);\n>> create index on t3 (c2);\n>> alter table t4 add PRIMARY KEY (c1);\n>>\n>> analyze verbose t1;\n>> analyze verbose t2;\n>> analyze verbose t3;\n>> analyze verbose t4;\n>>\n>> EXPLAIN (analyze on, buffers on, verbose on)\n>> select\n>> *\n>> from\n>> t1 t1\n>> inner join t2 on t1.c1=t2.c1\n>> inner join t3 on t2.c1=t3.c1\n>> inner join t4 on t3.c2=t4.c1\n>>\n>> Now, the estimate is good : http://explain.depesz.com/s/gCX\n>>\n>> Have a good day\n>>\n>> Mathieu VINCENT\n>>\n>> 2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <\n>> [email protected]>:\n>>\n>>> Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n>>> > Gunnar Nick Bluth <[email protected]> wrote:\n>>> >\n>>> >> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> >>> Hello,\n>>> >>>\n>>> >>> No one to help me to understand this bad estimation rows ?\n>>> >>\n>>> >> Well,\n>>> >>\n>>> >> on a rather beefy machine, I'm getting quite a different plan:\n>>> >> http://explain.depesz.com/s/3y5r\n>>> >\n>>> > you are using 9.5, right? 
Got the same plan with 9.5.\n>>>\n>>> Nope...:\n>>> version\n>>>\n>>>\n>>> ------------------------------------------------------------------------------------------------------------\n>>> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n>>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>>>\n>>> So much for those correlation improvements then ;-/\n>>>\n>>>\n>>> > Btw.: Hi Gunnar ;-)\n>>>\n>>> Hi :)\n>>>\n>>> --\n>>> Gunnar \"Nick\" Bluth\n>>> DBA ELSTER\n>>>\n>>> Tel: +49 911/991-4665\n>>> Mobil: +49 172/8853339\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (\n>>> [email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>\n>\n\nHere, another issue with row estimate.And, in this example, there is not correlation beetween columns in a same table.drop table if exists t1;drop table if exists t2;drop table if exists t3;create table t1 as select generate_Series(1,200000) as c1; create table t2 as select generate_Series(1,200000)%100 as c1; create table t3 as select generate_Series(1,1500)%750 as c1;alter table t1 add PRIMARY KEY (c1);create index on t2 (c1);create index on t3 (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 the explain plan : http://explain.depesz.com/s/YVwDo you understand how postgresql calculate the row estimate ?BRMathieu VINCENT\n2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:Thank you both for the help!happy holidays2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]>:thks Gunnar,I removed the correlation between t3.c1 and t3.c2 in this sql script :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select floor(random()*100+1) as c1, c2 from generate_Series(1,200000) c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;analyze verbose t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Now, the estimate is good : http://explain.depesz.com/s/gCXHave a good dayMathieu VINCENT\n2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <[email protected]>:Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> Gunnar Nick Bluth <[email protected]> wrote:\n>\n>> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> Hello,\n>>>\n>>> No one to help me to understand this bad estimation rows ?\n>>\n>> Well,\n>>\n>> on a rather beefy machine, I'm getting quite a different plan:\n>> http://explain.depesz.com/s/3y5r\n>\n> you are using 9.5, right? 
Got the same plan with 9.5.\n\nNope...:\n                                                  version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nSo much for those correlation improvements then ;-/\n\n\n> Btw.: Hi Gunnar ;-)\n\nHi :)\n\n--\nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel:   +49 911/991-4665\nMobil: +49 172/8853339\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 17 Dec 2015 11:37:32 +0100", "msg_from": "Mathieu VINCENT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Adding foreign key between on t2 and t3, does not change the plan.\n\ndrop table if exists t1;\ndrop table if exists t2;\ndrop table if exists t3;\n\ncreate table t1 as select generate_Series(1,200000) as c1;\ncreate table t2 as select generate_Series(1,200000)%100+1 as c1;\ncreate table t3 as select generate_Series(1,1500)%750+1 as c1;\n\nalter table t1 add PRIMARY KEY (c1);\ncreate index on t2 (c1);\ncreate index on t3 (c1);\nALTER TABLE t2 ADD CONSTRAINT t2_fk FOREIGN KEY (c1) REFERENCES t1(c1);\nALTER TABLE t3 ADD CONSTRAINT t3_fk FOREIGN KEY (c1) REFERENCES t1(c1);\n\nanalyze verbose t1;\nanalyze verbose t2;\nanalyze verbose t3;\n\nEXPLAIN (analyze on, buffers on, verbose on)\nselect\n*\nfrom\nt1 t1\ninner join t2 on t1.c1=t2.c1\ninner join t3 on t1.c1=t3.c1\n\nCordialement,\n<http://www.psih.fr/>PSIH Décisionnel en santé\nMathieu VINCENT\nData Analyst\nPMSIpilot - 61 rue Sully - 69006 Lyon - France\n\n2015-12-17 11:37 GMT+01:00 Mathieu VINCENT <[email protected]>:\n\n> Here, another issue with row estimate.\n> And, in this example, there is not correlation beetween columns in a same\n> table.\n>\n> drop table if exists t1;\n> drop table if exists t2;\n> drop table if exists t3;\n>\n> create table t1 as select generate_Series(1,200000) as c1;\n> create table t2 as select generate_Series(1,200000)%100 as c1;\n> create table t3 as select generate_Series(1,1500)%750 as c1;\n>\n> alter table t1 add PRIMARY KEY (c1);\n> create index on t2 (c1);\n> create index on t3 (c1);\n>\n> analyze verbose t1;\n> analyze verbose t2;\n> analyze verbose t3;\n>\n> EXPLAIN (analyze on, buffers on, verbose on)\n> select\n> *\n> from\n> t1 t1\n> inner join t2 on t1.c1=t2.c1\n> inner join t3 on t2.c1=t3.c1\n> the explain plan : http://explain.depesz.com/s/YVw\n> Do you understand how postgresql calculate the row estimate ?\n>\n> BR\n> Mathieu VINCENT\n>\n> 2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:\n>\n>> Thank you both for the help!\n>> happy holidays\n>>\n>> 2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]\n>> >:\n>>\n>>> thks Gunnar,\n>>>\n>>> I removed the correlation between t3.c1 and t3.c2 in this sql script :\n>>>\n>>> drop table if exists t1;\n>>> drop table if exists t2;\n>>> drop table if exists t3;\n>>> drop table if exists t4;\n>>>\n>>> create table t1 as select generate_Series(1,300000) as c1;\n>>> create table t2 as select generate_Series(1,400) as c1;\n>>> create table t3 as select floor(random()*100+1) as c1, c2 from\n>>> generate_Series(1,200000) c2;\n>>> create table t4 as select generate_Series(1,200000) as c1;\n>>>\n>>> alter table t1 add PRIMARY KEY (c1);\n>>> alter table t2 add PRIMARY KEY (c1);\n>>> alter table t3 add 
PRIMARY KEY (c1,c2);\n>>> create index on t3 (c1);\n>>> create index on t3 (c2);\n>>> alter table t4 add PRIMARY KEY (c1);\n>>>\n>>> analyze verbose t1;\n>>> analyze verbose t2;\n>>> analyze verbose t3;\n>>> analyze verbose t4;\n>>>\n>>> EXPLAIN (analyze on, buffers on, verbose on)\n>>> select\n>>> *\n>>> from\n>>> t1 t1\n>>> inner join t2 on t1.c1=t2.c1\n>>> inner join t3 on t2.c1=t3.c1\n>>> inner join t4 on t3.c2=t4.c1\n>>>\n>>> Now, the estimate is good : http://explain.depesz.com/s/gCX\n>>>\n>>> Have a good day\n>>>\n>>> Mathieu VINCENT\n>>>\n>>> 2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <\n>>> [email protected]>:\n>>>\n>>>> Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n>>>> > Gunnar Nick Bluth <[email protected]> wrote:\n>>>> >\n>>>> >> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>>> >>> Hello,\n>>>> >>>\n>>>> >>> No one to help me to understand this bad estimation rows ?\n>>>> >>\n>>>> >> Well,\n>>>> >>\n>>>> >> on a rather beefy machine, I'm getting quite a different plan:\n>>>> >> http://explain.depesz.com/s/3y5r\n>>>> >\n>>>> > you are using 9.5, right? Got the same plan with 9.5.\n>>>>\n>>>> Nope...:\n>>>> version\n>>>>\n>>>>\n>>>> ------------------------------------------------------------------------------------------------------------\n>>>> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n>>>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>>>>\n>>>> So much for those correlation improvements then ;-/\n>>>>\n>>>>\n>>>> > Btw.: Hi Gunnar ;-)\n>>>>\n>>>> Hi :)\n>>>>\n>>>> --\n>>>> Gunnar \"Nick\" Bluth\n>>>> DBA ELSTER\n>>>>\n>>>> Tel: +49 911/991-4665\n>>>> Mobil: +49 172/8853339\n>>>>\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list (\n>>>> [email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>\n>>>\n>>>\n>>\n>\n\nAdding foreign key between on t2 and t3, does not change the plan.drop table if exists t1;drop table if exists t2;drop table if exists t3;create table t1 as select generate_Series(1,200000) as c1; create table t2 as select generate_Series(1,200000)%100+1 as c1; create table t3 as select generate_Series(1,1500)%750+1 as c1;alter table t1 add PRIMARY KEY (c1);create index on t2 (c1);create index on t3 (c1);ALTER TABLE t2  ADD CONSTRAINT t2_fk FOREIGN KEY (c1) REFERENCES t1(c1);ALTER TABLE t3  ADD CONSTRAINT t3_fk FOREIGN KEY (c1) REFERENCES t1(c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t1.c1=t3.c1 Cordialement,PSIH Décisionnel en santéMathieu VINCENT Data AnalystPMSIpilot - 61 rue Sully - 69006 Lyon - France\n2015-12-17 11:37 GMT+01:00 Mathieu VINCENT <[email protected]>:Here, another issue with row estimate.And, in this example, there is not correlation beetween columns in a same table.drop table if exists t1;drop table if exists t2;drop table if exists t3;create table t1 as select generate_Series(1,200000) as c1; create table t2 as select generate_Series(1,200000)%100 as c1; create table t3 as select generate_Series(1,1500)%750 as c1;alter table t1 add PRIMARY KEY (c1);create index on t2 (c1);create index on t3 (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 the explain plan : http://explain.depesz.com/s/YVwDo you understand how postgresql calculate the row estimate ?BRMathieu 
VINCENT\n2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:Thank you both for the help!happy holidays2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]>:thks Gunnar,I removed the correlation between t3.c1 and t3.c2 in this sql script :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select floor(random()*100+1) as c1, c2 from generate_Series(1,200000) c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;analyze verbose t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Now, the estimate is good : http://explain.depesz.com/s/gCXHave a good dayMathieu VINCENT\n2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <[email protected]>:Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> Gunnar Nick Bluth <[email protected]> wrote:\n>\n>> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> Hello,\n>>>\n>>> No one to help me to understand this bad estimation rows ?\n>>\n>> Well,\n>>\n>> on a rather beefy machine, I'm getting quite a different plan:\n>> http://explain.depesz.com/s/3y5r\n>\n> you are using 9.5, right? Got the same plan with 9.5.\n\nNope...:\n                                                  version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nSo much for those correlation improvements then ;-/\n\n\n> Btw.: Hi Gunnar ;-)\n\nHi :)\n\n--\nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel:   +49 911/991-4665\nMobil: +49 172/8853339\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 17 Dec 2015 11:58:24 +0100", "msg_from": "Mathieu VINCENT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Hello,\n\nNo one to help me to understand this bad estimation rows ?\nIt's *NOT* caused by :\n\n - correlation between columns (cross-correlation)\n - bad statistics (i tried with default_statistics_target to 10 000)\n - bad number of distinct values\n - complexe join conditions\n\nI have no more ideas.\n\nthank you for your help.\nMathieu VINCENT\n\n2015-12-17 11:58 GMT+01:00 Mathieu VINCENT <[email protected]>:\n\n> Adding foreign key between on t2 and t3, does not change the plan.\n>\n> drop table if exists t1;\n> drop table if exists t2;\n> drop table if exists t3;\n>\n> create table t1 as select generate_Series(1,200000) as c1;\n> create table t2 as select generate_Series(1,200000)%100+1 as c1;\n> create table t3 as select generate_Series(1,1500)%750+1 as c1;\n>\n> alter table t1 add PRIMARY KEY (c1);\n> create index on t2 (c1);\n> create index on t3 (c1);\n> ALTER TABLE t2 ADD CONSTRAINT t2_fk FOREIGN KEY (c1) REFERENCES t1(c1);\n> ALTER TABLE t3 ADD CONSTRAINT t3_fk FOREIGN KEY (c1) REFERENCES t1(c1);\n>\n> analyze verbose t1;\n> analyze verbose t2;\n> analyze verbose t3;\n>\n> EXPLAIN 
(analyze on, buffers on, verbose on)\n> select\n> *\n> from\n> t1 t1\n> inner join t2 on t1.c1=t2.c1\n> inner join t3 on t1.c1=t3.c1\n>\n> Cordialement,\n> <http://www.psih.fr/>PSIH Décisionnel en santé\n> Mathieu VINCENT\n> Data Analyst\n> PMSIpilot - 61 rue Sully - 69006 Lyon - France\n>\n> 2015-12-17 11:37 GMT+01:00 Mathieu VINCENT <[email protected]>\n> :\n>\n>> Here, another issue with row estimate.\n>> And, in this example, there is not correlation beetween columns in a same\n>> table.\n>>\n>> drop table if exists t1;\n>> drop table if exists t2;\n>> drop table if exists t3;\n>>\n>> create table t1 as select generate_Series(1,200000) as c1;\n>> create table t2 as select generate_Series(1,200000)%100 as c1;\n>> create table t3 as select generate_Series(1,1500)%750 as c1;\n>>\n>> alter table t1 add PRIMARY KEY (c1);\n>> create index on t2 (c1);\n>> create index on t3 (c1);\n>>\n>> analyze verbose t1;\n>> analyze verbose t2;\n>> analyze verbose t3;\n>>\n>> EXPLAIN (analyze on, buffers on, verbose on)\n>> select\n>> *\n>> from\n>> t1 t1\n>> inner join t2 on t1.c1=t2.c1\n>> inner join t3 on t2.c1=t3.c1\n>> the explain plan : http://explain.depesz.com/s/YVw\n>> Do you understand how postgresql calculate the row estimate ?\n>>\n>> BR\n>> Mathieu VINCENT\n>>\n>> 2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:\n>>\n>>> Thank you both for the help!\n>>> happy holidays\n>>>\n>>> 2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <\n>>> [email protected]>:\n>>>\n>>>> thks Gunnar,\n>>>>\n>>>> I removed the correlation between t3.c1 and t3.c2 in this sql script :\n>>>>\n>>>> drop table if exists t1;\n>>>> drop table if exists t2;\n>>>> drop table if exists t3;\n>>>> drop table if exists t4;\n>>>>\n>>>> create table t1 as select generate_Series(1,300000) as c1;\n>>>> create table t2 as select generate_Series(1,400) as c1;\n>>>> create table t3 as select floor(random()*100+1) as c1, c2 from\n>>>> generate_Series(1,200000) c2;\n>>>> create table t4 as select generate_Series(1,200000) as c1;\n>>>>\n>>>> alter table t1 add PRIMARY KEY (c1);\n>>>> alter table t2 add PRIMARY KEY (c1);\n>>>> alter table t3 add PRIMARY KEY (c1,c2);\n>>>> create index on t3 (c1);\n>>>> create index on t3 (c2);\n>>>> alter table t4 add PRIMARY KEY (c1);\n>>>>\n>>>> analyze verbose t1;\n>>>> analyze verbose t2;\n>>>> analyze verbose t3;\n>>>> analyze verbose t4;\n>>>>\n>>>> EXPLAIN (analyze on, buffers on, verbose on)\n>>>> select\n>>>> *\n>>>> from\n>>>> t1 t1\n>>>> inner join t2 on t1.c1=t2.c1\n>>>> inner join t3 on t2.c1=t3.c1\n>>>> inner join t4 on t3.c2=t4.c1\n>>>>\n>>>> Now, the estimate is good : http://explain.depesz.com/s/gCX\n>>>>\n>>>> Have a good day\n>>>>\n>>>> Mathieu VINCENT\n>>>>\n>>>> 2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <\n>>>> [email protected]>:\n>>>>\n>>>>> Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n>>>>> > Gunnar Nick Bluth <[email protected]> wrote:\n>>>>> >\n>>>>> >> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>>>> >>> Hello,\n>>>>> >>>\n>>>>> >>> No one to help me to understand this bad estimation rows ?\n>>>>> >>\n>>>>> >> Well,\n>>>>> >>\n>>>>> >> on a rather beefy machine, I'm getting quite a different plan:\n>>>>> >> http://explain.depesz.com/s/3y5r\n>>>>> >\n>>>>> > you are using 9.5, right? 
Got the same plan with 9.5.\n>>>>>\n>>>>> Nope...:\n>>>>> version\n>>>>>\n>>>>>\n>>>>> ------------------------------------------------------------------------------------------------------------\n>>>>> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n>>>>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>>>>>\n>>>>> So much for those correlation improvements then ;-/\n>>>>>\n>>>>>\n>>>>> > Btw.: Hi Gunnar ;-)\n>>>>>\n>>>>> Hi :)\n>>>>>\n>>>>> --\n>>>>> Gunnar \"Nick\" Bluth\n>>>>> DBA ELSTER\n>>>>>\n>>>>> Tel: +49 911/991-4665\n>>>>> Mobil: +49 172/8853339\n>>>>>\n>>>>>\n>>>>> --\n>>>>> Sent via pgsql-performance mailing list (\n>>>>> [email protected])\n>>>>> To make changes to your subscription:\n>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>>\n>>>>\n>>>>\n>>>\n>>\n>\n\nHello,No one to help me to understand this bad estimation rows ?It's NOT caused by :correlation between columns (cross-correlation)bad statistics (i tried with  default_statistics_target to 10 000)bad number of distinct valuescomplexe join conditionsI have no more ideas.thank you for your help.Mathieu VINCENT\n2015-12-17 11:58 GMT+01:00 Mathieu VINCENT <[email protected]>:Adding foreign key between on t2 and t3, does not change the plan.drop table if exists t1;drop table if exists t2;drop table if exists t3;create table t1 as select generate_Series(1,200000) as c1; create table t2 as select generate_Series(1,200000)%100+1 as c1; create table t3 as select generate_Series(1,1500)%750+1 as c1;alter table t1 add PRIMARY KEY (c1);create index on t2 (c1);create index on t3 (c1);ALTER TABLE t2  ADD CONSTRAINT t2_fk FOREIGN KEY (c1) REFERENCES t1(c1);ALTER TABLE t3  ADD CONSTRAINT t3_fk FOREIGN KEY (c1) REFERENCES t1(c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t1.c1=t3.c1 Cordialement,PSIH Décisionnel en santéMathieu VINCENT Data AnalystPMSIpilot - 61 rue Sully - 69006 Lyon - France\n2015-12-17 11:37 GMT+01:00 Mathieu VINCENT <[email protected]>:Here, another issue with row estimate.And, in this example, there is not correlation beetween columns in a same table.drop table if exists t1;drop table if exists t2;drop table if exists t3;create table t1 as select generate_Series(1,200000) as c1; create table t2 as select generate_Series(1,200000)%100 as c1; create table t3 as select generate_Series(1,1500)%750 as c1;alter table t1 add PRIMARY KEY (c1);create index on t2 (c1);create index on t3 (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 the explain plan : http://explain.depesz.com/s/YVwDo you understand how postgresql calculate the row estimate ?BRMathieu VINCENT\n2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:Thank you both for the help!happy holidays2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]>:thks Gunnar,I removed the correlation between t3.c1 and t3.c2 in this sql script :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select floor(random()*100+1) as c1, c2 from generate_Series(1,200000) c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter 
table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;analyze verbose t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Now, the estimate is good : http://explain.depesz.com/s/gCXHave a good dayMathieu VINCENT\n2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <[email protected]>:Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> Gunnar Nick Bluth <[email protected]> wrote:\n>\n>> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> Hello,\n>>>\n>>> No one to help me to understand this bad estimation rows ?\n>>\n>> Well,\n>>\n>> on a rather beefy machine, I'm getting quite a different plan:\n>> http://explain.depesz.com/s/3y5r\n>\n> you are using 9.5, right? Got the same plan with 9.5.\n\nNope...:\n                                                  version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nSo much for those correlation improvements then ;-/\n\n\n> Btw.: Hi Gunnar ;-)\n\nHi :)\n\n--\nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel:   +49 911/991-4665\nMobil: +49 172/8853339\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 18 Dec 2015 16:21:04 +0100", "msg_from": "Mathieu VINCENT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation row error" }, { "msg_contents": "Hi\n\n2015-12-18 16:21 GMT+01:00 Mathieu VINCENT <[email protected]>:\n\n> Hello,\n>\n> No one to help me to understand this bad estimation rows ?\n> It's *NOT* caused by :\n>\n> - correlation between columns (cross-correlation)\n> - bad statistics (i tried with default_statistics_target to 10 000)\n> - bad number of distinct values\n> - complexe join conditions\n>\n> I have no more ideas.\n>\n\nPostgreSQL has not cross tables statistics - so expect uniform distribution\nof foreign keys. 
This expectation is broken in your example.\n\nYou can find some prototype solutions by Tomas Vondra in hackars mailing\nlist.\n\nRegards\n\nPavel\n\n\n>\n> thank you for your help.\n> Mathieu VINCENT\n>\n> 2015-12-17 11:58 GMT+01:00 Mathieu VINCENT <[email protected]>\n> :\n>\n>> Adding foreign key between on t2 and t3, does not change the plan.\n>>\n>> drop table if exists t1;\n>> drop table if exists t2;\n>> drop table if exists t3;\n>>\n>> create table t1 as select generate_Series(1,200000) as c1;\n>> create table t2 as select generate_Series(1,200000)%100+1 as c1;\n>> create table t3 as select generate_Series(1,1500)%750+1 as c1;\n>>\n>> alter table t1 add PRIMARY KEY (c1);\n>> create index on t2 (c1);\n>> create index on t3 (c1);\n>> ALTER TABLE t2 ADD CONSTRAINT t2_fk FOREIGN KEY (c1) REFERENCES t1(c1);\n>> ALTER TABLE t3 ADD CONSTRAINT t3_fk FOREIGN KEY (c1) REFERENCES t1(c1);\n>>\n>> analyze verbose t1;\n>> analyze verbose t2;\n>> analyze verbose t3;\n>>\n>> EXPLAIN (analyze on, buffers on, verbose on)\n>> select\n>> *\n>> from\n>> t1 t1\n>> inner join t2 on t1.c1=t2.c1\n>> inner join t3 on t1.c1=t3.c1\n>>\n>> Cordialement,\n>> <http://www.psih.fr/>PSIH Décisionnel en santé\n>> Mathieu VINCENT\n>> Data Analyst\n>> PMSIpilot - 61 rue Sully - 69006 Lyon - France\n>>\n>> 2015-12-17 11:37 GMT+01:00 Mathieu VINCENT <[email protected]\n>> >:\n>>\n>>> Here, another issue with row estimate.\n>>> And, in this example, there is not correlation beetween columns in a\n>>> same table.\n>>>\n>>> drop table if exists t1;\n>>> drop table if exists t2;\n>>> drop table if exists t3;\n>>>\n>>> create table t1 as select generate_Series(1,200000) as c1;\n>>> create table t2 as select generate_Series(1,200000)%100 as c1;\n>>> create table t3 as select generate_Series(1,1500)%750 as c1;\n>>>\n>>> alter table t1 add PRIMARY KEY (c1);\n>>> create index on t2 (c1);\n>>> create index on t3 (c1);\n>>>\n>>> analyze verbose t1;\n>>> analyze verbose t2;\n>>> analyze verbose t3;\n>>>\n>>> EXPLAIN (analyze on, buffers on, verbose on)\n>>> select\n>>> *\n>>> from\n>>> t1 t1\n>>> inner join t2 on t1.c1=t2.c1\n>>> inner join t3 on t2.c1=t3.c1\n>>> the explain plan : http://explain.depesz.com/s/YVw\n>>> Do you understand how postgresql calculate the row estimate ?\n>>>\n>>> BR\n>>> Mathieu VINCENT\n>>>\n>>> 2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:\n>>>\n>>>> Thank you both for the help!\n>>>> happy holidays\n>>>>\n>>>> 2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <\n>>>> [email protected]>:\n>>>>\n>>>>> thks Gunnar,\n>>>>>\n>>>>> I removed the correlation between t3.c1 and t3.c2 in this sql script :\n>>>>>\n>>>>> drop table if exists t1;\n>>>>> drop table if exists t2;\n>>>>> drop table if exists t3;\n>>>>> drop table if exists t4;\n>>>>>\n>>>>> create table t1 as select generate_Series(1,300000) as c1;\n>>>>> create table t2 as select generate_Series(1,400) as c1;\n>>>>> create table t3 as select floor(random()*100+1) as c1, c2 from\n>>>>> generate_Series(1,200000) c2;\n>>>>> create table t4 as select generate_Series(1,200000) as c1;\n>>>>>\n>>>>> alter table t1 add PRIMARY KEY (c1);\n>>>>> alter table t2 add PRIMARY KEY (c1);\n>>>>> alter table t3 add PRIMARY KEY (c1,c2);\n>>>>> create index on t3 (c1);\n>>>>> create index on t3 (c2);\n>>>>> alter table t4 add PRIMARY KEY (c1);\n>>>>>\n>>>>> analyze verbose t1;\n>>>>> analyze verbose t2;\n>>>>> analyze verbose t3;\n>>>>> analyze verbose t4;\n>>>>>\n>>>>> EXPLAIN (analyze on, buffers on, verbose on)\n>>>>> select\n>>>>> *\n>>>>> from\n>>>>> t1 
t1\n>>>>> inner join t2 on t1.c1=t2.c1\n>>>>> inner join t3 on t2.c1=t3.c1\n>>>>> inner join t4 on t3.c2=t4.c1\n>>>>>\n>>>>> Now, the estimate is good : http://explain.depesz.com/s/gCX\n>>>>>\n>>>>> Have a good day\n>>>>>\n>>>>> Mathieu VINCENT\n>>>>>\n>>>>> 2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <\n>>>>> [email protected]>:\n>>>>>\n>>>>>> Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n>>>>>> > Gunnar Nick Bluth <[email protected]> wrote:\n>>>>>> >\n>>>>>> >> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>>>>> >>> Hello,\n>>>>>> >>>\n>>>>>> >>> No one to help me to understand this bad estimation rows ?\n>>>>>> >>\n>>>>>> >> Well,\n>>>>>> >>\n>>>>>> >> on a rather beefy machine, I'm getting quite a different plan:\n>>>>>> >> http://explain.depesz.com/s/3y5r\n>>>>>> >\n>>>>>> > you are using 9.5, right? Got the same plan with 9.5.\n>>>>>>\n>>>>>> Nope...:\n>>>>>> version\n>>>>>>\n>>>>>>\n>>>>>> ------------------------------------------------------------------------------------------------------------\n>>>>>> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n>>>>>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>>>>>>\n>>>>>> So much for those correlation improvements then ;-/\n>>>>>>\n>>>>>>\n>>>>>> > Btw.: Hi Gunnar ;-)\n>>>>>>\n>>>>>> Hi :)\n>>>>>>\n>>>>>> --\n>>>>>> Gunnar \"Nick\" Bluth\n>>>>>> DBA ELSTER\n>>>>>>\n>>>>>> Tel: +49 911/991-4665\n>>>>>> Mobil: +49 172/8853339\n>>>>>>\n>>>>>>\n>>>>>> --\n>>>>>> Sent via pgsql-performance mailing list (\n>>>>>> [email protected])\n>>>>>> To make changes to your subscription:\n>>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>>>\n>>>>>\n>>>>>\n>>>>\n>>>\n>>\n>\n\nHi2015-12-18 16:21 GMT+01:00 Mathieu VINCENT <[email protected]>:Hello,No one to help me to understand this bad estimation rows ?It's NOT caused by :correlation between columns (cross-correlation)bad statistics (i tried with  default_statistics_target to 10 000)bad number of distinct valuescomplexe join conditionsI have no more ideas.PostgreSQL has not cross tables statistics - so expect uniform distribution of foreign keys.  
This expectation is broken in your example.You can find some prototype solutions by Tomas Vondra in hackars mailing list.RegardsPavel thank you for your help.Mathieu VINCENT\n2015-12-17 11:58 GMT+01:00 Mathieu VINCENT <[email protected]>:Adding foreign key between on t2 and t3, does not change the plan.drop table if exists t1;drop table if exists t2;drop table if exists t3;create table t1 as select generate_Series(1,200000) as c1; create table t2 as select generate_Series(1,200000)%100+1 as c1; create table t3 as select generate_Series(1,1500)%750+1 as c1;alter table t1 add PRIMARY KEY (c1);create index on t2 (c1);create index on t3 (c1);ALTER TABLE t2  ADD CONSTRAINT t2_fk FOREIGN KEY (c1) REFERENCES t1(c1);ALTER TABLE t3  ADD CONSTRAINT t3_fk FOREIGN KEY (c1) REFERENCES t1(c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t1.c1=t3.c1 Cordialement,PSIH Décisionnel en santéMathieu VINCENT Data AnalystPMSIpilot - 61 rue Sully - 69006 Lyon - France\n2015-12-17 11:37 GMT+01:00 Mathieu VINCENT <[email protected]>:Here, another issue with row estimate.And, in this example, there is not correlation beetween columns in a same table.drop table if exists t1;drop table if exists t2;drop table if exists t3;create table t1 as select generate_Series(1,200000) as c1; create table t2 as select generate_Series(1,200000)%100 as c1; create table t3 as select generate_Series(1,1500)%750 as c1;alter table t1 add PRIMARY KEY (c1);create index on t2 (c1);create index on t3 (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 the explain plan : http://explain.depesz.com/s/YVwDo you understand how postgresql calculate the row estimate ?BRMathieu VINCENT\n2015-12-17 10:14 GMT+01:00 Matteo Grolla <[email protected]>:Thank you both for the help!happy holidays2015-12-17 10:10 GMT+01:00 Mathieu VINCENT <[email protected]>:thks Gunnar,I removed the correlation between t3.c1 and t3.c2 in this sql script :drop table if exists t1;drop table if exists t2;drop table if exists t3;drop table if exists t4;create table t1 as select generate_Series(1,300000) as c1; create table t2 as select generate_Series(1,400) as c1; create table t3 as select floor(random()*100+1) as c1, c2 from generate_Series(1,200000) c2;create table t4 as select generate_Series(1,200000) as c1;alter table t1 add PRIMARY KEY (c1);alter table t2 add PRIMARY KEY (c1);alter table t3 add PRIMARY KEY (c1,c2);create index on t3 (c1);create index on t3 (c2);alter table t4 add PRIMARY KEY (c1);analyze verbose t1;analyze verbose t2;analyze verbose t3;analyze verbose t4;EXPLAIN (analyze on, buffers on, verbose on)select  *from  t1 t1 inner join t2 on t1.c1=t2.c1 inner join t3 on t2.c1=t3.c1 inner join t4 on t3.c2=t4.c1Now, the estimate is good : http://explain.depesz.com/s/gCXHave a good dayMathieu VINCENT\n2015-12-15 11:21 GMT+01:00 Gunnar \"Nick\" Bluth <[email protected]>:Am 15.12.2015 um 10:49 schrieb Andreas Kretschmer:\n> Gunnar Nick Bluth <[email protected]> wrote:\n>\n>> Am 15.12.2015 um 09:05 schrieb Mathieu VINCENT:\n>>> Hello,\n>>>\n>>> No one to help me to understand this bad estimation rows ?\n>>\n>> Well,\n>>\n>> on a rather beefy machine, I'm getting quite a different plan:\n>> http://explain.depesz.com/s/3y5r\n>\n> you are using 9.5, right? 
Got the same plan with 9.5.\n\nNope...:\n                                                  version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nSo much for those correlation improvements then ;-/\n\n\n> Btw.: Hi Gunnar ;-)\n\nHi :)\n\n--\nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel:   +49 911/991-4665\nMobil: +49 172/8853339\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 18 Dec 2015 16:33:11 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation row error" } ]
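A note on the question that runs through the thread above ("do you understand how postgresql calculates the row estimate?"): without cross-table statistics the planner can only combine each column's own statistics. For an equijoin such as t2.c1 = t3.c1 the estimate is, roughly,

    rows ≈ reltuples(t2) * reltuples(t3) / max(n_distinct(t2.c1), n_distinct(t3.c1))

i.e. it assumes the join keys on one side are spread uniformly over the distinct values of the other side. (This is only a sketch; the real eqjoinsel logic also matches the most-common-value lists and accounts for NULLs.) Skew that exists only in the combination of tables, as in the t1/t2/t3 scripts above, is invisible to it. The inputs the planner works from can be inspected directly:

    -- per-column statistics that feed the join selectivity estimate
    SELECT tablename, attname, n_distinct, null_frac,
           most_common_vals, most_common_freqs
    FROM   pg_stats
    WHERE  tablename IN ('t1', 't2', 't3')
      AND  attname = 'c1';

    -- table-level row and page counts for the base relations
    SELECT relname, reltuples, relpages
    FROM   pg_class
    WHERE  relname IN ('t1', 't2', 't3');

Comparing max(n_distinct) and reltuples from these queries with the row counts in the linked plans should make it easier to see which uniformity assumption breaks down in each example.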
[ { "msg_contents": "I'm using PostgreSQL 9.5 Beta 2.\n\nI am working with a partitioned table set.\n\nThe first thing I noticed, when creating indexes on the 20 or so\npartitions, was that if I create them too fast they don't all succeed. I\nhave to do a few at a time, let them breathe for a few seconds, and then do\na few more. I had been simply generating all of the create index commands\nin a text editor, and then cutting and pasting the lot of them into psql\nall at once or running them by using psql '-f'. Most would get created,\nbut not all. It seems almost random. There were no obvious error\nmessages. When I do a few at a time, it is never an issue.\n\nThis tripped me up because I couldn't figure out why some of the child\ntables were sequence scanning and some were not. It turned out that some\nof the partitions were missing some of the indexes. I'm mentioning it\nhere just in case someone else is observing strange behaviour where some\nchildren are scanning and some aren't. You might not have all of your\nindexes deployed correctly.\n\n--\n\nAnyway, the issue I am trying to figure out at the moment:\n\nIf I do a simple query with a where clause on a specific column from the\nparent table, I can see it index scan each of the children. This is what I\nwant it to do, so no complaints there.\n\nHowever, if I try to (inner) join another table with that column, the\nplanner sequence scans each of the children instead of using the indexes.\nI saw someone had posted a similar question to this list back in January,\nhowever I didn't see the answer.\n\nWhat should I look at to try to figure out why a join doesn't use the\nindexes while a straight query on the same column for the table does?\n\nFWIW, the column in question is a UUID column and is the primary key for\neach of the child tables.\n\n--\nRick.\n\nI'm using PostgreSQL 9.5 Beta 2.I am working with a partitioned table set.The first thing I noticed, when creating indexes on the 20 or so partitions, was that if I create them too fast they don't all succeed.  I have to do a few at a time, let them breathe for a few seconds, and then do a few more.   I had been simply generating all of the create index commands in a text editor, and then cutting and pasting the lot of them into psql all at once or running them by using psql '-f'.  Most would get created, but not all.  It seems almost random.  There were no obvious error messages.  When I do a few at a time, it is never an issue.This tripped me up because I couldn't figure out why some of the child tables were sequence scanning and some were not.  It turned out that some of the partitions were missing some of the indexes.   I'm mentioning it here just in case someone else is observing strange behaviour where some children are scanning and some aren't.  You might not have all of your indexes deployed correctly.--Anyway, the issue I am trying to figure out at the moment:If I do a simple query with a where clause on a specific column from the parent table, I can see it index scan each of the children.  This is what I want it to do, so no complaints there.However, if I try to (inner) join another table with that column, the planner sequence scans each of the children instead of using the indexes.  
I saw someone had posted a similar question to this list back in January, however I didn't see the answer.What should I look at to try to figure out why a join doesn't use the indexes while a straight query on the same column for the table does?FWIW, the column in question is a UUID column and is the primary key for each of the child tables.--Rick.", "msg_date": "Fri, 11 Dec 2015 14:01:32 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "partitioned table set and indexes" }, { "msg_contents": "Rick Otten <[email protected]> wrote:\n\n> I'm using PostgreSQL 9.5 Beta 2.\n> \n> I am working with a partitioned table set.\n> \n> The first thing I noticed, when creating indexes on the 20 or so partitions,\n> was that if I create them too fast they don't all succeed.� I have to do a few\n> at a time, let them breathe for a few seconds, and then do a few more. � I had\n> been simply generating all of the create index commands in a text editor, and\n> then cutting and pasting the lot of them into psql all at once or running them\n> by using psql '-f'.� Most would get created, but not all.� It seems almost\n> random.� There were no obvious error messages.� When I do a few at a time, it\n> is never an issue.\n\nSure? Have you checked that?\n\n\n> If I do a simple query with a where clause on a specific column from the parent\n> table, I can see it index scan each of the children.� This is what I want it to\n> do, so no complaints there.\n> \n> However, if I try to (inner) join another table with that column, the planner\n> sequence scans each of the children instead of using the indexes.� I saw\n> someone had posted a similar question to this list back in January, however I\n> didn't see the answer.\n\nShow us the output from explain analyse <your query>\n\n\n> FWIW, the column in question is a UUID column and is the primary key for each\n> of the child tables.\n\n\nPostgreSQL using a cost-modell, so maybe there are not enough rows in\nthe table. That's just a guess, you can see that with explain analyse\n... \n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Dec 2015 20:44:40 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table set and indexes" }, { "msg_contents": "I do not know why if I blast a new index creation on the 20 or so children\nall at once some of them fail, but then if I go back and do a few at a time\nthey all work. It has happened to me 3 times now, so I'm pretty sure I'm\nnot imagining it.\n\nWhat specifically in the explain analyze output tells you that it is using\na sequence scan instead of an index scan _because_ there are too few rows?\nI can see where it chooses a sequence scan over an index and I know there\nare only a few rows in those tables, but I'm not sure how the explain\noutput tells you that it made that choice on purpose.\n\nWhy would the select statement use the index, but not the join?\n\nThere used to be an explain output anonymizer tool, if I can find that\nagain, I'll send along the output. 
It has been a few years since I posted\na question to this list so I don't think I have a bookmark for it any\nmore.... Hmmm. I'll look around.\n\nMeanwhile:\n\n--\n\n select\n *\n from\n my_parent_table\n where\n mypk = 'something';\n\nUses an index scan on each of my_parent_table's children except for a\ncouple of them that don't have a lot of rows, and those are sequence\nscanned. (which is ok)\n\n--\n\n select\n *\n from\n some_other_table sot\n join my_parent_table mpt on sot.some_column = mpt.mypk\n where\n sot.another_column = 'q'\n\nSequence scans each of my_parent_table's children. (It doesn't matter\nwhich order I put the join.)\n\n--\n\n select\n *\n from\n some_other_table sot\n join my_parent_table mpt on sot.some_column = mpt.mypk\n where\n mpt.column_3 = 'z'\n and\n sot.another_column = 'q'\n\nIndex scans my_parent_table's children on column_3 (except for the couple\nwith only a few rows), and doesn't sequence scan for the mypk column at all.\n\n\n\nOn Fri, Dec 11, 2015 at 2:44 PM, Andreas Kretschmer <\[email protected]> wrote:\n\n> Rick Otten <[email protected]> wrote:\n>\n> > I'm using PostgreSQL 9.5 Beta 2.\n> >\n> > I am working with a partitioned table set.\n> >\n> > The first thing I noticed, when creating indexes on the 20 or so\n> partitions,\n> > was that if I create them too fast they don't all succeed. I have to do\n> a few\n> > at a time, let them breathe for a few seconds, and then do a few more.\n> I had\n> > been simply generating all of the create index commands in a text\n> editor, and\n> > then cutting and pasting the lot of them into psql all at once or\n> running them\n> > by using psql '-f'. Most would get created, but not all. It seems\n> almost\n> > random. There were no obvious error messages. When I do a few at a\n> time, it\n> > is never an issue.\n>\n> Sure? Have you checked that?\n>\n>\n> > If I do a simple query with a where clause on a specific column from the\n> parent\n> > table, I can see it index scan each of the children. This is what I\n> want it to\n> > do, so no complaints there.\n> >\n> > However, if I try to (inner) join another table with that column, the\n> planner\n> > sequence scans each of the children instead of using the indexes. I saw\n> > someone had posted a similar question to this list back in January,\n> however I\n> > didn't see the answer.\n>\n> Show us the output from explain analyse <your query>\n>\n>\n> > FWIW, the column in question is a UUID column and is the primary key for\n> each\n> > of the child tables.\n>\n>\n> PostgreSQL using a cost-modell, so maybe there are not enough rows in\n> the table. That's just a guess, you can see that with explain analyse\n> ...\n>\n>\n> Andreas\n> --\n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect. (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI do not know why if I blast a new index creation on the 20 or so children all at once some of them fail, but then if I go back and do a few at a time they all work.  It has happened to me 3 times now, so I'm pretty sure I'm not imagining it.  What specifically in the explain analyze output tells you that it is using a sequence scan instead of an index scan _because_ there are too few rows?  
I can see where it chooses a sequence scan over an index and I know there are only a few rows in those tables, but I'm not sure how the explain output tells you that it made that choice on purpose.   Why would the select statement use the index, but not the join?There used to be an explain output anonymizer tool, if I can find that again, I'll send along the output.  It has been a few years since I posted a question to this list so I don't think I have a bookmark for it any more.... Hmmm.  I'll look around.Meanwhile:--  select        *   from      my_parent_table   where      mypk = 'something';Uses an index scan on each of my_parent_table's children except for a couple of them that don't have a lot of rows, and those are sequence scanned.  (which is ok)--   select        *    from        some_other_table  sot       join my_parent_table mpt on sot.some_column = mpt.mypk  where       sot.another_column = 'q'Sequence scans each of my_parent_table's children.  (It doesn't matter which order I put the join.)--    select        *    from       some_other_table  sot       join my_parent_table mpt on sot.some_column = mpt.mypk  where       mpt.column_3 = 'z'       and       sot.another_column = 'q'Index scans my_parent_table's children on column_3 (except for the couple with only a few rows), and doesn't sequence scan for the mypk column at all.On Fri, Dec 11, 2015 at 2:44 PM, Andreas Kretschmer <[email protected]> wrote:Rick Otten <[email protected]> wrote:\n\n> I'm using PostgreSQL 9.5 Beta 2.\n>\n> I am working with a partitioned table set.\n>\n> The first thing I noticed, when creating indexes on the 20 or so partitions,\n> was that if I create them too fast they don't all succeed.  I have to do a few\n> at a time, let them breathe for a few seconds, and then do a few more.   I had\n> been simply generating all of the create index commands in a text editor, and\n> then cutting and pasting the lot of them into psql all at once or running them\n> by using psql '-f'.  Most would get created, but not all.  It seems almost\n> random.  There were no obvious error messages.  When I do a few at a time, it\n> is never an issue.\n\nSure? Have you checked that?\n\n\n> If I do a simple query with a where clause on a specific column from the parent\n> table, I can see it index scan each of the children.  This is what I want it to\n> do, so no complaints there.\n>\n> However, if I try to (inner) join another table with that column, the planner\n> sequence scans each of the children instead of using the indexes.  I saw\n> someone had posted a similar question to this list back in January, however I\n> didn't see the answer.\n\nShow us the output from explain analyse <your query>\n\n\n> FWIW, the column in question is a UUID column and is the primary key for each\n> of the child tables.\n\n\nPostgreSQL using a cost-modell, so maybe there are not enough rows in\nthe table. That's just a guess, you can see that with explain analyse\n...\n\n\nAndreas\n--\nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect.                              (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\"   (unknown)\nKaufbach, Saxony, Germany, Europe.              
N 51.05082°, E 13.56889°\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 11 Dec 2015 15:40:30 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table set and indexes" }, { "msg_contents": "On Fri, Dec 11, 2015 at 1:01 PM, Rick Otten <[email protected]> wrote:\n\n> The first thing I noticed, when creating indexes on the 20 or so partitions,\n> was that if I create them too fast they don't all succeed. I have to do a\n> few at a time, let them breathe for a few seconds, and then do a few more.\n> I had been simply generating all of the create index commands in a text\n> editor, and then cutting and pasting the lot of them into psql all at once\n\nI have seen problems with OS \"paste\" functionality dropping chunks of\npasted text if it was too big.\n\n> or running them by using psql '-f'.\n\n... but I would be surprised if that happened when reading from a file.\n\n-- \nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Dec 2015 15:27:10 -0600", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table set and indexes" }, { "msg_contents": "\n\n> Rick Otten <[email protected]> hat am 11. Dezember 2015 um 21:40\n> geschrieben:\n> \n> \n> I do not know why if I blast a new index creation on the 20 or so children\n> all at once some of them fail, but then if I go back and do a few at a time\n> they all work. It has happened to me 3 times now, so I'm pretty sure I'm\n> not imagining it.\n\ndon't believe that, sorry.\n\n\n> \n> What specifically in the explain analyze output tells you that it is using\n> a sequence scan instead of an index scan _because_ there are too few rows?\n> I can see where it chooses a sequence scan over an index and I know there\n> are only a few rows in those tables, but I'm not sure how the explain\n> output tells you that it made that choice on purpose.\n\na sequentiell scan over a small table are cheaper than an index-scan. Imageine a\nsmall table,\nonly 3 rows. Fits in one page. It's cheaper to read just this page than read the\nindex\nplus read the table to put out the result row. \n\n\nWhy are you using partitioning? That's make only sense with large child-tables\n(more than 1 million rows or so)\nand if you have a proper partitioning schema.\n\n\n\n> \n> Why would the select statement use the index, but not the join?\n> \n> There used to be an explain output anonymizer tool, if I can find that\n> again, I'll send along the output. It has been a few years since I posted\n> a question to this list so I don't think I have a bookmark for it any\n> more.... Hmmm. 
I'll look around.\n\n\nhttp://explain.depesz.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Dec 2015 22:44:05 +0100 (CET)", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table set and indexes" }, { "msg_contents": "Ok, here is the first case where I select on the column:\n\n http://explain.depesz.com/s/ECb\n\nHere is the second case where I try a join:\n\n http://explain.depesz.com/s/qIu\n\nAnd here is the third case where I add a filter on the parent table:\n\n http://explain.depesz.com/s/1es\n\nThe primary use case for partitioning is for performance gains when working\nwith very large tables. I agree these are not that large and by itself it\ndoes not justify the extra complexity of working with partitioning.\n\nHowever there are other use cases for the partitioning model. In our case\nwe have legacy business processes that swap out the child tables and\noperate on them independently from each other. They could be refactored to\nwork together within one big table, but that is a project for another day.\n The segmentation of the data into structurally consistent but related\nseparate tables is the first step in that direction. (previously they were\nall different from each other, but similar too) Some of these children\ntables will hit 1M rows by the end of 2016, but it will take a while for\nthem to grow to that size.\n\nI do have another table with many millions of rows that could use\npartitioning, and eventually I'll split that one up - probably around the\ntime I merge this one into a single table. First I have to finish getting\neverything off of MySQL...\n\nThe query performance hit for sequence scanning isn't all that terrible,\nbut I'd rather understand and get rid of the issue if I can, now, before I\nrun into it again in a situation where it is crippling.\n\nThank you for your help with this!\n\n--\n\nps: You don't have to believe me about the bulk index adding thing. I\nhardly believe it myself. It is just something to keep an eye out for. If\nit is a real issue, I ought to be able to build a reproducible test case to\nshare - at that time I'll see if I can open it up as a real bug. For now\nI'd rather focus on understanding why my select uses an index and a join\nwon't.\n\n\n\nOn Fri, Dec 11, 2015 at 4:44 PM, Andreas Kretschmer <[email protected]\n> wrote:\n\n>\n>\n> > Rick Otten <[email protected]> hat am 11. Dezember 2015 um 21:40\n> > geschrieben:\n> >\n> >\n> > I do not know why if I blast a new index creation on the 20 or so\n> children\n> > all at once some of them fail, but then if I go back and do a few at a\n> time\n> > they all work. It has happened to me 3 times now, so I'm pretty sure I'm\n> > not imagining it.\n>\n> don't believe that, sorry.\n>\n>\n> >\n> > What specifically in the explain analyze output tells you that it is\n> using\n> > a sequence scan instead of an index scan _because_ there are too few\n> rows?\n> > I can see where it chooses a sequence scan over an index and I know there\n> > are only a few rows in those tables, but I'm not sure how the explain\n> > output tells you that it made that choice on purpose.\n>\n> a sequentiell scan over a small table are cheaper than an index-scan.\n> Imageine a\n> small table,\n> only 3 rows. Fits in one page. 
It's cheaper to read just this page than\n> read the\n> index\n> plus read the table to put out the result row.\n>\n>\n> Why are you using partitioning? That's make only sense with large\n> child-tables\n> (more than 1 million rows or so)\n> and if you have a proper partitioning schema.\n>\n>\n>\n> >\n> > Why would the select statement use the index, but not the join?\n> >\n> > There used to be an explain output anonymizer tool, if I can find that\n> > again, I'll send along the output. It has been a few years since I\n> posted\n> > a question to this list so I don't think I have a bookmark for it any\n> > more.... Hmmm. I'll look around.\n>\n>\n> http://explain.depesz.com/\n>\n\nOk, here is the first case where I select on the column:    http://explain.depesz.com/s/ECbHere is the second case where I try a join:    http://explain.depesz.com/s/qIuAnd here is the third case where I add a filter on the parent table:   http://explain.depesz.com/s/1esThe primary use case for partitioning is for performance gains when working with very large tables.  I agree these are not that large and by itself it does not justify the extra complexity of working with partitioning.However there are other use cases for the partitioning model.  In our case we have legacy business processes that swap out the child tables and operate on them independently from each other.  They could be refactored to work together within one big table, but that is a project for another day.   The segmentation of the data into structurally consistent but related separate tables is the first step in that direction.  (previously they were all different from each other, but similar too)   Some of these children tables will hit 1M rows by the end of 2016, but it will take a while for them to grow to that size.I do have another table with many millions of rows that could use partitioning, and eventually I'll split that one up - probably around the time I merge this one into a single table.  First I have to finish getting everything off of MySQL...The query performance hit for sequence scanning isn't all that terrible, but I'd rather understand and get rid of the issue if I can, now, before I run into it again in a situation where it is crippling.Thank you for your help with this!--ps:  You don't have to believe me about the bulk index adding thing.  I hardly believe it myself. It is just something to keep an eye out for.  If it is a real issue, I ought to be able to build a reproducible test case to share - at that time I'll see if I can open it up as a real bug.  For now I'd rather focus on understanding why my select uses an index and a join won't.On Fri, Dec 11, 2015 at 4:44 PM, Andreas Kretschmer <[email protected]> wrote:\n\n> Rick Otten <[email protected]> hat am 11. Dezember 2015 um 21:40\n> geschrieben:\n>\n>\n> I do not know why if I blast a new index creation on the 20 or so children\n> all at once some of them fail, but then if I go back and do a few at a time\n> they all work.  It has happened to me 3 times now, so I'm pretty sure I'm\n> not imagining it.\n\ndon't believe that, sorry.\n\n\n>\n> What specifically in the explain analyze output tells you that it is using\n> a sequence scan instead of an index scan _because_ there are too few rows?\n> I can see where it chooses a sequence scan over an index and I know there\n> are only a few rows in those tables, but I'm not sure how the explain\n> output tells you that it made that choice on purpose.\n\na sequentiell scan over a small table are cheaper than an index-scan. 
Imageine a\nsmall table,\nonly 3 rows. Fits in one page. It's cheaper to read just this page than read the\nindex\nplus read the table to put out the result row.\n\n\nWhy are you using partitioning? That's make only sense with large child-tables\n(more than 1 million rows or so)\nand if you have a proper partitioning schema.\n\n\n\n>\n> Why would the select statement use the index, but not the join?\n>\n> There used to be an explain output anonymizer tool, if I can find that\n> again, I'll send along the output.  It has been a few years since I posted\n> a question to this list so I don't think I have a bookmark for it any\n> more.... Hmmm.  I'll look around.\n\n\nhttp://explain.depesz.com/", "msg_date": "Fri, 11 Dec 2015 17:09:46 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table set and indexes" }, { "msg_contents": "\n> Rick Otten <[email protected]> hat am 11. Dezember 2015 um 23:09\n> geschrieben:\n\n> \n> The query performance hit for sequence scanning isn't all that terrible,\n> but I'd rather understand and get rid of the issue if I can, now, before I\n> run into it again in a situation where it is crippling.\n\ni think, you should try to understand how the planner works.\n\na simple example:\n\ntest=# create table foo (id serial primary key, val text);\nCREATE TABLE\ntest=*# insert into foo (val) select repeat(md5(1::text), 5);\nINSERT 0 1\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..1.02 rows=1 width=164) (actual time=0.006..0.007\nrows=1 loops=1)\n Filter: (id = 1)\n Rows Removed by Filter: 1\n Planning time: 0.118 ms\n Execution time: 0.021 ms\n(5 rows)\n\n\nAs you can see a seq-scan. It's a small table, costs ..1.02. \n\nAdding one row:\n\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 1\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..1.02 rows=1 width=164) (actual time=0.006..0.007\nrows=1 loops=1)\n Filter: (id = 1)\n Rows Removed by Filter: 1\n Planning time: 0.118 ms\n Execution time: 0.021 ms\n(5 rows)\n\n\nThe same plan. Adding 2 rows:\n\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 2\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..1.05 rows=1 width=164) (actual time=0.220..0.277\nrows=1 loops=1)\n Filter: (id = 1)\n Rows Removed by Filter: 3\n Planning time: 0.149 ms\n Execution time: 0.453 ms\n(5 rows)\n\n\nThe same plan. 
Adding more rows:\n\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 4\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 8\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 16\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 32\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 64\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 128\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 256\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 512\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 1024\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 2048\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 4096\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Index Scan using foo_pkey on foo (cost=0.28..8.30 rows=1 width=164) (actual\ntime=0.007..0.008 rows=1 loops=1)\n Index Cond: (id = 1)\n Planning time: 0.120 ms\n Execution time: 0.024 ms\n(4 rows)\n\n\nWe got a new plan! Index-Scan now. We are looking now in pg_class to see how\nmany rows and pages we have:\n\ntest=*# select relpages, reltuples from pg_class where relname = 'foo';\n relpages | reltuples\n----------+-----------\n 200 | 8192\n(1 row)\n\nHow large ist the Index?\n\ntest=*# select relpages, reltuples from pg_class where relname = 'foo_pkey';\n relpages | reltuples\n----------+-----------\n 25 | 8192\n(1 row)\n\n\n\nSo, now it's cheaper to read the index and than do an index-scan on the heap to\nread one record (our where-condition is on the primary key, so only one row\nexpected, one page have to read with random access)\n\n\n\nIt's simple math! If you want to learn more you can find a lot about that via\ngoogle:\n\nhttps://www.google.de/?gws_rd=ssl#q=explaining+explain\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 12 Dec 2015 01:20:21 +0100 (CET)", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table set and indexes" }, { "msg_contents": "Why does it index scan when I use where, but not when I do a join?\n\nOn Fri, Dec 11, 2015 at 7:20 PM, Andreas Kretschmer <[email protected]\n> wrote:\n\n>\n> > Rick Otten <[email protected]> hat am 11. Dezember 2015 um 23:09\n> > geschrieben:\n>\n> >\n> > The query performance hit for sequence scanning isn't all that terrible,\n> > but I'd rather understand and get rid of the issue if I can, now, before\n> I\n> > run into it again in a situation where it is crippling.\n>\n> i think, you should try to understand how the planner works.\n>\n> a simple example:\n>\n> test=# create table foo (id serial primary key, val text);\n> CREATE TABLE\n> test=*# insert into foo (val) select repeat(md5(1::text), 5);\n> INSERT 0 1\n> test=*# analyse foo;\n> ANALYZE\n> test=*# explain analyse select val from foo where id=1;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Seq Scan on foo (cost=0.00..1.02 rows=1 width=164) (actual\n> time=0.006..0.007\n> rows=1 loops=1)\n> Filter: (id = 1)\n> Rows Removed by Filter: 1\n> Planning time: 0.118 ms\n> Execution time: 0.021 ms\n> (5 rows)\n>\n>\n> As you can see a seq-scan. 
It's a small table, costs ..1.02.\n>\n> Adding one row:\n>\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 1\n> test=*# analyse foo;\n> ANALYZE\n> test=*# explain analyse select val from foo where id=1;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Seq Scan on foo (cost=0.00..1.02 rows=1 width=164) (actual\n> time=0.006..0.007\n> rows=1 loops=1)\n> Filter: (id = 1)\n> Rows Removed by Filter: 1\n> Planning time: 0.118 ms\n> Execution time: 0.021 ms\n> (5 rows)\n>\n>\n> The same plan. Adding 2 rows:\n>\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 2\n> test=*# analyse foo;\n> ANALYZE\n> test=*# explain analyse select val from foo where id=1;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Seq Scan on foo (cost=0.00..1.05 rows=1 width=164) (actual\n> time=0.220..0.277\n> rows=1 loops=1)\n> Filter: (id = 1)\n> Rows Removed by Filter: 3\n> Planning time: 0.149 ms\n> Execution time: 0.453 ms\n> (5 rows)\n>\n>\n> The same plan. Adding more rows:\n>\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 4\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 8\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 16\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 32\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 64\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 128\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 256\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 512\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 1024\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 2048\n> test=*# insert into foo (val) select val from foo;\n> INSERT 0 4096\n> test=*# analyse foo;\n> ANALYZE\n> test=*# explain analyse select val from foo where id=1;\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------\n> Index Scan using foo_pkey on foo (cost=0.28..8.30 rows=1 width=164)\n> (actual\n> time=0.007..0.008 rows=1 loops=1)\n> Index Cond: (id = 1)\n> Planning time: 0.120 ms\n> Execution time: 0.024 ms\n> (4 rows)\n>\n>\n> We got a new plan! Index-Scan now. We are looking now in pg_class to see\n> how\n> many rows and pages we have:\n>\n> test=*# select relpages, reltuples from pg_class where relname = 'foo';\n> relpages | reltuples\n> ----------+-----------\n> 200 | 8192\n> (1 row)\n>\n> How large ist the Index?\n>\n> test=*# select relpages, reltuples from pg_class where relname =\n> 'foo_pkey';\n> relpages | reltuples\n> ----------+-----------\n> 25 | 8192\n> (1 row)\n>\n>\n>\n> So, now it's cheaper to read the index and than do an index-scan on the\n> heap to\n> read one record (our where-condition is on the primary key, so only one row\n> expected, one page have to read with random access)\n>\n>\n>\n> It's simple math! If you want to learn more you can find a lot about that\n> via\n> google:\n>\n> https://www.google.de/?gws_rd=ssl#q=explaining+explain\n>\n\nWhy does it index scan when I use where, but not when I do a join?On Fri, Dec 11, 2015 at 7:20 PM, Andreas Kretschmer <[email protected]> wrote:\n> Rick Otten <[email protected]> hat am 11. 
Dezember 2015 um 23:09\n> geschrieben:\n\n>\n> The query performance hit for sequence scanning isn't all that terrible,\n> but I'd rather understand and get rid of the issue if I can, now, before I\n> run into it again in a situation where it is crippling.\n\ni think, you should try to understand how the planner works.\n\na simple example:\n\ntest=# create table foo (id serial primary key, val text);\nCREATE TABLE\ntest=*# insert into foo (val) select repeat(md5(1::text), 5);\nINSERT 0 1\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n                                          QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on foo  (cost=0.00..1.02 rows=1 width=164) (actual time=0.006..0.007\nrows=1 loops=1)\n   Filter: (id = 1)\n   Rows Removed by Filter: 1\n Planning time: 0.118 ms\n Execution time: 0.021 ms\n(5 rows)\n\n\nAs you can see a seq-scan. It's a small table, costs ..1.02.\n\nAdding one row:\n\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 1\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n                                          QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on foo  (cost=0.00..1.02 rows=1 width=164) (actual time=0.006..0.007\nrows=1 loops=1)\n   Filter: (id = 1)\n   Rows Removed by Filter: 1\n Planning time: 0.118 ms\n Execution time: 0.021 ms\n(5 rows)\n\n\nThe same plan. Adding 2 rows:\n\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 2\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n                                          QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on foo  (cost=0.00..1.05 rows=1 width=164) (actual time=0.220..0.277\nrows=1 loops=1)\n   Filter: (id = 1)\n   Rows Removed by Filter: 3\n Planning time: 0.149 ms\n Execution time: 0.453 ms\n(5 rows)\n\n\nThe same plan. Adding more rows:\n\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 4\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 8\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 16\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 32\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 64\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 128\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 256\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 512\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 1024\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 2048\ntest=*# insert into foo (val) select val from foo;\nINSERT 0 4096\ntest=*# analyse foo;\nANALYZE\ntest=*# explain analyse select val from foo where id=1;\n                                                   QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Index Scan using foo_pkey on foo  (cost=0.28..8.30 rows=1 width=164) (actual\ntime=0.007..0.008 rows=1 loops=1)\n   Index Cond: (id = 1)\n Planning time: 0.120 ms\n Execution time: 0.024 ms\n(4 rows)\n\n\nWe got a new plan! Index-Scan now. 
We are looking now in pg_class to see how\nmany rows and pages we have:\n\ntest=*# select relpages, reltuples from pg_class where relname = 'foo';\n relpages | reltuples\n----------+-----------\n      200 |      8192\n(1 row)\n\nHow large ist the Index?\n\ntest=*# select relpages, reltuples from pg_class where relname = 'foo_pkey';\n relpages | reltuples\n----------+-----------\n       25 |      8192\n(1 row)\n\n\n\nSo, now it's cheaper to read the index and than do an index-scan on the heap to\nread one record (our where-condition is on the primary key, so only one row\nexpected, one page have to read with random access)\n\n\n\nIt's simple math! If you want to learn more you can find a lot about that via\ngoogle:\n\nhttps://www.google.de/?gws_rd=ssl#q=explaining+explain", "msg_date": "Fri, 11 Dec 2015 19:55:14 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table set and indexes" }, { "msg_contents": "\n\n> Rick Otten <[email protected]> hat am 12. Dezember 2015 um 01:55\n> geschrieben:\n> \n> \n> Why does it index scan when I use where, but not when I do a join?\n\n\ndifficult to say/guess because of anonymized names and not knowing the real\nquery. This one? http://explain.depesz.com/s/1es ?\nAll seqscans are fast, a seqscan isn't evil per se.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 12 Dec 2015 04:13:38 +0100 (CET)", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table set and indexes" } ]
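The behaviour discussed in this thread is easier to see with a self-contained example. The sketch below uses 9.x-style inheritance partitioning; the table and column names are invented for illustration and are not taken from the thread. With constraint_exclusion at its default of 'partition', a WHERE clause with a constant on the partition key lets the planner skip children whose CHECK constraints cannot match, and each remaining child is then costed on its own: a child that fits in a page or two keeps getting a seq scan because that really is cheaper, while a child with enough rows switches to its index, which is the effect described above.

-- hypothetical parent and monthly children (names are made up)
CREATE TABLE measurements (
    id      serial,
    logdate date NOT NULL,
    reading numeric
);

CREATE TABLE measurements_2015_11 (
    CHECK (logdate >= DATE '2015-11-01' AND logdate < DATE '2015-12-01')
) INHERITS (measurements);

CREATE TABLE measurements_2015_12 (
    CHECK (logdate >= DATE '2015-12-01' AND logdate < DATE '2016-01-01')
) INHERITS (measurements);

-- each child needs its own index on the partition key
CREATE INDEX ON measurements_2015_11 (logdate);
CREATE INDEX ON measurements_2015_12 (logdate);

-- 'partition' is already the default; shown only to make the dependency explicit
SET constraint_exclusion = partition;

-- children whose CHECK constraint cannot match the constant are pruned;
-- the remaining child is read by seq scan or index scan depending on its size
EXPLAIN ANALYZE
SELECT * FROM measurements WHERE logdate = DATE '2015-12-11';

Joining to the parent instead of filtering on a constant behaves differently: constraint exclusion only works with values known at plan time, so a join condition by itself cannot prune children, and small children will still show up as seq scans in the join plan.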
[ { "msg_contents": "Hello all, I hope someone can help me with this.\n\nPostgres 9.4.4\nSlon 2.2.4\nLinux\n\nI am using slony-i to replicate a production database which is in the \norder of 70GB. I have a reasonably complex select query that runs in 40 \nseconds on the master but takes in the region of 30-40 minutes on the \nslave. The postgres configurations are identical and the machines are a \nsimilar specifications (12 core hyper threaded HP server and the slave \nhas slightly less RAM: 132GB vs 148GB) The server running the slave \ndatabase has a higher load than the one running the master though the \nload average on the slave machine was low (1-2) when running the test \nand the postgres process on the slave machine runs at 100% of a CPU with \nvery little iowait on the server.\n\nInspecting the execution plan shows that there are some differences, for \nexample, the slave is using a HashAggregate when the master is simply \ngrouping. There also seems to be a difference with the ordering of the \nsub plans. Armed with this knowledge I have set enable_hashagg to off \nand run the query again and it now takes 53 seconds on the slave which \nis a more acceptable difference and the execution plans now look very \nsimilar (one difference being that there is another HashAggregate in the \nmaster which is now missing on the slave and may account for the 13 \nseconds). I have isolated a much simpler query which I have detailed \nbelow with their execution plans which shows the difference on line 4. I \nwould rather not disable hash aggregation on the slave as this might \nhave other consequences so this raises a number of questions. Firstly Is \nthere anything that I can do to stop this feature? Why is the slave \nbehaving differently to the master?\n\nThanks in advance for any help.\n\nCheers\nMatthew\n\nexplain\nwith my_view_booking_pax_breakdown as (\nSELECT bev.booking_id,\n ( SELECT count(*) AS count\n FROM passenger_version\n WHERE passenger_version.current_version = 'T'::bpchar AND \npassenger_version.deleted = 'F'::bpchar AND \npassenger_version.indicative_pax_type = 'A'::bpchar AND \npassenger_version.booking_id = bev.booking_id) AS adult_count,\n ( SELECT count(*) AS count\n FROM passenger_version\n WHERE passenger_version.current_version = 'T'::bpchar AND \npassenger_version.deleted = 'F'::bpchar AND \npassenger_version.indicative_pax_type = 'C'::bpchar AND \npassenger_version.booking_id = bev.booking_id) AS child_count,\n ( SELECT count(*) AS count\n FROM passenger_version\n WHERE passenger_version.current_version = 'T'::bpchar AND \npassenger_version.deleted = 'F'::bpchar AND \npassenger_version.indicative_pax_type = 'I'::bpchar AND \npassenger_version.booking_id = bev.booking_id) AS infant_count\n FROM booking_expanded_version bev\n GROUP BY bev.booking_id\n)\nselect * from \"my_view_booking_pax_breakdown\" \"view_booking_pax_breakdown\"\n INNER JOIN \"booking\".\"booking_expanded_version\" \n\"booking_expanded_version\" ON \n\"view_booking_pax_breakdown\".\"booking_id\"=\"booking_expanded_version\".\"booking_id\" \n\n\nMaster\n\n\"Merge Join (cost=5569138.32..6158794.12 rows=2461265 width=1375)\"\n\" Merge Cond: (booking_expanded_version.booking_id = \nview_booking_pax_breakdown.booking_id)\"\n\" CTE my_view_booking_pax_breakdown\"\n*\" -> Group (cost=0.43..5545692.19 rows=215891 width=4)\"*\n\" Group Key: bev.booking_id\"\n\" -> Index Only Scan using \nbooking_expanded_version_booking_idx on booking_expanded_version bev \n(cost=0.43..64607.40 rows=2461265 width=4)\"\n\" 
SubPlan 1\"\n\" -> Aggregate (cost=8.57..8.58 rows=1 width=0)\"\n\" -> Index Scan using passenger_version_idx_4 on \npassenger_version (cost=0.43..8.55 rows=5 width=0)\"\n\" Index Cond: (booking_id = bev.booking_id)\"\n\" SubPlan 2\"\n\" -> Aggregate (cost=8.45..8.46 rows=1 width=0)\"\n\" -> Index Scan using passenger_version_idx_3 on \npassenger_version passenger_version_1 (cost=0.42..8.45 rows=1 width=0)\"\n\" Index Cond: (booking_id = bev.booking_id)\"\n\" SubPlan 3\"\n\" -> Aggregate (cost=8.31..8.32 rows=1 width=0)\"\n\" -> Index Scan using passenger_version_idx_2 on \npassenger_version passenger_version_2 (cost=0.29..8.31 rows=1 width=0)\"\n\" Index Cond: (booking_id = bev.booking_id)\"\n\" -> Index Scan using booking_expanded_version_booking_idx on \nbooking_expanded_version (cost=0.43..546584.09 rows=2461265 width=1347)\"\n\" -> Sort (cost=23445.70..23985.43 rows=215891 width=28)\"\n\" Sort Key: view_booking_pax_breakdown.booking_id\"\n\" -> CTE Scan on my_view_booking_pax_breakdown \nview_booking_pax_breakdown (cost=0.00..4317.82 rows=215891 width=28)\"\n\nSlave\n\n\"Merge Join (cost=6168518.91..6764756.86 rows=2505042 width=1299)\"\n\" Merge Cond: (booking_expanded_version.booking_id = \nview_booking_pax_breakdown.booking_id)\"\n\" CTE my_view_booking_pax_breakdown\"\n*\" -> HashAggregate (cost=212185.03..6142965.53 rows=234040 width=4)\"*\n\" Group Key: bev.booking_id\"\n\" -> Seq Scan on booking_expanded_version bev \n(cost=0.00..205922.42 rows=2505042 width=4)\"\n\" SubPlan 1\"\n\" -> Aggregate (cost=8.54..8.55 rows=1 width=0)\"\n\" -> Index Scan using passenger_version_idx_4 on \npassenger_version (cost=0.43..8.53 rows=4 width=0)\"\n\" Index Cond: (booking_id = bev.booking_id)\"\n\" SubPlan 2\"\n\" -> Aggregate (cost=8.45..8.46 rows=1 width=0)\"\n\" -> Index Scan using passenger_version_idx_3 on \npassenger_version passenger_version_1 (cost=0.42..8.45 rows=1 width=0)\"\n\" Index Cond: (booking_id = bev.booking_id)\"\n\" SubPlan 3\"\n\" -> Aggregate (cost=8.31..8.32 rows=1 width=0)\"\n\" -> Index Scan using passenger_version_idx_2 on \npassenger_version passenger_version_2 (cost=0.29..8.31 rows=1 width=0)\"\n\" Index Cond: (booking_id = bev.booking_id)\"\n\" -> Index Scan using booking_expanded_version_booking_idx on \nbooking_expanded_version (cost=0.43..552400.15 rows=2505042 width=1271)\"\n\" -> Sort (cost=25552.95..26138.05 rows=234040 width=28)\"\n\" Sort Key: view_booking_pax_breakdown.booking_id\"\n\" -> CTE Scan on my_view_booking_pax_breakdown \nview_booking_pax_breakdown (cost=0.00..4680.80 rows=234040 width=28)\"\n\n\n\nThis message has been scanned for malware by Websense. www.websense.com\n\n\n\n\n\n\n Hello all, I hope someone can help me with this.\n\n Postgres 9.4.4\n Slon 2.2.4\n Linux \n\n I am using slony-i to replicate a production database which is in\n the order of 70GB. I have a reasonably complex select query that\n runs in 40 seconds on the master but takes in the region of 30-40\n minutes on the slave. 
The postgres configurations are identical and\n the machines are a similar specifications (12 core hyper threaded HP\n server and the slave has slightly less RAM: 132GB vs 148GB) The\n server running the slave database has a higher load than the one\n running the master though the load average on the slave machine was\n low (1-2) when running the test and the postgres process on the\n slave machine runs at 100% of a CPU with very little iowait on the\n server.\n\n Inspecting the execution plan shows that there are some differences,\n for example, the slave is using a HashAggregate when the master is\n simply grouping. There also seems to be a difference with the\n ordering of the sub plans. Armed with this knowledge I have set\n enable_hashagg to off and run the query again and it now takes 53\n seconds on the slave which is a more acceptable difference and the\n execution plans now look very similar (one difference being that\n there is another HashAggregate in the master which is now missing on\n the slave and may account for the 13 seconds). I have isolated a\n much simpler query which I have detailed below with their execution\n plans which shows the difference on line 4. I would rather not\n disable hash aggregation on the slave as this might have other\n consequences so this raises a number of questions. Firstly Is there\n anything that I can do to stop this feature? Why is the slave\n behaving differently to the master? \n\n Thanks in advance for any help.\n\n Cheers\n Matthew\n\n explain\n with my_view_booking_pax_breakdown as (\n SELECT bev.booking_id,\n     ( SELECT count(*) AS count\n            FROM passenger_version\n           WHERE passenger_version.current_version = 'T'::bpchar AND\n passenger_version.deleted = 'F'::bpchar AND\n passenger_version.indicative_pax_type = 'A'::bpchar AND\n passenger_version.booking_id = bev.booking_id) AS adult_count,\n     ( SELECT count(*) AS count\n            FROM passenger_version\n           WHERE passenger_version.current_version = 'T'::bpchar AND\n passenger_version.deleted = 'F'::bpchar AND\n passenger_version.indicative_pax_type = 'C'::bpchar AND\n passenger_version.booking_id = bev.booking_id) AS child_count,\n     ( SELECT count(*) AS count\n            FROM passenger_version\n           WHERE passenger_version.current_version = 'T'::bpchar AND\n passenger_version.deleted = 'F'::bpchar AND\n passenger_version.indicative_pax_type = 'I'::bpchar AND\n passenger_version.booking_id = bev.booking_id) AS infant_count\n    FROM booking_expanded_version bev\n   GROUP BY bev.booking_id\n )\n select * from \"my_view_booking_pax_breakdown\"\n \"view_booking_pax_breakdown\"\n     INNER JOIN \"booking\".\"booking_expanded_version\"\n \"booking_expanded_version\" ON\n \"view_booking_pax_breakdown\".\"booking_id\"=\"booking_expanded_version\".\"booking_id\"\n \n     \n Master\n\n \"Merge Join  (cost=5569138.32..6158794.12 rows=2461265 width=1375)\"\n \"  Merge Cond: (booking_expanded_version.booking_id =\n view_booking_pax_breakdown.booking_id)\"\n \"  CTE my_view_booking_pax_breakdown\"\n\"    ->  Group  (cost=0.43..5545692.19 rows=215891 width=4)\"\n \"          Group Key: bev.booking_id\"\n \"          ->  Index Only Scan using\n booking_expanded_version_booking_idx on booking_expanded_version\n bev  (cost=0.43..64607.40 rows=2461265 width=4)\"\n \"          SubPlan 1\"\n \"            ->  Aggregate  (cost=8.57..8.58 rows=1 width=0)\"\n \"                  ->  Index Scan using passenger_version_idx_4\n on passenger_version  (cost=0.43..8.55 
rows=5 width=0)\"\n \"                        Index Cond: (booking_id = bev.booking_id)\"\n \"          SubPlan 2\"\n \"            ->  Aggregate  (cost=8.45..8.46 rows=1 width=0)\"\n \"                  ->  Index Scan using passenger_version_idx_3\n on passenger_version passenger_version_1  (cost=0.42..8.45 rows=1\n width=0)\"\n \"                        Index Cond: (booking_id = bev.booking_id)\"\n \"          SubPlan 3\"\n \"            ->  Aggregate  (cost=8.31..8.32 rows=1 width=0)\"\n \"                  ->  Index Scan using passenger_version_idx_2\n on passenger_version passenger_version_2  (cost=0.29..8.31 rows=1\n width=0)\"\n \"                        Index Cond: (booking_id = bev.booking_id)\"\n \"  ->  Index Scan using booking_expanded_version_booking_idx on\n booking_expanded_version  (cost=0.43..546584.09 rows=2461265\n width=1347)\"\n \"  ->  Sort  (cost=23445.70..23985.43 rows=215891 width=28)\"\n \"        Sort Key: view_booking_pax_breakdown.booking_id\"\n \"        ->  CTE Scan on my_view_booking_pax_breakdown\n view_booking_pax_breakdown  (cost=0.00..4317.82 rows=215891\n width=28)\"\n\n Slave\n\n \"Merge Join  (cost=6168518.91..6764756.86 rows=2505042 width=1299)\"\n \"  Merge Cond: (booking_expanded_version.booking_id =\n view_booking_pax_breakdown.booking_id)\"\n \"  CTE my_view_booking_pax_breakdown\"\n\"    ->  HashAggregate  (cost=212185.03..6142965.53\n rows=234040 width=4)\"\n \"          Group Key: bev.booking_id\"\n \"          ->  Seq Scan on booking_expanded_version bev \n (cost=0.00..205922.42 rows=2505042 width=4)\"\n \"          SubPlan 1\"\n \"            ->  Aggregate  (cost=8.54..8.55 rows=1 width=0)\"\n \"                  ->  Index Scan using passenger_version_idx_4\n on passenger_version  (cost=0.43..8.53 rows=4 width=0)\"\n \"                        Index Cond: (booking_id = bev.booking_id)\"\n \"          SubPlan 2\"\n \"            ->  Aggregate  (cost=8.45..8.46 rows=1 width=0)\"\n \"                  ->  Index Scan using passenger_version_idx_3\n on passenger_version passenger_version_1  (cost=0.42..8.45 rows=1\n width=0)\"\n \"                        Index Cond: (booking_id = bev.booking_id)\"\n \"          SubPlan 3\"\n \"            ->  Aggregate  (cost=8.31..8.32 rows=1 width=0)\"\n \"                  ->  Index Scan using passenger_version_idx_2\n on passenger_version passenger_version_2  (cost=0.29..8.31 rows=1\n width=0)\"\n \"                        Index Cond: (booking_id = bev.booking_id)\"\n \"  ->  Index Scan using booking_expanded_version_booking_idx on\n booking_expanded_version  (cost=0.43..552400.15 rows=2505042\n width=1271)\"\n \"  ->  Sort  (cost=25552.95..26138.05 rows=234040 width=28)\"\n \"        Sort Key: view_booking_pax_breakdown.booking_id\"\n \"        ->  CTE Scan on my_view_booking_pax_breakdown\n view_booking_pax_breakdown  (cost=0.00..4680.80 rows=234040\n width=28)\"\n\n\nThis message has been scanned for malware by Websense. www.websense.com", "msg_date": "Mon, 14 Dec 2015 17:16:52 +0000", "msg_from": "Matthew Lunnon <[email protected]>", "msg_from_op": true, "msg_subject": "Performance difference between Slon master and slave" }, { "msg_contents": "On 12/14/15 11:16 AM, Matthew Lunnon wrote:\n> Inspecting the execution plan shows that there are some differences, for\n> example, the slave is using a HashAggregate when the master is simply\n> grouping. 
There also seems to be a difference with the ordering of the\n> sub plans.\n\nHave you tried analyzing the tables on the slave?\n\nAlso, keep in mind that the first time you access rows on a Slony slave \nafter they're replicated Postgres will need to write hint bits out, \nwhich will take some time. But that's clearly not the issue here.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Dec 2015 11:49:57 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance difference between Slon master and slave" }, { "msg_contents": "Hi Jim,\nThanks for your response. Yes the tables have been analysed and I have also re-indexed and vacuumed the slave database. \nRegards\nMatthew \n\nSent from my iPad\n\n> On 14 Dec 2015, at 17:49, Jim Nasby <[email protected]> wrote:\n> \n>> On 12/14/15 11:16 AM, Matthew Lunnon wrote:\n>> Inspecting the execution plan shows that there are some differences, for\n>> example, the slave is using a HashAggregate when the master is simply\n>> grouping. There also seems to be a difference with the ordering of the\n>> sub plans.\n> \n> Have you tried analyzing the tables on the slave?\n> \n> Also, keep in mind that the first time you access rows on a Slony slave after they're replicated Postgres will need to write hint bits out, which will take some time. But that's clearly not the issue here.\n> -- \n> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\n> Experts in Analytics, Data Architecture and PostgreSQL\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Dec 2015 21:43:44 +0000", "msg_from": "Mattthew Lunnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance difference between Slon master and slave" } ]
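For anyone hitting the same master/slave plan divergence, it is usually worth confirming that both nodes are planning from comparable inputs before forcing anything with enable_hashagg. The queries below are only a sketch of that comparison; the table names are the ones mentioned in the thread, and the list of settings to check is a suggestion rather than an exhaustive one. Run them on both nodes and diff the output.

-- the optimizer's view of table size on this node
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('booking_expanded_version', 'passenger_version');

-- when the tables were last analysed, manually or by autovacuum
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('booking_expanded_version', 'passenger_version');

-- settings that commonly change the choice between GroupAggregate and HashAggregate
SELECT name, setting
FROM pg_settings
WHERE name IN ('work_mem', 'effective_cache_size', 'random_page_cost',
               'default_statistics_target', 'enable_hashagg');

-- if the statistics differ, refresh them on the slave and re-test
ANALYZE booking_expanded_version;
ANALYZE passenger_version;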
[ { "msg_contents": "I have a really busy function that I need to optimize the best way I can.\nThis function is just a nested select statement that is requested several\ntimes a sec by a legacy application. I'm running a PostgreSQL 9.4 on a\nCentOS 6;\n\nThe indexes are in place but I've noticed that it is only used after the\nfirst execution of the function. I think that the problem is that Postgres\nisn't getting the best execution plan at first because of a parameter that\nit is highly exclusive in the majority of the cases, but it can be not as\ngood sometimes. We can't change the way we call the function to a plain sql\nstatement or a view because we can't change the application code itself.\n\nWhen I test with EXPLAIN ANALYZE after the first execution, the query runs\nreally fast but the aplication sessions call the function only once and\nthen are terminated. I need that the first execution use the actual\noptimized plan.\n\nWe tried messing around with the connector driver that manage the\nconnection pooling to issue a DISCARD TEMP instead of DISCARD ALL, so it\ncould keep the cached plan of the sessions and the performance improved a\nlot, but I don't want to do that in a production environment.\n\nI've tried to change the language to a sql function but it didn't help as\nthe execution time didn't drop after the first execution. I've tried to add\nthe \"SET LOCAL join_collapse_limit = 1\" too but it appears it doesn't work\ninside a function; Here is the function code:\n\nCREATE OR REPLACE FUNCTION public.ap_keepalive_geteqpid_veiid(\n IN tcbserie bigint,\n IN protocolo integer)\n RETURNS TABLE(eqpid integer, veiid integer, tcbid integer, veiplaca\ncharacter varying, veiproprietariocliid integer, tcbtppid integer,\ntcbversao character, veirpmparametro double precision, tcbconfiguracao\nbigint, tcbevtconfig integer, veibitsalertas integer, sluid integer, harid\ninteger) AS\n$BODY$\nBEGIN\n\nRETURN QUERY\nSELECT teqp.eqpID,\nteqp.eqpveiID AS veiID,\ntcb.tcbID,\ntvei.veiPlaca,\ntvei.veiProprietariocliID,\ntcb.tcbtppID,\ntcb.tcbVersao,\ntvei.veiRPMParametro,\nCOALESCE(COALESCE(NULLIF(tcb.tcbConfiguracao, 0),\ntcc.clcConfiguracaoBitsVeic), 0) AS tcbConfiguracao,\nCOALESCE(tcb.tcbevtConfig, 0) AS tcbevtConfig,\nCOALESCE(tvei.veiBitsAlertas, 0) AS veiBitsAlertas,\nCOALESCE(tvei.veisluID, 0) AS sluID,\nCOALESCE(tcb.tcbharID, 0) AS harID\nFROM TabComputadorBordo tcb\nINNER JOIN TabEquipamento teqp ON teqp.eqptcbID = tcb.tcbID\nINNER JOIN TabPacoteProduto tpp ON teqp.eqptppID = tpp.tppID\nINNER JOIN TabVeiculos tvei ON teqp.eqpveiID = tvei.veiID\nLEFT JOIN TabCliente tcli ON tcli.cliid = tvei.veiProprietariocliID\nLEFT JOIN TabClienteConfig tcc ON tcc.clcCliID = tcli.cliID\nWHERE tcb.tcbserie = $1\nAND teqp.eqpAtivo = 1\nAND tpp.tppIDProtocolo = $2\nAND tvei.veiBloqueioSinal = 0;\n\nEND\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 10000\n ROWS 1;\n\nExecution plan in the first execution:\n\n\"Function Scan on ap_keepalive_geteqpid_veiid (cost=0.25..0.26 rows=1\nwidth=116) (actual time=3.268..3.268 rows=1 loops=1)\"\n\"Planning time: 0.032 ms\"\n\"Execution time: 3.288 ms\"\n\nSecond execution:\n\n\"Function Scan on ap_keepalive_geteqpid_veiid (cost=0.25..0.26 rows=1\nwidth=116) (actual time=0.401..0.402 rows=1 loops=1)\"\n\"Planning time: 0.058 ms\"\n\"Execution time: 0.423 ms\"\n\nThank you in advance,\nPedro Ivo\n\nI have a really busy function that I need to optimize the best way I can. 
This function is just a nested select statement that is requested several times a sec by a legacy application. I'm running a PostgreSQL 9.4 on a CentOS 6;The indexes are in place but I've noticed that it is only used after the first execution of the function. I think that the problem is that Postgres isn't getting the best execution plan at first because of a parameter that it is highly exclusive in the majority of the cases, but it can be not as good sometimes. We can't change the way we call the function to a plain sql statement or a view because we can't change the application code itself.When I test with EXPLAIN ANALYZE after the first execution, the query runs really fast but the aplication sessions call the function only once and then are terminated. I need that the first execution use the actual optimized plan. We tried messing around with the connector driver that manage the connection pooling to issue a DISCARD TEMP instead of DISCARD ALL, so it could keep the cached plan of the sessions and the performance improved a lot, but I don't want to do that in a production environment.I've tried to change the language to a sql function but it didn't help as the execution time didn't drop after the first execution. I've tried to add the \"SET LOCAL join_collapse_limit = 1\" too but it appears it doesn't work inside a function; Here is the function code:CREATE OR REPLACE FUNCTION public.ap_keepalive_geteqpid_veiid(    IN tcbserie bigint,    IN protocolo integer)  RETURNS TABLE(eqpid integer, veiid integer, tcbid integer, veiplaca character varying, veiproprietariocliid integer, tcbtppid integer, tcbversao character, veirpmparametro double precision, tcbconfiguracao bigint, tcbevtconfig integer, veibitsalertas integer, sluid integer, harid integer) AS$BODY$BEGIN RETURN QUERY SELECT teqp.eqpID,  teqp.eqpveiID AS veiID,  tcb.tcbID,  tvei.veiPlaca,  tvei.veiProprietariocliID,  tcb.tcbtppID,  tcb.tcbVersao, tvei.veiRPMParametro,  COALESCE(COALESCE(NULLIF(tcb.tcbConfiguracao, 0), tcc.clcConfiguracaoBitsVeic), 0) AS tcbConfiguracao, COALESCE(tcb.tcbevtConfig, 0) AS tcbevtConfig, COALESCE(tvei.veiBitsAlertas, 0) AS veiBitsAlertas, COALESCE(tvei.veisluID, 0) AS sluID, COALESCE(tcb.tcbharID, 0) AS harID FROM TabComputadorBordo tcb INNER JOIN TabEquipamento teqp ON teqp.eqptcbID = tcb.tcbID INNER JOIN TabPacoteProduto tpp ON teqp.eqptppID = tpp.tppID INNER JOIN TabVeiculos tvei ON teqp.eqpveiID = tvei.veiID LEFT JOIN TabCliente tcli ON tcli.cliid = tvei.veiProprietariocliID LEFT JOIN TabClienteConfig tcc ON tcc.clcCliID = tcli.cliID WHERE   tcb.tcbserie = $1 AND teqp.eqpAtivo = 1 AND tpp.tppIDProtocolo = $2 AND tvei.veiBloqueioSinal = 0;END$BODY$  LANGUAGE plpgsql VOLATILE  COST 10000  ROWS 1;Execution plan in the first execution:\"Function Scan on ap_keepalive_geteqpid_veiid  (cost=0.25..0.26 rows=1 width=116) (actual time=3.268..3.268 rows=1 loops=1)\"\"Planning time: 0.032 ms\"\"Execution time: 3.288 ms\"Second execution:\"Function Scan on ap_keepalive_geteqpid_veiid  (cost=0.25..0.26 rows=1 width=116) (actual time=0.401..0.402 rows=1 loops=1)\"\"Planning time: 0.058 ms\"\"Execution time: 0.423 ms\"Thank you in advance,Pedro Ivo", "msg_date": "Mon, 14 Dec 2015 16:53:45 -0200", "msg_from": "=?UTF-8?Q?Pedro_Fran=C3=A7a?= <[email protected]>", "msg_from_op": true, "msg_subject": "Getting an optimal plan on the first execution of a pl/pgsql function" }, { "msg_contents": "On Mon, Dec 14, 2015 at 11:53 AM, Pedro França <[email protected]>\nwrote:\n\n> I have a really busy function that I need to 
optimize the best way I can.\n> This function is just a nested select statement that is requested several\n> times a sec by a legacy application. I'm running a PostgreSQL 9.4 on a\n> CentOS 6;\n>\n> The indexes are in place but I've noticed that it is only used after the\n> first execution of the function.\n>\n\n​How do you know this?​\n\nI think that the problem is that Postgres isn't getting the best execution\n> plan at first because of a parameter that it is highly exclusive in the\n> majority of the cases, but it can be not as good sometimes. We can't change\n> the way we call the function to a plain sql statement or a view because we\n> can't change the application code itself.\n>\n> When I test with EXPLAIN ANALYZE after the first execution, the query runs\n> really fast but the aplication sessions call the function only once and\n> then are terminated. I need that the first execution use the actual\n> optimized plan.\n>\n> We tried messing around with the connector driver that manage the\n> connection pooling to issue a DISCARD TEMP instead of DISCARD ALL, so it\n> could keep the cached plan of the sessions and the performance improved a\n> lot, but I don't want to do that in a production environment.\n>\n\nGiven the constraints you've listed this seems like it might be your only\navenue of improvement.​ Your problem that the performance improvement is\nseen due to caching effects. If you throw away the cache you loose the\nimprovement.\n\n\n> I've tried to change the language to a sql function but it didn't help as\n> the execution time didn't drop after the first execution.\n>\n\n​Yes, this likely would make thing worse...depending upon how it is called.\n\nI've tried to add the \"SET LOCAL join_collapse_limit = 1\" too but it\n> appears it doesn't work inside a function;\n>\n\n​I wouldn't expect that parameter to have any effect in this scenario.\n\nHere is the function code:\n>\n> CREATE OR REPLACE FUNCTION public.ap_keepalive_geteqpid_veiid(\n> IN tcbserie bigint,\n> IN protocolo integer)\n> RETURNS TABLE(eqpid integer, veiid integer, tcbid integer, veiplaca\n> character varying, veiproprietariocliid integer, tcbtppid integer,\n> tcbversao character, veirpmparametro double precision, tcbconfiguracao\n> bigint, tcbevtconfig integer, veibitsalertas integer, sluid integer, harid\n> integer) AS\n> $BODY$\n> BEGIN\n>\n> RETURN QUERY\n> SELECT teqp.eqpID,\n> teqp.eqpveiID AS veiID,\n> tcb.tcbID,\n> tvei.veiPlaca,\n> tvei.veiProprietariocliID,\n> tcb.tcbtppID,\n> tcb.tcbVersao,\n> tvei.veiRPMParametro,\n> COALESCE(COALESCE(NULLIF(tcb.tcbConfiguracao, 0),\n> tcc.clcConfiguracaoBitsVeic), 0) AS tcbConfiguracao,\n> COALESCE(tcb.tcbevtConfig, 0) AS tcbevtConfig,\n> COALESCE(tvei.veiBitsAlertas, 0) AS veiBitsAlertas,\n> COALESCE(tvei.veisluID, 0) AS sluID,\n> COALESCE(tcb.tcbharID, 0) AS harID\n> FROM TabComputadorBordo tcb\n> INNER JOIN TabEquipamento teqp ON teqp.eqptcbID = tcb.tcbID\n> INNER JOIN TabPacoteProduto tpp ON teqp.eqptppID = tpp.tppID\n> INNER JOIN TabVeiculos tvei ON teqp.eqpveiID = tvei.veiID\n> LEFT JOIN TabCliente tcli ON tcli.cliid = tvei.veiProprietariocliID\n> LEFT JOIN TabClienteConfig tcc ON tcc.clcCliID = tcli.cliID\n> WHERE tcb.tcbserie = $1\n> AND teqp.eqpAtivo = 1\n> AND tpp.tppIDProtocolo = $2\n> AND tvei.veiBloqueioSinal = 0;\n>\n> END\n> $BODY$\n> LANGUAGE plpgsql VOLATILE\n> COST 10000\n> ROWS 1;\n>\n> Execution plan in the first execution:\n>\n\n​You likely could make this STABLE instead of VOLATILE; though that doesn't\nsolve your problem.​\n\n\n> 
\"Function Scan on ap_keepalive_geteqpid_veiid (cost=0.25..0.26 rows=1\n> width=116) (actual time=3.268..3.268 rows=1 loops=1)\"\n> \"Planning time: 0.032 ms\"\n> \"Execution time: 3.288 ms\"\n>\n> Second execution:\n>\n> \"Function Scan on ap_keepalive_geteqpid_veiid (cost=0.25..0.26 rows=1\n> width=116) (actual time=0.401..0.402 rows=1 loops=1)\"\n> \"Planning time: 0.058 ms\"\n> \"Execution time: 0.423 ms\"\n>\n>\n​I'm doubting the query inside of the function is the problem here...it is\nthe function usage itself. Calling a function has overhead in that the\nbody of function needs to be processed. This only has to happen once per\nsession. The first call of the function incurs this overhead while\nsubsequent calls do not.\n\nPending others correcting me...I fairly certain regarding my conclusions\nthough somewhat inexperienced in doing this kind of diagnostics.\n\nDavid J.\n\nOn Mon, Dec 14, 2015 at 11:53 AM, Pedro França <[email protected]> wrote:I have a really busy function that I need to optimize the best way I can. This function is just a nested select statement that is requested several times a sec by a legacy application. I'm running a PostgreSQL 9.4 on a CentOS 6;The indexes are in place but I've noticed that it is only used after the first execution of the function. ​How do you know this?​I think that the problem is that Postgres isn't getting the best execution plan at first because of a parameter that it is highly exclusive in the majority of the cases, but it can be not as good sometimes. We can't change the way we call the function to a plain sql statement or a view because we can't change the application code itself.When I test with EXPLAIN ANALYZE after the first execution, the query runs really fast but the aplication sessions call the function only once and then are terminated. I need that the first execution use the actual optimized plan. We tried messing around with the connector driver that manage the connection pooling to issue a DISCARD TEMP instead of DISCARD ALL, so it could keep the cached plan of the sessions and the performance improved a lot, but I don't want to do that in a production environment.Given the constraints you've listed this seems like it might be your only avenue of improvement.​  Your problem that the performance improvement is seen due to caching effects.  If you throw away the cache you loose the improvement.I've tried to change the language to a sql function but it didn't help as the execution time didn't drop after the first execution. 
​Yes, this likely would make thing worse...depending upon how it is called.I've tried to add the \"SET LOCAL join_collapse_limit = 1\" too but it appears it doesn't work inside a function;​I wouldn't expect that parameter to have any effect in this scenario.Here is the function code:CREATE OR REPLACE FUNCTION public.ap_keepalive_geteqpid_veiid(    IN tcbserie bigint,    IN protocolo integer)  RETURNS TABLE(eqpid integer, veiid integer, tcbid integer, veiplaca character varying, veiproprietariocliid integer, tcbtppid integer, tcbversao character, veirpmparametro double precision, tcbconfiguracao bigint, tcbevtconfig integer, veibitsalertas integer, sluid integer, harid integer) AS$BODY$BEGIN RETURN QUERY SELECT teqp.eqpID,  teqp.eqpveiID AS veiID,  tcb.tcbID,  tvei.veiPlaca,  tvei.veiProprietariocliID,  tcb.tcbtppID,  tcb.tcbVersao, tvei.veiRPMParametro,  COALESCE(COALESCE(NULLIF(tcb.tcbConfiguracao, 0), tcc.clcConfiguracaoBitsVeic), 0) AS tcbConfiguracao, COALESCE(tcb.tcbevtConfig, 0) AS tcbevtConfig, COALESCE(tvei.veiBitsAlertas, 0) AS veiBitsAlertas, COALESCE(tvei.veisluID, 0) AS sluID, COALESCE(tcb.tcbharID, 0) AS harID FROM TabComputadorBordo tcb INNER JOIN TabEquipamento teqp ON teqp.eqptcbID = tcb.tcbID INNER JOIN TabPacoteProduto tpp ON teqp.eqptppID = tpp.tppID INNER JOIN TabVeiculos tvei ON teqp.eqpveiID = tvei.veiID LEFT JOIN TabCliente tcli ON tcli.cliid = tvei.veiProprietariocliID LEFT JOIN TabClienteConfig tcc ON tcc.clcCliID = tcli.cliID WHERE   tcb.tcbserie = $1 AND teqp.eqpAtivo = 1 AND tpp.tppIDProtocolo = $2 AND tvei.veiBloqueioSinal = 0;END$BODY$  LANGUAGE plpgsql VOLATILE  COST 10000  ROWS 1;Execution plan in the first execution:​You likely could make this STABLE instead of VOLATILE; though that doesn't solve your problem.​\"Function Scan on ap_keepalive_geteqpid_veiid  (cost=0.25..0.26 rows=1 width=116) (actual time=3.268..3.268 rows=1 loops=1)\"\"Planning time: 0.032 ms\"\"Execution time: 3.288 ms\"Second execution:\"Function Scan on ap_keepalive_geteqpid_veiid  (cost=0.25..0.26 rows=1 width=116) (actual time=0.401..0.402 rows=1 loops=1)\"\"Planning time: 0.058 ms\"\"Execution time: 0.423 ms\"​I'm doubting the query inside of the function is the problem here...it is the function usage itself.  Calling a function has overhead in that the body of function needs to be processed.  This only has to happen once per session.  The first call of the function incurs this overhead while subsequent calls do not.Pending others correcting me...I fairly certain regarding my conclusions though somewhat inexperienced in doing this kind of diagnostics.David J.", "msg_date": "Mon, 14 Dec 2015 12:21:54 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting an optimal plan on the first execution of a\n pl/pgsql function" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Mon, Dec 14, 2015 at 11:53 AM, Pedro França <[email protected]>\n> wrote:\n>> When I test with EXPLAIN ANALYZE after the first execution, the query runs\n>> really fast but the aplication sessions call the function only once and\n>> then are terminated. I need that the first execution use the actual\n>> optimized plan.\n\n> Your problem that the performance improvement is\n> seen due to caching effects. If you throw away the cache you loose the\n> improvement.\n\nYeah. And it's not only the function itself, it's catalog caches and a\nbunch of other stuff. 
Basically, you should expect that the first few\nqueries executed by any PG session are going to be slower than those\nexecuted later. If you can't fix your application to hold sessions open\nfor a reasonable amount of time, use a connection pooler to do it for you\n(pgpooler for instance).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Dec 2015 15:41:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting an optimal plan on the first execution of a pl/pgsql\n function" }, { "msg_contents": "Thank you for the replies guys, The output of auto-explain pratically\ncomfirms what you say (sorry there are some portuguese words in there). I\nwill try pgpooler.\n\n< 2015-12-14 18:10:02.314 BRST >LOG: duration: 0.234 ms plan:\nQuery Text: SELECT teqp.eqpID,\nteqp.eqpveiID AS veiID,\ntcb.tcbID,\ntvei.veiPlaca,\ntvei.veiProprietariocliID,\ntcb.tcbtppID,\ntcb.tcbVersao,\ntvei.veiRPMParametro,\nCOALESCE(COALESCE(NULLIF(tcb.tcbConfiguracao, 0),\ntcc.clcConfiguracaoBitsVeic), 0) AS tcbConfiguracao,\nCOALESCE(tcb.tcbevtConfig, 0) AS tcbevtConfig,\nCOALESCE(tvei.veiBitsAlertas, 0) AS veiBitsAlertas,\nCOALESCE(tvei.veisluID, 0) AS sluID,\nCOALESCE(tcb.tcbharID, 0) AS harID\nFROM TabComputadorBordo tcb\nINNER JOIN TabEquipamento teqp ON teqp.eqptcbID = tcb.tcbID\nINNER JOIN TabPacoteProduto tpp ON teqp.eqptppID = tpp.tppID\nINNER JOIN TabVeiculos tvei ON teqp.eqpveiID = tvei.veiID\nLEFT JOIN TabCliente tcli ON tcli.cliid = tvei.veiProprietariocliID\nLEFT JOIN TabClienteConfig tcc ON tcc.clcCliID = tcli.cliID\nWHERE tcb.tcbserie = $1\nAND teqp.eqpAtivo = 1\nAND tpp.tppIDProtocolo = $2\nAND tvei.veiBloqueioSinal = 0\nNested Loop Left Join (cost=1.29..18.65 rows=1 width=75) (actual\ntime=0.226..0.230 rows=1 loops=1)\n Join Filter: (tcc.clccliid = tcli.cliid)\n Rows Removed by Join Filter: 3\n -> Nested Loop Left Join (cost=1.29..17.57 rows=1 width=75) (actual\ntime=0.205..0.209 rows=1 loops=1)\n -> Nested Loop (cost=1.01..17.26 rows=1 width=71) (actual\ntime=0.200..0.203 rows=1 loops=1)\n -> Nested Loop (cost=0.72..16.80 rows=1 width=43) (actual\ntime=0.097..0.098 rows=1 loops=1)\n -> Nested Loop (cost=0.58..16.63 rows=1 width=47)\n(actual time=0.079..0.080 rows=1 loops=1)\n -> Index Scan using\nix_tabcomputadorbordo_tcbserie on tabcomputadorbordo tcb (cost=0.29..8.31\nrows=1 width=35) (actual time=0.046..0.046 rows=1 loops=1)\n Index Cond: (tcbserie = $1)\n -> Index Scan using\nix_tabequipamento_eqptcbid_eqpativo_eqptppid_eqpveiid on tabequipamento\nteqp (cost=0.29..8.31 rows=1 width=16) (actual time=0.030..0.031 rows=1\nloops=1)\n Index Cond: ((eqptcbid = tcb.tcbid) AND\n(eqpativo = 1))\n -> Index Only Scan using\nix_tabpacoteproduto_tppidprotocolo on tabpacoteproduto tpp\n (cost=0.14..0.16 rows=1 width=4) (actual time=0.015..0.015 rows=1 loops=1)\n Index Cond: ((tppidprotocolo = $2) AND (tppid =\nteqp.eqptppid))\n Heap Fetches: 1\n -> Index Scan using pk_tabveiculos on tabveiculos tvei\n (cost=0.29..0.45 rows=1 width=32) (actual time=0.100..0.101 rows=1 loops=1)\n Index Cond: (veiid = teqp.eqpveiid)\n Filter: (veibloqueiosinal = 0)\n -> Index Only Scan using pk_tabcliente on tabcliente tcli\n (cost=0.28..0.30 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=1)\n Index Cond: (cliid = tvei.veiproprietariocliid)\n Heap Fetches: 1\n -> Seq Scan on tabclienteconfig tcc (cost=0.00..1.03 rows=3 
width=8)\n(actual time=0.014..0.015 rows=3 loops=1)\n< 2015-12-14 18:10:02.314 BRST >CONTEXTO: função PL/pgSQL\nap_keepalive_geteqpid_veiid(bigint,integer) linha 4 em RETURN QUERY\n< 2015-12-14 18:10:02.314 BRST >LOG: duration: 4.057 ms plan:\nQuery Text: SELECT * FROM ap_keepalive_geteqpid_veiid (tcbSerie := 8259492,\nprotocolo:= 422);\n\nThank you for the replies guys, The output of auto-explain pratically comfirms what you say (sorry there are some portuguese words in there). I will try pgpooler.\n< 2015-12-14 18:10:02.314 BRST >LOG:  duration: 0.234 ms  plan: Query Text: SELECT teqp.eqpID,  teqp.eqpveiID AS veiID,  tcb.tcbID,  tvei.veiPlaca,  tvei.veiProprietariocliID,  tcb.tcbtppID,  tcb.tcbVersao, tvei.veiRPMParametro,  COALESCE(COALESCE(NULLIF(tcb.tcbConfiguracao, 0), tcc.clcConfiguracaoBitsVeic), 0) AS tcbConfiguracao, COALESCE(tcb.tcbevtConfig, 0) AS tcbevtConfig, COALESCE(tvei.veiBitsAlertas, 0) AS veiBitsAlertas, COALESCE(tvei.veisluID, 0) AS sluID, COALESCE(tcb.tcbharID, 0) AS harID FROM TabComputadorBordo tcb INNER JOIN TabEquipamento teqp ON teqp.eqptcbID = tcb.tcbID INNER JOIN TabPacoteProduto tpp ON teqp.eqptppID = tpp.tppID INNER JOIN TabVeiculos tvei ON teqp.eqpveiID = tvei.veiID LEFT JOIN TabCliente tcli ON tcli.cliid = tvei.veiProprietariocliID LEFT JOIN TabClienteConfig tcc ON tcc.clcCliID = tcli.cliID WHERE   tcb.tcbserie = $1 AND teqp.eqpAtivo = 1 AND tpp.tppIDProtocolo = $2 AND tvei.veiBloqueioSinal = 0 Nested Loop Left Join  (cost=1.29..18.65 rows=1 width=75) (actual time=0.226..0.230 rows=1 loops=1)  Join Filter: (tcc.clccliid = tcli.cliid)  Rows Removed by Join Filter: 3  ->  Nested Loop Left Join  (cost=1.29..17.57 rows=1 width=75) (actual time=0.205..0.209 rows=1 loops=1)        ->  Nested Loop  (cost=1.01..17.26 rows=1 width=71) (actual time=0.200..0.203 rows=1 loops=1)              ->  Nested Loop  (cost=0.72..16.80 rows=1 width=43) (actual time=0.097..0.098 rows=1 loops=1)                    ->  Nested Loop  (cost=0.58..16.63 rows=1 width=47) (actual time=0.079..0.080 rows=1 loops=1)                          ->  Index Scan using ix_tabcomputadorbordo_tcbserie on tabcomputadorbordo tcb  (cost=0.29..8.31 rows=1 width=35) (actual time=0.046..0.046 rows=1 loops=1)                                Index Cond: (tcbserie = $1)                          ->  Index Scan using ix_tabequipamento_eqptcbid_eqpativo_eqptppid_eqpveiid on tabequipamento teqp  (cost=0.29..8.31 rows=1 width=16) (actual time=0.030..0.031 rows=1 loops=1)                                Index Cond: ((eqptcbid = tcb.tcbid) AND (eqpativo = 1))                    ->  Index Only Scan using ix_tabpacoteproduto_tppidprotocolo on tabpacoteproduto tpp  (cost=0.14..0.16 rows=1 width=4) (actual time=0.015..0.015 rows=1 loops=1)                          Index Cond: ((tppidprotocolo = $2) AND (tppid = teqp.eqptppid))                          Heap Fetches: 1              ->  Index Scan using pk_tabveiculos on tabveiculos tvei  (cost=0.29..0.45 rows=1 width=32) (actual time=0.100..0.101 rows=1 loops=1)                    Index Cond: (veiid = teqp.eqpveiid)                    Filter: (veibloqueiosinal = 0)        ->  Index Only Scan using pk_tabcliente on tabcliente tcli  (cost=0.28..0.30 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=1)              Index Cond: (cliid = tvei.veiproprietariocliid)              Heap Fetches: 1  ->  Seq Scan on tabclienteconfig tcc  (cost=0.00..1.03 rows=3 width=8) (actual time=0.014..0.015 rows=3 loops=1)< 2015-12-14 18:10:02.314 BRST >CONTEXTO:  função PL/pgSQL 
ap_keepalive_geteqpid_veiid(bigint,integer) linha 4 em RETURN QUERY< 2015-12-14 18:10:02.314 BRST >LOG:  duration: 4.057 ms  plan: Query Text: SELECT * FROM ap_keepalive_geteqpid_veiid (tcbSerie := 8259492, protocolo:= 422);", "msg_date": "Mon, 14 Dec 2015 18:50:56 -0200", "msg_from": "Pedro França <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting an optimal plan on the first execution of a\n pl/pgsql function" } ]
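The first-call overhead described by Tom Lane is easy to reproduce from a single psql session, and auto_explain output like the one quoted in the thread can be captured with the session-level settings sketched below. The function name and arguments are taken from the thread; everything else is just one way of setting it up.

-- log plans of all statements, including those executed inside PL/pgSQL
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_analyze = on;
SET auto_explain.log_nested_statements = on;

\timing on

-- first call in a fresh session: pays for populating catalog caches and
-- building the function's cached plan
SELECT * FROM ap_keepalive_geteqpid_veiid(tcbSerie := 8259492, protocolo := 422);

-- second call in the same session: reuses the cached plan
SELECT * FROM ap_keepalive_geteqpid_veiid(tcbSerie := 8259492, protocolo := 422);

Because the application opens a connection, calls the function once and disconnects, the cached plan is never reused; a pooler that keeps server sessions alive between application connections (pgbouncer or pgpool-II, for example) is what turns the second-call timing into the common case.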
[ { "msg_contents": "Hi,\n I hope you can help me understand why the db is so big and if there's\nanything I can do.\nIt's the DB of an Enterprise Content Management application, Alfresco. Here\nare some data I collected, after executing a vaccum from pg admin.\n\nA) Largest tables sizes relation total_size\n SELECT nspname || '.' || relname AS \"relation\",\n pg_size_pretty(pg_total_relation_size(C.oid)) AS \"total_size\"\n FROM pg_class C\n LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\n WHERE nspname NOT IN ('pg_catalog', 'information_schema')\n AND C.relkind <> 'i'\n AND nspname !~ '^pg_toast'\n ORDER BY pg_total_relation_size(C.oid) DESC\n LIMIT 20; public.alf_node_properties 17 GB\npublic.alf_node 2331 MB\npublic.alf_node_aspects 1939 MB\npublic.alf_child_assoc 1160 MB\npublic.alf_content_data 338 MB\npublic.alf_content_url 296 MB\npublic.alf_transaction 207 MB\npublic.alf_node_assoc 140 MB\npublic.alf_activity_feed 6016 kB\npublic.alf_activity_post 1464 kB\npublic.alf_acl_member 1056 kB\npublic.alf_access_control_list 712 kB\npublic.act_hi_detail 352 kB\npublic.act_hi_varinst 248 kB\npublic.alf_prop_value 240 kB\npublic.act_ru_variable 232 kB\npublic.alf_access_control_entry 208 kB\npublic.alf_lock 144 kB\npublic.alf_authority 144 kB\npublic.jbpm_log 120 kB\n\nB) size size of alf_node_properties\nselect pg_catalog.pg_size_pretty(sum(pg_column_size(alf_node_properties.*)))\n from alf_node_properties 6322 MB\n\nC) size of all the columns of alf_node_properties\nDimensioni delle singole colonne (MB)node_idactual_type_npersisted_type_n\nqname_idlist_indexlocale_idboolean_valuelong_valuefloat_valuedouble_value\nstring_valueserializable_valuetotale (MB)totale values select\npg_catalog.pg_size_pretty(sum(pg_column_size(node_id))) node_id,\npg_catalog.pg_size_pretty(sum(pg_column_size(actual_type_n))) actual_type_n,\npg_catalog.pg_size_pretty(sum(pg_column_size(persisted_type_n)))\npersisted_type_n,\npg_catalog.pg_size_pretty(sum(pg_column_size(qname_id))) qname_id,\npg_catalog.pg_size_pretty(sum(pg_column_size(list_index))) list_index,\npg_catalog.pg_size_pretty(sum(pg_column_size(locale_id))) locale_id,\npg_catalog.pg_size_pretty(sum(pg_column_size(boolean_value))) boolean_value,\npg_catalog.pg_size_pretty(sum(pg_column_size(long_value))) long_value,\npg_catalog.pg_size_pretty(sum(pg_column_size(float_value))) float_value,\npg_catalog.pg_size_pretty(sum(pg_column_size(double_value))) double_value,\npg_catalog.pg_size_pretty(sum(pg_column_size(string_value))) string_value,\npg_catalog.pg_size_pretty(sum(pg_column_size(serializable_value)))\nserializable_value\nfrom alf_node_properties419210210419210419524192104196686237171830\n\n-------Questions----------------\n\n1) Can you explain me the big difference between the result in A for table\nalf_node_properties: 17GB and the result in B: ~6GB ?\n\n2) Can you explain me the difference between the result in B: ~6GB and the\nresult in C, the sum of all column sizes, 3717MB ?\n\nThanks\n\nHi,     I hope you can help me understand why the db is so big and if there's anything I can do.It's the DB of an Enterprise Content Management application, Alfresco. Here are some data I collected, after executing a vaccum from pg admin.A) Largest tables sizes\t\trelation total_size  SELECT nspname || '.' 
|| relname AS \"relation\",    pg_size_pretty(pg_total_relation_size(C.oid)) AS \"total_size\"  FROM pg_class C  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)  WHERE nspname NOT IN ('pg_catalog', 'information_schema')    AND C.relkind <> 'i'    AND nspname !~ '^pg_toast'  ORDER BY pg_total_relation_size(C.oid) DESC  LIMIT 20; public.alf_node_properties 17 GB public.alf_node 2331 MB public.alf_node_aspects 1939 MB public.alf_child_assoc 1160 MB public.alf_content_data 338 MB public.alf_content_url 296 MB public.alf_transaction 207 MB public.alf_node_assoc 140 MB public.alf_activity_feed 6016 kB public.alf_activity_post 1464 kB public.alf_acl_member 1056 kB public.alf_access_control_list 712 kB public.act_hi_detail 352 kB public.act_hi_varinst 248 kB public.alf_prop_value 240 kB public.act_ru_variable 232 kB public.alf_access_control_entry 208 kB public.alf_lock 144 kB public.alf_authority 144 kB public.jbpm_log 120 kBB) size size of alf_node_propertiesselect pg_catalog.pg_size_pretty(sum(pg_column_size(alf_node_properties.*)))      from alf_node_properties 6322 MBC) size of all the columns of alf_node_propertiesDimensioni delle singole colonne (MB)node_idactual_type_npersisted_type_nqname_idlist_indexlocale_idboolean_valuelong_valuefloat_valuedouble_valuestring_valueserializable_valuetotale (MB)totale values select pg_catalog.pg_size_pretty(sum(pg_column_size(node_id))) node_id, pg_catalog.pg_size_pretty(sum(pg_column_size(actual_type_n))) actual_type_n, pg_catalog.pg_size_pretty(sum(pg_column_size(persisted_type_n))) persisted_type_n, pg_catalog.pg_size_pretty(sum(pg_column_size(qname_id))) qname_id, pg_catalog.pg_size_pretty(sum(pg_column_size(list_index))) list_index, pg_catalog.pg_size_pretty(sum(pg_column_size(locale_id))) locale_id, pg_catalog.pg_size_pretty(sum(pg_column_size(boolean_value))) boolean_value, pg_catalog.pg_size_pretty(sum(pg_column_size(long_value))) long_value, pg_catalog.pg_size_pretty(sum(pg_column_size(float_value))) float_value, pg_catalog.pg_size_pretty(sum(pg_column_size(double_value))) double_value, pg_catalog.pg_size_pretty(sum(pg_column_size(string_value))) string_value, pg_catalog.pg_size_pretty(sum(pg_column_size(serializable_value))) serializable_value from alf_node_properties419210210419210419524192104196686237171830-------Questions----------------1) Can you explain me the big difference between the result in A for table alf_node_properties: 17GB and the result in B: ~6GB ?2) Can you explain me the difference between the result in B: ~6GB and the result in C, the sum of all column sizes, 3717MB ?Thanks", "msg_date": "Tue, 15 Dec 2015 10:52:16 +0100", "msg_from": "Matteo Grolla <[email protected]>", "msg_from_op": true, "msg_subject": "Can't explain db size" }, { "msg_contents": "Matteo Grolla <[email protected]> wrote:\n\n> \n> -------Questions----------------\n> \n> 1) Can you explain me the big difference between the result in A for table\n> alf_node_properties: 17GB and the result in B: ~6GB ?\n> \n> 2) Can you explain me the difference between the result in B: ~6GB and the\n> result in C, the sum of all column sizes, 3717MB ?\n\nMaybe there are some dead tuples, run a VACUUM FULL (be careful, it\nrequires an explicit lock). And please keep in mind that a table\ncan contains indexes and other objects. 
A nice explanation and some ways\nto gather informations on table-, index- and database sizes can you find\nhere:\nhttp://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n\n\nRegards, Andreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Dec 2015 11:07:58 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't explain db size" }, { "msg_contents": "Thanks Andreas,\n Il try\n\n2015-12-15 11:07 GMT+01:00 Andreas Kretschmer <[email protected]>:\n\n> Matteo Grolla <[email protected]> wrote:\n>\n> >\n> > -------Questions----------------\n> >\n> > 1) Can you explain me the big difference between the result in A for\n> table\n> > alf_node_properties: 17GB and the result in B: ~6GB ?\n> >\n> > 2) Can you explain me the difference between the result in B: ~6GB and\n> the\n> > result in C, the sum of all column sizes, 3717MB ?\n>\n> Maybe there are some dead tuples, run a VACUUM FULL (be careful, it\n> requires an explicit lock). And please keep in mind that a table\n> can contains indexes and other objects. A nice explanation and some ways\n> to gather informations on table-, index- and database sizes can you find\n> here:\n>\n> http://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n>\n>\n> Regards, Andreas\n> --\n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect. (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks Andreas,     Il try2015-12-15 11:07 GMT+01:00 Andreas Kretschmer <[email protected]>:Matteo Grolla <[email protected]> wrote:\n\n>\n> -------Questions----------------\n>\n> 1) Can you explain me the big difference between the result in A for table\n> alf_node_properties: 17GB and the result in B: ~6GB ?\n>\n> 2) Can you explain me the difference between the result in B: ~6GB and the\n> result in C, the sum of all column sizes, 3717MB ?\n\nMaybe there are some dead tuples, run a VACUUM FULL (be careful, it\nrequires an explicit lock). And please keep in mind that a table\ncan contains indexes and other objects. A nice explanation and some ways\nto gather informations on table-, index- and database sizes can you find\nhere:\nhttp://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n\n\nRegards, Andreas\n--\nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect.                              (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\"   (unknown)\nKaufbach, Saxony, Germany, Europe.              
N 51.05082°, E 13.56889°\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 15 Dec 2015 12:11:12 +0100", "msg_from": "Matteo Grolla <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't explain db size" }, { "msg_contents": "have news,\n the pg version is 9.1.3\n a vaccum full, not a plain vaccum, was performed.\n o.s. is red hat 7\n filesystem: xfs with block size 4k\n\ncould it be a problem regarding the block size?\nthanks\n\n2015-12-15 12:11 GMT+01:00 Matteo Grolla <[email protected]>:\n\n> Thanks Andreas,\n> Il try\n>\n> 2015-12-15 11:07 GMT+01:00 Andreas Kretschmer <[email protected]>:\n>\n>> Matteo Grolla <[email protected]> wrote:\n>>\n>> >\n>> > -------Questions----------------\n>> >\n>> > 1) Can you explain me the big difference between the result in A for\n>> table\n>> > alf_node_properties: 17GB and the result in B: ~6GB ?\n>> >\n>> > 2) Can you explain me the difference between the result in B: ~6GB and\n>> the\n>> > result in C, the sum of all column sizes, 3717MB ?\n>>\n>> Maybe there are some dead tuples, run a VACUUM FULL (be careful, it\n>> requires an explicit lock). And please keep in mind that a table\n>> can contains indexes and other objects. A nice explanation and some ways\n>> to gather informations on table-, index- and database sizes can you find\n>> here:\n>>\n>> http://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n>>\n>>\n>> Regards, Andreas\n>> --\n>> Really, I'm not out to destroy Microsoft. That will just be a completely\n>> unintentional side effect. (Linus Torvalds)\n>> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n>> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\nhave news,      the pg version is 9.1.3      a vaccum full, not a plain vaccum, was performed.      o.s. is red hat 7      filesystem: xfs with block size 4kcould it be a problem regarding the block size?thanks2015-12-15 12:11 GMT+01:00 Matteo Grolla <[email protected]>:Thanks Andreas,     Il try2015-12-15 11:07 GMT+01:00 Andreas Kretschmer <[email protected]>:Matteo Grolla <[email protected]> wrote:\n\n>\n> -------Questions----------------\n>\n> 1) Can you explain me the big difference between the result in A for table\n> alf_node_properties: 17GB and the result in B: ~6GB ?\n>\n> 2) Can you explain me the difference between the result in B: ~6GB and the\n> result in C, the sum of all column sizes, 3717MB ?\n\nMaybe there are some dead tuples, run a VACUUM FULL (be careful, it\nrequires an explicit lock). And please keep in mind that a table\ncan contains indexes and other objects. A nice explanation and some ways\nto gather informations on table-, index- and database sizes can you find\nhere:\nhttp://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n\n\nRegards, Andreas\n--\nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect.                              (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\"   (unknown)\nKaufbach, Saxony, Germany, Europe.              
N 51.05082°, E 13.56889°\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 17 Dec 2015 16:12:40 +0100", "msg_from": "Matteo Grolla <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't explain db size" }, { "msg_contents": "also,\nserializable_value is of type bytea\n\n2015-12-17 16:12 GMT+01:00 Matteo Grolla <[email protected]>:\n\n> have news,\n> the pg version is 9.1.3\n> a vaccum full, not a plain vaccum, was performed.\n> o.s. is red hat 7\n> filesystem: xfs with block size 4k\n>\n> could it be a problem regarding the block size?\n> thanks\n>\n> 2015-12-15 12:11 GMT+01:00 Matteo Grolla <[email protected]>:\n>\n>> Thanks Andreas,\n>> Il try\n>>\n>> 2015-12-15 11:07 GMT+01:00 Andreas Kretschmer <[email protected]>\n>> :\n>>\n>>> Matteo Grolla <[email protected]> wrote:\n>>>\n>>> >\n>>> > -------Questions----------------\n>>> >\n>>> > 1) Can you explain me the big difference between the result in A for\n>>> table\n>>> > alf_node_properties: 17GB and the result in B: ~6GB ?\n>>> >\n>>> > 2) Can you explain me the difference between the result in B: ~6GB and\n>>> the\n>>> > result in C, the sum of all column sizes, 3717MB ?\n>>>\n>>> Maybe there are some dead tuples, run a VACUUM FULL (be careful, it\n>>> requires an explicit lock). And please keep in mind that a table\n>>> can contains indexes and other objects. A nice explanation and some ways\n>>> to gather informations on table-, index- and database sizes can you find\n>>> here:\n>>>\n>>> http://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n>>>\n>>>\n>>> Regards, Andreas\n>>> --\n>>> Really, I'm not out to destroy Microsoft. That will just be a completely\n>>> unintentional side effect. (Linus Torvalds)\n>>> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n>>> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (\n>>> [email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>\n>\n\nalso, serializable_value is of type bytea2015-12-17 16:12 GMT+01:00 Matteo Grolla <[email protected]>:have news,      the pg version is 9.1.3      a vaccum full, not a plain vaccum, was performed.      o.s. is red hat 7      filesystem: xfs with block size 4kcould it be a problem regarding the block size?thanks2015-12-15 12:11 GMT+01:00 Matteo Grolla <[email protected]>:Thanks Andreas,     Il try2015-12-15 11:07 GMT+01:00 Andreas Kretschmer <[email protected]>:Matteo Grolla <[email protected]> wrote:\n\n>\n> -------Questions----------------\n>\n> 1) Can you explain me the big difference between the result in A for table\n> alf_node_properties: 17GB and the result in B: ~6GB ?\n>\n> 2) Can you explain me the difference between the result in B: ~6GB and the\n> result in C, the sum of all column sizes, 3717MB ?\n\nMaybe there are some dead tuples, run a VACUUM FULL (be careful, it\nrequires an explicit lock). And please keep in mind that a table\ncan contains indexes and other objects. A nice explanation and some ways\nto gather informations on table-, index- and database sizes can you find\nhere:\nhttp://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n\n\nRegards, Andreas\n--\nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect.                            
  (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\"   (unknown)\nKaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 17 Dec 2015 18:03:59 +0100", "msg_from": "Matteo Grolla <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't explain db size" }, { "msg_contents": "On 12/15/15 3:52 AM, Matteo Grolla wrote:\n> 1) Can you explain me the big difference between the result in A for\n> table alf_node_properties: 17GB and the result in B: ~6GB ?\n\n11GB of indexes would explain it.\n\n> 2) Can you explain me the difference between the result in B: ~6GB and\n> the result in C, the sum of all column sizes, 3717MB ?\n\nProbably per-page and per-tuple overhead.\n\nWhat does SELECT reltuples, relpages FROM pg_class WHERE oid = \n'public.alf_node_properties'::regclass show?\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 20 Dec 2015 17:08:04 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't explain db size" } ]
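[Editorial note on the thread above: a compact way to see where the reported 17 GB actually lives, sketched against the table name quoted in the thread (public.alf_node_properties); adjust for your own schema. Index and TOAST storage, plus roughly 24 bytes of per-tuple header and page-level overhead, usually account for the gap between the summed column sizes and the on-disk total.]

-- Size breakdown: heap vs. TOAST/maps vs. indexes vs. grand total.
SELECT pg_size_pretty(pg_relation_size('public.alf_node_properties'))        AS heap_only,
       pg_size_pretty(pg_table_size('public.alf_node_properties')
                      - pg_relation_size('public.alf_node_properties'))      AS toast_fsm_vm,
       pg_size_pretty(pg_indexes_size('public.alf_node_properties'))         AS indexes,
       pg_size_pretty(pg_total_relation_size('public.alf_node_properties'))  AS total;

-- The estimate Jim asks for at the end of the thread; also useful for spotting bloat
-- (many pages relative to the live row count).
SELECT reltuples::bigint AS est_rows, relpages
FROM pg_class
WHERE oid = 'public.alf_node_properties'::regclass;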
[ { "msg_contents": "Hi all,\n\nI have an application that runs in production in multiple instances, and \non one of these the performance of certain queries suddenly became truly \nabysmal. I basically know why, but I would much appreciate if I could \nobtain a deeper understanding of the selectivity function involved and \nany possible means of making Postgres choose a better plan. In the \nfollowing I have tried to boil the problem down to something manageable:\n\nThe schema contains two tables, t1 and t2.\nt2 has two fields, an id and a tag, and it contains 146 rows that are \nunique.\nt1 has two fields, a value and a foreign key referring to t2.id, and it \ncontains 266177 rows.\n\nThe application retrieves the rows in t1 that match a specific tag in \nt2, and it turned out that the contents of t1 were distributed in a very \nlopsided way, where more than 90% of the rows refer to one of two tags \nfrom t2:\n\nEXPLAIN SELECT(*) FROM t1 WHERE t2_id = '<some_id>'\n\nIndex Scan using t1_t2_id_idx on t1 (cost=0.42..7039.67 rows=103521 \nwidth=367)\n Index Cond: (t2_id = '<some_id>'::text)\n\nThe row count estimate is exactly as expected; about 39% of the rows \nrefer to that specific tag.\n\nWhat the application actually does is\n\nEXPLAIN SELECT(*) FROM t1 INNER JOIN t2 ON t1.t2_id = t2.id WHERE t2.tag \n= '<some_tag>'\n\nNested Loop (cost=0.69..3152.53 rows=1824 width=558)\n -> Index Scan using t2_tag_idx ON t2 (cost=0.27..2.29 rows=1 \nwidth=191)\n Index Cond: (tag = '<some_tag>'::text)\n -> Index Scan using t1_t2_id_idx on t1 (cost=0.42..3058.42 rows=9182 \nwidth=367)\n Index Cond: (t2_id = t2.id)\n\nThe estimate for the number of rows in the result (1824) is way too low, \nand that leads to bad plans and queries involving more joins on the \ntables that run about 1000x slower than they should.\n\nI have currently rewritten the application code to do two queries; one \nto retrieve the id from t2 that matches the given tag and one to \nretrieve the rows from t1, and that's a usable workaround but not \nsomething we really like doing as a permanent solution. Fiddling with \nthe various statistics related knobs seems to make no difference, but is \nthere be some other way I can make Postgres assume high selectivity for \ncertain tag values? 
Am I just SOL with the given schema?\n\nAny pointers to information about how to handle potentially lopsided \ndata like this are highly welcome.\n\nBest regards,\n Mikkel Lauritsen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Dec 2015 11:11:13 +0100", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Selectivity for lopsided foreign key columns" }, { "msg_contents": "Mikkel Lauritsen <[email protected]> writes:\n> The schema contains two tables, t1 and t2.\n> t2 has two fields, an id and a tag, and it contains 146 rows that are \n> unique.\n> t1 has two fields, a value and a foreign key referring to t2.id, and it \n> contains 266177 rows.\n> The application retrieves the rows in t1 that match a specific tag in \n> t2, and it turned out that the contents of t1 were distributed in a very \n> lopsided way, where more than 90% of the rows refer to one of two tags \n> from t2:\n> ...\n> The estimate for the number of rows in the result (1824) is way too low, \n> and that leads to bad plans and queries involving more joins on the \n> tables that run about 1000x slower than they should.\n\n> I have currently rewritten the application code to do two queries; one \n> to retrieve the id from t2 that matches the given tag and one to \n> retrieve the rows from t1, and that's a usable workaround but not \n> something we really like doing as a permanent solution. Fiddling with \n> the various statistics related knobs seems to make no difference, but is \n> there be some other way I can make Postgres assume high selectivity for \n> certain tag values? Am I just SOL with the given schema?\n\nYou're pretty much SOL. Lacking cross-column statistics, the planner has\nno idea which t2.id goes with the given tag, so it can't see that the\nselected id is the one that is most common in t1. You're getting a\njoin size estimate that is basically size of t1 divided by number of\npossible values (146), which is about the best we can do without knowing\nwhich id is selected.\n\nOne possibility, if you can change the schema, is to denormalize by\ncopying the tag field into t1. (You could enforce that it's correct\nby using a two-column foreign key constraint on (id, tag).) 
Then the\nquery would look like\nSELECT * FROM t1 INNER JOIN t2 ON t1.tag = t2.tag WHERE t2.tag = '<some_tag>'\nand since the planner is smart enough to deduce t1.tag = '<some_tag>' from\nthat, it would arrive at the correct estimate for any particular tag.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Dec 2015 10:23:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selectivity for lopsided foreign key columns" }, { "msg_contents": "On 2015-12-17 16:23, Tom Lane wrote:\n> Mikkel Lauritsen <[email protected]> writes:\n>> The schema contains two tables, t1 and t2.\n>> t2 has two fields, an id and a tag, and it contains 146 rows that are\n>> unique.\n>> t1 has two fields, a value and a foreign key referring to t2.id, and \n>> it\n>> contains 266177 rows.\n>> The application retrieves the rows in t1 that match a specific tag in\n>> t2, and it turned out that the contents of t1 were distributed in a \n>> very\n>> lopsided way, where more than 90% of the rows refer to one of two tags\n>> from t2:\n>> ...\n>> The estimate for the number of rows in the result (1824) is way too \n>> low,\n>> and that leads to bad plans and queries involving more joins on the\n>> tables that run about 1000x slower than they should.\n> \n>> I have currently rewritten the application code to do two queries; one\n>> to retrieve the id from t2 that matches the given tag and one to\n>> retrieve the rows from t1, and that's a usable workaround but not\n>> something we really like doing as a permanent solution. Fiddling with\n>> the various statistics related knobs seems to make no difference, but \n>> is\n>> there be some other way I can make Postgres assume high selectivity \n>> for\n>> certain tag values? Am I just SOL with the given schema?\n> \n> You're pretty much SOL. Lacking cross-column statistics, the planner \n> has\n> no idea which t2.id goes with the given tag, so it can't see that the\n> selected id is the one that is most common in t1. You're getting a\n> join size estimate that is basically size of t1 divided by number of\n> possible values (146), which is about the best we can do without \n> knowing\n> which id is selected.\n\n--- snip --\n\nThanks - I thought as much, but it's really nice to have it confirmed \nfrom\npeople who are way more knowledgeable.\n\nBest regards and thanks again,\n Mikkel Lauritsen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Dec 2015 20:14:13 +0100", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selectivity for lopsided foreign key columns" } ]
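[Editorial note on the thread above: a sketch of the denormalization Tom describes, written against the simplified t1/t2 schema used in this thread. The constraint names, and the assumption that tag is of type text, are made up for illustration. With the tag copied into t1 and analyzed, the planner has per-tag statistics on t1 itself and the join estimate for a given tag becomes accurate.]

-- t2(id, tag): make the (id, tag) pair referenceable.
ALTER TABLE t2 ADD CONSTRAINT t2_id_tag_key UNIQUE (id, tag);

-- Copy the tag down into t1 and keep it consistent with a two-column foreign key.
ALTER TABLE t1 ADD COLUMN tag text;
UPDATE t1 SET tag = t2.tag FROM t2 WHERE t2.id = t1.t2_id;
ALTER TABLE t1 ADD CONSTRAINT t1_t2_id_tag_fkey
    FOREIGN KEY (t2_id, tag) REFERENCES t2 (id, tag);
ANALYZE t1;

-- The planner deduces t1.tag = '<some_tag>' from the join clause plus the WHERE
-- condition, so it can use t1.tag's most-common-value statistics.
SELECT *
FROM t1
JOIN t2 ON t1.t2_id = t2.id AND t1.tag = t2.tag
WHERE t2.tag = '<some_tag>';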
[ { "msg_contents": "Hey all, first off, Postgres version 9.4.4 (also tested on 9.5 beta).\n\nI have been having a pretty hard time getting a view of mine to play nice\nwith any other queries I need it for.\n\nI have a few tables you'd need to know about to understand why i'm doing\nwhat i'm doing.\n\nFirst thing is we have a \"contract_item\", it can have either a product or a\ngrouping of products (only one or the other, not both) on it, and a few\nextra attributes.\n\nThe groupings are defined in a hierarchy, and they can have many of these\ngroupings or products on a single \"contract\", but only unique records per\nproduct or grouping.\n\nNow when it comes down to finding what the extra attributes that are valid\non a contract are for the specific product you're looking for, the most\nrelevant is the product being on the contract directly. If the product\ndoesn't exist, we choose the lowest level grouping that is defined on the\ncontract that contains that product.\n\nThe view I am having trouble with is able to push down it's where clause\nwhen the id's are directly specified like so:\nSELECT *\nFROM contract_product cp\nWHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\nAND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f';\n\nBut the where clause or join conditions are not pushed down in these cases\n(which is how I need to use the view):\nSELECT *\nFROM contract_product cp\nWHERE EXISTS (\nSELECT 1\nWHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\nAND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f'\n);\n\nor\n\nSELECT *\nFROM contract_product cp\nINNER JOIN (\nSELECT '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'::uuid as contract_id,\n'00c117d7-6451-4842-b17b-baa44baa375f'::uuid as product_id\n) p\nON cp.contract_id = p.contract_id\nAND cp.product_id = p.product_id;\n\n\nThe definition of the view i'm having trouble with:\nCREATE OR REPLACE VIEW contract_product AS\nSELECT DISTINCT ON (ci.contract_id, p.product_id)\n ci.contract_item_id,\n ci.contract_id,\n p.product_id,\n ci.uom_type_id,\n ci.rebate_direct_rate,\n ci.decimal_model,\n ci.rebate_deviated_value,\n ci.rebate_deviated_type\nFROM contract_item ci\nLEFT JOIN grouping_hierarchy gh\n ON gh.original_grouping_id = ci.grouping_id\n AND NOT (EXISTS (\n SELECT 1\n FROM contract_item cig\n WHERE cig.contract_id = ci.contract_id\n AND gh.grouping_id = cig.grouping_id\n AND cig.grouping_id <> ci.grouping_id)\n )\nLEFT JOIN product_grouping pg\n ON pg.grouping_id = gh.grouping_id\n AND NOT (EXISTS (\n SELECT 1\n FROM contract_item cip\n WHERE cip.contract_id = ci.contract_id\n AND pg.product_id = cip.product_id)\n )\nJOIN product p\n ON p.product_id = COALESCE(ci.product_id, pg.product_id)\nORDER BY ci.contract_id, p.product_id, gh.level;\n\nThat view references another view to make it easy to find the correct level\nin my hierarchy, so here is the definition for that:\nCREATE OR REPLACE VIEW grouping_hierarchy AS\n WITH RECURSIVE groupings_list(original_grouping_id, parent_grouping_id,\ngrouping_id) AS (\n SELECT pg.grouping_id AS original_grouping_id,\n pg.parent_grouping_id,\n pg.grouping_id,\n 0 AS level\n FROM grouping pg\n UNION ALL\n SELECT gl.original_grouping_id,\n cg.parent_grouping_id,\n cg.grouping_id,\n gl.level + 1\n FROM groupings_list gl\n JOIN grouping cg ON cg.parent_grouping_id = gl.grouping_id\n WHERE cg.active_ind = true\n )\n SELECT groupings_list.original_grouping_id,\n groupings_list.parent_grouping_id,\n groupings_list.grouping_id,\n groupings_list.level\n FROM 
groupings_list;\n\n\nAnd here are the query plans (in order) for those three queries:\nhttp://explain.depesz.com/s/YCee\nhttp://explain.depesz.com/s/1SE2\nhttp://explain.depesz.com/s/ci7\n\nAny help would be greatly appreciated on how to speed this up, or if i'm\ndoing something Postgres just doesn't like and what an alternative method\nwould be.\n\nThanks,\n-Adam\n\nHey all, first off, Postgres version 9.4.4 (also tested on 9.5 beta).I have been having a pretty hard time getting a view of mine to play nice with any other queries I need it for.I have a few tables you'd need to know about to understand why i'm doing what i'm doing.First thing is we have a \"contract_item\", it can have either a product or a grouping of products (only one or the other, not both) on it, and a few extra attributes.The groupings are defined in a hierarchy, and they can have many of these groupings or products on a single \"contract\", but only unique records per product or grouping.Now when it comes down to finding what the extra attributes that are valid on a contract are for the specific product you're looking for, the most relevant is the product being on the contract directly. If the product doesn't exist, we choose the lowest level grouping that is defined on the contract that contains that product.The view I am having trouble with is able to push down it's where clause when the id's are directly specified like so:SELECT *FROM contract_product cpWHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f';But the where clause or join conditions are not pushed down in these cases (which is how I need to use the view): SELECT *FROM contract_product cpWHERE EXISTS (SELECT 1WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f');or SELECT *FROM contract_product cpINNER JOIN (SELECT '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'::uuid as contract_id, '00c117d7-6451-4842-b17b-baa44baa375f'::uuid as product_id) pON cp.contract_id = p.contract_idAND cp.product_id = p.product_id;The definition of the view i'm having trouble with:CREATE OR REPLACE VIEW contract_product AS SELECT DISTINCT ON (ci.contract_id, p.product_id)   ci.contract_item_id,  ci.contract_id,  p.product_id,  ci.uom_type_id,  ci.rebate_direct_rate,  ci.decimal_model,  ci.rebate_deviated_value,  ci.rebate_deviated_typeFROM contract_item ciLEFT JOIN grouping_hierarchy gh  ON gh.original_grouping_id = ci.grouping_id  AND NOT (EXISTS (  SELECT 1  FROM contract_item cig  WHERE cig.contract_id = ci.contract_id  AND gh.grouping_id = cig.grouping_id  AND cig.grouping_id <> ci.grouping_id)  )LEFT JOIN product_grouping pg  ON pg.grouping_id = gh.grouping_id  AND NOT (EXISTS (  SELECT 1  FROM contract_item cip  WHERE cip.contract_id = ci.contract_id  AND pg.product_id = cip.product_id)  )JOIN product p  ON p.product_id = COALESCE(ci.product_id, pg.product_id)ORDER BY ci.contract_id, p.product_id, gh.level;That view references another view to make it easy to find the correct level in my hierarchy, so here is the definition for that:CREATE OR REPLACE VIEW grouping_hierarchy AS  WITH RECURSIVE groupings_list(original_grouping_id, parent_grouping_id, grouping_id) AS (         SELECT pg.grouping_id AS original_grouping_id,            pg.parent_grouping_id,            pg.grouping_id,            0 AS level           FROM grouping pg        UNION ALL         SELECT gl.original_grouping_id,            cg.parent_grouping_id,            cg.grouping_id,      
      gl.level + 1           FROM groupings_list gl             JOIN grouping cg ON cg.parent_grouping_id = gl.grouping_id          WHERE cg.active_ind = true        ) SELECT groupings_list.original_grouping_id,    groupings_list.parent_grouping_id,    groupings_list.grouping_id,    groupings_list.level   FROM groupings_list;And here are the query plans (in order) for those three queries:http://explain.depesz.com/s/YCeehttp://explain.depesz.com/s/1SE2http://explain.depesz.com/s/ci7Any help would be greatly appreciated on how to speed this up, or if i'm doing something Postgres just doesn't like and what an alternative method would be.Thanks,-Adam", "msg_date": "Thu, 17 Dec 2015 10:44:04 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Terrible plan choice for view with distinct on clause" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n> The view I am having trouble with is able to push down it's where clause\n> when the id's are directly specified like so:\n> SELECT *\n> FROM contract_product cp\n> WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f';\n\n> But the where clause or join conditions are not pushed down in these cases\n> (which is how I need to use the view):\n> SELECT *\n> FROM contract_product cp\n> WHERE EXISTS (\n> SELECT 1\n> WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f'\n> );\n\nThis plea for help would be more convincing if you could explain *why*\nyou needed to do that. As is, it sure looks like \"Doctor, it hurts when\nI do this\". What about that construction isn't just silly?\n\n(And if you say \"it's produced by an ORM I have no control over\", I'm\ngoing to say \"your ORM was evidently written by blithering idiots, and\nyou should not have any faith in it\".)\n\nHaving said that, the reason nothing good happens is that\nconvert_EXISTS_sublink_to_join() punts on subqueries that have an empty\nFROM clause, as well as some other corner cases that I did not care to\nanalyze carefully at the time. Just looking at this example, it seems\nlike if the SELECT list is trivial then we could simply replace the EXISTS\nclause in toto with the contents of the lower WHERE clause, thereby\nundoing the silliness of the query author. I don't think this could be\nhandled directly in convert_EXISTS_sublink_to_join(), because it's defined\nto return a JoinExpr which would not apply in such a case. But possibly\nit could be dealt with in make_subplan() without too much overhead. I'm\nnot feeling motivated to work on this myself, absent a more convincing\nexplanation of why we should expend any effort to support this query\npattern. 
But if anyone else is, have at it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Dec 2015 12:00:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terrible plan choice for view with distinct on clause" }, { "msg_contents": "No ORM, just me.\nWas somewhat similar to something I had seen done at an old job, but they\nused SQL Server and that type of query worked fine there.\n\nThere were a couple business cases that had to be satisfied, which is why I\nwent the way I did:\nThe first was \"allow products to be grouped together, and those groups be\nplaced in a hierarchy. All of the products in child groupings are valid for\nthe parent of the grouping. Products should be able to be added and removed\nfrom this group at any point.\"\nThe next was \"allow a user to add a product, or a group of products to a\ncontract and set pricing information, product is the most definitive and\noverrides any groupings the product may be in, and the lowest level of the\ngrouping hierarchy should be used if the product is not directly on the\ncontract.\"\nThe last was \"adding and removing products from a group should immediately\ntake effect to make those products valid or invalid on any contracts that\ngrouping is a part of.\"\n\nNow I am not going to say I love this design, I actually am not a fan of it\nat all. I just couldn't think of any other design pattern that would meet\nthose business requirements. I was hoping to create a view to make working\nwith the final result the rules specified above easy when you want to know\nwhat pricing is valid for a specific product on a contract.\n\nSo that is the \"why\" at least.\n\nOn Thu, Dec 17, 2015 at 12:00 PM, Tom Lane <[email protected]> wrote:\n\n> Adam Brusselback <[email protected]> writes:\n> > The view I am having trouble with is able to push down it's where clause\n> > when the id's are directly specified like so:\n> > SELECT *\n> > FROM contract_product cp\n> > WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> > AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f';\n>\n> > But the where clause or join conditions are not pushed down in these\n> cases\n> > (which is how I need to use the view):\n> > SELECT *\n> > FROM contract_product cp\n> > WHERE EXISTS (\n> > SELECT 1\n> > WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> > AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f'\n> > );\n>\n> This plea for help would be more convincing if you could explain *why*\n> you needed to do that. As is, it sure looks like \"Doctor, it hurts when\n> I do this\". What about that construction isn't just silly?\n>\n> (And if you say \"it's produced by an ORM I have no control over\", I'm\n> going to say \"your ORM was evidently written by blithering idiots, and\n> you should not have any faith in it\".)\n>\n> Having said that, the reason nothing good happens is that\n> convert_EXISTS_sublink_to_join() punts on subqueries that have an empty\n> FROM clause, as well as some other corner cases that I did not care to\n> analyze carefully at the time. Just looking at this example, it seems\n> like if the SELECT list is trivial then we could simply replace the EXISTS\n> clause in toto with the contents of the lower WHERE clause, thereby\n> undoing the silliness of the query author. 
I don't think this could be\n> handled directly in convert_EXISTS_sublink_to_join(), because it's defined\n> to return a JoinExpr which would not apply in such a case. But possibly\n> it could be dealt with in make_subplan() without too much overhead. I'm\n> not feeling motivated to work on this myself, absent a more convincing\n> explanation of why we should expend any effort to support this query\n> pattern. But if anyone else is, have at it.\n>\n> regards, tom lane\n>\n\nNo ORM, just me.  Was somewhat similar to something I had seen done at an old job, but they used SQL Server and that type of query worked fine there.There were a couple business cases that had to be satisfied, which is why I went the way I did:The first was \"allow products to be grouped together, and those groups be placed in a hierarchy. All of the products in child groupings are valid for the parent of the grouping. Products should be able to be added and removed from this group at any point.\"The next was \"allow a user to add a product, or a group of products to a contract and set pricing information, product is the most definitive and overrides any groupings the product may be in, and the lowest level of the grouping hierarchy should be used if the product is not directly on the contract.\"The last was \"adding and removing products from a group should immediately take effect to make those products valid or invalid on any contracts that grouping is a part of.\"Now I am not going to say I love this design, I actually am not a fan of it at all. I just couldn't think of any other design pattern that would meet those business requirements. I was hoping to create a view to make working with the final result the rules specified above easy when you want to know what pricing is valid for a specific product on a contract.So that is the \"why\" at least.On Thu, Dec 17, 2015 at 12:00 PM, Tom Lane <[email protected]> wrote:Adam Brusselback <[email protected]> writes:\n> The view I am having trouble with is able to push down it's where clause\n> when the id's are directly specified like so:\n> SELECT *\n> FROM contract_product cp\n> WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f';\n\n> But the where clause or join conditions are not pushed down in these cases\n> (which is how I need to use the view):\n> SELECT *\n> FROM contract_product cp\n> WHERE EXISTS (\n> SELECT 1\n> WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f'\n> );\n\nThis plea for help would be more convincing if you could explain *why*\nyou needed to do that.  As is, it sure looks like \"Doctor, it hurts when\nI do this\".  What about that construction isn't just silly?\n\n(And if you say \"it's produced by an ORM I have no control over\", I'm\ngoing to say \"your ORM was evidently written by blithering idiots, and\nyou should not have any faith in it\".)\n\nHaving said that, the reason nothing good happens is that\nconvert_EXISTS_sublink_to_join() punts on subqueries that have an empty\nFROM clause, as well as some other corner cases that I did not care to\nanalyze carefully at the time.  Just looking at this example, it seems\nlike if the SELECT list is trivial then we could simply replace the EXISTS\nclause in toto with the contents of the lower WHERE clause, thereby\nundoing the silliness of the query author.  
I don't think this could be\nhandled directly in convert_EXISTS_sublink_to_join(), because it's defined\nto return a JoinExpr which would not apply in such a case.  But possibly\nit could be dealt with in make_subplan() without too much overhead.  I'm\nnot feeling motivated to work on this myself, absent a more convincing\nexplanation of why we should expend any effort to support this query\npattern.  But if anyone else is, have at it.\n\n                        regards, tom lane", "msg_date": "Thu, 17 Dec 2015 13:08:58 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terrible plan choice for view with distinct on clause" }, { "msg_contents": "Also, sorry if I wasn't clear. Those two example queries above that\nperformed badly were not exact queries that I would use, they were just\nsimple examples that performed identically to something like this (or the\nexists version of the same query):\n\nSELECT cp.*\nFROM contract_product cp\nINNER JOIN claim_product clp\nON cp.contract_id = clp.contract_id\nWHERE clp.claim_id = 'whatever';\n\n\nOn Thu, Dec 17, 2015 at 1:08 PM, Adam Brusselback <[email protected]\n> wrote:\n\n> No ORM, just me.\n> Was somewhat similar to something I had seen done at an old job, but they\n> used SQL Server and that type of query worked fine there.\n>\n> There were a couple business cases that had to be satisfied, which is why\n> I went the way I did:\n> The first was \"allow products to be grouped together, and those groups be\n> placed in a hierarchy. All of the products in child groupings are valid for\n> the parent of the grouping. Products should be able to be added and removed\n> from this group at any point.\"\n> The next was \"allow a user to add a product, or a group of products to a\n> contract and set pricing information, product is the most definitive and\n> overrides any groupings the product may be in, and the lowest level of the\n> grouping hierarchy should be used if the product is not directly on the\n> contract.\"\n> The last was \"adding and removing products from a group should immediately\n> take effect to make those products valid or invalid on any contracts that\n> grouping is a part of.\"\n>\n> Now I am not going to say I love this design, I actually am not a fan of\n> it at all. I just couldn't think of any other design pattern that would\n> meet those business requirements. I was hoping to create a view to make\n> working with the final result the rules specified above easy when you want\n> to know what pricing is valid for a specific product on a contract.\n>\n> So that is the \"why\" at least.\n>\n> On Thu, Dec 17, 2015 at 12:00 PM, Tom Lane <[email protected]> wrote:\n>\n>> Adam Brusselback <[email protected]> writes:\n>> > The view I am having trouble with is able to push down it's where clause\n>> > when the id's are directly specified like so:\n>> > SELECT *\n>> > FROM contract_product cp\n>> > WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n>> > AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f';\n>>\n>> > But the where clause or join conditions are not pushed down in these\n>> cases\n>> > (which is how I need to use the view):\n>> > SELECT *\n>> > FROM contract_product cp\n>> > WHERE EXISTS (\n>> > SELECT 1\n>> > WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n>> > AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f'\n>> > );\n>>\n>> This plea for help would be more convincing if you could explain *why*\n>> you needed to do that. 
As is, it sure looks like \"Doctor, it hurts when\n>> I do this\". What about that construction isn't just silly?\n>>\n>> (And if you say \"it's produced by an ORM I have no control over\", I'm\n>> going to say \"your ORM was evidently written by blithering idiots, and\n>> you should not have any faith in it\".)\n>>\n>> Having said that, the reason nothing good happens is that\n>> convert_EXISTS_sublink_to_join() punts on subqueries that have an empty\n>> FROM clause, as well as some other corner cases that I did not care to\n>> analyze carefully at the time. Just looking at this example, it seems\n>> like if the SELECT list is trivial then we could simply replace the EXISTS\n>> clause in toto with the contents of the lower WHERE clause, thereby\n>> undoing the silliness of the query author. I don't think this could be\n>> handled directly in convert_EXISTS_sublink_to_join(), because it's defined\n>> to return a JoinExpr which would not apply in such a case. But possibly\n>> it could be dealt with in make_subplan() without too much overhead. I'm\n>> not feeling motivated to work on this myself, absent a more convincing\n>> explanation of why we should expend any effort to support this query\n>> pattern. But if anyone else is, have at it.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nAlso, sorry if I wasn't clear. Those two example queries above that performed badly were not exact queries that I would use, they were just simple examples that performed identically to something like this (or the exists version of the same query):SELECT cp.*FROM contract_product cpINNER JOIN claim_product clpON cp.contract_id = clp.contract_idWHERE clp.claim_id = 'whatever';On Thu, Dec 17, 2015 at 1:08 PM, Adam Brusselback <[email protected]> wrote:No ORM, just me.  Was somewhat similar to something I had seen done at an old job, but they used SQL Server and that type of query worked fine there.There were a couple business cases that had to be satisfied, which is why I went the way I did:The first was \"allow products to be grouped together, and those groups be placed in a hierarchy. All of the products in child groupings are valid for the parent of the grouping. Products should be able to be added and removed from this group at any point.\"The next was \"allow a user to add a product, or a group of products to a contract and set pricing information, product is the most definitive and overrides any groupings the product may be in, and the lowest level of the grouping hierarchy should be used if the product is not directly on the contract.\"The last was \"adding and removing products from a group should immediately take effect to make those products valid or invalid on any contracts that grouping is a part of.\"Now I am not going to say I love this design, I actually am not a fan of it at all. I just couldn't think of any other design pattern that would meet those business requirements. 
I was hoping to create a view to make working with the final result the rules specified above easy when you want to know what pricing is valid for a specific product on a contract.So that is the \"why\" at least.On Thu, Dec 17, 2015 at 12:00 PM, Tom Lane <[email protected]> wrote:Adam Brusselback <[email protected]> writes:\n> The view I am having trouble with is able to push down it's where clause\n> when the id's are directly specified like so:\n> SELECT *\n> FROM contract_product cp\n> WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f';\n\n> But the where clause or join conditions are not pushed down in these cases\n> (which is how I need to use the view):\n> SELECT *\n> FROM contract_product cp\n> WHERE EXISTS (\n> SELECT 1\n> WHERE cp.contract_id = '16d6df05-d8a0-4ec9-ae39-f4d8e13da597'\n> AND cp.product_id = '00c117d7-6451-4842-b17b-baa44baa375f'\n> );\n\nThis plea for help would be more convincing if you could explain *why*\nyou needed to do that.  As is, it sure looks like \"Doctor, it hurts when\nI do this\".  What about that construction isn't just silly?\n\n(And if you say \"it's produced by an ORM I have no control over\", I'm\ngoing to say \"your ORM was evidently written by blithering idiots, and\nyou should not have any faith in it\".)\n\nHaving said that, the reason nothing good happens is that\nconvert_EXISTS_sublink_to_join() punts on subqueries that have an empty\nFROM clause, as well as some other corner cases that I did not care to\nanalyze carefully at the time.  Just looking at this example, it seems\nlike if the SELECT list is trivial then we could simply replace the EXISTS\nclause in toto with the contents of the lower WHERE clause, thereby\nundoing the silliness of the query author.  I don't think this could be\nhandled directly in convert_EXISTS_sublink_to_join(), because it's defined\nto return a JoinExpr which would not apply in such a case.  But possibly\nit could be dealt with in make_subplan() without too much overhead.  I'm\nnot feeling motivated to work on this myself, absent a more convincing\nexplanation of why we should expend any effort to support this query\npattern.  But if anyone else is, have at it.\n\n                        regards, tom lane", "msg_date": "Thu, 17 Dec 2015 13:51:55 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terrible plan choice for view with distinct on clause" } ]
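[Editorial note on the thread above: one rewrite worth trying for the claim_product join Adam gives as his real use case (a sketch only, untested, reusing his table names): drive the query from claim_product and hand the key column to the view through a LATERAL subquery, so the condition sits directly on one of the view's DISTINCT ON columns instead of inside an EXISTS with an empty FROM.]

SELECT cp.*
FROM claim_product clp
CROSS JOIN LATERAL (
    SELECT v.*
    FROM contract_product v            -- the DISTINCT ON view defined earlier in the thread
    WHERE v.contract_id = clp.contract_id
) cp
WHERE clp.claim_id = 'whatever';

[Because contract_id is one of the view's DISTINCT ON keys, a qual on it is pushdown-safe; whether 9.4 actually chooses the parameterized, index-driven plan still needs to be checked with EXPLAIN ANALYZE on the real schema.]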
[ { "msg_contents": "Hi.\n\nI've noticed huge decrease in performance.\nDuring this in htop i see a lot (200 - 300) of connections in state\n\"startup\", each of them eats 3-3% of CPU time. This processes are not\nvisible in pg_stat_activity so i cant understand what they are doing, and i\ncant kill them. I cant see the bottleneck in Disk IO to. The logs of\npostgres says nothing to. I am confused.....\nWhat can be the cause of huge amount of \"startup\" connections....\nMaybe its better to start use connection pooler such as pgbouncer?\nThanks a lot.\n\nPS.\nServer config is:\n2 * Intel Xeon 2660 CPU with 64 gigs of RAM.\nHardware RAID10.\nCentos 6.6, PostgreSQL 9.1.2\n\nHi.I've noticed huge decrease in performance.During this in htop i see a lot (200 - 300) of connections in state \"startup\", each of them eats 3-3% of CPU time. This processes are not visible in pg_stat_activity so i cant understand what they are doing, and i cant kill them. I cant see the bottleneck in Disk IO to. The logs of postgres says nothing to. I am confused.....What can be the cause of  huge amount of \"startup\" connections.... Maybe its better to start use connection pooler such as pgbouncer? Thanks a lot.PS.Server config is:2 * Intel Xeon 2660 CPU with 64 gigs of RAM. Hardware RAID10.Centos 6.6, PostgreSQL 9.1.2", "msg_date": "Tue, 22 Dec 2015 09:59:27 +0200", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": true, "msg_subject": "Connections \"Startup\"" }, { "msg_contents": "Hi\n\n2015-12-22 8:59 GMT+01:00 Artem Tomyuk <[email protected]>:\n\n> Hi.\n>\n> I've noticed huge decrease in performance.\n> During this in htop i see a lot (200 - 300) of connections in state\n> \"startup\", each of them eats 3-3% of CPU time. This processes are not\n> visible in pg_stat_activity so i cant understand what they are doing, and i\n> cant kill them. I cant see the bottleneck in Disk IO to. The logs of\n> postgres says nothing to. I am confused.....\n> What can be the cause of huge amount of \"startup\" connections....\n> Maybe its better to start use connection pooler such as pgbouncer?\n> Thanks a lot.\n>\n\nWhat is your max_connections? Can you ran \"perf top\" ? What is there.\n\nToo high number can enforce system overloading. You cannot to see these\nconnections in pg_stat_activity because the process in this state isn't\nfully initialized.\n\nThere was lot of bugfix releases after 9.1.2 - currently there is\nPostgreSQL 9.2.19. Try to upgrade first.\n\nRegards\n\nPavel\n\n\n>\n> PS.\n> Server config is:\n> 2 * Intel Xeon 2660 CPU with 64 gigs of RAM.\n> Hardware RAID10.\n> Centos 6.6, PostgreSQL 9.1.2\n>\n>\n>\n>\n\nHi2015-12-22 8:59 GMT+01:00 Artem Tomyuk <[email protected]>:Hi.I've noticed huge decrease in performance.During this in htop i see a lot (200 - 300) of connections in state \"startup\", each of them eats 3-3% of CPU time. This processes are not visible in pg_stat_activity so i cant understand what they are doing, and i cant kill them. I cant see the bottleneck in Disk IO to. The logs of postgres says nothing to. I am confused.....What can be the cause of  huge amount of \"startup\" connections.... Maybe its better to start use connection pooler such as pgbouncer? Thanks a lot.What is your max_connections? Can you ran \"perf top\" ? What is there.Too high number can enforce system overloading. You cannot to see these connections in pg_stat_activity because the process in this state isn't fully initialized.There was lot of bugfix releases after 9.1.2 - currently there is PostgreSQL 9.2.19. 
Try to upgrade first.RegardsPavel PS.Server config is:2 * Intel Xeon 2660 CPU with 64 gigs of RAM. Hardware RAID10.Centos 6.6, PostgreSQL 9.1.2", "msg_date": "Tue, 22 Dec 2015 09:09:18 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connections \"Startup\"" }, { "msg_contents": "You can definitely overload most systems by trying to start too many\nconnections at once. (This is actually true for most relational\ndatabases.) We used to see this scenario when we'd start a bunch web\nservers that used preforked apache at the same time (where each fork had\nits own connection). One temporary work around is to slow start the web\ncluster - bringing up one at a time and giving them a chance to complete.\n\nYou can kill the processes by looking for them on the unix prompt instead\nof inside the database. ( 'ps -fu postgres' ) You can see where they are\ncoming from using something like 'netstat -an | grep 5432' (or whatever\nport your database is listening on.\n\npgbouncer is a great solution for managing large connection sets that come\nand go often. It will really help. You can run it directly on each of the\nweb servers or client systems, you can run it in between on its own\nsystem(s), or you can run it on the database server (if necessary). You'll\nwant to tune it so it only opens as many connections as you expect to be\nrunning concurrent queries. It takes a little experimenting to figure out\nthe optimum settings. If you start pgbouncer first, you can bring up lots\nof concurrent connections to pgbouncer, and you will hardly notice it on\nthe database.\n\nTrying to stay current with the latest patches and releases is a lot of\nwork and little appreciated. However, in the long run it is far easier to\ntackle this incrementally than trying to do one big upgrade - skipping a\nbunch of releases - every now and then. This is true for the OS as well as\nthe Database. It is not always possible to do an upgrade, and when it is,\nit can take months of planning. Hopefully you aren't in that situation.\nBuilding processes that make these patches and upgrades routine is much\nsaner if you can. One nice thing about having pgbouncer in between the\napplication and the database is you can reconfigure pgbouncer to talk to a\ndifferent database and you won't have to touch the application code at\nall. Sometimes that is easier to accomplish politically. Swapping out a\ndatabase which is running behind a cluster of application servers with\nminimal risk and minimal downtime is a technical as well as political\nchallenge, but worth it when you can get on the latest and greatest. Good\nLuck!\n\n\n\n\n\n\n\nOn Tue, Dec 22, 2015 at 3:09 AM, Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n> 2015-12-22 8:59 GMT+01:00 Artem Tomyuk <[email protected]>:\n>\n>> Hi.\n>>\n>> I've noticed huge decrease in performance.\n>> During this in htop i see a lot (200 - 300) of connections in state\n>> \"startup\", each of them eats 3-3% of CPU time. This processes are not\n>> visible in pg_stat_activity so i cant understand what they are doing, and i\n>> cant kill them. I cant see the bottleneck in Disk IO to. The logs of\n>> postgres says nothing to. I am confused.....\n>> What can be the cause of huge amount of \"startup\" connections....\n>> Maybe its better to start use connection pooler such as pgbouncer?\n>> Thanks a lot.\n>>\n>\n> What is your max_connections? Can you ran \"perf top\" ? What is there.\n>\n> Too high number can enforce system overloading. 
You cannot to see these\n> connections in pg_stat_activity because the process in this state isn't\n> fully initialized.\n>\n> There was lot of bugfix releases after 9.1.2 - currently there is\n> PostgreSQL 9.2.19. Try to upgrade first.\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> PS.\n>> Server config is:\n>> 2 * Intel Xeon 2660 CPU with 64 gigs of RAM.\n>> Hardware RAID10.\n>> Centos 6.6, PostgreSQL 9.1.2\n>>\n>>\n>>\n>>\n>\n\nYou can definitely overload most systems by trying to start too many connections at once.  (This is actually true for most relational databases.)  We used to see this scenario when we'd start a bunch web servers that used preforked apache at the same time (where each fork had its own connection).  One temporary work around is to slow start the web cluster - bringing up one at a time and giving them a chance to complete.You can kill the processes by looking for them on the unix prompt instead of inside the database. ( 'ps -fu postgres' ) You can see where they are coming from using something like 'netstat -an | grep 5432' (or whatever port your database is listening on.pgbouncer is a great solution for managing large connection sets that come and go often.  It will really help.  You can run it directly on each of the web servers or client systems, you can run it in between on its own system(s), or you can run it on the database server (if necessary).  You'll want to tune it so it only opens as many connections as you expect to be running concurrent queries.  It takes a little experimenting to figure out the optimum settings.   If you start pgbouncer first, you can bring up lots of concurrent connections to pgbouncer, and you will hardly notice it on the database.Trying to stay current with the latest patches and releases is a lot of work and little appreciated.  However, in the long run it is far easier to tackle this incrementally than trying to do one big upgrade - skipping a bunch of releases - every now and then.  This is true for the OS as well as the Database.  It is not always possible to do an upgrade, and when it is, it can take months of planning.  Hopefully you aren't in that situation.  Building processes that make these patches and upgrades routine is much saner if you can.   One nice thing about having pgbouncer in between the application and the database is you can reconfigure pgbouncer to talk to a different database and you won't have to touch the application code at all.  Sometimes that is easier to accomplish politically.   Swapping out a database which is running behind a cluster of application servers with minimal risk and minimal downtime is a technical as well as political challenge, but worth it when you can get on the latest and greatest.  Good Luck!On Tue, Dec 22, 2015 at 3:09 AM, Pavel Stehule <[email protected]> wrote:Hi2015-12-22 8:59 GMT+01:00 Artem Tomyuk <[email protected]>:Hi.I've noticed huge decrease in performance.During this in htop i see a lot (200 - 300) of connections in state \"startup\", each of them eats 3-3% of CPU time. This processes are not visible in pg_stat_activity so i cant understand what they are doing, and i cant kill them. I cant see the bottleneck in Disk IO to. The logs of postgres says nothing to. I am confused.....What can be the cause of  huge amount of \"startup\" connections.... Maybe its better to start use connection pooler such as pgbouncer? Thanks a lot.What is your max_connections? Can you ran \"perf top\" ? What is there.Too high number can enforce system overloading. 
You cannot to see these connections in pg_stat_activity because the process in this state isn't fully initialized.There was lot of bugfix releases after 9.1.2 - currently there is PostgreSQL 9.2.19. Try to upgrade first.RegardsPavel PS.Server config is:2 * Intel Xeon 2660 CPU with 64 gigs of RAM. Hardware RAID10.Centos 6.6, PostgreSQL 9.1.2", "msg_date": "Tue, 22 Dec 2015 06:36:34 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connections \"Startup\"" }, { "msg_contents": "On 12/22/15 2:09 AM, Pavel Stehule wrote:\n>\n> There was lot of bugfix releases after 9.1.2 - currently there is\n> PostgreSQL 9.2.19.\n\nI'm sure Pavel meant 9.1.19, not 9.2.19.\n\nIn any case, be aware that 9.1 goes end of life next year. You should \nstart planning on a major version upgrade now if you haven't already. \n9.5 should release in January so you might want to wait for that version.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Dec 2015 20:32:57 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connections \"Startup\"" }, { "msg_contents": "Postgres is designed in this way. It can handle such problem by adopting the following steps: \n1.Increase the kernal level parameters:shmmax and shmallexample for 2GB RAM size for postgres processing is below\nvi /etc/sysctl.confkernel.shmmax = 2147483648\nkernel.shmall = 2883584\n\nsimilar way you increase the configuration paramater for half of RAM size of your machine.\n\n2. Edit your postgresql.conf file following settings:\na. Increase the number of connection parameter.\n Connection = 500\nb.Effective_cache_size = 2GB\nc. Shared_memory = 500MB\n\n\n \n\n \n\n \n\n On Wednesday, 23 December 2015 8:04 AM, Jim Nasby <[email protected]> wrote:\n \n\n On 12/22/15 2:09 AM, Pavel Stehule wrote:\n>\n> There was lot of bugfix releases after 9.1.2 - currently there is\n> PostgreSQL 9.2.19.\n\nI'm sure Pavel meant 9.1.19, not 9.2.19.\n\nIn any case, be aware that 9.1 goes end of life next year. You should \nstart planning on a major version upgrade now if you haven't already. \n9.5 should release in January so you might want to wait for that version.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n\n\n \nPostgres is designed in this way. It can handle such problem by adopting the following steps: 1.Increase the kernal level parameters:shmmax and shmallexample for 2GB RAM size for postgres processing is belowvi /etc/sysctl.confkernel.shmmax = 2147483648kernel.shmall = 2883584similar way you increase the configuration paramater for half of RAM size of your machine.2. Edit your postgresql.conf file following settings:a. Increase the number of connection parameter. Connection = 500b.Effective_cache_size = 2GBc. 
Shared_memory = 500MB  On Wednesday, 23 December 2015 8:04 AM, Jim Nasby <[email protected]> wrote: On 12/22/15 2:09 AM, Pavel Stehule wrote:>> There was lot of bugfix releases after 9.1.2 - currently there is> PostgreSQL 9.2.19.I'm sure Pavel meant 9.1.19, not 9.2.19.In any case, be aware that 9.1 goes end of life next year. You should start planning on a major version upgrade now if you haven't already. 9.5 should release in January so you might want to wait for that version.-- Jim Nasby, Data Architect, Blue Treble Consulting, Austin TXExperts in Analytics, Data Architecture and PostgreSQLData in Trouble? Get it in Treble! http://BlueTreble.com-- Sent via pgsql-admin mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-admin", "msg_date": "Wed, 23 Dec 2015 03:52:08 +0000 (UTC)", "msg_from": "Om Prakash Jaiswal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Connections \"Startup\"" }, { "msg_contents": "2015-12-23 4:52 GMT+01:00 Om Prakash Jaiswal <[email protected]>:\n\n>\n> *Postgres is designed in this way. It can handle such problem by adopting the following steps: *\n>\n> 1.Increase the kernal level parameters:\n> shmmax and shmall\n> example for 2GB RAM size for postgres processing is below\n>\n> *vi /etc/sysctl.conf*\n>\n> kernel.shmmax = 2147483648\n> kernel.shmall = 2883584\n>\n> similar way you increase the configuration paramater for half of RAM size of your machine.\n>\n> 2. Edit your postgresql.conf file following settings:\n> a. Increase the number of connection parameter.\n> Connection = 500\n> b.Effective_cache_size = 2GB\n> c. Shared_memory = 500MB\n>\n>\nincreasing max connection when you have these strange issues isn't good\nadvice. Running 500 connections on 2GB server is highly risky.\n\nPavel\n\n\n>\n>\n>\n>\n>\n>\n>\n> On Wednesday, 23 December 2015 8:04 AM, Jim Nasby <[email protected]>\n> wrote:\n>\n>\n> On 12/22/15 2:09 AM, Pavel Stehule wrote:\n>\n> >\n> > There was lot of bugfix releases after 9.1.2 - currently there is\n> > PostgreSQL 9.2.19.\n>\n>\n> I'm sure Pavel meant 9.1.19, not 9.2.19.\n>\n> In any case, be aware that 9.1 goes end of life next year. You should\n> start planning on a major version upgrade now if you haven't already.\n> 9.5 should release in January so you might want to wait for that version.\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\n> Experts in Analytics, Data Architecture and PostgreSQL\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n> <http://bluetreble.com/>\n>\n>\n> --\n> Sent via pgsql-admin mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-admin\n>\n>\n>\n>\n\n2015-12-23 4:52 GMT+01:00 Om Prakash Jaiswal <[email protected]>:Postgres is designed in this way. It can handle such problem by adopting the following steps: 1.Increase the kernal level parameters:shmmax and shmallexample for 2GB RAM size for postgres processing is belowvi /etc/sysctl.confkernel.shmmax = 2147483648kernel.shmall = 2883584similar way you increase the configuration paramater for half of RAM size of your machine.2. Edit your postgresql.conf file following settings:a. Increase the number of connection parameter. Connection = 500b.Effective_cache_size = 2GBc. Shared_memory = 500MBincreasing max connection when you have these strange issues isn't good advice. 
Running 500 connections on 2GB server is highly risky.Pavel   On Wednesday, 23 December 2015 8:04 AM, Jim Nasby <[email protected]> wrote: On 12/22/15 2:09 AM, Pavel Stehule wrote:>> There was lot of bugfix releases after 9.1.2 - currently there is> PostgreSQL 9.2.19.I'm sure Pavel meant 9.1.19, not 9.2.19.In any case, be aware that 9.1 goes end of life next year. You should start planning on a major version upgrade now if you haven't already. 9.5 should release in January so you might want to wait for that version.-- Jim Nasby, Data Architect, Blue Treble Consulting, Austin TXExperts in Analytics, Data Architecture and PostgreSQLData in Trouble? Get it in Treble! http://BlueTreble.com-- Sent via pgsql-admin mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-admin", "msg_date": "Wed, 23 Dec 2015 05:50:12 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Connections \"Startup\"" } ]
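A minimal psql sketch (not part of the archived thread) of how the version and sizing advice above can be checked on a running server. Note the assumption that pg_stat_activity has a "state" column, which is only true from PostgreSQL 9.2 onward; on the 9.1.2 installation discussed above the current_query column would have to be read instead.

-- Confirm the exact server version the upgrade advice applies to
SELECT version();

-- See how many backends exist and what they are doing, to judge whether
-- max_connections really needs to be anywhere near 500 on a small machine
SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count(*) DESC;

-- Inspect the memory-related settings mentioned in the thread
SELECT name, setting, unit FROM pg_settings
 WHERE name IN ('max_connections', 'shared_buffers', 'effective_cache_size');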
[ { "msg_contents": "Hi,\n\nI moved a DB between two \"somewhat\" similar Postgres installs and am\ngetting much worse plans on the second. The DB was dumped via pg_dump\n(keeping indexes, etc.) and loaded to the new server.\n\nThe first (installed via emerge):\n\nselect version();\n PostgreSQL 9.4rc1 on x86_64-pc-linux-gnu, compiled by\nx86_64-pc-linux-gnu-gcc (Gentoo 4.7.3-r1 p1.4, pie-0.5.5) 4.7.3, 64-bit\n\nThe second (installed from the Postgres centos repo) :\n\nselect version();\n PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-11), 64-bit\n\nSHOW ALL; gives identical results on both - I increased several values on\nboth servers:\n\nmax_connections = 300\n\nshared_buffers = 16GB\ntemp_buffers = 128MB\nwork_mem = 128MB\n\nseq_page_cost = 0.5\nrandom_page_cost = 1.0\neffective_cache_size = 16GB\n\nThe first machine has 32GB of RAM and 16 cores (Intel(R) Xeon(R) CPU\nE5-2650 v2 @ 2.60GHz) and the second 96GB of RAM and 24 cores (Intel(R)\nXeon(R) CPU E5-2430L v2 @ 2.40GHz). I have a series of python scripts\n(including a Django site) also on the machine but did before also - load\nshouldn't have changed (there were some external backups on the other\nmachine and on the new machine only my DB + scripts).\n\ndd performance is similar for sizes under the RAM size:\n\noldserver:~$ dd if=/dev/zero of=output.img bs=8k count=256k\n262144+0 records in\n262144+0 records out\n2147483648 bytes (2.1 GB) copied, 2.04997 s, 1.0 GB/s\noldserver:~$ dd if=/dev/zero of=output.img bs=8k count=1M\n1048576+0 records in\n1048576+0 records out\n8589934592 bytes (8.6 GB) copied, 13.7105 s, 627 MB/s\n\n[newserver ~]$ dd if=/dev/zero of=output.img bs=8k count=256k\n262144+0 records in\n262144+0 records out\n2147483648 bytes (2.1 GB) copied, 2.03452 s, 1.1 GB/s\n[newserver ~]$ dd if=/dev/zero of=output.img bs=8k count=1M\n1048576+0 records in\n1048576+0 records out\n8589934592 bytes (8.6 GB) copied, 21.4785 s, 400 MB/s\n\nBut significantly better on the new machine over the RAM size:\n\noldserver:~$ dd if=/dev/zero of=output.img bs=8k count=5M\n5242880+0 records in\n5242880+0 records out\n42949672960 bytes (43 GB) copied, 478.037 s, 89.8 MB/s\n\n[newserver ~]$ dd if=/dev/zero of=output.img bs=8k count=15M\n15728640+0 records in\n15728640+0 records out\n128849018880 bytes (129 GB) copied, 256.748 s, 502 MB/s\n\nI get the following plan on the old machine for a query:\n\noldserver=# explain analyze select count(0) from (select message_id,\ncount(0) from accepted where message_id like '20151213%' group by\nmessage_id having count(0) > 1) as toto;\n\nQUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=734.26..734.27 rows=1 width=0) (actual\ntime=2519.545..2519.546 rows=1 loops=1)\n -> GroupAggregate (cost=0.70..452.90 rows=22509 width=46) (actual\ntime=2519.542..2519.542 rows=0 loops=1)\n Group Key: accepted.message_id\n Filter: (count(0) > 1)\n Rows Removed by Filter: 1289815\n -> Index Only Scan using idx_accepted2_mid on accepted\n (cost=0.70..2.72 rows=22509 width=46) (actual time=0.037..1613.982\nrows=1289815 loops=1)\n Index Cond: ((message_id >= '20151213'::text) AND\n(message_id < '20151214'::text))\n Filter: ((message_id)::text ~~ '20151213%'::text)\n Heap Fetches: 1289815\n Planning time: 0.325 ms\n Execution time: 2519.610 ms\n(11 rows)\n\nTime: 2520.534 ms\n\nOn the new machine, I was originally getting:\n\nnewserver=# 
explain analyze select count(0) from (select message_id,\ncount(0) from accepted where message_id like '20151213%' group by\nmessage_id having count(0) > 1) as toto;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=8018044.22..8018044.23 rows=1 width=0) (actual\ntime=123964.197..123964.197 rows=1 loops=1)\n -> GroupAggregate (cost=7935128.17..7988431.35 rows=2369030 width=46)\n(actual time=123964.195..123964.195 rows=0 loops=1)\n Group Key: accepted.message_id\n Filter: (count(0) > 1)\n Rows Removed by Filter: 1289817\n -> Sort (cost=7935128.17..7941050.75 rows=2369030 width=46)\n(actual time=123112.260..123572.412 rows=1289817 loops=1)\n Sort Key: accepted.message_id\n Sort Method: external merge Disk: 70920kB\n -> Seq Scan on accepted (cost=0.00..7658269.38\nrows=2369030 width=46) (actual time=4450.097..105171.191 rows=1289817\nloops=1)\n Filter: ((message_id)::text ~~ '20151213%'::text)\n Rows Removed by Filter: 232872643\n Planning time: 0.145 ms\n Execution time: 123995.671 ms\n\nBut after a vacuum analyze got:\n\nnewserver=# explain analyze select count(0) from (select message_id,\ncount(0) from accepted where message_id like '20151213%' group by\nmessage_id having count(0) > 1) as toto;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=6210972.24..6210972.25 rows=1 width=0) (actual\ntime=93052.551..93052.551 rows=1 loops=1)\n -> GroupAggregate (cost=0.70..6181400.28 rows=2365757 width=46)\n(actual time=93052.548..93052.548 rows=0 loops=1)\n Group Key: accepted.message_id\n Filter: (count(0) > 1)\n Rows Removed by Filter: 1289817\n -> Index Only Scan using idx_accepted2_mid on accepted\n (cost=0.70..6134085.13 rows=2365757 width=46) (actual\ntime=41992.489..92674.187 rows=1289817 loops=1)\n Filter: ((message_id)::text ~~ '20151213%'::text)\n Rows Removed by Filter: 232920074\n Heap Fetches: 0\n Planning time: 0.634 ms\n Execution time: 93052.605 ms\n(11 rows)\n\nTime: 93078.267 ms\n\nSo at least it appears to be using the index (btree, non-unique) - but it's\nnot using the >= + < trick which appears to drastically reduce execution\ntime. messag_ids start with the date. If I manually use > and <, then the\nplans and approx performance are the same:\n\nnewserver=# explain analyze select count(0) from (select message_id,\ncount(0) from accepted where message_id > '20151213' and message_id <\n'20151214' group by message_id having count(0) > 1) as toto;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=72044.92..72044.93 rows=1 width=0) (actual\ntime=1205.840..1205.840 rows=1 loops=1)\n -> GroupAggregate (cost=0.70..57367.34 rows=1174206 width=46) (actual\ntime=1205.838..1205.838 rows=0 loops=1)\n Group Key: accepted.message_id\n Filter: (count(0) > 1)\n Rows Removed by Filter: 1289817\n -> Index Only Scan using idx_accepted2_mid on accepted\n (cost=0.70..33883.22 rows=1174206 width=46) (actual time=7.558..852.394\nrows=1289817 loops=1)\n Index Cond: ((message_id > '20151213'::text) AND (message_id\n< '20151214'::text))\n Heap Fetches: 91\n Planning time: 0.232 ms\n Execution time: 1205.890 ms\n(10 rows)\n\nTime: 1225.515 ms\n\nDoes anyone have any ideas? 
All data are loaded into this table via copy\nand no updates are done. Autovacuum settings weren't changed (and is on\nboth). Do I need to increase shared_buffers to half of available memory for\nthe planner to make certain optimisations? Anything else I'm missing or can\ntry? The new server has been running for almost two weeks now so I would\nhave thought things would have had a chance to settle down.\n\nCheers,\nAnton", "msg_date": "Thu, 31 Dec 2015 11:12:58 +0100", "msg_from": "Anton Melser <[email protected]>", "msg_from_op": true, "msg_subject": "Plan differences" },
 { "msg_contents": "Hi\n\n\n> Does anyone have any ideas? All data are loaded into this table via copy\n> and no updates are done. Autovacuum settings weren't changed (and is on\n> both). Do I need to increase shared_buffers to half of available memory for\n> the planner to make certain optimisations? Anything else I'm missing or can\n> try? The new server has been running for almost two weeks now so I would\n> have thought things would have had a chance to settle down.\n>\n>\nIt is looking like some missing optimization that was removed from RC\nrelease.\n\nRegards\n\nPavel\n\n\n> Cheers,\n> Anton\n>", "msg_date": "Thu, 31 Dec 2015 11:44:42 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plan differences" },
 { "msg_contents": "Hi,\n\nIt is looking like some missing optimization that was removed from RC\n> release.\n>\n\nThanks. Is there some discussion of why these optimisations were removed? I\nstarted looking at some of the more complicated queries I do and there are\nmany occasions where there are 10-30x performance degradations compared\nwith the RC. 
Not what I was hoping for with a much more powerful machine!\nWere these optimisations really dangerous? Is there any (easy and safe) way\nto get them back or would I need to reinstall an RC version?\n\nThanks again,\nAnton\n\nHi,It is looking like some missing optimization that was removed from RC release.Thanks. Is there some discussion of why these optimisations were removed? I started looking at some of the more complicated queries I do and there are many occasions where there are 10-30x performance degradations compared with the RC. Not what I was hoping for with a much more powerful machine! Were these optimisations really dangerous? Is there any (easy and safe) way to get them back or would I need to reinstall an RC version?\nThanks again,Anton", "msg_date": "Thu, 31 Dec 2015 15:49:54 +0100", "msg_from": "Anton Melser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Plan differences" }, { "msg_contents": "Anton Melser <[email protected]> writes:\n> I moved a DB between two \"somewhat\" similar Postgres installs and am\n> getting much worse plans on the second. The DB was dumped via pg_dump\n> (keeping indexes, etc.) and loaded to the new server.\n\n> [ \"like 'foo%'\" is not getting converted into index bounds ]\n\nI'd bet your old database is in C locale and the new one is not.\n\nThe LIKE optimization requires an index that's sorted according to plain\nC (strcmp) rules. A regular text index will be that way only if the\ndatabase's LC_COLLATE is C.\n\nIf you don't want to rebuild the whole database, you can create indexes to\nsupport this by declaring them with COLLATE \"C\", or the older way is to\ndeclare them with text_pattern_ops as the index opclass.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Dec 2015 10:02:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plan differences" }, { "msg_contents": "On 12/31/15 9:02 AM, Tom Lane wrote:\n> If you don't want to rebuild the whole database, you can create indexes to\n> support this by declaring them with COLLATE \"C\", or the older way is to\n> declare them with text_pattern_ops as the index opclass.\n\nDo you have to do anything special in the query itself for COLLATE \"C\" \nto work?\n\nI didn't realize the two methods were equivalent.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Dec 2015 10:29:10 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plan differences" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> On 12/31/15 9:02 AM, Tom Lane wrote:\n>> If you don't want to rebuild the whole database, you can create indexes to\n>> support this by declaring them with COLLATE \"C\", or the older way is to\n>> declare them with text_pattern_ops as the index opclass.\n\n> Do you have to do anything special in the query itself for COLLATE \"C\" \n> to work?\n\nNo.\n\n> I didn't realize the two methods were equivalent.\n\nWell, they're not equivalent exactly, but indxpath.c knows that either\nway produces an index that will work for LIKE.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Dec 2015 12:11:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plan differences" }, { "msg_contents": ">\n> I'd bet your old database is in C locale and the new one is not.\n>\n\nRemind me never to never bet against you :-).\n\n\n> The LIKE optimization requires an index that's sorted according to plain\n> C (strcmp) rules. A regular text index will be that way only if the\n> database's LC_COLLATE is C.\n>\n> If you don't want to rebuild the whole database, you can create indexes to\n> support this by declaring them with COLLATE \"C\", or the older way is to\n> declare them with text_pattern_ops as the index opclass.\n>\n\nDeclaring new indexes with COLLATE \"C\" and removing the old indexes fixed\nthe like problem but it created a another - the > and < queries need a sort\nbefore passing off the the new index. Having two indexes seems to give me\nthe best of both worlds, though obviously it's taking up (much) more space.\nAs space isn't ever likely to be a problem, and there are no updates (only\ncopy) to these tables, I'll keep it like this to avoid having to reload the\nentire DB.\n\nThanks very much for your help.\nCheers,\nAnton\n\nI'd bet your old database is in C locale and the new one is not.Remind me never to never bet against you :-). The LIKE optimization requires an index that's sorted according to plain\nC (strcmp) rules.  A regular text index will be that way only if the\ndatabase's LC_COLLATE is C.\n\nIf you don't want to rebuild the whole database, you can create indexes to\nsupport this by declaring them with COLLATE \"C\", or the older way is to\ndeclare them with text_pattern_ops as the index opclass.Declaring new indexes with COLLATE \"C\" and removing the old indexes fixed the like problem but it created a another - the > and < queries need a sort before passing off the the new index. Having two indexes seems to give me the best of both worlds, though obviously it's taking up (much) more space. 
As space isn't ever likely to be a problem, and there are no updates (only copy) to these tables, I'll keep it like this to avoid having to reload the entire DB.Thanks very much for your help.Cheers,Anton", "msg_date": "Thu, 31 Dec 2015 21:10:12 +0100", "msg_from": "Anton Melser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Plan differences" }, { "msg_contents": ">\n> Declaring new indexes with COLLATE \"C\" and removing the old indexes fixed\n>> the like problem but it created a another - the > and < queries need a sort\n>> before passing off the the new index. Having two indexes seems to give me\n>> the best of both worlds, though obviously it's taking up (much) more space.\n>> As space isn't ever likely to be a problem, and there are no updates (only\n>> copy) to these tables, I'll keep it like this to avoid having to reload the\n>> entire DB.\n>\n>\nI spoke a little soon - while many of the simple queries are now hitting\nthe indexes, some of the more complicated ones are still producing\nsubstantially inferior plans, even after reloading the whole DB with an\nidentical lc_collate and lc_ctype. Here are the plans on the original\nserver and the new server (identical collations, lctypes and index types -\nbtree C). I have been experimenting (accepted = accepted2,\nidx_accepted2_mid = idx_accepted_mid, etc.) and the tables no longer have\nexactly the same data but there is nothing substantially different (a few\ndays of data more with about a year total). The oldserver query is actually\nworking on about 3x the amount of data - I tried reducing the amounts on\nthe new server to get done in memory but it didn't seem to help the plan.\n\n HashAggregate (cost=3488512.43..3496556.16 rows=536249 width=143) (actual\ntime=228467.924..229026.799 rows=1426351 loops=1)\n Group Key: to_char(timezone('UTC'::text, a.tstamp), 'YYYY-MM-DD'::text),\na.column1, CASE WHEN (d.column1 IS NOT NULL) THEN d.column1 ELSE\nfff.column1 END, a.column2\n -> Merge Left Join (cost=110018.15..3072358.66 rows=23780215\nwidth=143) (actual time=3281.993..200563.177 rows=23554638 loops=1)\n Merge Cond: ((a.message_id)::text = (fff.message_id)::text)\n -> Merge Left Join (cost=110017.58..2781199.04 rows=23780215\nwidth=136) (actual time=3281.942..157385.338 rows=23554636 loops=1)\n Merge Cond: ((a.message_id)::text = (d.message_id)::text)\n -> Index Scan using idx_accepted2_mid on accepted a\n (cost=0.70..2226690.13 rows=23780215 width=83) (actual\ntime=3.690..73048.662 rows=23554632 loops=1)\n Index Cond: ((message_id)::text > '20151130'::text)\n Filter: (((mrid)::text <>\n'zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz'::text) AND ((mrid)::text <>\n'BAT'::text) AND ((column2)::text <> 'text1'::text) AND ((column2)::text\n!~~ 'text2.%'::text))\n Rows Removed by Filter: 342947\n -> Index Scan using idx_delivered2_mid on delivered d\n (cost=110016.89..482842.01 rows=3459461 width=53) (actual\ntime=3278.245..64031.033 rows=23666434 loops=1)\n Index Cond: ((message_id)::text > '20151130'::text)\n Filter: (NOT (hashed SubPlan 1))\n Rows Removed by Filter: 443\n SubPlan 1\n -> Index Scan using idx_failed2_mid on failed ff\n (cost=0.57..109953.48 rows=25083 width=46) (actual time=0.041..3124.642\nrows=237026 loops=1)\n Index Cond: ((message_id)::text >\n'20151130'::text)\n Filter: ((severity)::text = 'permanent'::text)\n Rows Removed by Filter: 5080519\n -> Index Scan using idx_failed2_mid on failed fff\n (cost=0.57..112718.27 rows=25083 width=53) (actual time=0.034..4861.762\nrows=236676 loops=1)\n Index Cond: 
((message_id)::text > '20151130'::text)\n Filter: ((severity)::text = 'permanent'::text)\n Rows Removed by Filter: 5080519\n Planning time: 2.039 ms\n Execution time: 229076.361 ms\n\n\n HashAggregate (cost=7636055.05..7640148.23 rows=272879 width=143) (actual\ntime=488739.376..488915.545 rows=403741 loops=1)\n Group Key: to_char(timezone('UTC'::text, a.tstamp), 'YYYY-MM-DD'::text),\na.column1, CASE WHEN (d.column1 IS NOT NULL) THEN d.column1 ELSE\nfff.column1 END, a.column2\n -> Hash Right Join (cost=5119277.32..7528101.45 rows=6168777\nwidth=143) (actual time=271256.212..480958.460 rows=6516196 loops=1)\n Hash Cond: ((d.message_id)::text = (a.message_id)::text)\n -> Bitmap Heap Scan on delivered2 d (cost=808012.86..3063311.98\nrows=3117499 width=53) (actual time=7012.487..194557.307 rows=6604970\nloops=1)\n Recheck Cond: ((message_id)::text > '20151225'::text)\n Rows Removed by Index Recheck: 113028616\n Filter: (NOT (hashed SubPlan 1))\n Rows Removed by Filter: 88\n Heap Blocks: exact=1146550 lossy=2543948\n -> Bitmap Index Scan on idx_delivered_mid\n (cost=0.00..100075.17 rows=6234997 width=0) (actual\ntime=4414.860..4414.860 rows=6605058 loops=1)\n Index Cond: ((message_id)::text > '20151225'::text)\n SubPlan 1\n -> Bitmap Heap Scan on failed2 ff\n (cost=19778.06..707046.73 rows=44634 width=46) (actual\ntime=828.164..1949.687 rows=71500 loops=1)\n Recheck Cond: ((message_id)::text > '20151225'::text)\n Filter: ((severity)::text = 'permanent'::text)\n Rows Removed by Filter: 1257151\n Heap Blocks: exact=545606\n -> Bitmap Index Scan on idx_failed_mid\n (cost=0.00..19766.90 rows=1232978 width=0) (actual time=599.864..599.864\nrows=1328651 loops=1)\n Index Cond: ((message_id)::text >\n'20151225'::text)\n -> Hash (cost=4173912.75..4173912.75 rows=6168777 width=136)\n(actual time=264243.046..264243.046 rows=6516194 loops=1)\n Buckets: 131072 Batches: 8 Memory Usage: 93253kB\n -> Hash Right Join (cost=3443580.52..4173912.75\nrows=6168777 width=136) (actual time=254876.487..261300.772 rows=6516194\nloops=1)\n Hash Cond: ((fff.message_id)::text =\n(a.message_id)::text)\n -> Bitmap Heap Scan on failed2 fff\n (cost=19778.06..707046.73 rows=44634 width=53) (actual\ntime=668.372..3876.360 rows=71500 loops=1)\n Recheck Cond: ((message_id)::text >\n'20151225'::text)\n Filter: ((severity)::text = 'permanent'::text)\n Rows Removed by Filter: 1257151\n Heap Blocks: exact=545606\n -> Bitmap Index Scan on idx_failed_mid\n (cost=0.00..19766.90 rows=1232978 width=0) (actual time=459.303..459.303\nrows=1328651 loops=1)\n Index Cond: ((message_id)::text >\n'20151225'::text)\n -> Hash (cost=3304523.24..3304523.24 rows=6168777\nwidth=83) (actual time=254206.923..254206.923 rows=6516194 loops=1)\n Buckets: 131072 Batches: 8 Memory Usage:\n92972kB\n -> Bitmap Heap Scan on accepted2 a\n (cost=102690.65..3304523.24 rows=6168777 width=83) (actual\ntime=5493.239..248361.721 rows=6516194 loops=1)\n Recheck Cond: ((message_id)::text >\n'20151225'::text)\n Rows Removed by Index Recheck: 79374688\n Filter: (((mrid)::text <>\n'zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz'::text) AND ((mrid)::text <>\n'BAT'::text) AND ((column2)::text <> 'text1'::text) AND ((column2)::text\n!~~ 'text2.%'::text))\n Rows Removed by Filter: 163989\n Heap Blocks: exact=1434533 lossy=3404597\n -> Bitmap Index Scan on idx_accepted_mid\n (cost=0.00..101148.46 rows=6301568 width=0) (actual\ntime=4806.816..4806.816 rows=6680183 loops=1)\n Index Cond: ((message_id)::text >\n'20151225'::text)\n Planning time: 76.707 ms\n Execution time: 488939.880 
ms\n\nAny suggestions on something else to try?\n\nThanks again,\nAnton", "msg_date": "Fri, 1 Jan 2016 18:13:06 +0100", "msg_from": "Anton Melser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Plan differences" },
 { "msg_contents": "Hello Anton,\r\n                 Changing the locale to anything other than C or POSIX will have a performance overhead.  I’m pretty sure that just declaring the locale on the indexes is just like plastering over the cracks.\r\n\r\nIs it possible to reload the database with the same locale as the original database server?\r\n\r\nRegards,\r\nAdam\r\n\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Anton Melser\r\nSent: 01 January 2016 5:13 PM\r\nTo: Tom Lane\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Plan differences\r\n\r\nDeclaring new indexes with COLLATE \"C\" and removing the old indexes fixed the like problem but it created a another - the > and < queries need a sort before passing off the the new index. Having two indexes seems to give me the best of both worlds, though obviously it's taking up (much) more space. As space isn't ever likely to be a problem, and there are no updates (only copy) to these tables, I'll keep it like this to avoid having to reload the entire DB.\r\n\r\nI spoke a little soon - while many of the simple queries are now hitting the indexes, some of the more complicated ones are still producing substantially inferior plans, even after reloading the whole DB with an identical lc_collate and\r\n lc_ctype. Here are the plans on the original server and the new server (identical collations, lctypes and index types - btree C). I have been experimenting (accepted = accepted2, idx_accepted2_mid = idx_accepted_mid, etc.) and the tables no longer have exactly\r\n the same data but there is nothing substantially different (a few days of data more with about a year total). 
The oldserver query is actually working on about 3x the amount of data - I tried reducing the amounts on the new server to get done in memory but it didn't seem to help the plan.\r\n\r\n HashAggregate (cost=3488512.43..3496556.16 rows=536249 width=143) (actual time=228467.924..229026.799 rows=1426351 loops=1)\r\n Group Key: to_char(timezone('UTC'::text, a.tstamp), 'YYYY-MM-DD'::text), a.column1, CASE WHEN (d.column1 IS NOT NULL) THEN d.column1 ELSE fff.column1 END, a.column2\r\n -> Merge Left Join (cost=110018.15..3072358.66 rows=23780215 width=143) (actual time=3281.993..200563.177 rows=23554638 loops=1)\r\n Merge Cond: ((a.message_id)::text = (fff.message_id)::text)\r\n -> Merge Left Join (cost=110017.58..2781199.04 rows=23780215 width=136) (actual time=3281.942..157385.338 rows=23554636 loops=1)\r\n Merge Cond: ((a.message_id)::text = (d.message_id)::text)\r\n -> Index Scan using idx_accepted2_mid on accepted a (cost=0.70..2226690.13 rows=23780215 width=83) (actual time=3.690..73048.662 rows=23554632 loops=1)\r\n Index Cond: ((message_id)::text > '20151130'::text)\r\n Filter: (((mrid)::text <> 'zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz'::text) AND ((mrid)::text <> 'BAT'::text) AND ((column2)::text <> 'text1'::text) AND ((column2)::text !~~ 'text2.%'::text))\r\n Rows Removed by Filter: 342947\r\n -> Index Scan using idx_delivered2_mid on delivered d (cost=110016.89..482842.01 rows=3459461 width=53) (actual time=3278.245..64031.033 rows=23666434 loops=1)\r\n Index Cond: ((message_id)::text > '20151130'::text)\r\n Filter: (NOT (hashed SubPlan 1))\r\n Rows Removed by Filter: 443\r\n SubPlan 1\r\n -> Index Scan using idx_failed2_mid on failed ff (cost=0.57..109953.48 rows=25083 width=46) (actual time=0.041..3124.642 rows=237026 loops=1)\r\n Index Cond: ((message_id)::text > '20151130'::text)\r\n Filter: ((severity)::text = 'permanent'::text)\r\n Rows Removed by Filter: 5080519\r\n -> Index Scan using idx_failed2_mid on failed fff (cost=0.57..112718.27 rows=25083 width=53) (actual time=0.034..4861.762 rows=236676 loops=1)\r\n Index Cond: ((message_id)::text > '20151130'::text)\r\n Filter: ((severity)::text = 'permanent'::text)\r\n Rows Removed by Filter: 5080519\r\n Planning time: 2.039 ms\r\n Execution time: 229076.361 ms\r\n\r\n\r\n HashAggregate (cost=7636055.05..7640148.23 rows=272879 width=143) (actual time=488739.376..488915.545 rows=403741 loops=1)\r\n Group Key: to_char(timezone('UTC'::text, a.tstamp), 'YYYY-MM-DD'::text), a.column1, CASE WHEN (d.column1 IS NOT NULL) THEN d.column1 ELSE fff.column1 END, a.column2\r\n -> Hash Right Join (cost=5119277.32..7528101.45 rows=6168777 width=143) (actual time=271256.212..480958.460 rows=6516196 loops=1)\r\n Hash Cond: ((d.message_id)::text = (a.message_id)::text)\r\n -> Bitmap Heap Scan on delivered2 d (cost=808012.86..3063311.98 rows=3117499 width=53) (actual time=7012.487..194557.307 rows=6604970 loops=1)\r\n Recheck Cond: ((message_id)::text > '20151225'::text)\r\n Rows Removed by Index Recheck: 113028616\r\n Filter: (NOT (hashed SubPlan 1))\r\n Rows Removed by Filter: 88\r\n Heap Blocks: exact=1146550 lossy=2543948\r\n -> Bitmap Index Scan on idx_delivered_mid (cost=0.00..100075.17 rows=6234997 width=0) (actual time=4414.860..4414.860 rows=6605058 loops=1)\r\n Index Cond: ((message_id)::text > '20151225'::text)\r\n SubPlan 1\r\n -> Bitmap Heap Scan on failed2 ff (cost=19778.06..707046.73 rows=44634 width=46) (actual time=828.164..1949.687 rows=71500 loops=1)\r\n Recheck Cond: ((message_id)::text > '20151225'::text)\r\n Filter: ((severity)::text 
= 'permanent'::text)\r\n Rows Removed by Filter: 1257151\r\n Heap Blocks: exact=545606\r\n -> Bitmap Index Scan on idx_failed_mid (cost=0.00..19766.90 rows=1232978 width=0) (actual time=599.864..599.864 rows=1328651 loops=1)\r\n Index Cond: ((message_id)::text > '20151225'::text)\r\n -> Hash (cost=4173912.75..4173912.75 rows=6168777 width=136) (actual time=264243.046..264243.046 rows=6516194 loops=1)\r\n Buckets: 131072 Batches: 8 Memory Usage: 93253kB\r\n -> Hash Right Join (cost=3443580.52..4173912.75 rows=6168777 width=136) (actual time=254876.487..261300.772 rows=6516194 loops=1)\r\n Hash Cond: ((fff.message_id)::text = (a.message_id)::text)\r\n -> Bitmap Heap Scan on failed2 fff (cost=19778.06..707046.73 rows=44634 width=53) (actual time=668.372..3876.360 rows=71500 loops=1)\r\n Recheck Cond: ((message_id)::text > '20151225'::text)\r\n Filter: ((severity)::text = 'permanent'::text)\r\n Rows Removed by Filter: 1257151\r\n Heap Blocks: exact=545606\r\n -> Bitmap Index Scan on idx_failed_mid (cost=0.00..19766.90 rows=1232978 width=0) (actual time=459.303..459.303 rows=1328651 loops=1)\r\n Index Cond: ((message_id)::text > '20151225'::text)\r\n -> Hash (cost=3304523.24..3304523.24 rows=6168777 width=83) (actual time=254206.923..254206.923 rows=6516194 loops=1)\r\n Buckets: 131072 Batches: 8 Memory Usage: 92972kB\r\n -> Bitmap Heap Scan on accepted2 a (cost=102690.65..3304523.24 rows=6168777 width=83) (actual time=5493.239..248361.721 rows=6516194 loops=1)\r\n Recheck Cond: ((message_id)::text > '20151225'::text)\r\n Rows Removed by Index Recheck: 79374688\r\n Filter: (((mrid)::text <> 'zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz'::text) AND ((mrid)::text <> 'BAT'::text) AND ((column2)::text <> 'text1'::text) AND ((column2)::text !~~ 'text2.%'::text))\r\n Rows Removed by Filter: 163989\r\n Heap Blocks: exact=1434533 lossy=3404597\r\n -> Bitmap Index Scan on idx_accepted_mid (cost=0.00..101148.46 rows=6301568 width=0) (actual time=4806.816..4806.816 rows=6680183 loops=1)\r\n Index Cond: ((message_id)::text > '20151225'::text)\r\n Planning time: 76.707 ms\r\n Execution time: 488939.880 ms\r\n\r\nAny suggestions on something else to try?\r\n\r\nThanks again,\r\nAnton", "msg_date": "Mon, 4 Jan 2016 09:04:23 +0000", "msg_from": "Adam Pearson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plan differences" },
 { "msg_contents": "Hi,\n\n\n> Changing the locale to anything other than C or POSIX will\n> have a performance overhead.  I’m pretty sure that just declaring the\n> locale on the indexes is just like plastering over the cracks.\n>\n>\n>\n> Is it possible to reload the database with the same locale as the original\n> database server?\n>\n\nSorry, I wasn't clear - I did end up recreating the DB with lc_collate =\n\"C\" and lc_ctype = \"C\" and loading all data and the plans are for this\nsituation (i.e., both are now the same, \"C\" everywhere) Maybe it is just a\ncase of optimisations being removed in the RC?\n\nCheers,\nAnton", "msg_date": "Mon, 4 Jan 2016 14:30:30 +0100", "msg_from": "Anton Melser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Plan differences" } ]
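A short illustrative sketch of the two index options Tom Lane describes in the thread above. The index names are invented here, and accepted(message_id) is only borrowed from the poster's examples, so treat the statements as assumptions rather than the poster's actual schema.

-- A C-collated index enables the LIKE 'prefix%' to index-bound rewrite. It can
-- also serve plain < / > comparisons, but only when the column's own collation
-- is C; otherwise a default-collation index is still needed for range queries
-- (hence the poster keeping two indexes).
CREATE INDEX idx_accepted_mid_c ON accepted (message_id COLLATE "C");

-- The older alternative: text_pattern_ops also enables the LIKE optimisation,
-- but it never handles ordinary < / > ordering in the column's default collation.
CREATE INDEX idx_accepted_mid_pattern ON accepted (message_id text_pattern_ops);

-- With either index in place, a predicate such as
--   WHERE message_id LIKE '20151213%'
-- can be planned as message_id >= '20151213' AND message_id < '20151214'.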
[ { "msg_contents": "I've recently been doing some performance testing with unlogged tables \nvs logged tables on 9.5-rc1. Basically we're trying to do big loads of \ndata into the database periodically. If on the very rare occasion the \nserver crashes half way through the import it's no big deal so I've been \nlooking specifically at unlogged tables with transactions having \nsynchronous_commit set to OFF. When we do the inserts on a logged table \nwith default WAL configuration settings we get a *lot* of disk IO \ngenerated (500mb/sec of pretty random IO - we have a nice ssd raid array \nbut even so this maxes it out). Tweaking WAL settings (commit_delay, \nmax_wal_size, min_wal_size) improves the situation quite a bit \n(50-100mb/sec of writes), but still we have no need to log the inserts \ninto the WAL at the moment.\n\nDoing the imports to unlogged tables we get virtually no IO until the \ninsert process has finished when the table gets flushed to disk which is \ngreat for us. However I read in the manuals that if the server ever has \nan unclean shutdown all unlogged tables will be truncated. Obviously \nwith 9.5 we can now alter tables to be logged/unlogged after insert but \nthis will still write all the inserts into the WAL. I can understand the \nrequirement to truncate tables with active IO at the point of unclean \nshutdown where you may get corrupted data; but I'm interested to find \nout how easy it would be to not perform the truncate for historical \nunlogged tables. If the last data modification statement was run more \nthan eg 30 seconds or 1 minute before an unclean shutdown (or the data \nwas otherwise flushed to disk and there was no IO since then) can we not \nassume that the data is not corrupted and hence not truncate the \nunlogged tables?\n\nThanks\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jan 2016 11:59:56 +0200", "msg_from": "Mark Zealey <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal for unlogged tables" }, { "msg_contents": "On 2016-01-04 02:59, Mark Zealey wrote:\n> shutdown all unlogged tables will be truncated. Obviously with 9.5 we can now\n> alter tables to be logged/unlogged after insert but this will still write all\n> the inserts into the WAL.\n\nI haven't tried, but won't converting an unlogged table into a logged table\nwrite all the inserts at once instead of once per insert?\n\nOr are you wanting to do more bulk insert into that table later?\n\n> I can understand the requirement to truncate tables\n> with active IO at the point of unclean shutdown where you may get corrupted\n> data; but I'm interested to find out how easy it would be to not perform the\n> truncate for historical unlogged tables.\n\nAre you trying to avoid running a CHECKPOINT? 
Are you afraid the activity on\nthe other tables will create too much I/O?\n\n> If the last data modification\n> statement was run more than eg 30 seconds or 1 minute before an unclean\n> shutdown (or the data was otherwise flushed to disk and there was no IO since\n> then) can we not assume that the data is not corrupted and hence not truncate\n> the unlogged tables?\n\nI have to admit that I have been surprised by this, it feels like unlogged\ntables are never written properly unless you do an explicit CHECKSUM.\n\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jan 2016 07:27:52 -0700", "msg_from": "Yves Dorfsman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for unlogged tables" }, { "msg_contents": "\n\nOn 04/01/16 16:27, Yves Dorfsman wrote:\n> I haven't tried, but won't converting an unlogged table into a logged \n> table write all the inserts at once instead of once per insert? Or are \n> you wanting to do more bulk insert into that table later?\n> Are you trying to avoid running a CHECKPOINT? Are you afraid the activity on\n> the other tables will create too much I/O?\n\nSetting a table to logged still pushes all the inserts into the WAL \nwhich we don't need and causes a lot of extra IO. It also takes quite a \nlong time as it is basically rewriting the table and all indexes (eg 60 \nseconds for 2m rows on one of my test tables). We can do this but a) it \ngenerates lots of additional IO which isn't really required for us, and \nb) it acquires an exclusive lock on the table which is also not nice for us.\n\n>> If the last data modification\n>> statement was run more than eg 30 seconds or 1 minute before an unclean\n>> shutdown (or the data was otherwise flushed to disk and there was no IO since\n>> then) can we not assume that the data is not corrupted and hence not truncate\n>> the unlogged tables?\n> I have to admit that I have been surprised by this, it feels like unlogged\n> tables are never written properly unless you do an explicit CHECKSUM.\n\nI don't know how the internals work but unlogged tables definitely \nflushed to disk and persist through normal server restarts. It is just \naccording to the docs if the server ever has an unclean shutdown the \ntables are truncated even if they have not been updated in a year. I \ncan't understand why it has to be like this and it seems that it would \nbe much nicer to not automatically truncate if it doesn't have to. This \nwould be great in the situation where you can tolerate a low chance of \ndata-loss but want very quick upserts.\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jan 2016 16:38:40 +0200", "msg_from": "Mark Zealey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for unlogged tables" }, { "msg_contents": "On 2016-01-04 16:38:40 +0200, Mark Zealey wrote:\n> I don't know how the internals work but unlogged tables definitely flushed\n> to disk and persist through normal server restarts. It is just according to\n> the docs if the server ever has an unclean shutdown the tables are truncated\n> even if they have not been updated in a year. 
I can't understand why it has\n> to be like this and it seems that it would be much nicer to not\n> automatically truncate if it doesn't have to.\n\nPages containing data of unlogged tables aren't ever flushed to disk\nunless\na) a shutdown checkpoint is performed\nb) a buffer containing data from an unlogged table is used for something\n else\nc) the database being copied is the the source of a CREATE DATABASE .. TEMPLATE\n\nHence, if there's an unclean shutdown, there's absolutely no guarantee\nabout the on-disk state of unlogged tables. Even if they haven't been\nmodified in ages - there could have been many many dirty pages in shared\nbuffers when crashing.\n\n\nAlways flushing dirty pages of unlogged tables at checkpoint would\ngreatly increase the overhead for memory resident, write heavy workloads\nthat use unlogged tables.\n\nAndres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jan 2016 17:12:34 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for unlogged tables" }, { "msg_contents": "\n\nOn 04/01/16 18:12, Andres Freund wrote:\n> Pages containing data of unlogged tables aren't ever flushed to disk\n> unless\n> a) a shutdown checkpoint is performed\n> b) a buffer containing data from an unlogged table is used for something\n> else\n> c) the database being copied is the the source of a CREATE DATABASE .. TEMPLATE\n>\n> Hence, if there's an unclean shutdown, there's absolutely no guarantee\n> about the on-disk state of unlogged tables. Even if they haven't been\n> modified in ages - there could have been many many dirty pages in shared\n> buffers when crashing.\n>\n>\n> Always flushing dirty pages of unlogged tables at checkpoint would\n> greatly increase the overhead for memory resident, write heavy workloads\n> that use unlogged tables.\n\nIf there was a command to flush a specific unlogged table to disk it \nwould work around all these issues no? Perhaps if you marked the table \nas read only at the same time it would flush it to disk and ensure no \nmore data could be written to it eg (ALTER TABLE ... SET READ ONLY on an \nunlogged table would flush + not truncate after crash). In our case this \nwould be great as we want to use these as permanent tables for speed; \nbut after an initial data dump we don't change the data again so we \ncould just do this at the end of the import process.\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jan 2016 19:12:22 +0200", "msg_from": "Mark Zealey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for unlogged tables" }, { "msg_contents": "On 2016-01-04 19:12:22 +0200, Mark Zealey wrote:\n> If there was a command to flush a specific unlogged table to disk it would\n> work around all these issues no? Perhaps if you marked the table as read\n> only at the same time it would flush it to disk and ensure no more data\n> could be written to it eg (ALTER TABLE ... SET READ ONLY on an unlogged\n> table would flush + not truncate after crash). 
In our case this would be\n> great as we want to use these as permanent tables for speed; but after an\n> initial data dump we don't change the data again so we could just do this at\n> the end of the import process.\n\nIt's more complex than that, even unmodified tables need to be processed\nby vacuum every now and then (xid wraparound handling). It probably\npossible to work around such things, but it's not a line or ten.\n\nAndres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jan 2016 18:16:40 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for unlogged tables" } ]
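A minimal sketch, for concreteness, of the load pattern the thread is discussing: an unlogged table, synchronous_commit turned off for the loading session, and the optional 9.5 ALTER ... SET LOGGED step whose one-off WAL cost Mark is trying to avoid. The table, column and file names here are invented for illustration:

    CREATE UNLOGGED TABLE import_staging (
        id      bigint,
        payload text
    );

    SET synchronous_commit = off;                                 -- session-local; acceptable for reloadable data
    COPY import_staging FROM '/tmp/batch.csv' WITH (FORMAT csv);  -- illustrative path

    -- 9.5+: rewrites the table through WAL once, after which it survives a crash
    ALTER TABLE import_staging SET LOGGED;

As Andres explains above, without that last step the data is only guaranteed on disk after a clean shutdown checkpoint, so an unclean shutdown still truncates the table no matter how old the data is.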
[ { "msg_contents": "Hello,\n\nI'm having trouble with the performance from a query used to create a\nmaterialized view.\n\nI need to be able to build the keyword_performance_flat_matview view in\naround 2-4 hours overnight. However, it currently takes in excess of 24\nhours. I'm wondering if there is anything I can do to improve the\nperformance?\n\nAs you can see below it's a big query, and I didn't want to overwhelm\neveryone with the schema, so let me know what bits you might need to help!\n\nAny help improving the performance will be greatly appreciated.\n\nThank you.\n\n\nEngineYard M3 Extra Large instance:\n Intel Xeon CPU E5-2670 v2 @ 2.50GHz (quad core)\n 15 GB Ram\n PostgreSQL 9.3.6\n\n\nqueries:\n\ncreate materialized view\n campaign_category_lookup\nas\nselect\n scenario_campaigns.id as campaign_id,\n join_website_budget_labels.label_id as category_id\nfrom scenario_campaigns\ninner join scenario_campaign_vendor_instances\n on scenario_campaign_vendor_instances.campaign_id = scenario_campaigns.id\ninner join join_website_budget_labels_campaign_vendor_instances_1f8636\n on\njoin_website_budget_labels_campaign_vendor_instances_1f8636.vendor_instance_id\n= scenario_campaign_vendor_instances.id\ninner join join_website_budget_labels\n on\njoin_website_budget_labels_campaign_vendor_instances_1f8636.website_budget_label_id\n= join_website_budget_labels.id\ninner join account_website_budgets\n on join_website_budget_labels.budget_id = account_website_budgets.id\n and account_website_budgets.state = 'approved'\nwhere scenario_campaign_vendor_instances.campaign_id = scenario_campaigns.id\n and account_website_budgets.start_date <= current_date\n and account_website_budgets.end_date >= current_date\norder by approved_at desc;\n\ncreate materialized view temp_keyword_perf_unaggr\nas\nselect\n account_websites.id as website_id,\n account_websites.namespace as website_namespace,\n scenario_keyword_vendor_instances.inventory_disabled as\ninventory_disabled,\n scenario_keyword_vendor_instances.condition_disabled as\ncondition_disabled,\n scenario_ad_groups.campaign_id,\n scenario_ad_groups.id as ad_group_id,\n scenario_keywords.id as keyword_id,\n scenario_keyword_texts.value as keyword_name,\n scenario_keyword_performances.*,\n (select category_id from campaign_category_lookup where\ncampaign_category_lookup.campaign_id = scenario_ad_groups.campaign_id limit\n1) as category_id\nfrom\n scenarios\n inner join account_websites\n on scenarios.website_id = account_websites.id\n inner join scenario_campaigns\n on scenario_campaigns.scenario_id = scenarios.id\n inner join scenario_ad_groups\n on scenario_ad_groups.campaign_id = scenario_campaigns.id\n inner join scenario_keywords\n on scenario_keywords.ad_group_id = scenario_ad_groups.id\n inner join scenario_keyword_texts\n on scenario_keyword_texts.id = scenario_keywords.text_id\n left outer join scenario_keyword_vendor_instances\n on scenario_keyword_vendor_instances.keyword_id = scenario_keywords.id\n left outer join scenario_keyword_performances\n on scenario_keyword_performances.api_id =\nscenario_keyword_vendor_instances.api_id\n and scenario_keyword_performances.date >= (date_trunc('month', now()) -\n'1 month'::interval)::date -- start of previous month\nwhere\n scenarios.deleted_at is null\n and scenario_keyword_texts.value is not null\n and account_websites.active = 't';\n\ncreate materialized view\n keyword_performance_flat_matview\nas\nselect\n website_id,\n website_namespace,\n campaign_id,\n ad_group_id,\n keyword_name,\n 
keyword_id,\n network,\n device,\n inventory_disabled,\n condition_disabled,\n category_id,\n date,\n sum(impressions) as impressions,\n sum(clicks) as clicks,\n sum(conv_one) as conv_one,\n sum(conv_many) as conv_many,\n sum(cost) as cost,\n sum(conv_value) as conv_value,\n avg(avg_position) as avg_position\nfrom temp_keyword_perf_unaggr\ngroup by\n website_id,\n website_namespace,\n campaign_id,\n ad_group_id,\n keyword_id,\n keyword_name,\n device,\n network,\n inventory_disabled,\n condition_disabled,\n category_id,\n date;\n\n\nExplain output for temp_keyword_perf_unaggr:\n\n Merge Right Join (cost=8796955.87..1685073792.18 rows=296873848 width=213)\n Merge Cond: (scenario_keyword_performances.api_id =\nscenario_keyword_vendor_instances.api_id)\n -> Index Scan using\nindex_keyword_performances_on_vendor_instance_id_and_date on\nscenario_keyword_performances (cost=0.44..203167.46 rows=392586 width=144)\n Index Cond: (date >= ((date_trunc('month'::text, now()) - '1\nmon'::interval))::date)\n -> Materialize (cost=8796955.43..8883724.51 rows=17353816 width=77)\n -> Sort (cost=8796955.43..8840339.97 rows=17353816 width=77)\n Sort Key: scenario_keyword_vendor_instances.api_id\n -> Hash Join (cost=2755544.36..5939172.05 rows=17353816\nwidth=77)\n Hash Cond: (scenario_keywords.text_id =\nscenario_keyword_texts.id)\n -> Hash Right Join (cost=2171209.00..4417042.21\nrows=17353816 width=48)\n Hash Cond:\n(scenario_keyword_vendor_instances.keyword_id = scenario_keywords.id)\n -> Seq Scan on\nscenario_keyword_vendor_instances (cost=0.00..821853.20 rows=33362520\nwidth=14)\n -> Hash (cost=1827291.60..1827291.60\nrows=16931312 width=38)\n -> Hash Join (cost=219154.58..1827291.60\nrows=16931312 width=38)\n Hash Cond:\n(scenario_keywords.ad_group_id = scenario_ad_groups.id)\n -> Seq Scan on scenario_keywords\n (cost=0.00..946491.60 rows=32550260 width=12)\n -> Hash (cost=186041.43..186041.43\nrows=1712492 width=30)\n -> Hash Join\n (cost=6569.88..186041.43 rows=1712492 width=30)\n Hash Cond:\n(scenario_ad_groups.campaign_id = scenario_campaigns.id)\n -> Seq Scan on\nscenario_ad_groups (cost=0.00..133539.47 rows=3292247 width=8)\n -> Hash\n (cost=5596.79..5596.79 rows=77847 width=26)\n -> Hash Join\n (cost=100.50..5596.79 rows=77847 width=26)\n Hash Cond:\n(scenario_campaigns.scenario_id = scenarios.id)\n -> Seq Scan\non scenario_campaigns (cost=0.00..4156.60 rows=149660 width=8)\n -> Hash\n (cost=85.98..85.98 rows=1161 width=26)\n ->\n Hash Join (cost=16.43..85.98 rows=1161 width=26)\n\n Hash Cond: (scenarios.website_id = account_websites.id)\n\n -> Seq Scan on scenarios (cost=0.00..50.32 rows=2032 width=8)\n\n Filter: (deleted_at IS NULL)\n\n -> Hash (cost=12.92..12.92 rows=281 width=22)\n\n -> Seq Scan on account_websites (cost=0.00..12.92 rows=281 width=22)\n\n Filter: active\n -> Hash (cost=292793.16..292793.16 rows=14352816\nwidth=37)\n -> Seq Scan on scenario_keyword_texts\n (cost=0.00..292793.16 rows=14352816 width=37)\n Filter: (value IS NOT NULL)\n SubPlan 1\n -> Limit (cost=0.28..5.63 rows=1 width=4)\n -> Index Scan using campaign_category_lookup_campaign_id_idx on\ncampaign_category_lookup (cost=0.28..10.99 rows=2 width=4)\n Index Cond: (campaign_id = scenario_ad_groups.campaign_id)\n\n\n-- \n[image: photo]\n*Tom McLoughlin*\nSoftware Developer\n| m: 08 8224 1711\ne: [email protected] | w: 
http://www.dynamiccreative.com/\n<https://www.facebook.com/pages/Dynamic-Creative/464934570224209?utm_source=WiseStamp&utm_medium=email&utm_term=&utm_content=&utm_campaign=signature>\n<https://twitter.com/dynamiccreative?utm_source=WiseStamp&utm_medium=email&utm_term=&utm_content=&utm_campaign=signature>\n<http://www.linkedin.com/company/2805415?trk=vsrp_companies_res_name&trkInfo=VSRPsearchId%3A995331391047787965%2CVSRPtargetId%3A2805415%2CVSRPcmpt%3Aprimary&utm_source=WiseStamp&utm_medium=email&utm_term=&utm_content=&utm_campaign=signature>\nWe're looking for Developers, from Senior to Graduate\n<http://www.seek.com.au/job/29260404?pos=8&type=standout&engineConfig=&tier=no_tier&whereid=&utm_source=WiseStamp&utm_medium=email&utm_term=&utm_content=&utm_campaign=signature>\nCheck out our FREE eBook: Google Shopping Best Practices for PLA's\n<http://resources.dynamiccreative.com/google-shopping-7-best-practices?utm_source=WiseStamp&utm_medium=email&utm_term=&utm_content=&utm_campaign=signature>\nPlease Note: Our team is not available 10am to 10.30am CST every Thursday\ndue to our weekly company meeting\n<http://www.dynamiccreative.com/contact-us?utm_source=WiseStamp&utm_medium=email&utm_term=&utm_content=&utm_campaign=signature>\nThe latest from our Team Blogs -PLA’s (Product Listing Ads) vs Google\nShopping Campaigns\n<http://blog.dynamiccreative.com/blog/plas-product-listing-ads-vs-google-shopping-campaigns?utm_source=WiseStamp&utm_medium=email&utm_term=&utm_content=&utm_campaign=signature>\n\nHello,I'm having trouble with the performance from a query used to create a materialized view. I need to be able to build the keyword_performance_flat_matview view in around 2-4 hours overnight. However, it currently takes in excess of 24 hours. I'm wondering if there is anything I can do to improve the performance?As you can see below it's a big query, and I didn't want to overwhelm everyone with the schema, so let me know what bits you might need to help!Any help improving the performance will be greatly appreciated.Thank you.EngineYard M3 Extra Large instance:  Intel Xeon CPU E5-2670 v2 @ 2.50GHz (quad core)  15 GB Ram  PostgreSQL 9.3.6queries:create materialized view  campaign_category_lookupasselect  scenario_campaigns.id as campaign_id,   join_website_budget_labels.label_id as category_idfrom scenario_campaignsinner join scenario_campaign_vendor_instances  on scenario_campaign_vendor_instances.campaign_id = scenario_campaigns.idinner join join_website_budget_labels_campaign_vendor_instances_1f8636  on join_website_budget_labels_campaign_vendor_instances_1f8636.vendor_instance_id = scenario_campaign_vendor_instances.idinner join join_website_budget_labels  on join_website_budget_labels_campaign_vendor_instances_1f8636.website_budget_label_id = join_website_budget_labels.idinner join account_website_budgets  on join_website_budget_labels.budget_id = account_website_budgets.id  and account_website_budgets.state = 'approved'where scenario_campaign_vendor_instances.campaign_id = scenario_campaigns.id  and account_website_budgets.start_date <= current_date  and account_website_budgets.end_date >= current_dateorder by approved_at desc;create materialized view temp_keyword_perf_unaggrasselect  account_websites.id as website_id,  account_websites.namespace as website_namespace,  scenario_keyword_vendor_instances.inventory_disabled as inventory_disabled,  scenario_keyword_vendor_instances.condition_disabled as condition_disabled,  scenario_ad_groups.campaign_id,  scenario_ad_groups.id as ad_group_id,  
scenario_keywords.id as keyword_id,  scenario_keyword_texts.value as keyword_name,  scenario_keyword_performances.*,  (select category_id from campaign_category_lookup where campaign_category_lookup.campaign_id = scenario_ad_groups.campaign_id limit 1) as category_idfrom  scenarios  inner join account_websites    on scenarios.website_id = account_websites.id  inner join scenario_campaigns    on scenario_campaigns.scenario_id = scenarios.id  inner join scenario_ad_groups    on scenario_ad_groups.campaign_id = scenario_campaigns.id  inner join scenario_keywords    on scenario_keywords.ad_group_id = scenario_ad_groups.id  inner join scenario_keyword_texts    on scenario_keyword_texts.id = scenario_keywords.text_id  left outer join scenario_keyword_vendor_instances    on scenario_keyword_vendor_instances.keyword_id = scenario_keywords.id  left outer join scenario_keyword_performances    on scenario_keyword_performances.api_id = scenario_keyword_vendor_instances.api_id    and scenario_keyword_performances.date >= (date_trunc('month', now()) - '1 month'::interval)::date -- start of previous monthwhere  scenarios.deleted_at is null  and scenario_keyword_texts.value is not null  and account_websites.active = 't';create materialized view  keyword_performance_flat_matviewasselect  website_id,  website_namespace,  campaign_id,  ad_group_id,  keyword_name,  keyword_id,  network,  device,  inventory_disabled,  condition_disabled,  category_id,  date,  sum(impressions) as impressions,  sum(clicks) as clicks,  sum(conv_one) as conv_one,  sum(conv_many) as conv_many,  sum(cost) as cost,  sum(conv_value) as conv_value,  avg(avg_position) as avg_positionfrom temp_keyword_perf_unaggrgroup by   website_id,   website_namespace,  campaign_id,  ad_group_id,   keyword_id,   keyword_name,   device,   network,   inventory_disabled,   condition_disabled,  category_id,   date;Explain output for temp_keyword_perf_unaggr: Merge Right Join  (cost=8796955.87..1685073792.18 rows=296873848 width=213)   Merge Cond: (scenario_keyword_performances.api_id = scenario_keyword_vendor_instances.api_id)   ->  Index Scan using index_keyword_performances_on_vendor_instance_id_and_date on scenario_keyword_performances  (cost=0.44..203167.46 rows=392586 width=144)         Index Cond: (date >= ((date_trunc('month'::text, now()) - '1 mon'::interval))::date)   ->  Materialize  (cost=8796955.43..8883724.51 rows=17353816 width=77)         ->  Sort  (cost=8796955.43..8840339.97 rows=17353816 width=77)               Sort Key: scenario_keyword_vendor_instances.api_id               ->  Hash Join  (cost=2755544.36..5939172.05 rows=17353816 width=77)                     Hash Cond: (scenario_keywords.text_id = scenario_keyword_texts.id)                     ->  Hash Right Join  (cost=2171209.00..4417042.21 rows=17353816 width=48)                           Hash Cond: (scenario_keyword_vendor_instances.keyword_id = scenario_keywords.id)                           ->  Seq Scan on scenario_keyword_vendor_instances  (cost=0.00..821853.20 rows=33362520 width=14)                           ->  Hash  (cost=1827291.60..1827291.60 rows=16931312 width=38)                                 ->  Hash Join  (cost=219154.58..1827291.60 rows=16931312 width=38)                                       Hash Cond: (scenario_keywords.ad_group_id = scenario_ad_groups.id)                                       ->  Seq Scan on scenario_keywords  (cost=0.00..946491.60 rows=32550260 width=12)                                       ->  Hash  (cost=186041.43..186041.43 rows=1712492 
width=30)                                             ->  Hash Join  (cost=6569.88..186041.43 rows=1712492 width=30)                                                   Hash Cond: (scenario_ad_groups.campaign_id = scenario_campaigns.id)                                                   ->  Seq Scan on scenario_ad_groups  (cost=0.00..133539.47 rows=3292247 width=8)                                                   ->  Hash  (cost=5596.79..5596.79 rows=77847 width=26)                                                         ->  Hash Join  (cost=100.50..5596.79 rows=77847 width=26)                                                               Hash Cond: (scenario_campaigns.scenario_id = scenarios.id)                                                               ->  Seq Scan on scenario_campaigns  (cost=0.00..4156.60 rows=149660 width=8)                                                               ->  Hash  (cost=85.98..85.98 rows=1161 width=26)                                                                     ->  Hash Join  (cost=16.43..85.98 rows=1161 width=26)                                                                           Hash Cond: (scenarios.website_id = account_websites.id)                                                                           ->  Seq Scan on scenarios  (cost=0.00..50.32 rows=2032 width=8)                                                                                 Filter: (deleted_at IS NULL)                                                                           ->  Hash  (cost=12.92..12.92 rows=281 width=22)                                                                                 ->  Seq Scan on account_websites  (cost=0.00..12.92 rows=281 width=22)                                                                                       Filter: active                     ->  Hash  (cost=292793.16..292793.16 rows=14352816 width=37)                           ->  Seq Scan on scenario_keyword_texts  (cost=0.00..292793.16 rows=14352816 width=37)                                 Filter: (value IS NOT NULL)   SubPlan 1     ->  Limit  (cost=0.28..5.63 rows=1 width=4)           ->  Index Scan using campaign_category_lookup_campaign_id_idx on campaign_category_lookup  (cost=0.28..10.99 rows=2 width=4)                 Index Cond: (campaign_id = scenario_ad_groups.campaign_id)-- Tom McLoughlin Software Developer | m: 08 8224 1711 e: [email protected] | w: http://www.dynamiccreative.com/ We're looking for Developers, from Senior to Graduate Check out our FREE eBook: Google Shopping Best Practices for PLA's Please Note: Our team is not available 10am to 10.30am CST every Thursday due to our weekly company meeting The latest from our Team Blogs -PLA’s (Product Listing Ads) vs Google Shopping Campaigns", "msg_date": "Wed, 6 Jan 2016 18:38:02 +1030", "msg_from": "Tom McLoughlin <[email protected]>", "msg_from_op": true, "msg_subject": "Materialized view performance problems" }, { "msg_contents": "\n\n> Tom McLoughlin <[email protected]> hat am 6. Januar 2016 um 09:08\n> geschrieben:\n> \n> \n\n> \n> As you can see below it's a big query, and I didn't want to overwhelm\n> everyone with the schema, so let me know what bits you might need to help!\n> \n> Any help improving the performance will be greatly appreciated.\n\ncan you show us the EXPLAIN ANALYSE - Output? 
I see a LOT of seq-scans, maybe\nyou should create some indexes.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jan 2016 12:40:16 +0100 (CET)", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Materialized view performance problems" }, { "msg_contents": "Thank you very much for your help.\n\nIt's difficult for me to run analyse explain for the query given because it\ntakes so long. However, the query below has a similar structure but has\nless data to process.\n\ncreate materialized view temp_camp_perf_unaggr\nas\nselect\n account_websites.id as website_id,\n account_websites.namespace as website_namespace,\n scenario_campaign_vendor_instances.inventory_disabled as\ninventory_disabled,\n scenario_campaign_vendor_instances.condition_disabled as\ncondition_disabled,\n scenario_campaign_vendor_instances.manually_disabled as paused,\n scenario_campaigns.id as campaign_id,\n scenario_campaign_performances.*,\n (select campaign_category_lookup.category_id from\ncampaign_category_lookup where campaign_category_lookup.campaign_id =\nscenario_campaigns.id limit 1) as category_id\nfrom\n scenarios\n inner join account_websites\n on scenarios.website_id = account_websites.id\n inner join scenario_campaigns\n on scenario_campaigns.scenario_id = scenarios.id\n left outer join scenario_campaign_vendor_instances\n on scenario_campaigns.id =\nscenario_campaign_vendor_instances.campaign_id\n left outer join scenario_campaign_performances\n on scenario_campaign_performances.api_id =\nscenario_campaign_vendor_instances.api_id\n and scenario_campaign_performances.date >= (date_trunc('month', now())\n- '1 month'::interval)::date -- start of previous month\nwhere\n scenarios.deleted_at is null\n and scenario_campaign_performances.campaign_name is not null\n and account_websites.active = 't';\n\n\nHere's it's EXPLAIN ANALYSE output:\n\n Hash Join (cost=13094.58..3450145.63 rows=373025 width=220) (actual\ntime=87677.770..226340.511 rows=232357 loops=1)\n Hash Cond: (scenario_campaign_performances.api_id =\nscenario_campaign_vendor_instances.api_id)\n -> Seq Scan on scenario_campaign_performances (cost=0.00..325848.93\nrows=351341 width=191) (actual time=86942.746..221871.357 rows=230889\nloops=1)\n Filter: ((campaign_name IS NOT NULL) AND (date >=\n((date_trunc('month'::text, now()) - '1 mon'::interval))::date))\n Rows Removed by Filter: 77185\n -> Hash (cost=12250.80..12250.80 rows=67502 width=37) (actual\ntime=709.034..709.034 rows=28545 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 1997kB\n -> Hash Join (cost=6621.17..12250.80 rows=67502 width=37)\n(actual time=164.772..690.399 rows=48805 loops=1)\n Hash Cond: (scenario_campaign_vendor_instances.campaign_id =\nscenario_campaigns.id)\n -> Seq Scan on scenario_campaign_vendor_instances\n (cost=0.00..3817.06 rows=130006 width=15) (actual time=0.049..405.396\nrows=149939 loops=1)\n -> Hash (cost=5641.32..5641.32 rows=78388 width=26)\n(actual time=164.647..164.647 rows=49081 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 2839kB\n -> Hash Join (cost=105.59..5641.32 rows=78388\nwidth=26) (actual time=55.543..145.975 rows=49081 loops=1)\n Hash Cond: (scenario_campaigns.scenario_id =\nscenarios.id)\n -> Seq Scan on scenario_campaigns\n (cost=0.00..4185.71 rows=150971 width=8) (actual time=0.024..47.185\nrows=150591 loops=1)\n -> Hash (cost=90.56..90.56 rows=1202 
width=26)\n(actual time=55.499..55.499 rows=1428 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage:\n79kB\n -> Hash Join (cost=18.49..90.56\nrows=1202 width=26) (actual time=48.435..54.931 rows=1428 loops=1)\n Hash Cond: (scenarios.website_id =\naccount_websites.id)\n -> Seq Scan on scenarios\n (cost=0.00..52.15 rows=2108 width=8) (actual time=0.015..5.723 rows=2052\nloops=1)\n Filter: (deleted_at IS NULL)\n Rows Removed by Filter: 201\n -> Hash (cost=14.54..14.54\nrows=316 width=22) (actual time=48.402..48.402 rows=289 loops=1)\n Buckets: 1024 Batches: 1\n Memory Usage: 16kB\n -> Seq Scan on\naccount_websites (cost=0.00..14.54 rows=316 width=22) (actual\ntime=26.373..48.259 rows=289 loops=1)\n Filter: active\n Rows Removed by Filter:\n211\n SubPlan 1\n -> Limit (cost=0.28..8.30 rows=1 width=4) (actual time=0.014..0.014\nrows=0 loops=232357)\n -> Index Scan using campaign_category_lookup_campaign_id_idx on\ncampaign_category_lookup (cost=0.28..8.30 rows=1 width=4) (actual\ntime=0.014..0.014 rows=0 loops=232357)\n Index Cond: (campaign_id = scenario_campaigns.id)\n Total runtime: 228236.708 ms\n\nOn 6 January 2016 at 22:10, Andreas Kretschmer <[email protected]>\nwrote:\n\n>\n>\n> > Tom McLoughlin <[email protected]> hat am 6. Januar 2016 um 09:08\n> > geschrieben:\n> >\n> >\n>\n> >\n> > As you can see below it's a big query, and I didn't want to overwhelm\n> > everyone with the schema, so let me know what bits you might need to\n> help!\n> >\n> > Any help improving the performance will be greatly appreciated.\n>\n> can you show us the EXPLAIN ANALYSE - Output? I see a LOT of seq-scans,\n> maybe\n> you should create some indexes.\n>\n\nThank you very much for your help.It's difficult for me to run analyse explain for the query given because it takes so long. 
However, the query below has a similar structure but has less data to process.create materialized view temp_camp_perf_unaggrasselect  account_websites.id as website_id,  account_websites.namespace as website_namespace,  scenario_campaign_vendor_instances.inventory_disabled as inventory_disabled,  scenario_campaign_vendor_instances.condition_disabled as condition_disabled,  scenario_campaign_vendor_instances.manually_disabled as paused,  scenario_campaigns.id as campaign_id,  scenario_campaign_performances.*,  (select campaign_category_lookup.category_id from campaign_category_lookup where campaign_category_lookup.campaign_id = scenario_campaigns.id limit 1) as category_idfrom  scenarios  inner join account_websites    on scenarios.website_id = account_websites.id  inner join scenario_campaigns    on scenario_campaigns.scenario_id = scenarios.id  left outer join scenario_campaign_vendor_instances    on scenario_campaigns.id = scenario_campaign_vendor_instances.campaign_id  left outer join scenario_campaign_performances    on scenario_campaign_performances.api_id = scenario_campaign_vendor_instances.api_id    and scenario_campaign_performances.date >= (date_trunc('month', now()) - '1 month'::interval)::date -- start of previous monthwhere  scenarios.deleted_at is null  and scenario_campaign_performances.campaign_name is not null  and account_websites.active = 't'; Here's it's EXPLAIN ANALYSE output: Hash Join  (cost=13094.58..3450145.63 rows=373025 width=220) (actual time=87677.770..226340.511 rows=232357 loops=1)   Hash Cond: (scenario_campaign_performances.api_id = scenario_campaign_vendor_instances.api_id)   ->  Seq Scan on scenario_campaign_performances  (cost=0.00..325848.93 rows=351341 width=191) (actual time=86942.746..221871.357 rows=230889 loops=1)         Filter: ((campaign_name IS NOT NULL) AND (date >= ((date_trunc('month'::text, now()) - '1 mon'::interval))::date))         Rows Removed by Filter: 77185   ->  Hash  (cost=12250.80..12250.80 rows=67502 width=37) (actual time=709.034..709.034 rows=28545 loops=1)         Buckets: 8192  Batches: 1  Memory Usage: 1997kB         ->  Hash Join  (cost=6621.17..12250.80 rows=67502 width=37) (actual time=164.772..690.399 rows=48805 loops=1)               Hash Cond: (scenario_campaign_vendor_instances.campaign_id = scenario_campaigns.id)               ->  Seq Scan on scenario_campaign_vendor_instances  (cost=0.00..3817.06 rows=130006 width=15) (actual time=0.049..405.396 rows=149939 loops=1)               ->  Hash  (cost=5641.32..5641.32 rows=78388 width=26) (actual time=164.647..164.647 rows=49081 loops=1)                     Buckets: 8192  Batches: 1  Memory Usage: 2839kB                     ->  Hash Join  (cost=105.59..5641.32 rows=78388 width=26) (actual time=55.543..145.975 rows=49081 loops=1)                           Hash Cond: (scenario_campaigns.scenario_id = scenarios.id)                           ->  Seq Scan on scenario_campaigns  (cost=0.00..4185.71 rows=150971 width=8) (actual time=0.024..47.185 rows=150591 loops=1)                           ->  Hash  (cost=90.56..90.56 rows=1202 width=26) (actual time=55.499..55.499 rows=1428 loops=1)                                 Buckets: 1024  Batches: 1  Memory Usage: 79kB                                 ->  Hash Join  (cost=18.49..90.56 rows=1202 width=26) (actual time=48.435..54.931 rows=1428 loops=1)                                       Hash Cond: (scenarios.website_id = account_websites.id)                                       ->  Seq Scan on scenarios  (cost=0.00..52.15 rows=2108 
width=8) (actual time=0.015..5.723 rows=2052 loops=1)                                             Filter: (deleted_at IS NULL)                                             Rows Removed by Filter: 201                                       ->  Hash  (cost=14.54..14.54 rows=316 width=22) (actual time=48.402..48.402 rows=289 loops=1)                                             Buckets: 1024  Batches: 1  Memory Usage: 16kB                                             ->  Seq Scan on account_websites  (cost=0.00..14.54 rows=316 width=22) (actual time=26.373..48.259 rows=289 loops=1)                                                   Filter: active                                                   Rows Removed by Filter: 211   SubPlan 1     ->  Limit  (cost=0.28..8.30 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=232357)           ->  Index Scan using campaign_category_lookup_campaign_id_idx on campaign_category_lookup  (cost=0.28..8.30 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=232357)                 Index Cond: (campaign_id = scenario_campaigns.id) Total runtime: 228236.708 msOn 6 January 2016 at 22:10, Andreas Kretschmer <[email protected]> wrote:\n\n> Tom McLoughlin <[email protected]> hat am 6. Januar 2016 um 09:08\n> geschrieben:\n>\n>\n\n>\n> As you can see below it's a big query, and I didn't want to overwhelm\n> everyone with the schema, so let me know what bits you might need to help!\n>\n> Any help improving the performance will be greatly appreciated.\n\ncan you show us the EXPLAIN ANALYSE - Output? I see a LOT of seq-scans, maybe\nyou should create some indexes.", "msg_date": "Wed, 6 Jan 2016 23:57:35 +1030", "msg_from": "Tom McLoughlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Materialized view performance problems" }, { "msg_contents": "Tom McLoughlin <[email protected]> wrote:\n\n> Thank you very much for your help.\n> \n> It's difficult for me to run analyse explain for the query given because it\n> takes so long. However, the query below has a similar structure but has less\n> data to process.\n\nSeems okay, but it's better to analyse that for the real query. You can\ninstall the auto_explain - module and enable auto_explain.log_analyze\nand set auto_explain.log_min_duration to a proper value.\n\nYou can find later the explain analse - ouput in the log.\n\nConsider all steps mit seq-scans, but as i see all seq-scan are on small\ntables and the majority of rows are in the result, so an index can't\nhelp much.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jan 2016 18:23:07 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Materialized view performance problems" } ]
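A minimal sketch of the auto_explain setup Andreas suggests at the end of the thread; the 30s threshold is an illustrative assumption, and in practice the module is usually loaded through shared_preload_libraries or session_preload_libraries rather than per session:

    LOAD 'auto_explain';                         -- quick per-session test; needs the contrib module installed
    SET auto_explain.log_min_duration = '30s';   -- only statements slower than this get logged
    SET auto_explain.log_analyze = on;           -- include actual times and row counts (adds overhead)

With that in place, the long-running materialized-view query gets its EXPLAIN ANALYSE output written to the server log the next time it runs, instead of having to be reproduced interactively.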
[ { "msg_contents": "Hi all,\r\n\r\nI’m trying to track down why some queries on my database system are intermittently much slower than usual. I have some queries that run, on average, 2-3ms, and they run at a rate of about 10-20 queries/second. However, every 3-5 seconds, one of the queries will be 500-100ms. This is making the average query time turn out to be closer to 20ms, with a very large standard deviation.\r\n\r\nThis happens to a number of otherwise very fast queries, and I’m trying to trace the reason. I’ve turned on lock logging and checkpoint logging, and this behavior happens whether or not a checkpoint is occurring. There are no lock waits happening in the system either.\r\n\r\nHere’s my info:\r\n\r\n- PostgreSQL 9.2.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\r\n- Installed from Postgresql Yum repo\r\n- Changed Settings:\r\n\r\n\"application_name\";\"pgAdmin III - Query Tool\";\"client\"\r\n\"archive_command\";\"/usr/crsinc/bin/wal_processor.sh %f %p\";\"configuration file\"\r\n\"archive_mode\";\"on\";\"configuration file\"\r\n\"autovacuum_analyze_scale_factor\";\"0.1\";\"configuration file\"\r\n\"autovacuum_analyze_threshold\";\"500\";\"configuration file\"\r\n\"autovacuum_naptime\";\"10min\";\"configuration file\"\r\n\"autovacuum_vacuum_cost_delay\";\"50ms\";\"configuration file\"\r\n\"autovacuum_vacuum_cost_limit\";\"250\";\"configuration file\"\r\n\"autovacuum_vacuum_scale_factor\";\"0.1\";\"configuration file\"\r\n\"autovacuum_vacuum_threshold\";\"1000\";\"configuration file\"\r\n\"bgwriter_delay\";\"100ms\";\"configuration file\"\r\n\"bgwriter_lru_maxpages\";\"1000\";\"configuration file\"\r\n\"bytea_output\";\"escape\";\"session\"\r\n\"checkpoint_completion_target\";\"0.7\";\"configuration file\"\r\n\"checkpoint_segments\";\"128\";\"configuration file\"\r\n\"checkpoint_timeout\";\"30min\";\"configuration file\"\r\n\"checkpoint_warning\";\"895s\";\"configuration file\"\r\n\"client_encoding\";\"UNICODE\";\"session\"\r\n\"client_min_messages\";\"notice\";\"session\"\r\n\"cpu_tuple_cost\";\"0.001\";\"configuration file\"\r\n\"DateStyle\";\"ISO, MDY\";\"session\"\r\n\"default_text_search_config\";\"pg_catalog.english\";\"configuration file\"\r\n\"effective_cache_size\";\"94GB\";\"configuration file\"\r\n\"effective_io_concurrency\";\"100\";\"configuration file\"\r\n\"hot_standby\";\"off\";\"configuration file\"\r\n\"hot_standby_feedback\";\"off\";\"configuration file\"\r\n\"lc_messages\";\"en_US.UTF-8\";\"configuration file\"\r\n\"lc_monetary\";\"en_US.UTF-8\";\"configuration file\"\r\n\"lc_numeric\";\"en_US.UTF-8\";\"configuration file\"\r\n\"lc_time\";\"en_US.UTF-8\";\"configuration file\"\r\n\"listen_addresses\";\"*\";\"configuration file\"\r\n\"log_checkpoints\";\"on\";\"configuration file\"\r\n\"log_destination\";\"stderr\";\"configuration file\"\r\n\"log_directory\";\"pg_log\";\"configuration file\"\r\n\"log_filename\";\"postgresql-%a.log\";\"configuration file\"\r\n\"log_line_prefix\";\"< user=%u db=%d host=%h time=%t pid=%p xid=%x>\";\"configuration file\"\r\n\"log_lock_waits\";\"on\";\"configuration file\"\r\n\"log_min_duration_statement\";\"100ms\";\"configuration file\"\r\n\"log_rotation_age\";\"1d\";\"configuration file\"\r\n\"log_rotation_size\";\"0\";\"configuration file\"\r\n\"log_temp_files\";\"0\";\"configuration file\"\r\n\"log_truncate_on_rotation\";\"on\";\"configuration file\"\r\n\"logging_collector\";\"on\";\"configuration file\"\r\n\"maintenance_work_mem\";\"6047MB\";\"configuration 
file\"\r\n\"max_connections\";\"1300\";\"configuration file\"\r\n\"max_stack_depth\";\"2MB\";\"environment variable\"\r\n\"max_wal_senders\";\"4\";\"configuration file\"\r\n\"port\";\"5432\";\"command line\"\r\n\"random_page_cost\";\"1.9\";\"configuration file\"\r\n\"shared_buffers\";\"8GB\";\"configuration file\"\r\n\"ssl\";\"on\";\"configuration file\"\r\n\"ssl_renegotiation_limit\";\"0\";\"configuration file\"\r\n\"standard_conforming_strings\";\"off\";\"configuration file\"\r\n\"superuser_reserved_connections\";\"10\";\"configuration file\"\r\n\"TimeZone\";\"US/Pacific\";\"configuration file\"\r\n\"wal_keep_segments\";\"64\";\"configuration file\"\r\n\"wal_level\";\"hot_standby\";\"configuration file\"\r\n\"work_mem\";\"188MB\";\"configuration file\"\r\n\r\nOS: CentOS release 6.7 (Linux databasep1.crsinc.com 2.6.32-504.16.2.el6.x86_64 #1 SMP Wed Apr 22 06:48:29 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux). Our server has 24 cores and 128 GB of RAM.\r\n\r\nThis happens via psql, JDBC, PDO connections.\r\n\r\nThanks in advance for any help!\r\nScott\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n\n\n\n\n\nHi all,\n\n\nI’m trying to track down why some queries on my database system are intermittently much slower than usual.  I have some queries that run, on average, 2-3ms, and they run at a rate of about 10-20 queries/second.  However, every 3-5 seconds, one of the queries\r\n will be 500-100ms.  This is making the average query time turn out to be closer to 20ms, with a very large standard deviation.  \n\n\nThis happens to a number of otherwise very fast queries, and I’m trying to trace the reason.  I’ve turned on lock logging and checkpoint logging, and this behavior happens whether or not a checkpoint is occurring.  There are no lock waits happening in\r\n the system either.  
\n\n\nHere’s my info: \n\n\n- PostgreSQL 9.2.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n- Installed from Postgresql Yum repo\n- Changed Settings:\n\n\n\n\"application_name\";\"pgAdmin III - Query Tool\";\"client\"\n\"archive_command\";\"/usr/crsinc/bin/wal_processor.sh %f %p\";\"configuration file\"\n\"archive_mode\";\"on\";\"configuration file\"\n\"autovacuum_analyze_scale_factor\";\"0.1\";\"configuration file\"\n\"autovacuum_analyze_threshold\";\"500\";\"configuration file\"\n\"autovacuum_naptime\";\"10min\";\"configuration file\"\n\"autovacuum_vacuum_cost_delay\";\"50ms\";\"configuration file\"\n\"autovacuum_vacuum_cost_limit\";\"250\";\"configuration file\"\n\"autovacuum_vacuum_scale_factor\";\"0.1\";\"configuration file\"\n\"autovacuum_vacuum_threshold\";\"1000\";\"configuration file\"\n\"bgwriter_delay\";\"100ms\";\"configuration file\"\n\"bgwriter_lru_maxpages\";\"1000\";\"configuration file\"\n\"bytea_output\";\"escape\";\"session\"\n\"checkpoint_completion_target\";\"0.7\";\"configuration file\"\n\"checkpoint_segments\";\"128\";\"configuration file\"\n\"checkpoint_timeout\";\"30min\";\"configuration file\"\n\"checkpoint_warning\";\"895s\";\"configuration file\"\n\"client_encoding\";\"UNICODE\";\"session\"\n\"client_min_messages\";\"notice\";\"session\"\n\"cpu_tuple_cost\";\"0.001\";\"configuration file\"\n\"DateStyle\";\"ISO, MDY\";\"session\"\n\"default_text_search_config\";\"pg_catalog.english\";\"configuration file\"\n\"effective_cache_size\";\"94GB\";\"configuration file\"\n\"effective_io_concurrency\";\"100\";\"configuration file\"\n\"hot_standby\";\"off\";\"configuration file\"\n\"hot_standby_feedback\";\"off\";\"configuration file\"\n\"lc_messages\";\"en_US.UTF-8\";\"configuration file\"\n\"lc_monetary\";\"en_US.UTF-8\";\"configuration file\"\n\"lc_numeric\";\"en_US.UTF-8\";\"configuration file\"\n\"lc_time\";\"en_US.UTF-8\";\"configuration file\"\n\"listen_addresses\";\"*\";\"configuration file\"\n\"log_checkpoints\";\"on\";\"configuration file\"\n\"log_destination\";\"stderr\";\"configuration file\"\n\"log_directory\";\"pg_log\";\"configuration file\"\n\"log_filename\";\"postgresql-%a.log\";\"configuration file\"\n\"log_line_prefix\";\"< user=%u db=%d host=%h time=%t pid=%p xid=%x>\";\"configuration file\"\n\"log_lock_waits\";\"on\";\"configuration file\"\n\"log_min_duration_statement\";\"100ms\";\"configuration file\"\n\"log_rotation_age\";\"1d\";\"configuration file\"\n\"log_rotation_size\";\"0\";\"configuration file\"\n\"log_temp_files\";\"0\";\"configuration file\"\n\"log_truncate_on_rotation\";\"on\";\"configuration file\"\n\"logging_collector\";\"on\";\"configuration file\"\n\"maintenance_work_mem\";\"6047MB\";\"configuration file\"\n\"max_connections\";\"1300\";\"configuration file\"\n\"max_stack_depth\";\"2MB\";\"environment variable\"\n\"max_wal_senders\";\"4\";\"configuration file\"\n\"port\";\"5432\";\"command line\"\n\"random_page_cost\";\"1.9\";\"configuration file\"\n\"shared_buffers\";\"8GB\";\"configuration file\"\n\"ssl\";\"on\";\"configuration file\"\n\"ssl_renegotiation_limit\";\"0\";\"configuration file\"\n\"standard_conforming_strings\";\"off\";\"configuration file\"\n\"superuser_reserved_connections\";\"10\";\"configuration file\"\n\"TimeZone\";\"US/Pacific\";\"configuration file\"\n\"wal_keep_segments\";\"64\";\"configuration file\"\n\"wal_level\";\"hot_standby\";\"configuration file\"\n\"work_mem\";\"188MB\";\"configuration file\"\n\n\n\nOS: CentOS release 6.7 (Linux databasep1.crsinc.com 
2.6.32-504.16.2.el6.x86_64 #1 SMP Wed Apr 22 06:48:29 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux). Our server has 24 cores and 128 GB of RAM.  \n\n\nThis happens via psql, JDBC, PDO connections.  \n\n\nThanks in advance for any help!\nScott\n\n\n\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not\r\n be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the\r\n intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication\r\n in error, please notify sender immediately and destroy the original message.\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not\r\n intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.", "msg_date": "Wed, 6 Jan 2016 14:58:05 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Queries intermittently slow" }, { "msg_contents": "Scott Rankin <[email protected]> writes:\n> I’m trying to track down why some queries on my database system are intermittently much slower than usual. I have some queries that run, on average, 2-3ms, and they run at a rate of about 10-20 queries/second. However, every 3-5 seconds, one of the queries will be 500-100ms. This is making the average query time turn out to be closer to 20ms, with a very large standard deviation.\n\n> This happens to a number of otherwise very fast queries, and I’m trying to trace the reason. I’ve turned on lock logging and checkpoint logging, and this behavior happens whether or not a checkpoint is occurring. There are no lock waits happening in the system either.\n\nI doubt you've proved that --- log_lock_waits will only report on waits\nlonger than deadlock_timeout, which you don't appear to have changed from\nits default of 1 sec. If you're trying to capture events that last a few\nhundred msec, you're going to need to reduce deadlock_timeout to maybe\n100ms.\n\nIt would help to know more about what the queries are, too. 
The cause\nmight be something like GIN index pending-list cleanup but we can't tell\non the basis of this much info.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 06 Jan 2016 10:19:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "On 1/6/16, 10:19 AM, \"Tom Lane\" <[email protected]> wrote:\r\n\r\n\r\n>Scott Rankin <[email protected]> writes:\r\n>> I’m trying to track down why some queries on my database system are intermittently much slower than usual. I have some queries that run, on average, 2-3ms, and they run at a rate of about 10-20 queries/second. However, every 3-5 seconds, one of the queries will be 500-100ms. This is making the average query time turn out to be closer to 20ms, with a very large standard deviation.\r\n>\r\n>> This happens to a number of otherwise very fast queries, and I’m trying to trace the reason. I’ve turned on lock logging and checkpoint logging, and this behavior happens whether or not a checkpoint is occurring. There are no lock waits happening in the system either.\r\n>\r\n>I doubt you've proved that --- log_lock_waits will only report on waits\r\n>longer than deadlock_timeout, which you don't appear to have changed from\r\n>its default of 1 sec. If you're trying to capture events that last a few\r\n>hundred msec, you're going to need to reduce deadlock_timeout to maybe\r\n>100ms.\r\n>\r\n>It would help to know more about what the queries are, too. The cause\r\n>might be something like GIN index pending-list cleanup but we can't tell\r\n>on the basis of this much info.\r\n>\r\n>regards, tom lane\r\n\r\nHi Tom,\r\n\r\nThanks for the update. The query in question is a pretty simple one - it joins 3 tables, all of which are static - they don’t have any writes being done against them. They have very few rows, and the query plan for them indicates that they are all sequential scans. 
When doing an EXPLAIN ANALYZE, the delay is not consistently on one table, it can vary between the three tables involved in the query.\r\n\r\nHere is the query plan for a fast run:\r\n\r\n\"Nested Loop (cost=0.00..4.20 rows=1 width=185) (actual time=0.069..0.069 rows=0 loops=1)\"\r\n\" Join Filter: (be.branding_id = b.branding_id)\"\r\n\" Rows Removed by Join Filter: 1\"\r\n\" -> Nested Loop (cost=0.00..3.19 rows=1 width=189) (actual time=0.040..0.057 rows=1 loops=1)\"\r\n\" Join Filter: (s.setting_id = be.setting_id)\"\r\n\" Rows Removed by Join Filter: 41\"\r\n\" -> Seq Scan on branding_extension be (cost=0.00..1.00 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=1)\"\r\n\" -> Seq Scan on setting s (cost=0.00..2.04 rows=42 width=185) (actual time=0.004..0.018 rows=42 loops=1)\"\r\n\" -> Seq Scan on branding b (cost=0.00..1.01 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=1)\"\r\n\" Filter: ((name)::text = 'crs'::text)\"\r\n\" Rows Removed by Filter: 1\"\r\n\"Total runtime: 0.150 ms\"\r\n\r\nAnd for a slow one:\r\n\r\n\r\n\"Nested Loop (cost=0.00..4.20 rows=1 width=185) (actual time=383.862..383.862 rows=0 loops=1)\"\r\n\" Join Filter: (be.branding_id = b.branding_id)\"\r\n\" Rows Removed by Join Filter: 1\"\r\n\" -> Nested Loop (cost=0.00..3.19 rows=1 width=189) (actual time=383.815..383.849 rows=1 loops=1)\"\r\n\" Join Filter: (s.setting_id = be.setting_id)\"\r\n\" Rows Removed by Join Filter: 41\"\r\n\" -> Seq Scan on branding_extension be (cost=0.00..1.00 rows=1 width=8) (actual time=383.776..383.776 rows=1 loops=1)\"\r\n\" -> Seq Scan on setting s (cost=0.00..2.04 rows=42 width=185) (actual time=0.005..0.037 rows=42 loops=1)\"\r\n\" -> Seq Scan on branding b (cost=0.00..1.01 rows=1 width=4) (actual time=0.007..0.009 rows=1 loops=1)\"\r\n\" Filter: ((name)::text = 'crs'::text)\"\r\n\" Rows Removed by Filter: 1\"\r\n\"Total runtime: 383.920 ms\"\r\n\r\nI will look at changing the deadlock_timeout, but that might have to wait for the weekend since this is a production system.\r\n\r\n\r\nThanks,\r\nScott\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jan 2016 15:30:56 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "Scott Rankin <[email protected]> writes:\n> Thanks for the update. The query in question is a pretty simple one - it joins 3 tables, all of which are static - they don’t have any writes being done against them. They have very few rows, and the query plan for them indicates that they are all sequential scans. When doing an EXPLAIN ANALYZE, the delay is not consistently on one table, it can vary between the three tables involved in the query.\n\nNo writes at all? That's pretty odd then. All the likely explanations\ninvolve autovacuum or other forms of deferred maintenance.\n\nA possible theory is that the slow cases represent times when the desired\npage is not in cache, but you'd have to have a seriously overloaded disk\nsubsystem for a disk fetch to take hundreds of ms. Unless maybe this is\nrunning on some cloud service with totally unspecified I/O bandwidth?\n\n> I will look at changing the deadlock_timeout, but that might have to wait for the weekend since this is a production system.\n\nYou needn't restart the server for that, just edit postgresql.conf and\nSIGHUP the postmaster.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 06 Jan 2016 10:38:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "On 1/6/16, 10:38 AM, \"Tom Lane\" <[email protected]> wrote:\r\n\r\n>\r\n>A possible theory is that the slow cases represent times when the desired\r\n>page is not in cache, but you'd have to have a seriously overloaded disk\r\n>subsystem for a disk fetch to take hundreds of ms. Unless maybe this is\r\n>running on some cloud service with totally unspecified I/O bandwidth?\r\n\r\nThis intrigues me. We are running on a, shall we say, less than name-brand cloud provider at the moment (transitioning to AWS later this month). Is there a reasonably straightforward way of confirming this hypothesis? We have had many performance issues with this vendor in the past, so I wouldn’t be surprised.\r\n\r\n>\r\n>> I will look at changing the deadlock_timeout, but that might have to wait for the weekend since this is a production system.\r\n>\r\n>You needn't restart the server for that, just edit postgresql.conf and\r\n>SIGHUP the postmaster.\r\n\r\nYep, we just try to limit any database changes until off-hours unless it’s an emergency.\r\n\r\n>\r\n>regards, tom lane\r\n\r\nThanks,\r\nScott\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. 
It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jan 2016 15:56:58 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "Scott Rankin <[email protected]> writes:\n> On 1/6/16, 10:38 AM, \"Tom Lane\" <[email protected]> wrote:\n>> A possible theory is that the slow cases represent times when the desired\n>> page is not in cache, but you'd have to have a seriously overloaded disk\n>> subsystem for a disk fetch to take hundreds of ms. Unless maybe this is\n>> running on some cloud service with totally unspecified I/O bandwidth?\n\n> This intrigues me. We are running on a, shall we say, less than name-brand cloud provider at the moment (transitioning to AWS later this month). Is there a reasonably straightforward way of confirming this hypothesis? We have had many performance issues with this vendor in the past, so I wouldn’t be surprised.\n\nHm, well, given that you are able to capture instances of the behavior\nin EXPLAIN ANALYZE, I'd suggest trying EXPLAIN (ANALYZE,BUFFERS).\nThat will tell you the number of pages it found in shared buffers vs.\nhaving to read them. Now, a \"read\" just means we had to ask the kernel,\nnot necessarily that the page came all the way from disk; if it's in\nthe kernel's disk cache that won't be very much slower than a shared-\nbuffers hit. Still, if the slowdowns are reliably seen only when a read\noccurred, I'd say that's good evidence.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 06 Jan 2016 11:14:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": ">\r\n>Hm, well, given that you are able to capture instances of the behavior\r\n>in EXPLAIN ANALYZE, I'd suggest trying EXPLAIN (ANALYZE,BUFFERS).\r\n>That will tell you the number of pages it found in shared buffers vs.\r\n>having to read them. 
Now, a \"read\" just means we had to ask the kernel,\r\n>not necessarily that the page came all the way from disk; if it's in\r\n>the kernel's disk cache that won't be very much slower than a shared-\r\n>buffers hit. Still, if the slowdowns are reliably seen only when a read\r\n>occurred, I'd say that's good evidence.\r\n\r\nWell, that was a good thought - but:\r\n\r\nNested Loop (cost=0.00..4.20 rows=1 width=185) (actual time=567.113..567.113 rows=0 loops=1)\r\n Join Filter: (be.branding_id = b.branding_id)\r\n Rows Removed by Join Filter: 1\r\n Buffers: shared hit=4\r\n -> Nested Loop (cost=0.00..3.19 rows=1 width=189) (actual time=567.081..567.092 rows=1 loops=1)\r\n Join Filter: (s.setting_id = be.setting_id)\r\n Rows Removed by Join Filter: 41\r\n Buffers: shared hit=3\r\n -> Seq Scan on branding_extension be (cost=0.00..1.00 rows=1 width=8) (actual time=0.011..0.012 rows=1 loops=1)\r\n Buffers: shared hit=1\r\n -> Seq Scan on setting s (cost=0.00..2.04 rows=42 width=185) (actual time=567.041..567.049 rows=42 loops=1)\r\n Buffers: shared hit=2\r\n -> Seq Scan on branding b (cost=0.00..1.01 rows=1 width=4) (actual time=0.007..0.009 rows=1 loops=1)\r\n Filter: ((name)::text = 'crs'::text)\r\n Rows Removed by Filter: 1\r\n Buffers: shared hit=1\r\nTotal runtime: 567.185 ms\r\n\r\n\r\nI ran it several times and there were no reads, just shared hits, even on slow executions.\r\n\r\nIt looks like it didn’t go out to the kernel for the data. I guess next steps would be to adjust the deadlock_timeout? If I’m reading the docs correctly, this doesn’t change how Postgres determines a deadlock, it just makes it check for them sooner?\r\n\r\nThanks,\r\nScott\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jan 2016 16:28:16 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "I wrote:\n> A possible theory is that the slow cases represent times when the desired\n> page is not in cache, but you'd have to have a seriously overloaded disk\n> subsystem for a disk fetch to take hundreds of ms. Unless maybe this is\n> running on some cloud service with totally unspecified I/O bandwidth?\n\nBTW, a glaring flaw in that theory is that if this query is touching only\nabout four pages worth of data, and you are running it ten times a second,\nhow in heck would that data ever fall out of shared buffer cache at all?\nYour working set across your whole DB would have to be enormously more\nthan your 8GB shared_buffers setting for that to possibly happen.\n\nSo what seems more likely after more thought is that the pages are staying\nin our shared buffer arena just fine, but the kernel is randomly choosing\nto swap out parts of the arena, and the delays correspond to swap-in\nwaits. (There would still have to be a mighty crummy disk subsystem\nunderlying things for swap-in to take so long, but this is a more\nplausible theory for exactly what's invoking the disk read.)\n\nPostgres can't directly see when this is happening, but you could try\nwatching \"iostat 1\" and noticing whether swap-in events seem to be\ncorrelated with the slow queries.\n\nIf this is the problem, then the answer is to reduce the pressure on\nsystem memory so that swap-outs are less likely. You might find that\na smaller shared_buffer arena is a good thing (so that all of it stays\n\"hot\" and unswappable from the kernel's perspective). Or reduce the\nnumber of active backend processes.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 06 Jan 2016 11:32:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": ">\r\n>So what seems more likely after more thought is that the pages are staying\r\n>in our shared buffer arena just fine, but the kernel is randomly choosing\r\n>to swap out parts of the arena, and the delays correspond to swap-in\r\n>waits. (There would still have to be a mighty crummy disk subsystem\r\n>underlying things for swap-in to take so long, but this is a more\r\n>plausible theory for exactly what's invoking the disk read.)\r\n>\r\n>Postgres can't directly see when this is happening, but you could try\r\n>watching \"iostat 1\" and noticing whether swap-in events seem to be\r\n>correlated with the slow queries.\r\n>\r\n>If this is the problem, then the answer is to reduce the pressure on\r\n>system memory so that swap-outs are less likely. 
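A small sketch of the check described above, assuming the stock procps and sysstat tools are installed on the host. iostat reports per-device activity; vmstat's si and so columns report pages swapped in and out per second, which is the more direct signal to line up against the timestamps of the slow queries. Run each command in its own terminal while the workload is active.

vmstat 1        # watch the si/so columns (pages swapped in/out per second)
iostat -x 1     # per-device utilization and wait times, as suggested above
free -m         # is any swap in use at all?

If the slow executions coincide with nonzero si values, the buffer arena really is being paged out and reducing memory pressure is the fix; if si stays at zero, the swap theory can be dropped.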
You might find that\r\n>a smaller shared_buffer arena is a good thing (so that all of it stays\r\n>\"hot\" and unswappable from the kernel's perspective). Or reduce the\r\n>number of active backend processes.\r\n>\r\n>regards, tom lane\r\n\r\nOne change to my setup that my server guy pointed out - this database server is *not* a cloud server, but in fact a physical Dell box with SSDs. So unless something is majorly wrong with our disk setup, disk IO should not be a factor.\r\n\r\nI also checked and the box is not swapping at all.\r\n\r\nI guess we’re back to lock contention?\r\n\r\nThanks,\r\nScott\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jan 2016 18:01:05 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "On 1/6/16 12:01 PM, Scott Rankin wrote:\n> I guess we’re back to lock contention?\n\nIs there by chance an anti-wraparound vacuum happening on that table?\n\nActually, for that matter... if autovacuum is hitting that table it's \nlocking could be causing problems, and it won't release it's locks until \nthe deadlock detector kicks in.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jan 2016 22:14:56 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "On 1/6/16, 11:14 PM, \"Jim Nasby\" <[email protected]> wrote:\r\n\r\n\r\n>On 1/6/16 12:01 PM, Scott Rankin wrote:\r\n>> I guess we’re back to lock contention?\r\n>\r\n>Is there by chance an anti-wraparound vacuum happening on that table?\r\n>\r\n>Actually, for that matter... 
if autovacuum is hitting that table it's\r\n>locking could be causing problems, and it won't release it's locks until\r\n>the deadlock detector kicks in.\r\n>--\r\n>Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\r\n>Experts in Analytics, Data Architecture and PostgreSQL\r\n>Data in Trouble? Get it in Treble! http://BlueTreble.com\r\n\r\nHi Jim, Tom,\r\n\r\nThanks both of you for your help, by the way.\r\n\r\nI lowered the deadlock_timeout this morning to 100ms and ran for a little while, but I did not see any lock waits even when the statements ran slowly.\r\n\r\nJim, to your point, I enabled auto vacuum logging (log_autovacuum_min_duration=0) but no vacuums are taking place even when I see slow queries.\r\n\r\nAny other ideas?\r\n\r\nThanks,\r\nScott\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jan 2016 15:03:35 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "Scott Rankin <[email protected]> writes:\n> Any other ideas?\n\nIf there's no lock waits, and the tables are static, it's difficult to\narrive at any other conclusion but that the process is simply losing the\nCPU for long intervals. What else is going on on this box? Have you\nmade any attempt to correlate the slow queries with spikes in load\naverage, swap activity, etc?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 07 Jan 2016 10:19:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "\r\nOn 1/7/16, 10:19 AM, \"Tom Lane\" <[email protected]> wrote:\r\n\r\n>Scott Rankin <[email protected]> writes:\r\n>> Any other ideas?\r\n>\r\n>If there's no lock waits, and the tables are static, it's difficult to\r\n>arrive at any other conclusion but that the process is simply losing the\r\n>CPU for long intervals. 
What else is going on on this box? Have you\r\n>made any attempt to correlate the slow queries with spikes in load\r\n>average, swap activity, etc?\r\n>\r\n>regards, tom lane\r\n\r\nThis box is a dedicated postgresql box, the load is running around 2-3 (on a 24-core box). No swapping.\r\n\r\nI’ll keep digging!\r\n\r\nThanks,\r\nScott\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jan 2016 15:43:46 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "Winner! Both of those settings were set to always, and as soon as I turned them off, the query times smoothed right out.\r\n\r\nThank you all so much for your help!\r\n\r\n- Scott\r\n\r\nFrom: Bob Lunney <[email protected]<mailto:[email protected]>>\r\nDate: Thursday, January 7, 2016 at 12:13 PM\r\nTo: Scott Rankin <[email protected]<mailto:[email protected]>>\r\nCc: \"[email protected]<mailto:[email protected]>\" <[email protected]<mailto:[email protected]>>\r\nSubject: Re: [PERFORM] Queries intermittently slow\r\n\r\nScott,\r\n\r\nIs transparent huge pages turned off along with defrag? Check\r\n\r\n\r\n/sys/kernel/mm/transparent_hugepage/enabled\r\n\r\nand\r\n\r\n/sys/kernel/mm/transparent_hugepage/defrag\r\n\r\nBoth should be configured with \"never\".\r\n\r\nBob Lunney\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. 
If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n\n\n\n\n\n\nWinner!  Both of those settings were set to always, and as soon as I turned them off, the query times smoothed right out. \n\n\nThank you all so much for your help! \n\n\n- Scott\n\n\n\n\n\n\n\n\nFrom: Bob Lunney <[email protected]>\nDate: Thursday, January 7, 2016 at 12:13 PM\nTo: Scott Rankin <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSubject: Re: [PERFORM] Queries intermittently slow\n\n\n\n\nScott, \r\n\n\nIs transparent huge pages turned off along with defrag?  Check \n\n\n\n/sys/kernel/mm/transparent_hugepage/enabled\nand\n/sys/kernel/mm/transparent_hugepage/defragBoth should be configured with \"never\".Bob Lunney\n\n\n\n\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not\r\n be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the\r\n intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication\r\n in error, please notify sender immediately and destroy the original message.\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not\r\n intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.", "msg_date": "Thu, 7 Jan 2016 17:51:32 +0000", "msg_from": "Scott Rankin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "Is this generally true for all PostgreSQL systems on Linux, or only for\nspecific use cases?\n\nOn Thu, Jan 7, 2016 at 12:51 PM, Scott Rankin <[email protected]> wrote:\n\n> Winner! Both of those settings were set to always, and as soon as I\n> turned them off, the query times smoothed right out.\n>\n> Thank you all so much for your help!\n>\n> - Scott\n>\n> From: Bob Lunney <[email protected]>\n> Date: Thursday, January 7, 2016 at 12:13 PM\n> To: Scott Rankin <[email protected]>\n> Cc: \"[email protected]\" <[email protected]>\n> Subject: Re: [PERFORM] Queries intermittently slow\n>\n> Scott,\n>\n> Is transparent huge pages turned off along with defrag? 
Check\n>\n> /sys/kernel/mm/transparent_hugepage/enabled\n>\n> and\n>\n> /sys/kernel/mm/transparent_hugepage/defrag\n>\n> Both should be configured with \"never\".\n>\n> Bob Lunney\n>\n> This email message contains information that Motus, LLC considers\n> confidential and/or proprietary, or may later designate as confidential and\n> proprietary. It is intended only for use of the individual or entity named\n> above and should not be forwarded to any other persons or entities without\n> the express consent of Motus, LLC, nor should it be used for any purpose\n> other than in the course of any potential or actual business relationship\n> with Motus, LLC. If the reader of this message is not the intended\n> recipient, or the employee or agent responsible to deliver it to the\n> intended recipient, you are hereby notified that any dissemination,\n> distribution, or copying of this communication is strictly prohibited. If\n> you have received this communication in error, please notify sender\n> immediately and destroy the original message.\n>\n> Internal Revenue Service regulations require that certain types of written\n> advice include a disclaimer. To the extent the preceding message contains\n> advice relating to a Federal tax issue, unless expressly stated otherwise\n> the advice is not intended or written to be used, and it cannot be used by\n> the recipient or any other taxpayer, for the purpose of avoiding Federal\n> tax penalties, and was not written to support the promotion or marketing of\n> any transaction or matter discussed herein.\n>\n\nIs this generally true for all PostgreSQL systems on Linux, or only for specific use cases?On Thu, Jan 7, 2016 at 12:51 PM, Scott Rankin <[email protected]> wrote:\n\n\nWinner!  Both of those settings were set to always, and as soon as I turned them off, the query times smoothed right out. \n\n\nThank you all so much for your help! \n\n\n- Scott\n\n\n\n\n\n\n\n\nFrom: Bob Lunney <[email protected]>\nDate: Thursday, January 7, 2016 at 12:13 PM\nTo: Scott Rankin <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSubject: Re: [PERFORM] Queries intermittently slow\n\n\n\n\nScott, \n\n\nIs transparent huge pages turned off along with defrag?  Check \n\n\n\n/sys/kernel/mm/transparent_hugepage/enabled\nand\n/sys/kernel/mm/transparent_hugepage/defragBoth should be configured with \"never\".Bob Lunney\n\n\n\n\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not\n be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the\n intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication\n in error, please notify sender immediately and destroy the original message.\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not\n intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.", "msg_date": "Thu, 7 Jan 2016 13:07:24 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "Rick Otten <[email protected]> writes:\n> On Thu, Jan 7, 2016 at 12:51 PM, Scott Rankin <[email protected]> wrote:\n>> Winner! Both of those settings were set to always, and as soon as I\n>> turned them off, the query times smoothed right out.\n\n> Is this generally true for all PostgreSQL systems on Linux, or only for\n> specific use cases?\n\nIt's fairly well established that the implementation of transparent\nhuge pages in Linux kernels from the 2.6-or-so era sucks, and you're\nbest off turning it off if you care about consistency of performance.\nI am not sure whether modern kernels have improved this area.\n\nI think you can get an idea of how big a problem you have by noting\nthe accumulated runtime of the khugepaged daemon.\n\n(BTW, it would likely be a good thing to collect some current wisdom\nin this area and add it to section 17.4 of our docs.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 07 Jan 2016 13:34:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "On 2016-01-07 13:34:51 -0500, Tom Lane wrote:\n> It's fairly well established that the implementation of transparent\n> huge pages in Linux kernels from the 2.6-or-so era sucks, and you're\n> best off turning it off if you care about consistency of performance.\n\nI think the feature wasn't introduced in original 2.6 kernels (3.2 or\nso?), but red hat had backported it to their 2.6.32 kernel.\n\n\n> I am not sure whether modern kernels have improved this area.\n\nI think the problem has largely been solved around 3.16. Around 4.1 I\ncould still reproduce problems, but the regressions were only in the\nsub 10% range in my test workload.\n\n\nAndres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jan 2016 20:47:59 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" }, { "msg_contents": "On 1/7/16 1:47 PM, Andres Freund wrote:\n> On 2016-01-07 13:34:51 -0500, Tom Lane wrote:\n>> It's fairly well established that the implementation of transparent\n>> huge pages in Linux kernels from the 2.6-or-so era sucks, and you're\n>> best off turning it off if you care about consistency of performance.\n>\n> I think the feature wasn't introduced in original 2.6 kernels (3.2 or\n> so?), but red hat had backported it to their 2.6.32 kernel.\n>\n>\n>> I am not sure whether modern kernels have improved this area.\n>\n> I think the problem has largely been solved around 3.16. 
Around 4.1 I\n> could still reproduce problems, but the regressions were only in the\n> sub 10% range in my test workload.\n\nBTW, looks like Scott blogged about this along with some nice graphs: \nhttps://sdwr98.wordpress.com/2016/01/07/transparent-huge-pages-or-why-dbas-are-great/\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jan 2016 15:04:33 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries intermittently slow" } ]
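Since turning off transparent huge pages is what resolved the thread above, a minimal sketch of the check and the change follows. It assumes a kernel that exposes the knobs under /sys/kernel/mm/transparent_hugepage (RHEL 6 era kernels use /sys/kernel/mm/redhat_transparent_hugepage instead) and root access via sudo. The echo commands do not survive a reboot; making the setting permanent (rc.local, a tuned profile, or the transparent_hugepage=never boot parameter) is distribution-specific.

# the bracketed value is the active one, e.g. "[always] madvise never"
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# disable THP and its background defragmentation until the next reboot
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

# Tom's suggestion: accumulated CPU time of the khugepaged compaction daemon
ps -C khugepaged -o pid,etime,time,comm

Even on the newer kernels Andres mentions, the ps check is a cheap way to see whether khugepaged is doing enough work to matter before deciding to disable THP.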
[ { "msg_contents": "Hi,\n\nI ask your help to solve a slow query which is taking more than 14 \nseconds to be executed.\nMaybe I am asking too much both from you and specially from postgresql, \nas it is really huge, envolving 16 tables.\n\nExplain:\nhttp://explain.depesz.com/s/XII9\n\nSchema:\nhttp://adj.com.br/erp/data_schema/\n\nVersion:\nPostgreSQL 9.2.14 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) \n4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n\nOS: Centos 7.1\n*Linux centos01.insoliti.com.br 3.10.0-327.3.1.el7.x86_64 #1 SMP Wed Dec \n9 14:09:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux\n*\n\n * contains large objects: no\n * has a large proportion of NULLs in several columns: maybe\n * receives a large number of UPDATEs or DELETEs regularly: no\n * is growing rapidly: no\n * has many indexes on it: maybe (please see schema)\n * uses triggers that may be executing database functions, or is\n calling functions directly: in some cases\n\n\n * *History:*the system is still being developed.\n * *Hardware*: this is the development environment, a Dell T110-II\n server, with 8GB of ram and cpu as follows\n\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 58\nmodel name : Intel(R) Pentium(R) CPU G2120 @ 3.10GHz\nstepping : 9\nmicrocode : 0x1b\ncpu MHz : 1663.101\ncache size : 3072 KB\nphysical id : 0\nsiblings : 2\ncore id : 0\ncpu cores : 2\napicid : 0\ninitial apicid : 0\nfpu : yes\nfpu_exception : yes\ncpuid level : 13\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge \nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe \nsyscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl \nxtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor \nds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt \ntsc_deadline_timer xsave lahf_lm arat epb pln pts dtherm tpr_shadow vnmi \nflexpriority ept vpid fsgsbase smep erms xsaveopt\nbogomips : 6185.92\nclflush size : 64\ncache_alignment : 64\naddress sizes : 36 bits physical, 48 bits virtual\npower management:\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 58\nmodel name : Intel(R) Pentium(R) CPU G2120 @ 3.10GHz\nstepping : 9\nmicrocode : 0x1b\ncpu MHz : 1647.722\ncache size : 3072 KB\nphysical id : 0\nsiblings : 2\ncore id : 1\ncpu cores : 2\napicid : 2\ninitial apicid : 2\nfpu : yes\nfpu_exception : yes\ncpuid level : 13\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge \nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe \nsyscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl \nxtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor \nds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt \ntsc_deadline_timer xsave lahf_lm arat epb pln pts dtherm tpr_shadow vnmi \nflexpriority ept vpid fsgsbase smep erms xsaveopt\nbogomips : 6185.92\nclflush size : 64\ncache_alignment : 64\naddress sizes : 36 bits physical, 48 bits virtual\npower management:\n\nConfiguration:\n name | current_setting | \nsource\n---------------------------------+-----------------------------------+----------------------\n application_name | psql | \nclient\n authentication_timeout | 1min | \nconfiguration file\n autovacuum | on | \nconfiguration file\n autovacuum_analyze_scale_factor | 0.05 | \nconfiguration file\n autovacuum_analyze_threshold | 10 | \nconfiguration file\n autovacuum_freeze_max_age | 200000000 | \nconfiguration file\n autovacuum_max_workers | 6 | \nconfiguration file\n 
autovacuum_naptime | 15s | \nconfiguration file\n autovacuum_vacuum_cost_delay | 10ms | \nconfiguration file\n autovacuum_vacuum_cost_limit | 1000 | \nconfiguration file\n autovacuum_vacuum_scale_factor | 0.1 | \nconfiguration file\n autovacuum_vacuum_threshold | 25 | \nconfiguration file\n bytea_output | hex | \nconfiguration file\n checkpoint_completion_target | 0.9 | \nconfiguration file\n checkpoint_segments | 32 | \nconfiguration file\n checkpoint_timeout | 10min | \nconfiguration file\n client_encoding | UTF8 | \nclient\n client_min_messages | log | \nconfiguration file\n cpu_index_tuple_cost | 0.005 | \nconfiguration file\n cpu_operator_cost | 0.0025 | \nconfiguration file\n cpu_tuple_cost | 0.01 | \nconfiguration file\n DateStyle | SQL, DMY | \nconfiguration file\n default_text_search_config | pg_catalog.english | \nconfiguration file\n effective_cache_size | 5632MB | \nconfiguration file\n enable_bitmapscan | on | \nconfiguration file\n enable_hashagg | on | \nconfiguration file\n enable_hashjoin | on | \nconfiguration file\n enable_indexonlyscan | on | \nconfiguration file\n enable_indexscan | on | \nconfiguration file\n enable_material | on | \nconfiguration file\n enable_mergejoin | on | \nconfiguration file\n enable_nestloop | on | \nconfiguration file\n enable_seqscan | on | \nconfiguration file\n enable_sort | on | \nconfiguration file\n enable_tidscan | on | \nconfiguration file\n lc_messages | pt_BR.UTF-8 | \nconfiguration file\n lc_monetary | pt_BR.UTF-8 | \nconfiguration file\n lc_numeric | pt_BR.UTF-8 | \nconfiguration file\n lc_time | pt_BR.UTF-8 | \nconfiguration file\n listen_addresses | 127.0.0.1, 192.168.1.199 | \nconfiguration file\n log_autovacuum_min_duration | 0 | \nconfiguration file\n log_connections | on | \nconfiguration file\n log_destination | stderr | \nconfiguration file\n log_directory | pg_log | \nconfiguration file\n log_disconnections | on | \nconfiguration file\n log_duration | on | \nconfiguration file\n log_filename | postgresql-%a.log | \nconfiguration file\n log_line_prefix | %t - (%h - %u) --> | \nconfiguration file\n log_min_duration_statement | -1 | \nconfiguration file\n log_min_error_statement | info | \nconfiguration file\n log_min_messages | info | \nconfiguration file\n log_rotation_age | 1d | \nconfiguration file\n log_rotation_size | 0 | \nconfiguration file\n log_statement | all | \nconfiguration file\n log_timezone | Brazil/East | \nconfiguration file\n log_truncate_on_rotation | on | \nconfiguration file\n logging_collector | on | \nconfiguration file\n maintenance_work_mem | 1GB | \nconfiguration file\n max_connections | 100 | \nconfiguration file\n max_stack_depth | 2MB | \nenvironment variable\n password_encryption | on | \nconfiguration file\n port | 5434 | \ncommand line\n random_page_cost | 2 | \nconfiguration file\n seq_page_cost | 1 | \nconfiguration file\n shared_buffers | 2GB | \nconfiguration file\n shared_preload_libraries | plugin_debugger | \nconfiguration file\n ssl | on | \nconfiguration file\n ssl_ca_file | /home/postgres/ssl/ca-bundle.crt | \nconfiguration file\n ssl_cert_file | /home/postgres/ssl/localhost.crt | \nconfiguration file\n ssl_ciphers | ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH | \nconfiguration file\n ssl_key_file | /home/postgres/ssl/localhost.key | \nconfiguration file\n ssl_renegotiation_limit | 512MB | \nconfiguration file\n synchronous_commit | off | \nconfiguration file\n syslog_facility | local0 | \nconfiguration file\n syslog_ident | postgres | \nconfiguration file\n TimeZone | Brazil/East | 
\nconfiguration file\n wal_buffers | 16MB | \nconfiguration file\n work_mem | 50MB | \nconfiguration file\n\n\nThank you very much.\n\nAtt.,\nAlmir de Oliveira Duarte Junior\n\n\n\n\n\n\n\nHi,\n\n I ask your help to solve a slow query which is taking more than 14\n seconds to be executed.\n Maybe I am asking too much both from you and specially from\n postgresql, as it is really huge, envolving 16 tables.\n\n Explain:\nhttp://explain.depesz.com/s/XII9\n\n Schema:\nhttp://adj.com.br/erp/data_schema/\n\n Version:\n PostgreSQL 9.2.14 on x86_64-redhat-linux-gnu, compiled by gcc\n (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n\n OS: Centos 7.1\nLinux centos01.insoliti.com.br 3.10.0-327.3.1.el7.x86_64 #1 SMP\n Wed Dec 9 14:09:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux\n\n\n\ncontains large objects: no\n\nhas a large proportion of NULLs\n in several columns: maybe\n\nreceives a large number of\n UPDATEs or DELETEs regularly: no\n\nis growing rapidly: no\n\nhas many indexes on it: maybe\n (please see schema)\n\nuses triggers that may be\n executing database functions, or is calling functions directly:\n in some cases\n\n\n\nHistory: the system is still\n being developed.\n\nHardware: this is the\n development environment, a Dell T110-II server, with 8GB of ram\n and cpu as follows\n\nprocessor       : 0\n vendor_id       : GenuineIntel\n cpu family      : 6\n model           : 58\n model name      : Intel(R) Pentium(R) CPU G2120 @ 3.10GHz\n stepping        : 9\n microcode       : 0x1b\n cpu MHz         : 1663.101\n cache size      : 3072 KB\n physical id     : 0\n siblings        : 2\n core id         : 0\n cpu cores       : 2\n apicid          : 0\n initial apicid  : 0\n fpu             : yes\n fpu_exception   : yes\n cpuid level     : 13\n wp              : yes\n flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr\n pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm\n pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts\n rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni\n pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm\n pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave lahf_lm arat\n epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase\n smep erms xsaveopt\n bogomips        : 6185.92\n clflush size    : 64\n cache_alignment : 64\n address sizes   : 36 bits physical, 48 bits virtual\n power management:\n\n processor       : 1\n vendor_id       : GenuineIntel\n cpu family      : 6\n model           : 58\n model name      : Intel(R) Pentium(R) CPU G2120 @ 3.10GHz\n stepping        : 9\n microcode       : 0x1b\n cpu MHz         : 1647.722\n cache size      : 3072 KB\n physical id     : 0\n siblings        : 2\n core id         : 1\n cpu cores       : 2\n apicid          : 2\n initial apicid  : 2\n fpu             : yes\n fpu_exception   : yes\n cpuid level     : 13\n wp              : yes\n flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr\n pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm\n pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts\n rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni\n pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm\n pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave lahf_lm arat\n epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase\n smep erms xsaveopt\n bogomips        : 6185.92\n clflush size    : 64\n cache_alignment : 64\n address sizes   : 36 bits physical, 48 bits virtual\n power 
management:\n\nConfiguration:\n              name               |         \n current_setting          |        source        \n---------------------------------+-----------------------------------+----------------------\n  application_name                |\n psql                              | client\n  authentication_timeout          |\n 1min                              | configuration file\n  autovacuum                      |\n on                                | configuration file\n  autovacuum_analyze_scale_factor |\n 0.05                              | configuration file\n  autovacuum_analyze_threshold    |\n 10                                | configuration file\n  autovacuum_freeze_max_age       |\n 200000000                         | configuration file\n  autovacuum_max_workers          |\n 6                                 | configuration file\n  autovacuum_naptime              |\n 15s                               | configuration file\n  autovacuum_vacuum_cost_delay    |\n 10ms                              | configuration file\n  autovacuum_vacuum_cost_limit    |\n 1000                              | configuration file\n  autovacuum_vacuum_scale_factor  |\n 0.1                               | configuration file\n  autovacuum_vacuum_threshold     |\n 25                                | configuration file\n  bytea_output                    |\n hex                               | configuration file\n  checkpoint_completion_target    |\n 0.9                               | configuration file\n  checkpoint_segments             |\n 32                                | configuration file\n  checkpoint_timeout              |\n 10min                             | configuration file\n  client_encoding                 |\n UTF8                              | client\n  client_min_messages             |\n log                               | configuration file\n  cpu_index_tuple_cost            |\n 0.005                             | configuration file\n  cpu_operator_cost               |\n 0.0025                            | configuration file\n  cpu_tuple_cost                  |\n 0.01                              | configuration file\n  DateStyle                       | SQL,\n DMY                          | configuration file\n  default_text_search_config      |\n pg_catalog.english                | configuration file\n  effective_cache_size            |\n 5632MB                            | configuration file\n  enable_bitmapscan               |\n on                                | configuration file\n  enable_hashagg                  |\n on                                | configuration file\n  enable_hashjoin                 |\n on                                | configuration file\n  enable_indexonlyscan            |\n on                                | configuration file\n  enable_indexscan                |\n on                                | configuration file\n  enable_material                 |\n on                                | configuration file\n  enable_mergejoin                |\n on                                | configuration file\n  enable_nestloop                 |\n on                                | configuration file\n  enable_seqscan                  |\n on                                | configuration file\n  enable_sort                     |\n on                                | configuration file\n  enable_tidscan                  |\n on                                | configuration file\n  lc_messages                     |\n pt_BR.UTF-8                   
    | configuration file\n  lc_monetary                     |\n pt_BR.UTF-8                       | configuration file\n  lc_numeric                      |\n pt_BR.UTF-8                       | configuration file\n  lc_time                         |\n pt_BR.UTF-8                       | configuration file\n  listen_addresses                | 127.0.0.1,\n 192.168.1.199          | configuration file\n  log_autovacuum_min_duration     |\n 0                                 | configuration file\n  log_connections                 |\n on                                | configuration file\n  log_destination                 |\n stderr                            | configuration file\n  log_directory                   |\n pg_log                            | configuration file\n  log_disconnections              |\n on                                | configuration file\n  log_duration                    |\n on                                | configuration file\n  log_filename                    |\n postgresql-%a.log                 | configuration file\n  log_line_prefix                 | %t - (%h - %u)\n -->                | configuration file\n  log_min_duration_statement      |\n -1                                | configuration file\n  log_min_error_statement         |\n info                              | configuration file\n  log_min_messages                |\n info                              | configuration file\n  log_rotation_age                |\n 1d                                | configuration file\n  log_rotation_size               |\n 0                                 | configuration file\n  log_statement                   |\n all                               | configuration file\n  log_timezone                    |\n Brazil/East                       | configuration file\n  log_truncate_on_rotation        |\n on                                | configuration file\n  logging_collector               |\n on                                | configuration file\n  maintenance_work_mem            |\n 1GB                               | configuration file\n  max_connections                 |\n 100                               | configuration file\n  max_stack_depth                 |\n 2MB                               | environment variable\n  password_encryption             |\n on                                | configuration file\n  port                            |\n 5434                              | command line\n  random_page_cost                |\n 2                                 | configuration file\n  seq_page_cost                   |\n 1                                 | configuration file\n  shared_buffers                  |\n 2GB                               | configuration file\n  shared_preload_libraries        |\n plugin_debugger                   | configuration file\n  ssl                             |\n on                                | configuration file\n  ssl_ca_file                     |\n /home/postgres/ssl/ca-bundle.crt  | configuration file\n  ssl_cert_file                   |\n /home/postgres/ssl/localhost.crt  | configuration file\n  ssl_ciphers                     |\n ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH | configuration file\n  ssl_key_file                    |\n /home/postgres/ssl/localhost.key  | configuration file\n  ssl_renegotiation_limit         |\n 512MB                             | configuration file\n  synchronous_commit              |\n off                               | configuration file\n  syslog_facility                 |\n local0           
                 | configuration file\n  syslog_ident                    |\n postgres                          | configuration file\n  TimeZone                        |\n Brazil/East                       | configuration file\n  wal_buffers                     |\n 16MB                              | configuration file\n  work_mem                        |\n 50MB                              | configuration file\n\n\nThank you very much.\n\n Att.,\nAlmir de Oliveira Duarte Junior", "msg_date": "Thu, 7 Jan 2016 02:17:21 -0200", "msg_from": "Almir de Oliveira Duarte Junior <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query help" }, { "msg_contents": "Hi, Almir.\n\nFor instance, number 4:\n\n===\n4. 3,888.460 9,649.531 ↓ 70.9 7,382,985 1\n→\n\nHash Left Join (cost=46,368.01..71,725.05 rows=104,205 width=2,356) (actual\ntime=1,013.778..9,649.531 *rows=7,382,985* loops=1)\n\n Hash Cond: (e7.ser_recall_id = e11.ser_recall_item_ser_recall_id)\n\n\n===\n\nTake care,\n\n--\nRafael Bernard Rodrigues Araújo\nabout.me/rafaelbernard\n\nOn Thu, Jan 7, 2016 at 12:40 PM, Almir de Oliveira Duarte Junior <\[email protected]> wrote:\n\n> Hi Rafael,\n>\n> Thank you very much.\n> It is strange, I don't have any table with more than 50,000 rows...\n> Anyway, I will try that...\n>\n>\n>\n> On 01/07/2016 12:28 PM, Rafael Bernard Rodrigues Araujo wrote:\n>\n> Hi, Almir.\n>\n> I would at first try to decrease the number of rows from some joined\n> tables at the join level instead of the where level, specially subqueries.\n> I could see that you have some huge tables with more than 1,000,000.\n>\n> Take care,\n>\n> --\n> Rafael Bernard Rodrigues Araújo\n> about.me/rafaelbernard\n>\n> On Thu, Jan 7, 2016 at 2:17 AM, Almir de Oliveira Duarte Junior <\n> <[email protected]>[email protected]> wrote:\n>\n>> Hi,\n>>\n>> I ask your help to solve a slow query which is taking more than 14\n>> seconds to be executed.\n>> Maybe I am asking too much both from you and specially from postgresql,\n>> as it is really huge, envolving 16 tables.\n>>\n>> Explain:\n>> http://explain.depesz.com/s/XII9\n>>\n>> Schema:\n>> http://adj.com.br/erp/data_schema/\n>>\n>> Version:\n>> PostgreSQL 9.2.14 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.3\n>> 20140911 (Red Hat 4.8.3-9), 64-bit\n>>\n>> OS: Centos 7.1\n>>\n>> *Linux centos01.insoliti.com.br <http://centos01.insoliti.com.br>\n>> 3.10.0-327.3.1.el7.x86_64 #1 SMP Wed Dec 9 14:09:15 UTC 2015 x86_64 x86_64\n>> x86_64 GNU/Linux *\n>>\n>> - contains large objects: no\n>> - has a large proportion of NULLs in several columns: maybe\n>> - receives a large number of UPDATEs or DELETEs regularly: no\n>> - is growing rapidly: no\n>> - has many indexes on it: maybe (please see schema)\n>> - uses triggers that may be executing database functions, or is\n>> calling functions directly: in some cases\n>>\n>>\n>>\n>> - *History:* the system is still being developed.\n>> - *Hardware*: this is the development environment, a Dell T110-II\n>> server, with 8GB of ram and cpu as follows\n>>\n>> processor : 0\n>> vendor_id : GenuineIntel\n>> cpu family : 6\n>> model : 58\n>> model name : Intel(R) Pentium(R) CPU G2120 @ 3.10GHz\n>> stepping : 9\n>> microcode : 0x1b\n>> cpu MHz : 1663.101\n>> cache size : 3072 KB\n>> physical id : 0\n>> siblings : 2\n>> core id : 0\n>> cpu cores : 2\n>> apicid : 0\n>> initial apicid : 0\n>> fpu : yes\n>> fpu_exception : yes\n>> cpuid level : 13\n>> wp : yes\n>> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\n>> mca cmov pat pse36 clflush dts 
acpi mmx fxsr sse sse2 ss ht tm pbe syscall\n>> nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\n>> nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est\n>> tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave\n>> lahf_lm arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid\n>> fsgsbase smep erms xsaveopt\n>> bogomips : 6185.92\n>> clflush size : 64\n>> cache_alignment : 64\n>> address sizes : 36 bits physical, 48 bits virtual\n>> power management:\n>>\n>> processor : 1\n>> vendor_id : GenuineIntel\n>> cpu family : 6\n>> model : 58\n>> model name : Intel(R) Pentium(R) CPU G2120 @ 3.10GHz\n>> stepping : 9\n>> microcode : 0x1b\n>> cpu MHz : 1647.722\n>> cache size : 3072 KB\n>> physical id : 0\n>> siblings : 2\n>> core id : 1\n>> cpu cores : 2\n>> apicid : 2\n>> initial apicid : 2\n>> fpu : yes\n>> fpu_exception : yes\n>> cpuid level : 13\n>> wp : yes\n>> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\n>> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall\n>> nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\n>> nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est\n>> tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer xsave\n>> lahf_lm arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid\n>> fsgsbase smep erms xsaveopt\n>> bogomips : 6185.92\n>> clflush size : 64\n>> cache_alignment : 64\n>> address sizes : 36 bits physical, 48 bits virtual\n>> power management:\n>>\n>> Configuration:\n>> name | current_setting\n>> | source\n>>\n>> ---------------------------------+-----------------------------------+----------------------\n>> application_name | psql |\n>> client\n>> authentication_timeout | 1min |\n>> configuration file\n>> autovacuum | on |\n>> configuration file\n>> autovacuum_analyze_scale_factor | 0.05 |\n>> configuration file\n>> autovacuum_analyze_threshold | 10 |\n>> configuration file\n>> autovacuum_freeze_max_age | 200000000 |\n>> configuration file\n>> autovacuum_max_workers | 6 |\n>> configuration file\n>> autovacuum_naptime | 15s |\n>> configuration file\n>> autovacuum_vacuum_cost_delay | 10ms |\n>> configuration file\n>> autovacuum_vacuum_cost_limit | 1000 |\n>> configuration file\n>> autovacuum_vacuum_scale_factor | 0.1 |\n>> configuration file\n>> autovacuum_vacuum_threshold | 25 |\n>> configuration file\n>> bytea_output | hex |\n>> configuration file\n>> checkpoint_completion_target | 0.9 |\n>> configuration file\n>> checkpoint_segments | 32 |\n>> configuration file\n>> checkpoint_timeout | 10min |\n>> configuration file\n>> client_encoding | UTF8 |\n>> client\n>> client_min_messages | log |\n>> configuration file\n>> cpu_index_tuple_cost | 0.005 |\n>> configuration file\n>> cpu_operator_cost | 0.0025 |\n>> configuration file\n>> cpu_tuple_cost | 0.01 |\n>> configuration file\n>> DateStyle | SQL, DMY |\n>> configuration file\n>> default_text_search_config | pg_catalog.english |\n>> configuration file\n>> effective_cache_size | 5632MB |\n>> configuration file\n>> enable_bitmapscan | on |\n>> configuration file\n>> enable_hashagg | on |\n>> configuration file\n>> enable_hashjoin | on |\n>> configuration file\n>> enable_indexonlyscan | on |\n>> configuration file\n>> enable_indexscan | on |\n>> configuration file\n>> enable_material | on |\n>> configuration file\n>> enable_mergejoin | on |\n>> configuration file\n>> enable_nestloop | on |\n>> configuration file\n>> enable_seqscan | on |\n>> 
configuration file\n>> enable_sort | on |\n>> configuration file\n>> enable_tidscan | on |\n>> configuration file\n>> lc_messages | pt_BR.UTF-8 |\n>> configuration file\n>> lc_monetary | pt_BR.UTF-8 |\n>> configuration file\n>> lc_numeric | pt_BR.UTF-8 |\n>> configuration file\n>> lc_time | pt_BR.UTF-8 |\n>> configuration file\n>> listen_addresses | 127.0.0.1, 192.168.1.199 |\n>> configuration file\n>> log_autovacuum_min_duration | 0 |\n>> configuration file\n>> log_connections | on |\n>> configuration file\n>> log_destination | stderr |\n>> configuration file\n>> log_directory | pg_log |\n>> configuration file\n>> log_disconnections | on |\n>> configuration file\n>> log_duration | on |\n>> configuration file\n>> log_filename | postgresql-%a.log |\n>> configuration file\n>> log_line_prefix | %t - (%h - %u) --> |\n>> configuration file\n>> log_min_duration_statement | -1 |\n>> configuration file\n>> log_min_error_statement | info |\n>> configuration file\n>> log_min_messages | info |\n>> configuration file\n>> log_rotation_age | 1d |\n>> configuration file\n>> log_rotation_size | 0 |\n>> configuration file\n>> log_statement | all |\n>> configuration file\n>> log_timezone | Brazil/East |\n>> configuration file\n>> log_truncate_on_rotation | on |\n>> configuration file\n>> logging_collector | on |\n>> configuration file\n>> maintenance_work_mem | 1GB |\n>> configuration file\n>> max_connections | 100 |\n>> configuration file\n>> max_stack_depth | 2MB |\n>> environment variable\n>> password_encryption | on |\n>> configuration file\n>> port | 5434 |\n>> command line\n>> random_page_cost | 2 |\n>> configuration file\n>> seq_page_cost | 1 |\n>> configuration file\n>> shared_buffers | 2GB |\n>> configuration file\n>> shared_preload_libraries | plugin_debugger |\n>> configuration file\n>> ssl | on |\n>> configuration file\n>> ssl_ca_file | /home/postgres/ssl/ca-bundle.crt |\n>> configuration file\n>> ssl_cert_file | /home/postgres/ssl/localhost.crt |\n>> configuration file\n>> ssl_ciphers | ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH |\n>> configuration file\n>> ssl_key_file | /home/postgres/ssl/localhost.key |\n>> configuration file\n>> ssl_renegotiation_limit | 512MB |\n>> configuration file\n>> synchronous_commit | off |\n>> configuration file\n>> syslog_facility | local0 |\n>> configuration file\n>> syslog_ident | postgres |\n>> configuration file\n>> TimeZone | Brazil/East |\n>> configuration file\n>> wal_buffers | 16MB |\n>> configuration file\n>> work_mem | 50MB |\n>> configuration file\n>>\n>>\n>> Thank you very much.\n>>\n>> Att.,\n>> Almir de Oliveira Duarte Junior\n>>\n>>\n>\n>\n> --\n> Att.,\n>\n> Almir de Oliveira Duarte Junior, PMP\n>\n> ADJ Tecnologia da Informação\n> Diretor\n> Tel: +55 (21) 3079-4128\n> Cel: +55 (21) 99362-7627\n> Skype: almir.duarte.jr\n> Email: [email protected]\n> Rua São José, 90 - sala 613 - Centro\n> Rio de Janeiro - RJ - CEP: 20.010-901\n> ------------------------------\n> [image: FPASuite] <http://fpasuite.com.br>\n> [image: FPASuite] <http://fpasuite.com.br>\n> Copyright © 2013 FPASuite. 
Todos os direitos reservados\n>", "msg_date": "Thu, 7 Jan 2016 16:55:27 -0200", "msg_from": "Rafael Bernard Rodrigues Araujo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query help" }, { "msg_contents": ">I ask your help to solve a slow query which is taking more than 14 seconds to be executed.\n>Maybe I am asking too much both from you and specially from postgresql, as it is really huge, envolving 16 tables.\n>\n>Explain:\n>http://explain.depesz.com/s/XII9\n>\n>Schema:\n>http://adj.com.br/erp/data_schema/\n\nHello,\n\nIt seems that you don't pay much attention to column alignment.\ne.g. http://adj.com.br/erp/data_schema/tables/ERP_PUBLIC_sys_person.html\n\nThis probably won't make any significant difference in your case,\nbut this is something to be aware of when dealing with large tables.\nhere is a good starting link for this topic:\nhttp://stackoverflow.com/questions/12604744/does-the-order-of-columns-in-a-postgres-table-impact-performance\n\nregards,\n\nMarc Mamin\n\n\n\n\n\n\n\n\n\n>I ask your help to solve a slow query which is taking more than 14 seconds to be executed.\n>Maybe I am asking too much both from you and specially from postgresql, as it is really huge, envolving 16 tables.\n>\n>Explain:\n>http://explain.depesz.com/s/XII9\n>\n>Schema:\n>http://adj.com.br/erp/data_schema/\n\nHello,\n\nIt seems that you don't pay much attention to column alignment.\ne.g. http://adj.com.br/erp/data_schema/tables/ERP_PUBLIC_sys_person.html\n\nThis probably won't make any significant difference in your case,\nbut this is something to be aware of when dealing with large tables.\nhere is a good starting link for this topic: \nhttp://stackoverflow.com/questions/12604744/does-the-order-of-columns-in-a-postgres-table-impact-performance\n\nregards,\n\nMarc Mamin", "msg_date": "Thu, 7 Jan 2016 19:15:41 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query help" }, { "msg_contents": "Hi Marc,\n\nThank you very much!\n\nAlmir\n====\nOn 01/07/2016 05:15 PM, Marc Mamin wrote:\n>\n>\n> >I ask your help to solve a slow query which is taking more than 14 \n> seconds to be executed.\n> >Maybe I am asking too much both from you and specially from \n> postgresql, as it is really huge, envolving 16 tables.\n> >\n> >Explain:\n> >http://explain.depesz.com/s/XII9\n> >\n> >Schema:\n> >http://adj.com.br/erp/data_schema/\n>\n> Hello,\n>\n> It seems that you don't pay much attention to column alignment.\n> e.g. http://adj.com.br/erp/data_schema/tables/ERP_PUBLIC_sys_person.html\n>\n> This probably won't make any significant difference in your case,\n> but this is something to be aware of when dealing with large tables.\n> here is a good starting link for this topic:\n> http://stackoverflow.com/questions/12604744/does-the-order-of-columns-in-a-postgres-table-impact-performance\n>\n> regards,\n>\n> Marc Mamin\n\n\n-- \nAtt.,\n\nAlmir de Oliveira Duarte Junior, PMP\n\nADJ Tecnologia da Informa��o\nDiretor\nTel: +55 (21) 3079-4128\nCel: +55 (21) 99362-7627\nSkype: almir.duarte.jr\nEmail: [email protected]\nRua S�o Jos�, 90 - sala 613 - Centro\nRio de Janeiro - RJ - CEP: 20.010-901\n\n------------------------------------------------------------------------\nFPASuite <http://fpasuite.com.br>\nFPASuite <http://fpasuite.com.br>\nCopyright � 2013 FPASuite. 
Todos os direitos reservados", "msg_date": "Thu, 7 Jan 2016 17:30:23 -0200", "msg_from": "Almir de Oliveira Duarte Junior <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query help" } ]
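A quick way to check the column-alignment point from Marc's reply is to load the same rows into two orderings of the same columns and compare the on-disk size. The sketch below uses made-up table names (t_padded, t_aligned), not Almir's schema: an 8-byte fixed-width column (bigint, timestamptz, double precision) declared right after a 1-byte column forces alignment padding in every row.

-----------
-- Same four columns, two declaration orders. In t_padded each bigint is
-- preceded by a boolean, costing 7 bytes of alignment padding per bigint.
CREATE TABLE t_padded  (a boolean, b bigint, c boolean, d bigint);
CREATE TABLE t_aligned (b bigint, d bigint, a boolean, c boolean);

INSERT INTO t_padded  SELECT true, i, false, i FROM generate_series(1, 100000) i;
INSERT INTO t_aligned SELECT i, i, true, false FROM generate_series(1, 100000) i;

-- t_aligned should come out noticeably smaller for identical data.
SELECT pg_size_pretty(pg_relation_size('t_padded'))  AS padded,
       pg_size_pretty(pg_relation_size('t_aligned')) AS aligned;
-----------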
[ { "msg_contents": "Hi all, I just wrote an article about the postgres performance \noptimizations I've been working on recently especially compared to our \nold MongoDB platform\n\nhttps://mark.zealey.org/2016/01/08/how-we-tweaked-postgres-upsert-performance-to-be-2-3-faster-than-mongodb\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jan 2016 18:37:36 +0200", "msg_from": "Mark Zealey <[email protected]>", "msg_from_op": true, "msg_subject": "How we made Postgres upserts 2-3* quicker than MongoDB" }, { "msg_contents": "Hello Mark,\n\nAs far as I know, MongoDB is able to get better writing performances\nthanks to scaling (easy to manage sharding). Postgresql cannot (is not\ndesigned for - complicated).\nWhy comparing postgresql & mongoDB performances on a standalone\ninstance since mongoDB is not really designed for that ?\n\nThanks for the answer and for sharing,\n\n\n2016-01-08 17:37 GMT+01:00 Mark Zealey <[email protected]>:\n> Hi all, I just wrote an article about the postgres performance optimizations\n> I've been working on recently especially compared to our old MongoDB\n> platform\n>\n> https://mark.zealey.org/2016/01/08/how-we-tweaked-postgres-upsert-performance-to-be-2-3-faster-than-mongodb\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jan 2016 18:07:56 +0100", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How we made Postgres upserts 2-3* quicker than MongoDB" }, { "msg_contents": "On 08/01/16 19:07, Nicolas Paris wrote:\n> Hello Mark,\n>\n> As far as I know, MongoDB is able to get better writing performances\n> thanks to scaling (easy to manage sharding). Postgresql cannot (is not\n> designed for - complicated).\n> Why comparing postgresql & mongoDB performances on a standalone\n> instance since mongoDB is not really designed for that ?\n\nYes you can get better performance with mongo via the sharding route but \nthere are a number of quite bad downsides to mongo sharding - limited \nability to perform aggregation, loss of unique key constraints other \nthan the shard key, requires at minimum 4-6* the hardware (2 replicas \nfor each block = 4 + 2 * mongos gateway servers)... Actually pretty \nsimilar to the issues you see when trying to scale a RDBMS via \nsharding... 
We tried doing some mongo sharding and the result was a \nmassive drop in write performance so we gave up pretty quickly...\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jan 2016 19:16:34 +0200", "msg_from": "Mark Zealey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How we made Postgres upserts 2-3* quicker than MongoDB" }, { "msg_contents": "On Fri, Jan 8, 2016 at 10:07 AM, Nicolas Paris <[email protected]> wrote:\n>\n> 2016-01-08 17:37 GMT+01:00 Mark Zealey <[email protected]>:\n>> Hi all, I just wrote an article about the postgres performance optimizations\n>> I've been working on recently especially compared to our old MongoDB\n>> platform\n>>\n>> https://mark.zealey.org/2016/01/08/how-we-tweaked-postgres-upsert-performance-to-be-2-3-faster-than-mongodb\n\n> Hello Mark,\n>\n> As far as I know, MongoDB is able to get better writing performances\n> thanks to scaling (easy to manage sharding). Postgresql cannot (is not\n> designed for - complicated).\n> Why comparing postgresql & mongoDB performances on a standalone\n> instance since mongoDB is not really designed for that ?\n>\n> Thanks for the answer and for sharing,\n\nI think the part where he mentioned that it's a lot easy to do roll up\nand reporting queries etc. on a sql database justified the comparison\nbetween the two in general. At which point PostgreSQL would normally\nbe considered the underdog in write performance, at least in the past.\nSo comparing the two makes perfect sense.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jan 2016 10:43:29 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How we made Postgres upserts 2-3* quicker than MongoDB" } ]
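Mark's article is only linked, not quoted, in the thread above, so the exact upsert pattern he benchmarked is not reproduced here. On PostgreSQL 9.5 and later the usual single-statement upsert is INSERT ... ON CONFLICT; a minimal sketch with a hypothetical table (readings is purely illustrative, not the author's schema):

-----------
CREATE TABLE readings (
    device_id  integer     NOT NULL,
    metric     text        NOT NULL,
    value      numeric,
    updated_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (device_id, metric)
);

-- One statement per row: insert it, or overwrite the existing row on key conflict.
INSERT INTO readings (device_id, metric, value)
VALUES (42, 'temperature', 21.5)
ON CONFLICT (device_id, metric)
DO UPDATE SET value = EXCLUDED.value, updated_at = now();
-----------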
[ { "msg_contents": "Hi All,\n\nThe database is postgresql 9.3, running on debian7, with 8 cpu cores and\n8096MB physical memory.\n\n\nThere is a big table, with 70 more columns. It would be constantly at 700\nrows/sec. It's not feasible to use COPY, because the data is not predefined\nor provisioned, and it's generated on demand by clients.\n\n\nTo make a clean test env, I clone a new table, removing the indexes\n(keeping the primary key) and triggers, and use pgbench to test insert\nstatement purely.\n\n\nHere is some key items in the postgresql.conf:\n\n--------------\n\nshared_buffers = 1024MB\n\nwork_mem = 32MB\n\nmaintenance_work_mem = 128MB\n\nbgwriter_delay = 20ms\n\nsynchronous_commit = off\n\ncheckpoint_segments = 64\n\ncheckpoint_completion_target = 0.9\n\neffective_cache_size = 4096MB\n\nlog_min_duration_statement = 1000\n\n--------------\n\nThe problem:\n\nThe log would show that the duration of some inserts exceed 1 second.\n\nI use iotop to view the io activities of the pg backend worker process, it\nshows that it writes some MB per second. It's the most confused part. The\ncommit is async, so it would not do and wait the wal writing, as well as\nthe shared buffers. I doubt it would be flush at the shared buffer\nallocation. So I modify the codes, print the pgBufferUsage.blk_write_time\nalong with the long duration printing. But I found it's a small fraction\nthe total duration. I also add codes to record the total time on lock\nwaiting within the statement execution and print it, and it's also a small\nfraction of the duration. I could not explain the result.\n\nThen I use systemtap to check what files the process frequenlty write out:\n-----\nTracing 20991 (/usr/lib/postgresql/9.3/bin/postgres)...\nPlease wait for 30 seconds.\n\n=== Top 10 file writes ===\n#1: 6004 times, 49184768 bytes writes in file 21986.44.\n#2: 400 times, 3276800 bytes writes in file 21988.3.\n#3: 12 times, 98304 bytes writes in file 57ED.\n#4: 10 times, 81920 bytes writes in file 57F0.\n#5: 10 times, 81920 bytes writes in file 57EE.\n#6: 9 times, 73728 bytes writes in file 57F1.\n#7: 9 times, 73728 bytes writes in file 57F3.\n#8: 8 times, 65536 bytes writes in file 57EB.\n#9: 8 times, 65536 bytes writes in file 57F2.\n#10: 4 times, 32768 bytes writes in file 57EF.\n-----\n\nThe \"21986.44\" is the table data file, and the \"21988.3\" is the primary key\nindex, and the others are subtrans files.\n\nObviously, the process does a lot of IO (vfs level), and I doubt the\nduration spikes are due to the IO contention from time to time.\n\nBut I do not understand that why the process do so many IO with async\ncommit? And it does not even happen at the shared buffer flushing and locks\nwaiting. Where's the code path doing these IO?\n\n\nRegards,\nJinhua Luo\n\nHi All,The database is postgresql 9.3, running on debian7, with 8 cpu cores and 8096MB physical memory.There is a big table, with 70 more columns. It would be constantly at 700 rows/sec. 
It's not feasible to use COPY, because the data is not predefined or provisioned, and it's generated on demand by clients.To make a clean test env, I clone a new table, removing the indexes (keeping the primary key) and triggers, and use pgbench to test insert statement purely.Here is some key items in the postgresql.conf:--------------shared_buffers = 1024MBwork_mem = 32MBmaintenance_work_mem = 128MBbgwriter_delay = 20mssynchronous_commit = offcheckpoint_segments = 64checkpoint_completion_target = 0.9effective_cache_size = 4096MBlog_min_duration_statement = 1000--------------The problem:The log would show that the duration of some inserts exceed 1 second.I use iotop to view the io activities of the pg backend worker process, it shows that it writes some MB per second. It's the most confused part. The commit is async, so it would not do and wait the wal writing, as well as the shared buffers. I doubt it would be flush at the shared buffer allocation. So I modify the codes, print the pgBufferUsage.blk_write_time along with the long duration printing. But I found it's a small fraction the total duration. I also add codes to record the total time on lock waiting within the statement execution and print it, and it's also a small fraction of the duration. I could not explain the result. Then I use systemtap to check what files the process frequenlty write out:-----Tracing 20991 (/usr/lib/postgresql/9.3/bin/postgres)...Please wait for 30 seconds.=== Top 10 file writes ===#1: 6004 times, 49184768 bytes writes in file 21986.44.#2: 400 times, 3276800 bytes writes in file 21988.3.#3: 12 times, 98304 bytes writes in file 57ED.#4: 10 times, 81920 bytes writes in file 57F0.#5: 10 times, 81920 bytes writes in file 57EE.#6: 9 times, 73728 bytes writes in file 57F1.#7: 9 times, 73728 bytes writes in file 57F3.#8: 8 times, 65536 bytes writes in file 57EB.#9: 8 times, 65536 bytes writes in file 57F2.#10: 4 times, 32768 bytes writes in file 57EF.-----The \"21986.44\" is the table data file, and the \"21988.3\" is the primary key index, and the others are subtrans files.Obviously, the process does a lot of IO (vfs level), and I doubt the duration spikes are due to the IO contention from time to time.But I do not understand that why the process do so many IO with async commit? And it does not even happen at the shared buffer flushing and locks waiting. Where's the code path doing these IO?Regards,Jinhua Luo", "msg_date": "Sun, 10 Jan 2016 13:57:38 +0800", "msg_from": "Jinhua Luo <[email protected]>", "msg_from_op": true, "msg_subject": "insert performance" }, { "msg_contents": "On 1/9/16 11:57 PM, Jinhua Luo wrote:\n> But I do not understand that why the process do so many IO with async\n> commit? And it does not even happen at the shared buffer flushing and\n> locks waiting. Where's the code path doing these IO?\n\nI assume you're asking about all the IO to the heap table. That is most \nlikely occurring as part of ReadBuffer(). As soon as you fill up shared \nbuffers, BufferAlloc() is likely to end up with a dirty buffer, \nresulting in it calling FlushBuffer() (see \nsrc/backend/storage/buffer/bufmgr.c#1084).\n\nNote that that call is tracked by \nTRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_START(), so I'd expect you to see it \nin the relevant systemtap stats.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 10 Jan 2016 15:05:52 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance" }, { "msg_contents": "On Sat, Jan 9, 2016 at 9:57 PM, Jinhua Luo <[email protected]> wrote:\n>\n> To make a clean test env, I clone a new table, removing the indexes (keeping\n> the primary key) and triggers, and use pgbench to test insert statement\n> purely.\n\nCan you share the pgbench command line, and the sql file you feed to\nit (and whatever is needed to set up the schema)?\n\n\nThanks,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Jan 2016 11:20:03 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance" }, { "msg_contents": "Hi,\n\nI found the insert performance is not related to the table schema.\n\nIn fact, I could recur the issue using simple table:\n\ncreate table test(k bigserial primary key, a int, b int, c text, d text);\n\ntest.sql:\ninsert into test(a, b, c, d) values(3438, 1231,\n'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n\n\npgbench -r -N -n -c 4 -j 1 -T 600 -f test.sql\n\nI also compile and run it on the latest 9.4 version, the same issue.\n\nRegards,\nJinhua Luo\n\n\n\n\n2016-01-12 3:20 GMT+08:00 Jeff Janes <[email protected]>:\n> On Sat, Jan 9, 2016 at 9:57 PM, Jinhua Luo <[email protected]> wrote:\n>>\n>> To make a clean test env, I clone a new table, removing the indexes (keeping\n>> the primary key) and triggers, and use pgbench to test insert statement\n>> purely.\n>\n> Can you share the pgbench command line, and the sql file you feed to\n> it (and whatever is needed to set up the schema)?\n>\n>\n> Thanks,\n>\n> Jeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 13 Jan 2016 18:03:35 +0800", "msg_from": "Jinhua Luo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance" }, { "msg_contents": "Hi All,\n\nI found it's not related to file I/O.\n\nI use systemtap to diagnose the postgres backend process.\n\nThe systemtap script is a modified version of sample-bt-off-cpu:\n\nhttps://gist.github.com/kingluo/15b656998035cef193bc\n\n\nTest process:\n\n1) create a simple table:\n\n-----------\ncreate table test(k bigserial primary key, a int, b int, c text, d text);\n-----------\n\n2) test.sql:\n\n-----------\ninsert into test(a, b, c, d) values(3438, 1231,\n'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n-----------\n\n3) I run the pgbench, using 4 connections:\n\n-----------\n$ time pgbench -r -N -n -c 4 -j 1 -T 3600 -f test.sql\n-----------\n\n4) I run the systemtap script for 3 minutes (Here the 987 is the pid\nof one postgres backend process):\n\nNote that sum, max, avg, min time is in microsecond.\n\n-----------\n# ./sample-bt-off-cpu -a -v -p 987 -t 180 > /tmp/postgres.bt\nPass 1: parsed user script and 110 library script(s) using\n26916virt/22300res/4276shr/18740data kb, in 380usr/10sys/386real ms.\nPass 2: analyzed script: 18 probe(s), 13 function(s), 5 embed(s), 
12\nglobal(s) using 54708virt/51956res/5920shr/46532data kb, in\n1870usr/360sys/2669real ms.\nPass 3: translated to C into\n\"/tmp/stapteVG3Q/stap_18d161acb3024931de917335496977c3_12813_src.c\"\nusing 54708virt/52156res/6120shr/46532data kb, in\n8680usr/250sys/24647real ms.\nPass 4: compiled C into\n\"stap_18d161acb3024931de917335496977c3_12813.ko\" in\n20450usr/610sys/43898real ms.\nPass 5: starting run.\nWARNING: Tracing 987 (/opt/postgresql/bin/postgres)...\nWARNING: Too many CFI instuctions\nWARNING: Time's up. Quitting now...(it may take a while)\nWARNING: query time, count=646253, sum=102991078, max=2474344, avg=159, min=83\nWARNING: lock time, count=142, sum=3306500, max=1158862, avg=23285, min=16\nWARNING: lwlock time, count=141272, sum=7260098, max=1383180, avg=51, min=1\nWARNING: Number of errors: 0, skipped probes: 24\nPass 5: run completed in 10usr/110sys/180744real ms.\n-----------\n\nDuring that 3 minutes, the postgres prints below logs:\n\n-----------\n2016-01-13 23:27:21 CST [987-157] postgres@postgres LOG: duration:\n2474.304 ms statement: insert into test(a, b, c, d) values(3438,\n1231, 'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n2016-01-13 23:27:48 CST [987-158] postgres@postgres LOG: duration:\n1383.359 ms statement: insert into test(a, b, c, d) values(3438,\n1231, 'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n2016-01-13 23:28:33 CST [987-159] postgres@postgres LOG: process 987\nstill waiting for ExclusiveLock on extension of relation 16386 of\ndatabase 12141 after 1000.212 ms\n2016-01-13 23:28:33 CST [987-160] postgres@postgres DETAIL: Process\nholding the lock: 990. Wait queue: 988, 987, 989.\n2016-01-13 23:28:33 CST [987-161] postgres@postgres STATEMENT: insert\ninto test(a, b, c, d) values(3438, 1231,\n'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n2016-01-13 23:28:33 CST [987-162] postgres@postgres LOG: process 987\nacquired ExclusiveLock on extension of relation 16386 of database\n12141 after 1158.818 ms\n2016-01-13 23:28:33 CST [987-163] postgres@postgres STATEMENT: insert\ninto test(a, b, c, d) values(3438, 1231,\n'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n2016-01-13 23:28:33 CST [987-164] postgres@postgres LOG: duration:\n1159.512 ms statement: insert into test(a, b, c, d) values(3438,\n1231, 'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n2016-01-13 23:28:45 CST [987-165] postgres@postgres LOG: duration:\n1111.664 ms statement: insert into test(a, b, c, d) values(3438,\n1231, 'ooooooooooooooooooooooooooooooooooooo',\n'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk');\n-----------\n\nThe final backtrace result is converted into flame graph, see:\nhttp://luajit.io/systemtap/pgsql/postgres_1.svg\n\nAny advice?\n\n\nRegards,\nJinhua Luo\n\n2016-01-11 5:05 GMT+08:00 Jim Nasby <[email protected]>:\n> On 1/9/16 11:57 PM, Jinhua Luo wrote:\n>>\n>> But I do not understand that why the process do so many IO with async\n>> commit? And it does not even happen at the shared buffer flushing and\n>> locks waiting. Where's the code path doing these IO?\n>\n>\n> I assume you're asking about all the IO to the heap table. That is most\n> likely occurring as part of ReadBuffer(). 
As soon as you fill up shared\n> buffers, BufferAlloc() is likely to end up with a dirty buffer, resulting in\n> it calling FlushBuffer() (see src/backend/storage/buffer/bufmgr.c#1084).\n>\n> Note that that call is tracked by\n> TRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_START(), so I'd expect you to see it in\n> the relevant systemtap stats.\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\n> Experts in Analytics, Data Architecture and PostgreSQL\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 14 Jan 2016 00:41:45 +0800", "msg_from": "Jinhua Luo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance" }, { "msg_contents": "Hi,\n\nI thought with async commit enabled, the backend process would rarely\ndo file io. But in fact, it still involves a lot of file io.\n\nAfter inspecting the vfs probes using systemtap, and reading the\nsource codes of postgresql, I found the tight loop of insert or update\nwill cause heavy file io upon the data files (table, indexes) directly\nby the backend process! And those io has nothing to do with shared\nbuffer dirty writes.\n\nThe heap_insert() or heap_update() would invoke\nRelationGetBufferForTuple(), which in turn finally invoke\nReadBuffer_common():\n\n1) lookup or allocate the buffer from shared buffer, which may cause\ndirty write (but in my case, it's rare. Maybe the shared buffer is big\nenough and the checkpointer or bgwriter always clean it in time). If\nthe target buffer is found, skip following steps.\n\n2) if it needs to extend the relation (insert or update on new table\nwould normally fall in this case), then it would write zero-filled\npage into the disk (used to occupy the file space? But most file\nsystems support file hole or space reservation, so maybe this part\ncould be optimized?) This procedure would hold the exclusive lock on\nthe relation. So if the write is slow, it would slow down all pending\nqueries of the lock waiters.\n\n3) Otherwise, it would read from disk.\n\nThe target buffer would be locked exclusively until the insert or\nupdate finish. Note that the insert or update also involve xlog\ninsert, although with async commit enabled, the backend process would\nnot flush the xlog, but chances are that the xlog buffer dirty writes\nhappens (although it's also rare in my case).\n\nSo I guess the real reason is the file io with lock holding. If io\nspike happens, it would cause long query duration.\n\nAm I correct? Look forward to any advice.\n\nThanks.\n\nRegards,\nJinhua Luo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jan 2016 12:50:16 +0800", "msg_from": "Jinhua Luo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance" }, { "msg_contents": "Hi,\n\nThere is another problem.\nWhen the autovacuum recycles the old pages, the ReadBuffer_common()\nwould do mdread() instead of mdextend().\nThe read is synchronous, while the write could be mostly asynchronous,\nso the frequent read is much worse than write version.\n\nAny help? Please.\n\nRegards,\nJinhua Luo\n\n\n2016-01-19 12:50 GMT+08:00 Jinhua Luo <[email protected]>:\n> Hi,\n>\n> I thought with async commit enabled, the backend process would rarely\n> do file io. 
But in fact, it still involves a lot of file io.\n>\n> After inspecting the vfs probes using systemtap, and reading the\n> source codes of postgresql, I found the tight loop of insert or update\n> will cause heavy file io upon the data files (table, indexes) directly\n> by the backend process! And those io has nothing to do with shared\n> buffer dirty writes.\n>\n> The heap_insert() or heap_update() would invoke\n> RelationGetBufferForTuple(), which in turn finally invoke\n> ReadBuffer_common():\n>\n> 1) lookup or allocate the buffer from shared buffer, which may cause\n> dirty write (but in my case, it's rare. Maybe the shared buffer is big\n> enough and the checkpointer or bgwriter always clean it in time). If\n> the target buffer is found, skip following steps.\n>\n> 2) if it needs to extend the relation (insert or update on new table\n> would normally fall in this case), then it would write zero-filled\n> page into the disk (used to occupy the file space? But most file\n> systems support file hole or space reservation, so maybe this part\n> could be optimized?) This procedure would hold the exclusive lock on\n> the relation. So if the write is slow, it would slow down all pending\n> queries of the lock waiters.\n>\n> 3) Otherwise, it would read from disk.\n>\n> The target buffer would be locked exclusively until the insert or\n> update finish. Note that the insert or update also involve xlog\n> insert, although with async commit enabled, the backend process would\n> not flush the xlog, but chances are that the xlog buffer dirty writes\n> happens (although it's also rare in my case).\n>\n> So I guess the real reason is the file io with lock holding. If io\n> spike happens, it would cause long query duration.\n>\n> Am I correct? Look forward to any advice.\n>\n> Thanks.\n>\n> Regards,\n> Jinhua Luo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Jan 2016 13:54:48 +0800", "msg_from": "Jinhua Luo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance" }, { "msg_contents": "On 1/21/16 11:54 PM, Jinhua Luo wrote:\n> There is another problem.\n> When the autovacuum recycles the old pages, the ReadBuffer_common()\n> would do mdread() instead of mdextend().\n> The read is synchronous, while the write could be mostly asynchronous,\n> so the frequent read is much worse than write version.\n\nPlease don't top post.\n\nAFAICT your analysis is correct, though I don't see what autovacuum has \nto do with anything. When we need a new block to put data on we'll \neither find one with free space in the free space map and use it, or \nwe'll extend the relation.\n\n> Any help? Please.\n\nThere's been some discussion on ways to improve the performance of \nrelation extension (iirc one of those was to not zero the new page), but \nultimately you're at the mercy of the underlying OS and hardware.\n\nIf you have ideas for improving this you should speak up on -hackers, \nbut before doing so you should read the archives about what's been \nproposed in the past.\n\n> 2016-01-19 12:50 GMT+08:00 Jinhua Luo <[email protected]>:\n>> Hi,\n>>\n>> I thought with async commit enabled, the backend process would rarely\n>> do file io. 
But in fact, it still involves a lot of file io.\n>>\n>> After inspecting the vfs probes using systemtap, and reading the\n>> source codes of postgresql, I found the tight loop of insert or update\n>> will cause heavy file io upon the data files (table, indexes) directly\n>> by the backend process! And those io has nothing to do with shared\n>> buffer dirty writes.\n>>\n>> The heap_insert() or heap_update() would invoke\n>> RelationGetBufferForTuple(), which in turn finally invoke\n>> ReadBuffer_common():\n>>\n>> 1) lookup or allocate the buffer from shared buffer, which may cause\n>> dirty write (but in my case, it's rare. Maybe the shared buffer is big\n>> enough and the checkpointer or bgwriter always clean it in time). If\n>> the target buffer is found, skip following steps.\n>>\n>> 2) if it needs to extend the relation (insert or update on new table\n>> would normally fall in this case), then it would write zero-filled\n>> page into the disk (used to occupy the file space? But most file\n>> systems support file hole or space reservation, so maybe this part\n>> could be optimized?) This procedure would hold the exclusive lock on\n>> the relation. So if the write is slow, it would slow down all pending\n>> queries of the lock waiters.\n>>\n>> 3) Otherwise, it would read from disk.\n>>\n>> The target buffer would be locked exclusively until the insert or\n>> update finish. Note that the insert or update also involve xlog\n>> insert, although with async commit enabled, the backend process would\n>> not flush the xlog, but chances are that the xlog buffer dirty writes\n>> happens (although it's also rare in my case).\n>>\n>> So I guess the real reason is the file io with lock holding. If io\n>> spike happens, it would cause long query duration.\n>>\n>> Am I correct? Look forward to any advice.\n>>\n>> Thanks.\n>>\n>> Regards,\n>> Jinhua Luo\n\n\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 24 Jan 2016 17:24:09 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance" } ]
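The "ExclusiveLock on extension of relation" waits logged earlier in this thread show up in pg_locks with locktype = 'extend', so the contention can be watched from a second session while the pgbench run is in progress. A sketch of such a monitoring query (the column choice is illustrative, not taken from the thread):

-----------
-- Sessions currently blocked waiting to extend a relation; these correspond
-- to the "still waiting for ExclusiveLock on extension of relation" log lines.
SELECT l.pid,
       l.relation::regclass  AS relation,
       now() - a.query_start AS waited,
       a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'extend'
  AND NOT l.granted;
-----------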
[ { "msg_contents": "Hey all, i've run into a performance problem with one of my queries that I\nam really not sure what is causing it.\n\nSetup info:\nPostgres version 9.4.4 on Debian 7. Server is virtual, with a single core\nand 512 ram available and ssd storage.\n\nChanges to postgresql.conf:\nmaintenance_work_mem = 30MB\ncheckpoint_completion_target = 0.7\neffective_cache_size = 352MB\nwork_mem = 24MB\nwal_buffers = 4MB\ncheckpoint_segments = 8\nshared_buffers = 120MB\nrandom_page_cost = 1.1\n\nThe problem:\n\nI have a view which sums up detail records to give totals at a header\nlevel, and performance is great when I select from it by limiting to a\nsingle record, but limiting it to multiple records seems to cause it to\nchoose a bad plan.\n\nExample 1:\n\n> SELECT *\n> FROM claim_totals\n> WHERE claim_id IN ('e8a38718-7997-4304-bbfa-138deb84aa82')\n> (2 ms)\n\n\nExample 2:\n\n> SELECT *\n> FROM claim_totals\n> WHERE claim_id IN ('324d2af8-46b3-45ad-b56a-0a49d0345653',\n> 'e8a38718-7997-4304-bbfa-138deb84aa82')\n> (5460 ms)\n\n\nThe view definition is:\n\nSELECT claim.claim_id,\n> COALESCE(lumpsum.lumpsum_count, 0::bigint)::integer AS lumpsum_count,\n> COALESCE(lumpsum.requested_amount, 0::numeric) AS\n> lumpsum_requested_total,\n> COALESCE(lumpsum.allowed_amount, 0::numeric) AS lumpsum_allowed_total,\n> COALESCE(claim_product.product_count, 0::bigint)::integer +\n> COALESCE(claim_adhoc_product.adhoc_product_count, 0::bigint)::integer AS\n> product_count,\n> COALESCE(claim_product.requested_amount, 0::numeric) +\n> COALESCE(claim_adhoc_product.requested_amount, 0::numeric) AS\n> product_requested_amount,\n> COALESCE(claim_product.allowed_amount, 0::numeric) +\n> COALESCE(claim_adhoc_product.allowed_amount, 0::numeric) AS\n> product_allowed_amount,\n> COALESCE(claim_product.requested_amount, 0::numeric) +\n> COALESCE(claim_adhoc_product.requested_amount, 0::numeric) +\n> COALESCE(lumpsum.requested_amount, 0::numeric) AS requested_total,\n> COALESCE(claim_product.allowed_amount, 0::numeric) +\n> COALESCE(claim_adhoc_product.allowed_amount, 0::numeric) +\n> COALESCE(lumpsum.allowed_amount, 0::numeric) AS allowed_total\n> FROM claim\n> LEFT JOIN ( SELECT claim_lumpsum.claim_id,\n> count(claim_lumpsum.claim_lumpsum_id) AS lumpsum_count,\n> sum(claim_lumpsum.requested_amount) AS requested_amount,\n> sum(claim_lumpsum.allowed_amount) AS allowed_amount\n> FROM claim_lumpsum\n> GROUP BY claim_lumpsum.claim_id) lumpsum ON lumpsum.claim_id =\n> claim.claim_id\n> LEFT JOIN ( SELECT claim_product_1.claim_id,\n> count(claim_product_1.claim_product_id) AS product_count,\n> sum(claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate) AS requested_amount,\n> sum(claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate) AS allowed_amount\n> FROM claim_product claim_product_1\n> GROUP BY claim_product_1.claim_id) claim_product ON\n> claim_product.claim_id = claim.claim_id\n> LEFT JOIN ( SELECT claim_adhoc_product_1.claim_id,\n> count(claim_adhoc_product_1.claim_adhoc_product_id) AS\n> adhoc_product_count,\n> sum(claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate) AS requested_amount,\n> sum(claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate) AS allowed_amount\n> FROM claim_adhoc_product claim_adhoc_product_1\n> GROUP BY claim_adhoc_product_1.claim_id) claim_adhoc_product ON\n> claim_adhoc_product.claim_id = claim.claim_id;\n\n\nHere are the respective explain / 
analyze for the two queries above:\n\nExample 1:\n\n> Nested Loop Left Join (cost=0.97..149.46 rows=2 width=232) (actual\n> time=0.285..0.289 rows=1 loops=1)\n> Output: claim.claim_id,\n> (COALESCE((count(claim_lumpsum.claim_lumpsum_id)), 0::bigint))::integer,\n> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric),\n> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric),\n> ((COALESCE((count(claim_product_1.claim_product_id)), 0::bigint))::integer\n> + (COALESCE((count(claim_adhoc_product_1.claim_adhoc_product_id)),\n> 0::bigint))::integer),\n> (COALESCE((sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)),\n> (COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)),\n> ((COALESCE((sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)) +\n> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric)),\n> ((COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)) +\n> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric))\n> Join Filter: (claim_lumpsum.claim_id = claim.claim_id)\n> -> Nested Loop Left Join (cost=0.97..135.31 rows=1 width=160) (actual\n> time=0.260..0.264 rows=1 loops=1)\n> Output: claim.claim_id, (count(claim_product_1.claim_product_id)),\n> (sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate))),\n> (sum((claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate))),\n> (count(claim_adhoc_product_1.claim_adhoc_product_id)),\n> (sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate))),\n> (sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate)))\n> Join Filter: (claim_adhoc_product_1.claim_id = claim.claim_id)\n> -> Nested Loop Left Join (cost=0.97..122.14 rows=1 width=88)\n> (actual time=0.254..0.256 rows=1 loops=1)\n> Output: claim.claim_id,\n> (count(claim_product_1.claim_product_id)),\n> (sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate))),\n> (sum((claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate)))\n> Join Filter: (claim_product_1.claim_id = claim.claim_id)\n> -> Index Only Scan using claim_pkey on\n> client_pinnacle.claim (cost=0.42..1.54 rows=1 width=16) (actual\n> time=0.078..0.079 rows=1 loops=1)\n> Output: claim.claim_id\n> Index Cond: (claim.claim_id =\n> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n> Heap Fetches: 0\n> -> GroupAggregate (cost=0.55..120.58 rows=1 width=54)\n> (actual time=0.163..0.163 rows=1 loops=1)\n> Output: claim_product_1.claim_id,\n> count(claim_product_1.claim_product_id),\n> sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate)),\n> sum((claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate))\n> Group Key: 
claim_product_1.claim_id\n> -> Index Scan using\n> claim_product_claim_id_product_id_distributor_company_id_lo_key on\n> client_pinnacle.claim_product claim_product_1 (cost=0.55..118.99 rows=105\n> width=54) (actual time=0.071..0.091 rows=12 loops=1)\n> Output: claim_product_1.claim_id,\n> claim_product_1.claim_product_id,\n> claim_product_1.rebate_requested_quantity,\n> claim_product_1.rebate_requested_rate,\n> claim_product_1.rebate_allowed_quantity, claim_product_1.rebate_allowed_rate\n> Index Cond: (claim_product_1.claim_id =\n> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n> -> GroupAggregate (cost=0.00..13.15 rows=1 width=160) (actual\n> time=0.001..0.001 rows=0 loops=1)\n> Output: claim_adhoc_product_1.claim_id,\n> count(claim_adhoc_product_1.claim_adhoc_product_id),\n> sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate)),\n> sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate))\n> Group Key: claim_adhoc_product_1.claim_id\n> -> Seq Scan on client_pinnacle.claim_adhoc_product\n> claim_adhoc_product_1 (cost=0.00..13.12 rows=1 width=160) (actual\n> time=0.001..0.001 rows=0 loops=1)\n> Output: claim_adhoc_product_1.claim_adhoc_product_id,\n> claim_adhoc_product_1.claim_id, claim_adhoc_product_1.product_name,\n> claim_adhoc_product_1.product_number,\n> claim_adhoc_product_1.uom_type_description,\n> claim_adhoc_product_1.rebate_requested_quantity,\n> claim_adhoc_product_1.rebate_requested_rate,\n> claim_adhoc_product_1.rebate_allowed_quantity,\n> claim_adhoc_product_1.rebate_allowed_rate,\n> claim_adhoc_product_1.claimant_contract_name,\n> claim_adhoc_product_1.resolve_date\n> Filter: (claim_adhoc_product_1.claim_id =\n> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n> -> GroupAggregate (cost=0.00..14.05 rows=2 width=96) (actual\n> time=0.001..0.001 rows=0 loops=1)\n> Output: claim_lumpsum.claim_id,\n> count(claim_lumpsum.claim_lumpsum_id), sum(claim_lumpsum.requested_amount),\n> sum(claim_lumpsum.allowed_amount)\n> Group Key: claim_lumpsum.claim_id\n> -> Seq Scan on client_pinnacle.claim_lumpsum (cost=0.00..14.00\n> rows=2 width=96) (actual time=0.000..0.000 rows=0 loops=1)\n> Output: claim_lumpsum.claim_lumpsum_id,\n> claim_lumpsum.claim_id, claim_lumpsum.lumpsum_id,\n> claim_lumpsum.requested_amount, claim_lumpsum.allowed_amount,\n> claim_lumpsum.event_date_range, claim_lumpsum.contract_lumpsum_id,\n> claim_lumpsum.claimant_contract_name,\n> claim_lumpsum.hint_contract_lumpsum_description\n> Filter: (claim_lumpsum.claim_id =\n> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n> Planning time: 6.336 ms\n> Execution time: 0.753 ms\n\n\nExample 2:\n\n> Hash Right Join (cost=81278.79..81674.85 rows=2 width=232) (actual\n> time=5195.972..5458.916 rows=2 loops=1)\n> Output: claim.claim_id,\n> (COALESCE((count(claim_lumpsum.claim_lumpsum_id)), 0::bigint))::integer,\n> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric),\n> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric),\n> ((COALESCE((count(claim_product_1.claim_product_id)), 0::bigint))::integer\n> + (COALESCE((count(claim_adhoc_product_1.claim_adhoc_product_id)),\n> 0::bigint))::integer),\n> (COALESCE((sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)),\n> (COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n> 
claim_product_1.rebate_allowed_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)),\n> ((COALESCE((sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)) +\n> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric)),\n> ((COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate))), 0::numeric) +\n> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)) +\n> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric))\n> Hash Cond: (claim_product_1.claim_id = claim.claim_id)\n> -> HashAggregate (cost=81231.48..81438.09 rows=13774 width=54) (actual\n> time=5182.546..5405.990 rows=95763 loops=1)\n> Output: claim_product_1.claim_id,\n> count(claim_product_1.claim_product_id),\n> sum((claim_product_1.rebate_requested_quantity *\n> claim_product_1.rebate_requested_rate)),\n> sum((claim_product_1.rebate_allowed_quantity *\n> claim_product_1.rebate_allowed_rate))\n> Group Key: claim_product_1.claim_id\n> -> Seq Scan on client_pinnacle.claim_product claim_product_1\n> (cost=0.00..55253.59 rows=1731859 width=54) (actual time=0.020..1684.826\n> rows=1731733 loops=1)\n> Output: claim_product_1.claim_id,\n> claim_product_1.claim_product_id,\n> claim_product_1.rebate_requested_quantity,\n> claim_product_1.rebate_requested_rate,\n> claim_product_1.rebate_allowed_quantity, claim_product_1.rebate_allowed_rate\n> -> Hash (cost=47.29..47.29 rows=2 width=160) (actual time=0.110..0.110\n> rows=2 loops=1)\n> Output: claim.claim_id, (count(claim_lumpsum.claim_lumpsum_id)),\n> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount)),\n> (count(claim_adhoc_product_1.claim_adhoc_product_id)),\n> (sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate))),\n> (sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate)))\n> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n> -> Hash Right Join (cost=41.53..47.29 rows=2 width=160) (actual\n> time=0.105..0.108 rows=2 loops=1)\n> Output: claim.claim_id,\n> (count(claim_lumpsum.claim_lumpsum_id)),\n> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount)),\n> (count(claim_adhoc_product_1.claim_adhoc_product_id)),\n> (sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate))),\n> (sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate)))\n> Hash Cond: (claim_adhoc_product_1.claim_id = claim.claim_id)\n> -> HashAggregate (cost=16.25..19.25 rows=200 width=160)\n> (actual time=0.001..0.001 rows=0 loops=1)\n> Output: claim_adhoc_product_1.claim_id,\n> count(claim_adhoc_product_1.claim_adhoc_product_id),\n> sum((claim_adhoc_product_1.rebate_requested_quantity *\n> claim_adhoc_product_1.rebate_requested_rate)),\n> sum((claim_adhoc_product_1.rebate_allowed_quantity *\n> claim_adhoc_product_1.rebate_allowed_rate))\n> Group Key: claim_adhoc_product_1.claim_id\n> -> Seq Scan on client_pinnacle.claim_adhoc_product\n> claim_adhoc_product_1 (cost=0.00..12.50 rows=250 width=160) (actual\n> time=0.001..0.001 rows=0 loops=1)\n> Output:\n> claim_adhoc_product_1.claim_adhoc_product_id,\n> 
claim_adhoc_product_1.claim_id, claim_adhoc_product_1.product_name,\n> claim_adhoc_product_1.product_number,\n> claim_adhoc_product_1.uom_type_description,\n> claim_adhoc_product_1.rebate_requested_quantity,\n> claim_adhoc_product_1.rebate_requested_rate,\n> claim_adhoc_product_1.rebate_allowed_quantity,\n> claim_adhoc_product_1.rebate_allowed_rate,\n> claim_adhoc_product_1.claimant_contract_name,\n> claim_adhoc_product_1.resolve_date\n> -> Hash (cost=25.25..25.25 rows=2 width=88) (actual\n> time=0.093..0.093 rows=2 loops=1)\n> Output: claim.claim_id,\n> (count(claim_lumpsum.claim_lumpsum_id)),\n> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount))\n> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n> -> Hash Right Join (cost=19.49..25.25 rows=2\n> width=88) (actual time=0.088..0.092 rows=2 loops=1)\n> Output: claim.claim_id,\n> (count(claim_lumpsum.claim_lumpsum_id)),\n> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount))\n> Hash Cond: (claim_lumpsum.claim_id =\n> claim.claim_id)\n> -> HashAggregate (cost=16.40..19.40 rows=200\n> width=96) (actual time=0.003..0.003 rows=0 loops=1)\n> Output: claim_lumpsum.claim_id,\n> count(claim_lumpsum.claim_lumpsum_id), sum(claim_lumpsum.requested_amount),\n> sum(claim_lumpsum.allowed_amount)\n> Group Key: claim_lumpsum.claim_id\n> -> Seq Scan on\n> client_pinnacle.claim_lumpsum (cost=0.00..13.20 rows=320 width=96) (actual\n> time=0.001..0.001 rows=0 loops=1)\n> Output:\n> claim_lumpsum.claim_lumpsum_id, claim_lumpsum.claim_id,\n> claim_lumpsum.lumpsum_id, claim_lumpsum.requested_amount,\n> claim_lumpsum.allowed_amount, claim_lumpsum.event_date_range,\n> claim_lumpsum.contract_lumpsum_id, claim_lumpsum.claimant_contract_name,\n> claim_lumpsum.hint_contract_lumpsum_description\n> -> Hash (cost=3.07..3.07 rows=2 width=16)\n> (actual time=0.073..0.073 rows=2 loops=1)\n> Output: claim.claim_id\n> Buckets: 1024 Batches: 1 Memory Usage:\n> 1kB\n> -> Index Only Scan using claim_pkey on\n> client_pinnacle.claim (cost=0.42..3.07 rows=2 width=16) (actual\n> time=0.048..0.070 rows=2 loops=1)\n> Output: claim.claim_id\n> Index Cond: (claim.claim_id = ANY\n> ('{324d2af8-46b3-45ad-b56a-0a49d0345653,e8a38718-7997-4304-bbfa-138deb84aa82}'::uuid[]))\n> Heap Fetches: 0\n> Planning time: 1.020 ms\n> Execution time: 5459.461 ms\n\n\nPlease let me know if there is any more info I can provide to help figure\nout why it's choosing an undesirable plan with just a slight change in the\nthe clause.\n\nThanks for any help you may be able to provide.\n-Adam\n\nHey all, i've run into a performance problem with one of my queries that I am really not sure what is causing it.Setup info:Postgres version 9.4.4 on Debian 7. 
Server is virtual, with a single core and 512 ram available and ssd storage.Changes to postgresql.conf:maintenance_work_mem = 30MBcheckpoint_completion_target = 0.7effective_cache_size = 352MBwork_mem = 24MBwal_buffers = 4MBcheckpoint_segments = 8shared_buffers = 120MBrandom_page_cost = 1.1The problem:I have a view which sums up detail records to give totals at a header level, and performance is great when I select from it by limiting to a single record, but limiting it to multiple records seems to cause it to choose a bad plan.Example 1: SELECT *FROM claim_totalsWHERE claim_id IN ('e8a38718-7997-4304-bbfa-138deb84aa82')(2 ms)Example 2:SELECT *FROM claim_totalsWHERE claim_id IN ('324d2af8-46b3-45ad-b56a-0a49d0345653', 'e8a38718-7997-4304-bbfa-138deb84aa82')(5460 ms)The view definition is: SELECT claim.claim_id,    COALESCE(lumpsum.lumpsum_count, 0::bigint)::integer AS lumpsum_count,    COALESCE(lumpsum.requested_amount, 0::numeric) AS lumpsum_requested_total,    COALESCE(lumpsum.allowed_amount, 0::numeric) AS lumpsum_allowed_total,    COALESCE(claim_product.product_count, 0::bigint)::integer + COALESCE(claim_adhoc_product.adhoc_product_count, 0::bigint)::integer AS product_count,    COALESCE(claim_product.requested_amount, 0::numeric) + COALESCE(claim_adhoc_product.requested_amount, 0::numeric) AS product_requested_amount,    COALESCE(claim_product.allowed_amount, 0::numeric) + COALESCE(claim_adhoc_product.allowed_amount, 0::numeric) AS product_allowed_amount,    COALESCE(claim_product.requested_amount, 0::numeric) + COALESCE(claim_adhoc_product.requested_amount, 0::numeric) + COALESCE(lumpsum.requested_amount, 0::numeric) AS requested_total,    COALESCE(claim_product.allowed_amount, 0::numeric) + COALESCE(claim_adhoc_product.allowed_amount, 0::numeric) + COALESCE(lumpsum.allowed_amount, 0::numeric) AS allowed_total   FROM claim     LEFT JOIN ( SELECT claim_lumpsum.claim_id,            count(claim_lumpsum.claim_lumpsum_id) AS lumpsum_count,            sum(claim_lumpsum.requested_amount) AS requested_amount,            sum(claim_lumpsum.allowed_amount) AS allowed_amount           FROM claim_lumpsum          GROUP BY claim_lumpsum.claim_id) lumpsum ON lumpsum.claim_id = claim.claim_id     LEFT JOIN ( SELECT claim_product_1.claim_id,            count(claim_product_1.claim_product_id) AS product_count,            sum(claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate) AS requested_amount,            sum(claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate) AS allowed_amount           FROM claim_product claim_product_1          GROUP BY claim_product_1.claim_id) claim_product ON claim_product.claim_id = claim.claim_id     LEFT JOIN ( SELECT claim_adhoc_product_1.claim_id,            count(claim_adhoc_product_1.claim_adhoc_product_id) AS adhoc_product_count,            sum(claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate) AS requested_amount,            sum(claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate) AS allowed_amount           FROM claim_adhoc_product claim_adhoc_product_1          GROUP BY claim_adhoc_product_1.claim_id) claim_adhoc_product ON claim_adhoc_product.claim_id = claim.claim_id;Here are the respective explain / analyze for the two queries above:Example 1:Nested Loop Left Join  (cost=0.97..149.46 rows=2 width=232) (actual time=0.285..0.289 rows=1 loops=1)  Output: claim.claim_id, (COALESCE((count(claim_lumpsum.claim_lumpsum_id)), 
0::bigint))::integer, COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric), COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric), ((COALESCE((count(claim_product_1.claim_product_id)), 0::bigint))::integer + (COALESCE((count(claim_adhoc_product_1.claim_adhoc_product_id)), 0::bigint))::integer), (COALESCE((sum((claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)), (COALESCE((sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)), ((COALESCE((sum((claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)) + COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric)), ((COALESCE((sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)) + COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric))  Join Filter: (claim_lumpsum.claim_id = claim.claim_id)  ->  Nested Loop Left Join  (cost=0.97..135.31 rows=1 width=160) (actual time=0.260..0.264 rows=1 loops=1)        Output: claim.claim_id, (count(claim_product_1.claim_product_id)), (sum((claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate))), (sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate))), (count(claim_adhoc_product_1.claim_adhoc_product_id)), (sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate))), (sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate)))        Join Filter: (claim_adhoc_product_1.claim_id = claim.claim_id)        ->  Nested Loop Left Join  (cost=0.97..122.14 rows=1 width=88) (actual time=0.254..0.256 rows=1 loops=1)              Output: claim.claim_id, (count(claim_product_1.claim_product_id)), (sum((claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate))), (sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate)))              Join Filter: (claim_product_1.claim_id = claim.claim_id)              ->  Index Only Scan using claim_pkey on client_pinnacle.claim  (cost=0.42..1.54 rows=1 width=16) (actual time=0.078..0.079 rows=1 loops=1)                    Output: claim.claim_id                    Index Cond: (claim.claim_id = 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)                    Heap Fetches: 0              ->  GroupAggregate  (cost=0.55..120.58 rows=1 width=54) (actual time=0.163..0.163 rows=1 loops=1)                    Output: claim_product_1.claim_id, count(claim_product_1.claim_product_id), sum((claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate)), sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate))                    Group Key: claim_product_1.claim_id                    ->  Index Scan using claim_product_claim_id_product_id_distributor_company_id_lo_key on client_pinnacle.claim_product claim_product_1  (cost=0.55..118.99 rows=105 width=54) (actual time=0.071..0.091 
rows=12 loops=1)                          Output: claim_product_1.claim_id, claim_product_1.claim_product_id, claim_product_1.rebate_requested_quantity, claim_product_1.rebate_requested_rate, claim_product_1.rebate_allowed_quantity, claim_product_1.rebate_allowed_rate                          Index Cond: (claim_product_1.claim_id = 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)        ->  GroupAggregate  (cost=0.00..13.15 rows=1 width=160) (actual time=0.001..0.001 rows=0 loops=1)              Output: claim_adhoc_product_1.claim_id, count(claim_adhoc_product_1.claim_adhoc_product_id), sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate)), sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate))              Group Key: claim_adhoc_product_1.claim_id              ->  Seq Scan on client_pinnacle.claim_adhoc_product claim_adhoc_product_1  (cost=0.00..13.12 rows=1 width=160) (actual time=0.001..0.001 rows=0 loops=1)                    Output: claim_adhoc_product_1.claim_adhoc_product_id, claim_adhoc_product_1.claim_id, claim_adhoc_product_1.product_name, claim_adhoc_product_1.product_number, claim_adhoc_product_1.uom_type_description, claim_adhoc_product_1.rebate_requested_quantity, claim_adhoc_product_1.rebate_requested_rate, claim_adhoc_product_1.rebate_allowed_quantity, claim_adhoc_product_1.rebate_allowed_rate, claim_adhoc_product_1.claimant_contract_name, claim_adhoc_product_1.resolve_date                    Filter: (claim_adhoc_product_1.claim_id = 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)  ->  GroupAggregate  (cost=0.00..14.05 rows=2 width=96) (actual time=0.001..0.001 rows=0 loops=1)        Output: claim_lumpsum.claim_id, count(claim_lumpsum.claim_lumpsum_id), sum(claim_lumpsum.requested_amount), sum(claim_lumpsum.allowed_amount)        Group Key: claim_lumpsum.claim_id        ->  Seq Scan on client_pinnacle.claim_lumpsum  (cost=0.00..14.00 rows=2 width=96) (actual time=0.000..0.000 rows=0 loops=1)              Output: claim_lumpsum.claim_lumpsum_id, claim_lumpsum.claim_id, claim_lumpsum.lumpsum_id, claim_lumpsum.requested_amount, claim_lumpsum.allowed_amount, claim_lumpsum.event_date_range, claim_lumpsum.contract_lumpsum_id, claim_lumpsum.claimant_contract_name, claim_lumpsum.hint_contract_lumpsum_description              Filter: (claim_lumpsum.claim_id = 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)Planning time: 6.336 msExecution time: 0.753 msExample 2:Hash Right Join  (cost=81278.79..81674.85 rows=2 width=232) (actual time=5195.972..5458.916 rows=2 loops=1)  Output: claim.claim_id, (COALESCE((count(claim_lumpsum.claim_lumpsum_id)), 0::bigint))::integer, COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric), COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric), ((COALESCE((count(claim_product_1.claim_product_id)), 0::bigint))::integer + (COALESCE((count(claim_adhoc_product_1.claim_adhoc_product_id)), 0::bigint))::integer), (COALESCE((sum((claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)), (COALESCE((sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)), ((COALESCE((sum((claim_product_1.rebate_requested_quantity * 
claim_product_1.rebate_requested_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)) + COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric)), ((COALESCE((sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate))), 0::numeric) + COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)) + COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric))  Hash Cond: (claim_product_1.claim_id = claim.claim_id)  ->  HashAggregate  (cost=81231.48..81438.09 rows=13774 width=54) (actual time=5182.546..5405.990 rows=95763 loops=1)        Output: claim_product_1.claim_id, count(claim_product_1.claim_product_id), sum((claim_product_1.rebate_requested_quantity * claim_product_1.rebate_requested_rate)), sum((claim_product_1.rebate_allowed_quantity * claim_product_1.rebate_allowed_rate))        Group Key: claim_product_1.claim_id        ->  Seq Scan on client_pinnacle.claim_product claim_product_1  (cost=0.00..55253.59 rows=1731859 width=54) (actual time=0.020..1684.826 rows=1731733 loops=1)              Output: claim_product_1.claim_id, claim_product_1.claim_product_id, claim_product_1.rebate_requested_quantity, claim_product_1.rebate_requested_rate, claim_product_1.rebate_allowed_quantity, claim_product_1.rebate_allowed_rate  ->  Hash  (cost=47.29..47.29 rows=2 width=160) (actual time=0.110..0.110 rows=2 loops=1)        Output: claim.claim_id, (count(claim_lumpsum.claim_lumpsum_id)), (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount)), (count(claim_adhoc_product_1.claim_adhoc_product_id)), (sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate))), (sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate)))        Buckets: 1024  Batches: 1  Memory Usage: 1kB        ->  Hash Right Join  (cost=41.53..47.29 rows=2 width=160) (actual time=0.105..0.108 rows=2 loops=1)              Output: claim.claim_id, (count(claim_lumpsum.claim_lumpsum_id)), (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount)), (count(claim_adhoc_product_1.claim_adhoc_product_id)), (sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate))), (sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate)))              Hash Cond: (claim_adhoc_product_1.claim_id = claim.claim_id)              ->  HashAggregate  (cost=16.25..19.25 rows=200 width=160) (actual time=0.001..0.001 rows=0 loops=1)                    Output: claim_adhoc_product_1.claim_id, count(claim_adhoc_product_1.claim_adhoc_product_id), sum((claim_adhoc_product_1.rebate_requested_quantity * claim_adhoc_product_1.rebate_requested_rate)), sum((claim_adhoc_product_1.rebate_allowed_quantity * claim_adhoc_product_1.rebate_allowed_rate))                    Group Key: claim_adhoc_product_1.claim_id                    ->  Seq Scan on client_pinnacle.claim_adhoc_product claim_adhoc_product_1  (cost=0.00..12.50 rows=250 width=160) (actual time=0.001..0.001 rows=0 loops=1)                          Output: claim_adhoc_product_1.claim_adhoc_product_id, claim_adhoc_product_1.claim_id, claim_adhoc_product_1.product_name, claim_adhoc_product_1.product_number, claim_adhoc_product_1.uom_type_description, claim_adhoc_product_1.rebate_requested_quantity, claim_adhoc_product_1.rebate_requested_rate, 
claim_adhoc_product_1.rebate_allowed_quantity, claim_adhoc_product_1.rebate_allowed_rate, claim_adhoc_product_1.claimant_contract_name, claim_adhoc_product_1.resolve_date              ->  Hash  (cost=25.25..25.25 rows=2 width=88) (actual time=0.093..0.093 rows=2 loops=1)                    Output: claim.claim_id, (count(claim_lumpsum.claim_lumpsum_id)), (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount))                    Buckets: 1024  Batches: 1  Memory Usage: 1kB                    ->  Hash Right Join  (cost=19.49..25.25 rows=2 width=88) (actual time=0.088..0.092 rows=2 loops=1)                          Output: claim.claim_id, (count(claim_lumpsum.claim_lumpsum_id)), (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount))                          Hash Cond: (claim_lumpsum.claim_id = claim.claim_id)                          ->  HashAggregate  (cost=16.40..19.40 rows=200 width=96) (actual time=0.003..0.003 rows=0 loops=1)                                Output: claim_lumpsum.claim_id, count(claim_lumpsum.claim_lumpsum_id), sum(claim_lumpsum.requested_amount), sum(claim_lumpsum.allowed_amount)                                Group Key: claim_lumpsum.claim_id                                ->  Seq Scan on client_pinnacle.claim_lumpsum  (cost=0.00..13.20 rows=320 width=96) (actual time=0.001..0.001 rows=0 loops=1)                                      Output: claim_lumpsum.claim_lumpsum_id, claim_lumpsum.claim_id, claim_lumpsum.lumpsum_id, claim_lumpsum.requested_amount, claim_lumpsum.allowed_amount, claim_lumpsum.event_date_range, claim_lumpsum.contract_lumpsum_id, claim_lumpsum.claimant_contract_name, claim_lumpsum.hint_contract_lumpsum_description                          ->  Hash  (cost=3.07..3.07 rows=2 width=16) (actual time=0.073..0.073 rows=2 loops=1)                                Output: claim.claim_id                                Buckets: 1024  Batches: 1  Memory Usage: 1kB                                ->  Index Only Scan using claim_pkey on client_pinnacle.claim  (cost=0.42..3.07 rows=2 width=16) (actual time=0.048..0.070 rows=2 loops=1)                                      Output: claim.claim_id                                      Index Cond: (claim.claim_id = ANY ('{324d2af8-46b3-45ad-b56a-0a49d0345653,e8a38718-7997-4304-bbfa-138deb84aa82}'::uuid[]))                                      Heap Fetches: 0Planning time: 1.020 msExecution time: 5459.461 msPlease let me know if there is any more info I can provide to help figure out why it's choosing an undesirable plan with just a slight change in the the clause.Thanks for any help you may be able to provide.-Adam", "msg_date": "Sun, 17 Jan 2016 16:41:24 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Query order of magnitude slower with slightly different where clause" }, { "msg_contents": "On 18 January 2016 at 10:41, Adam Brusselback <[email protected]>\nwrote:\n\n> Hey all, i've run into a performance problem with one of my queries that I\n> am really not sure what is causing it.\n>\n> Setup info:\n> Postgres version 9.4.4 on Debian 7. 
Server is virtual, with a single core\n> and 512 ram available and ssd storage.\n>\n> Changes to postgresql.conf:\n> maintenance_work_mem = 30MB\n> checkpoint_completion_target = 0.7\n> effective_cache_size = 352MB\n> work_mem = 24MB\n> wal_buffers = 4MB\n> checkpoint_segments = 8\n> shared_buffers = 120MB\n> random_page_cost = 1.1\n>\n> The problem:\n>\n> I have a view which sums up detail records to give totals at a header\n> level, and performance is great when I select from it by limiting to a\n> single record, but limiting it to multiple records seems to cause it to\n> choose a bad plan.\n>\n> Example 1:\n>\n>> SELECT *\n>> FROM claim_totals\n>> WHERE claim_id IN ('e8a38718-7997-4304-bbfa-138deb84aa82')\n>> (2 ms)\n>\n>\n> Example 2:\n>\n>> SELECT *\n>> FROM claim_totals\n>> WHERE claim_id IN ('324d2af8-46b3-45ad-b56a-0a49d0345653',\n>> 'e8a38718-7997-4304-bbfa-138deb84aa82')\n>> (5460 ms)\n>\n>\n> The view definition is:\n>\n> SELECT claim.claim_id,\n>> COALESCE(lumpsum.lumpsum_count, 0::bigint)::integer AS lumpsum_count,\n>> COALESCE(lumpsum.requested_amount, 0::numeric) AS\n>> lumpsum_requested_total,\n>> COALESCE(lumpsum.allowed_amount, 0::numeric) AS lumpsum_allowed_total,\n>> COALESCE(claim_product.product_count, 0::bigint)::integer +\n>> COALESCE(claim_adhoc_product.adhoc_product_count, 0::bigint)::integer AS\n>> product_count,\n>> COALESCE(claim_product.requested_amount, 0::numeric) +\n>> COALESCE(claim_adhoc_product.requested_amount, 0::numeric) AS\n>> product_requested_amount,\n>> COALESCE(claim_product.allowed_amount, 0::numeric) +\n>> COALESCE(claim_adhoc_product.allowed_amount, 0::numeric) AS\n>> product_allowed_amount,\n>> COALESCE(claim_product.requested_amount, 0::numeric) +\n>> COALESCE(claim_adhoc_product.requested_amount, 0::numeric) +\n>> COALESCE(lumpsum.requested_amount, 0::numeric) AS requested_total,\n>> COALESCE(claim_product.allowed_amount, 0::numeric) +\n>> COALESCE(claim_adhoc_product.allowed_amount, 0::numeric) +\n>> COALESCE(lumpsum.allowed_amount, 0::numeric) AS allowed_total\n>> FROM claim\n>> LEFT JOIN ( SELECT claim_lumpsum.claim_id,\n>> count(claim_lumpsum.claim_lumpsum_id) AS lumpsum_count,\n>> sum(claim_lumpsum.requested_amount) AS requested_amount,\n>> sum(claim_lumpsum.allowed_amount) AS allowed_amount\n>> FROM claim_lumpsum\n>> GROUP BY claim_lumpsum.claim_id) lumpsum ON lumpsum.claim_id =\n>> claim.claim_id\n>> LEFT JOIN ( SELECT claim_product_1.claim_id,\n>> count(claim_product_1.claim_product_id) AS product_count,\n>> sum(claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate) AS requested_amount,\n>> sum(claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate) AS allowed_amount\n>> FROM claim_product claim_product_1\n>> GROUP BY claim_product_1.claim_id) claim_product ON\n>> claim_product.claim_id = claim.claim_id\n>> LEFT JOIN ( SELECT claim_adhoc_product_1.claim_id,\n>> count(claim_adhoc_product_1.claim_adhoc_product_id) AS\n>> adhoc_product_count,\n>> sum(claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate) AS requested_amount,\n>> sum(claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate) AS allowed_amount\n>> FROM claim_adhoc_product claim_adhoc_product_1\n>> GROUP BY claim_adhoc_product_1.claim_id) claim_adhoc_product ON\n>> claim_adhoc_product.claim_id = claim.claim_id;\n>\n>\n> Here are the respective explain / analyze for the two queries above:\n>\n> Example 1:\n>\n>> Nested Loop Left Join 
(cost=0.97..149.46 rows=2 width=232) (actual\n>> time=0.285..0.289 rows=1 loops=1)\n>> Output: claim.claim_id,\n>> (COALESCE((count(claim_lumpsum.claim_lumpsum_id)), 0::bigint))::integer,\n>> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric),\n>> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric),\n>> ((COALESCE((count(claim_product_1.claim_product_id)), 0::bigint))::integer\n>> + (COALESCE((count(claim_adhoc_product_1.claim_adhoc_product_id)),\n>> 0::bigint))::integer),\n>> (COALESCE((sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)),\n>> (COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)),\n>> ((COALESCE((sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)) +\n>> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric)),\n>> ((COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)) +\n>> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric))\n>> Join Filter: (claim_lumpsum.claim_id = claim.claim_id)\n>> -> Nested Loop Left Join (cost=0.97..135.31 rows=1 width=160) (actual\n>> time=0.260..0.264 rows=1 loops=1)\n>> Output: claim.claim_id,\n>> (count(claim_product_1.claim_product_id)),\n>> (sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate))),\n>> (sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate))),\n>> (count(claim_adhoc_product_1.claim_adhoc_product_id)),\n>> (sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate))),\n>> (sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate)))\n>> Join Filter: (claim_adhoc_product_1.claim_id = claim.claim_id)\n>> -> Nested Loop Left Join (cost=0.97..122.14 rows=1 width=88)\n>> (actual time=0.254..0.256 rows=1 loops=1)\n>> Output: claim.claim_id,\n>> (count(claim_product_1.claim_product_id)),\n>> (sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate))),\n>> (sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate)))\n>> Join Filter: (claim_product_1.claim_id = claim.claim_id)\n>> -> Index Only Scan using claim_pkey on\n>> client_pinnacle.claim (cost=0.42..1.54 rows=1 width=16) (actual\n>> time=0.078..0.079 rows=1 loops=1)\n>> Output: claim.claim_id\n>> Index Cond: (claim.claim_id =\n>> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n>> Heap Fetches: 0\n>> -> GroupAggregate (cost=0.55..120.58 rows=1 width=54)\n>> (actual time=0.163..0.163 rows=1 loops=1)\n>> Output: claim_product_1.claim_id,\n>> count(claim_product_1.claim_product_id),\n>> sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate)),\n>> sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate))\n>> Group Key: 
claim_product_1.claim_id\n>> -> Index Scan using\n>> claim_product_claim_id_product_id_distributor_company_id_lo_key on\n>> client_pinnacle.claim_product claim_product_1 (cost=0.55..118.99 rows=105\n>> width=54) (actual time=0.071..0.091 rows=12 loops=1)\n>> Output: claim_product_1.claim_id,\n>> claim_product_1.claim_product_id,\n>> claim_product_1.rebate_requested_quantity,\n>> claim_product_1.rebate_requested_rate,\n>> claim_product_1.rebate_allowed_quantity, claim_product_1.rebate_allowed_rate\n>> Index Cond: (claim_product_1.claim_id =\n>> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n>> -> GroupAggregate (cost=0.00..13.15 rows=1 width=160) (actual\n>> time=0.001..0.001 rows=0 loops=1)\n>> Output: claim_adhoc_product_1.claim_id,\n>> count(claim_adhoc_product_1.claim_adhoc_product_id),\n>> sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate)),\n>> sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate))\n>> Group Key: claim_adhoc_product_1.claim_id\n>> -> Seq Scan on client_pinnacle.claim_adhoc_product\n>> claim_adhoc_product_1 (cost=0.00..13.12 rows=1 width=160) (actual\n>> time=0.001..0.001 rows=0 loops=1)\n>> Output: claim_adhoc_product_1.claim_adhoc_product_id,\n>> claim_adhoc_product_1.claim_id, claim_adhoc_product_1.product_name,\n>> claim_adhoc_product_1.product_number,\n>> claim_adhoc_product_1.uom_type_description,\n>> claim_adhoc_product_1.rebate_requested_quantity,\n>> claim_adhoc_product_1.rebate_requested_rate,\n>> claim_adhoc_product_1.rebate_allowed_quantity,\n>> claim_adhoc_product_1.rebate_allowed_rate,\n>> claim_adhoc_product_1.claimant_contract_name,\n>> claim_adhoc_product_1.resolve_date\n>> Filter: (claim_adhoc_product_1.claim_id =\n>> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n>> -> GroupAggregate (cost=0.00..14.05 rows=2 width=96) (actual\n>> time=0.001..0.001 rows=0 loops=1)\n>> Output: claim_lumpsum.claim_id,\n>> count(claim_lumpsum.claim_lumpsum_id), sum(claim_lumpsum.requested_amount),\n>> sum(claim_lumpsum.allowed_amount)\n>> Group Key: claim_lumpsum.claim_id\n>> -> Seq Scan on client_pinnacle.claim_lumpsum (cost=0.00..14.00\n>> rows=2 width=96) (actual time=0.000..0.000 rows=0 loops=1)\n>> Output: claim_lumpsum.claim_lumpsum_id,\n>> claim_lumpsum.claim_id, claim_lumpsum.lumpsum_id,\n>> claim_lumpsum.requested_amount, claim_lumpsum.allowed_amount,\n>> claim_lumpsum.event_date_range, claim_lumpsum.contract_lumpsum_id,\n>> claim_lumpsum.claimant_contract_name,\n>> claim_lumpsum.hint_contract_lumpsum_description\n>> Filter: (claim_lumpsum.claim_id =\n>> 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid)\n>> Planning time: 6.336 ms\n>> Execution time: 0.753 ms\n>\n>\n> Example 2:\n>\n>> Hash Right Join (cost=81278.79..81674.85 rows=2 width=232) (actual\n>> time=5195.972..5458.916 rows=2 loops=1)\n>> Output: claim.claim_id,\n>> (COALESCE((count(claim_lumpsum.claim_lumpsum_id)), 0::bigint))::integer,\n>> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric),\n>> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric),\n>> ((COALESCE((count(claim_product_1.claim_product_id)), 0::bigint))::integer\n>> + (COALESCE((count(claim_adhoc_product_1.claim_adhoc_product_id)),\n>> 0::bigint))::integer),\n>> (COALESCE((sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)),\n>> 
(COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)),\n>> ((COALESCE((sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate))), 0::numeric)) +\n>> COALESCE((sum(claim_lumpsum.requested_amount)), 0::numeric)),\n>> ((COALESCE((sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate))), 0::numeric) +\n>> COALESCE((sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate))), 0::numeric)) +\n>> COALESCE((sum(claim_lumpsum.allowed_amount)), 0::numeric))\n>> Hash Cond: (claim_product_1.claim_id = claim.claim_id)\n>> -> HashAggregate (cost=81231.48..81438.09 rows=13774 width=54)\n>> (actual time=5182.546..5405.990 rows=95763 loops=1)\n>> Output: claim_product_1.claim_id,\n>> count(claim_product_1.claim_product_id),\n>> sum((claim_product_1.rebate_requested_quantity *\n>> claim_product_1.rebate_requested_rate)),\n>> sum((claim_product_1.rebate_allowed_quantity *\n>> claim_product_1.rebate_allowed_rate))\n>> Group Key: claim_product_1.claim_id\n>> -> Seq Scan on client_pinnacle.claim_product claim_product_1\n>> (cost=0.00..55253.59 rows=1731859 width=54) (actual time=0.020..1684.826\n>> rows=1731733 loops=1)\n>> Output: claim_product_1.claim_id,\n>> claim_product_1.claim_product_id,\n>> claim_product_1.rebate_requested_quantity,\n>> claim_product_1.rebate_requested_rate,\n>> claim_product_1.rebate_allowed_quantity, claim_product_1.rebate_allowed_rate\n>> -> Hash (cost=47.29..47.29 rows=2 width=160) (actual\n>> time=0.110..0.110 rows=2 loops=1)\n>> Output: claim.claim_id, (count(claim_lumpsum.claim_lumpsum_id)),\n>> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount)),\n>> (count(claim_adhoc_product_1.claim_adhoc_product_id)),\n>> (sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate))),\n>> (sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate)))\n>> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n>> -> Hash Right Join (cost=41.53..47.29 rows=2 width=160) (actual\n>> time=0.105..0.108 rows=2 loops=1)\n>> Output: claim.claim_id,\n>> (count(claim_lumpsum.claim_lumpsum_id)),\n>> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount)),\n>> (count(claim_adhoc_product_1.claim_adhoc_product_id)),\n>> (sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate))),\n>> (sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate)))\n>> Hash Cond: (claim_adhoc_product_1.claim_id = claim.claim_id)\n>> -> HashAggregate (cost=16.25..19.25 rows=200 width=160)\n>> (actual time=0.001..0.001 rows=0 loops=1)\n>> Output: claim_adhoc_product_1.claim_id,\n>> count(claim_adhoc_product_1.claim_adhoc_product_id),\n>> sum((claim_adhoc_product_1.rebate_requested_quantity *\n>> claim_adhoc_product_1.rebate_requested_rate)),\n>> sum((claim_adhoc_product_1.rebate_allowed_quantity *\n>> claim_adhoc_product_1.rebate_allowed_rate))\n>> Group Key: claim_adhoc_product_1.claim_id\n>> -> Seq Scan on client_pinnacle.claim_adhoc_product\n>> claim_adhoc_product_1 (cost=0.00..12.50 rows=250 
width=160) (actual\n>> time=0.001..0.001 rows=0 loops=1)\n>> Output:\n>> claim_adhoc_product_1.claim_adhoc_product_id,\n>> claim_adhoc_product_1.claim_id, claim_adhoc_product_1.product_name,\n>> claim_adhoc_product_1.product_number,\n>> claim_adhoc_product_1.uom_type_description,\n>> claim_adhoc_product_1.rebate_requested_quantity,\n>> claim_adhoc_product_1.rebate_requested_rate,\n>> claim_adhoc_product_1.rebate_allowed_quantity,\n>> claim_adhoc_product_1.rebate_allowed_rate,\n>> claim_adhoc_product_1.claimant_contract_name,\n>> claim_adhoc_product_1.resolve_date\n>> -> Hash (cost=25.25..25.25 rows=2 width=88) (actual\n>> time=0.093..0.093 rows=2 loops=1)\n>> Output: claim.claim_id,\n>> (count(claim_lumpsum.claim_lumpsum_id)),\n>> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount))\n>> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n>> -> Hash Right Join (cost=19.49..25.25 rows=2\n>> width=88) (actual time=0.088..0.092 rows=2 loops=1)\n>> Output: claim.claim_id,\n>> (count(claim_lumpsum.claim_lumpsum_id)),\n>> (sum(claim_lumpsum.requested_amount)), (sum(claim_lumpsum.allowed_amount))\n>> Hash Cond: (claim_lumpsum.claim_id =\n>> claim.claim_id)\n>> -> HashAggregate (cost=16.40..19.40 rows=200\n>> width=96) (actual time=0.003..0.003 rows=0 loops=1)\n>> Output: claim_lumpsum.claim_id,\n>> count(claim_lumpsum.claim_lumpsum_id), sum(claim_lumpsum.requested_amount),\n>> sum(claim_lumpsum.allowed_amount)\n>> Group Key: claim_lumpsum.claim_id\n>> -> Seq Scan on\n>> client_pinnacle.claim_lumpsum (cost=0.00..13.20 rows=320 width=96) (actual\n>> time=0.001..0.001 rows=0 loops=1)\n>> Output:\n>> claim_lumpsum.claim_lumpsum_id, claim_lumpsum.claim_id,\n>> claim_lumpsum.lumpsum_id, claim_lumpsum.requested_amount,\n>> claim_lumpsum.allowed_amount, claim_lumpsum.event_date_range,\n>> claim_lumpsum.contract_lumpsum_id, claim_lumpsum.claimant_contract_name,\n>> claim_lumpsum.hint_contract_lumpsum_description\n>> -> Hash (cost=3.07..3.07 rows=2 width=16)\n>> (actual time=0.073..0.073 rows=2 loops=1)\n>> Output: claim.claim_id\n>> Buckets: 1024 Batches: 1 Memory Usage:\n>> 1kB\n>> -> Index Only Scan using claim_pkey on\n>> client_pinnacle.claim (cost=0.42..3.07 rows=2 width=16) (actual\n>> time=0.048..0.070 rows=2 loops=1)\n>> Output: claim.claim_id\n>> Index Cond: (claim.claim_id = ANY\n>> ('{324d2af8-46b3-45ad-b56a-0a49d0345653,e8a38718-7997-4304-bbfa-138deb84aa82}'::uuid[]))\n>> Heap Fetches: 0\n>> Planning time: 1.020 ms\n>> Execution time: 5459.461 ms\n>\n>\n> Please let me know if there is any more info I can provide to help figure\n> out why it's choosing an undesirable plan with just a slight change in the\n> the clause.\n>\n\nHi Adam,\n\nThis is fairly simple to explain. The reason you see better performance\nwith the singe claim_id is that IN() clauses with a single 1 item are\nconverted to a single equality expression. 
For example: (just using system\ntables so you can try this too, without having to create any special tables)\n\n# explain select * from pg_class where oid in(1);\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29 rows=1\nwidth=219)\n Index Cond: (oid = '1'::oid)\n\nWe get an index scan with the index condition: oid = 1.\n\nIf we have 2 items, then we don't get this.\n\n# explain select * from pg_class where oid in(1,2);\n QUERY PLAN\n---------------------------------------------------------------------------------\n Bitmap Heap Scan on pg_class (cost=8.56..14.03 rows=2 width=219)\n Recheck Cond: (oid = ANY ('{1,2}'::oid[]))\n -> Bitmap Index Scan on pg_class_oid_index (cost=0.00..8.56 rows=2\nwidth=0)\n Index Cond: (oid = ANY ('{1,2}'::oid[]))\n(4 rows)\n\nNow I also need to explain that PostgreSQL will currently push ONLY\nequality expressions into other relations. For example, if we write:\n\n# explain select * from pg_class pc inner join pg_attribute pa on pc.oid =\npa.attrelid where pc.oid in(1);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.55..22.63 rows=4 width=422)\n -> Index Scan using pg_class_oid_index on pg_class pc (cost=0.27..8.29\nrows=1 width=223)\n Index Cond: (oid = '1'::oid)\n -> Index Scan using pg_attribute_relid_attnum_index on pg_attribute pa\n (cost=0.28..14.30 rows=4 width=203)\n Index Cond: (attrelid = '1'::oid)\n(5 rows)\n\nYou can see that I only put pg_class.oid = 1 in the query, but internally\nthe query planner also added the pg_attribute.attrelid = 1. It was able to\ndo this due to the join condition dictating that pc.oid = pa.attrelid,\ntherefore this will always be equal, and since pc.oid = 1, then pa.attrelid\nmust also be 1.\n\nIf we have 2 items in the IN() clause, then this no longer happens:\n\n# explain select * from pg_class pc inner join pg_attribute pa on pc.oid =\npa.attrelid where pc.oid in(1,2);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=8.84..54.84 rows=15 width=422)\n -> Bitmap Heap Scan on pg_class pc (cost=8.56..14.03 rows=2 width=223)\n Recheck Cond: (oid = ANY ('{1,2}'::oid[]))\n -> Bitmap Index Scan on pg_class_oid_index (cost=0.00..8.56\nrows=2 width=0)\n Index Cond: (oid = ANY ('{1,2}'::oid[]))\n -> Index Scan using pg_attribute_relid_attnum_index on pg_attribute pa\n (cost=0.28..20.33 rows=8 width=203)\n Index Cond: (attrelid = pc.oid)\n(7 rows)\n\nIn your case the claim_id = 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid\nwas pushed down into the subqueries, thus giving them less work to do, and\nalso the flexibility of using indexes on claim_id in the tables contained\nwithin the subqueries. PostgreSQL currently does not push any inequality\npredicates down at all.\n\nA few months ago I did a little bit of work to try and lift this\nrestriction, although I only made it cover the >=, >, < and <= operators as\na first measure.\n\nDetails here:\nhttp://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com#CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com\n\nIf you didn't have the VIEW, you could manually push these predicates into\neach subquery. 
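To illustrate, a rough and untested sketch of that hand-written query
(trimmed to just the claim_product total, and reusing the column names from
the view definition above) would be:

SELECT claim.claim_id,
       COALESCE(cp.allowed_amount, 0::numeric) AS product_allowed_amount
FROM claim
LEFT JOIN (SELECT claim_id,
                  sum(rebate_allowed_quantity * rebate_allowed_rate) AS allowed_amount
           FROM claim_product
           WHERE claim_id IN ('324d2af8-46b3-45ad-b56a-0a49d0345653',
                              'e8a38718-7997-4304-bbfa-138deb84aa82')  -- predicate repeated by hand
           GROUP BY claim_id) cp ON cp.claim_id = claim.claim_id
WHERE claim.claim_id IN ('324d2af8-46b3-45ad-b56a-0a49d0345653',
                         'e8a38718-7997-4304-bbfa-138deb84aa82');

Because the IN() list is written inside the derived table itself, the
aggregate only has to visit the rows for those claims (it can use the
existing index on claim_product.claim_id) instead of aggregating all ~1.7
million rows of claim_product first; the lumpsum and adhoc subqueries would
get the same treatment.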
However this is not really possible to do with the VIEW.\nPerhaps something could be done with a function and using dynamic SQL to\ncraft a query manually, or you could just get rid of the view and have the\napplication build the query. If that's not an option then maybe you could\nresponse to the thread above to mention that you've been hit by this\nproblem and would +1 some solution to fix it, and perhaps cross link to\nthis thread. I did have a little bit of a hard time in convincing people\nthat this was in fact a fairly common problem in the above thread, so it\nwould be nice to see people who have hit this problem respond to that.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n", "msg_date": "Mon, 18 Jan 2016 11:42:23 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query order of magnitude slower with slightly different\n where clause" } ]
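As a concrete, untested sketch of the "function + dynamic SQL" idea mentioned
in the reply above (the function name, parameter name, and the single
returned total are illustrative only, not part of the original schema):

CREATE OR REPLACE FUNCTION claim_totals_for(p_claim_ids uuid[])
RETURNS TABLE (claim_id uuid, product_allowed_amount numeric)
LANGUAGE plpgsql AS
$$
BEGIN
    -- The claim_id filter is written inside the aggregate subquery itself,
    -- so nothing has to be pushed down through the outer join.
    RETURN QUERY EXECUTE
        'SELECT c.claim_id,
                COALESCE(cp.allowed_amount, 0::numeric)
         FROM claim c
         LEFT JOIN (SELECT claim_id,
                           sum(rebate_allowed_quantity * rebate_allowed_rate) AS allowed_amount
                    FROM claim_product
                    WHERE claim_id = ANY ($1)
                    GROUP BY claim_id) cp ON cp.claim_id = c.claim_id
         WHERE c.claim_id = ANY ($1)'
    USING p_claim_ids;
END;
$$;

-- Example call:
SELECT * FROM claim_totals_for(ARRAY['324d2af8-46b3-45ad-b56a-0a49d0345653',
                                     'e8a38718-7997-4304-bbfa-138deb84aa82']::uuid[]);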
[ { "msg_contents": "I am running Postgresql on a Windows Server 2008 server. I have noticed\nthat queries have very high planning times now and then. Planning times go\ndown for the same query immediately after the query runs the first time,\nbut then go up again after if the query is not re-run for 5 minutes or so.\n\nI am not able to find any specific information in the documentation that\nwould explain the issue or explains how to address it, so am asking for\nadvice here.\n\nHere is an example.\n\nexplain analyze\nselect * from message\nlimit 1\n\n\"Limit (cost=0.00..0.44 rows=1 width=1517) (actual time=0.009..0.009\nrows=1 loops=1)\"\n\" -> Seq Scan on message (cost=0.00..28205.48 rows=64448 width=1517)\n(actual time=0.007..0.007 rows=1 loops=1)\"\n\"Planning time: 3667.361 ms\"\n\"Execution time: 1.652 ms\"\n\nAs you can see the query is simple and does not justify 3 seconds of\nplanning time. It would appear that there is an issue with my configuration\nbut I am not able to find anything that looks out of sorts in the query\nplanning configuration variables. Any advice about what I should be looking\nfor to fix this would be appreciated.\n\nI am running Postgresql on a Windows Server 2008 server. I have noticed that queries have very high planning times now and then. Planning times go down for the same query immediately after the query runs the first time, but then go up again after if the query is not re-run for 5 minutes or so.I am not able to find any specific information in the documentation that would explain the issue or explains how to address it, so am asking for advice here.Here is an example.explain analyze select * from message limit 1\"Limit  (cost=0.00..0.44 rows=1 width=1517) (actual time=0.009..0.009 rows=1 loops=1)\"\"  ->  Seq Scan on message  (cost=0.00..28205.48 rows=64448 width=1517) (actual time=0.007..0.007 rows=1 loops=1)\"\"Planning time: 3667.361 ms\"\"Execution time: 1.652 ms\"As you can see the query is simple and does not justify 3 seconds of planning time. It would appear that there is an issue with my configuration but I am not able to find anything that looks out of sorts in the query planning configuration variables. Any advice about what I should be looking for to fix this would be appreciated.", "msg_date": "Thu, 21 Jan 2016 16:30:03 -0800", "msg_from": "Phil S <[email protected]>", "msg_from_op": true, "msg_subject": "High Planning Time" }, { "msg_contents": "Phil S wrote:\r\n> I am running Postgresql on a Windows Server 2008 server. I have noticed that queries have very high\r\n> planning times now and then. Planning times go down for the same query immediately after the query\r\n> runs the first time, but then go up again after if the query is not re-run for 5 minutes or so.\r\n> \r\n> I am not able to find any specific information in the documentation that would explain the issue or\r\n> explains how to address it, so am asking for advice here.\r\n> \r\n> Here is an example.\r\n> \r\n> explain analyze\r\n> select * from message\r\n> limit 1\r\n> \r\n> \"Limit (cost=0.00..0.44 rows=1 width=1517) (actual time=0.009..0.009 rows=1 loops=1)\"\r\n> \" -> Seq Scan on message (cost=0.00..28205.48 rows=64448 width=1517) (actual time=0.007..0.007\r\n> rows=1 loops=1)\"\r\n> \"Planning time: 3667.361 ms\"\r\n> \"Execution time: 1.652 ms\"\r\n> \r\n> As you can see the query is simple and does not justify 3 seconds of planning time. 
It would appear\r\n> that there is an issue with my configuration but I am not able to find anything that looks out of\r\n> sorts in the query planning configuration variables. Any advice about what I should be looking for to\r\n> fix this would be appreciated.\r\n\r\nThis is odd.\r\n\r\nCould you profile the backend during such a statement to see where the time is spent?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Jan 2016 07:59:59 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Planning Time" }, { "msg_contents": "It can be because of catalog stats file leak. I had it on some older\nPostgreSQL version when tests were rolling back schema changes often. The\nfile grew to some enormous size and as soon as it got out of cache - it\nwould be very slow to access catalog. I had just to kill the file.\n\nBest regards, Vitalii Tymchyshyn\n\nЧт, 21 січ. 2016 19:30 Phil S <[email protected]> пише:\n\n> I am running Postgresql on a Windows Server 2008 server. I have noticed\n> that queries have very high planning times now and then. Planning times go\n> down for the same query immediately after the query runs the first time,\n> but then go up again after if the query is not re-run for 5 minutes or so.\n>\n> I am not able to find any specific information in the documentation that\n> would explain the issue or explains how to address it, so am asking for\n> advice here.\n>\n> Here is an example.\n>\n> explain analyze\n> select * from message\n> limit 1\n>\n> \"Limit (cost=0.00..0.44 rows=1 width=1517) (actual time=0.009..0.009\n> rows=1 loops=1)\"\n> \" -> Seq Scan on message (cost=0.00..28205.48 rows=64448 width=1517)\n> (actual time=0.007..0.007 rows=1 loops=1)\"\n> \"Planning time: 3667.361 ms\"\n> \"Execution time: 1.652 ms\"\n>\n> As you can see the query is simple and does not justify 3 seconds of\n> planning time. It would appear that there is an issue with my configuration\n> but I am not able to find anything that looks out of sorts in the query\n> planning configuration variables. Any advice about what I should be looking\n> for to fix this would be appreciated.\n>\n>\n\nIt can be because of catalog stats file leak. I had it on some older PostgreSQL version when tests were rolling back schema changes often. The file grew to some enormous size and as soon as it got out of cache - it would be very slow to access catalog. I had just to kill the file.Best regards, Vitalii TymchyshynЧт, 21 січ. 2016 19:30 Phil S <[email protected]> пише:I am running Postgresql on a Windows Server 2008 server. I have noticed that queries have very high planning times now and then. 
Planning times go down for the same query immediately after the query runs the first time, but then go up again after if the query is not re-run for 5 minutes or so.I am not able to find any specific information in the documentation that would explain the issue or explains how to address it, so am asking for advice here.Here is an example.explain analyze select * from message limit 1\"Limit  (cost=0.00..0.44 rows=1 width=1517) (actual time=0.009..0.009 rows=1 loops=1)\"\"  ->  Seq Scan on message  (cost=0.00..28205.48 rows=64448 width=1517) (actual time=0.007..0.007 rows=1 loops=1)\"\"Planning time: 3667.361 ms\"\"Execution time: 1.652 ms\"As you can see the query is simple and does not justify 3 seconds of planning time. It would appear that there is an issue with my configuration but I am not able to find anything that looks out of sorts in the query planning configuration variables. Any advice about what I should be looking for to fix this would be appreciated.", "msg_date": "Fri, 22 Jan 2016 14:11:11 +0000", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Planning Time" }, { "msg_contents": "Albe Laurenz <[email protected]> writes:\n> Phil S wrote:\n>> explain analyze\n>> select * from message\n>> limit 1\n>> \n>> \"Limit (cost=0.00..0.44 rows=1 width=1517) (actual time=0.009..0.009 rows=1 loops=1)\"\n>> \" -> Seq Scan on message (cost=0.00..28205.48 rows=64448 width=1517) (actual time=0.007..0.007\n>> rows=1 loops=1)\"\n>> \"Planning time: 3667.361 ms\"\n>> \"Execution time: 1.652 ms\"\n>> \n>> As you can see the query is simple and does not justify 3 seconds of planning time. It would appear\n>> that there is an issue with my configuration but I am not able to find anything that looks out of\n>> sorts in the query planning configuration variables. Any advice about what I should be looking for to\n>> fix this would be appreciated.\n\n> This is odd.\n> Could you profile the backend during such a statement to see where the time is spent?\n\n\nI'm wondering about locks. Perhaps turning on log_lock_waits would\nyield useful info.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Jan 2016 09:35:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Planning Time" } ]
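A minimal sketch of the lock check suggested in the last reply above, in case someone wants to reproduce it (this assumes superuser access on 9.4 or later; the thread never confirmed that lock waits were actually the cause of the 3.6 s planning time):

-- log_lock_waits reports any lock wait longer than deadlock_timeout (1 s by default)
ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();

-- while the slow EXPLAIN runs in another session, check for ungranted locks
SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
WHERE NOT granted;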
[ { "msg_contents": "Hi all,\n\nI'm using PostgreSQL 9.4.5 and I have a weird issue.\n\nI have the following three tables:\n\nvisit\n( nb bigint NOT NULL,\n CONSTRAINT visit_pkey PRIMARY KEY (nb)\n)\nwith ~ 750'000 rows\n\ninvoice\n( id bigint NOT NULL,\n CONSTRAINT invoice_pkey PRIMARY KEY (id)\n)\nwith ~ 3'000'000 rows\n\nvisit_invoice\n( invoice_id bigint NOT NULL,\n visit_nb bigint NOT NULL,\n CONSTRAINT visit_invoice_pkey PRIMARY KEY (visit_nb, invoice_id),\n CONSTRAINT fk_vis_inv_inv FOREIGN KEY (invoice_id)\n REFERENCES invoice (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT fk_vis_inv_vis FOREIGN KEY (visit_nb)\n REFERENCES visit (nb) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nwith ~ 3'000'000 rows\n\nWhen I delete a row in visit table, it runs the trigger for constraint\nfk_vis_inv_vis and it seems to use the primary key index on visit_invoice:\n\nexplain analyze DELETE FROM visit WHERE nb = 2000013;\n------------------------------------------------------------------------------------------------------------------------\n Delete on visit (cost=0.42..8.44 rows=1 width=6) (actual\ntime=2.225..2.225 rows=0 loops=1)\n -> Index Scan using visit_pkey on visit (cost=0.42..8.44 rows=1\nwidth=6) (actual time=2.084..2.088 rows=1 loops=1)\n Index Cond: (nb = 2000013)\n Planning time: 0.201 ms\n Trigger for constraint fk_vis_inv_vis: time=0.673 calls=1\n\nBut when I delete a record in the table invoice, it runs the trigger for\nconstraint fk_vis_inv_vis and it doesn't seem to use the primary key index\non visit_invoice:\n\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\n----------------------------------------------------------------------------------------------------------------------------\n Delete on invoice (cost=0.43..8.45 rows=1 width=6) (actual\ntime=0.109..0.109 rows=0 loops=1)\n -> Index Scan using invoice_pkey on invoice (cost=0.43..8.45 rows=1\nwidth=6) (actual time=0.060..0.060 rows=1 loops=1)\n Index Cond: (id = 30140470)\n Planning time: 0.156 ms\n Trigger for constraint fk_vis_inv_inv: time=219.122 calls=1\n\nSo, if I create explicitly an index for the second column (which is already\npart of the primary key), it seems to use it because the trigger execution\nis really faster:\n\nCREATE INDEX fki_vis_inv_inv\n ON visit_invoice\n USING btree\n (invoice_id);\n\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\n----------------------------------------------------------------------------------------------------------------------------\n Delete on invoice (cost=0.43..8.45 rows=1 width=6) (actual\ntime=0.057..0.057 rows=0 loops=1)\n -> Index Scan using invoice_pkey on invoice (cost=0.43..8.45 rows=1\nwidth=6) (actual time=0.039..0.040 rows=1 loops=1)\n Index Cond: (id = 120043571)\n Planning time: 0.074 ms\n Trigger for constraint fk_vis_inv_inv: time=0.349 calls=1\n\nSo I have tried to create the primary key differently, like PRIMARY KEY\n(invoice_id, visit_nb), and in that case it is the opposite, the deletion\nof the invoice is very fast and the deletion of the visit is really slower,\nunless I create a specific index as above.\n\nSo my question is: why is my index on the primary key not used by both\ntriggers and why should I always create an explicit index on the second\ncolumn ?\n\nThanks.\n\nFlorian\n\nHi all,I'm using PostgreSQL 9.4.5 and I have a weird issue.I have the following three tables:visit( nb bigint NOT NULL,  CONSTRAINT visit_pkey PRIMARY KEY (nb))with ~ 750'000 rowsinvoice( id bigint NOT NULL,  CONSTRAINT 
invoice_pkey PRIMARY KEY (id))with ~ 3'000'000 rowsvisit_invoice( invoice_id bigint NOT NULL,  visit_nb bigint NOT NULL,  CONSTRAINT visit_invoice_pkey PRIMARY KEY (visit_nb, invoice_id),  CONSTRAINT fk_vis_inv_inv FOREIGN KEY (invoice_id)      REFERENCES invoice (id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE CASCADE,  CONSTRAINT fk_vis_inv_vis FOREIGN KEY (visit_nb)      REFERENCES visit (nb) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE CASCADE)with ~ 3'000'000 rowsWhen I delete a row in visit table, it runs the trigger for constraint fk_vis_inv_vis and it seems to use the primary key index on visit_invoice:explain analyze DELETE FROM visit WHERE nb = 2000013;------------------------------------------------------------------------------------------------------------------------ Delete on visit  (cost=0.42..8.44 rows=1 width=6) (actual time=2.225..2.225 rows=0 loops=1)   ->  Index Scan using visit_pkey on visit  (cost=0.42..8.44 rows=1 width=6) (actual time=2.084..2.088 rows=1 loops=1)         Index Cond: (nb = 2000013) Planning time: 0.201 ms Trigger for constraint fk_vis_inv_vis: time=0.673 calls=1 But when I delete a record in the table invoice, it runs the trigger for constraint fk_vis_inv_vis and it doesn't seem to use the primary key index on visit_invoice:explain analyze DELETE FROM invoice WHERE id = 30140470;---------------------------------------------------------------------------------------------------------------------------- Delete on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.109..0.109 rows=0 loops=1)   ->  Index Scan using invoice_pkey on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.060..0.060 rows=1 loops=1)         Index Cond: (id = 30140470) Planning time: 0.156 ms Trigger for constraint fk_vis_inv_inv: time=219.122 calls=1So, if I create explicitly an index for the second column (which is already part of the primary key), it seems to use it because the trigger execution is really faster:CREATE INDEX fki_vis_inv_inv  ON visit_invoice  USING btree  (invoice_id);explain analyze DELETE FROM invoice WHERE id = 30140470;---------------------------------------------------------------------------------------------------------------------------- Delete on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.057..0.057 rows=0 loops=1)   ->  Index Scan using invoice_pkey on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.039..0.040 rows=1 loops=1)         Index Cond: (id = 120043571) Planning time: 0.074 ms Trigger for constraint fk_vis_inv_inv: time=0.349 calls=1So I have tried to create the primary key differently, like PRIMARY KEY (invoice_id, visit_nb), and in that case it is the opposite, the deletion of the invoice is very fast and the deletion of the visit is really slower, unless I create a specific index as above.So my question is: why is my index on the primary key not used by both triggers and why should I always create an explicit index on the second column ?Thanks.Florian", "msg_date": "Tue, 26 Jan 2016 16:52:22 +0100", "msg_from": "Florian Gossin <[email protected]>", "msg_from_op": true, "msg_subject": "Primary key index partially used" }, { "msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Florian Gossin\r\nSent: Tuesday, January 26, 2016 10:52 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Primary key index partially used\r\n\r\nHi all,\r\nI'm using PostgreSQL 9.4.5 and I have a weird issue.\r\nI have the following three tables:\r\n\r\nvisit\r\n( nb bigint NOT NULL,\r\n CONSTRAINT 
visit_pkey PRIMARY KEY (nb)\r\n)\r\nwith ~ 750'000 rows\r\n\r\ninvoice\r\n( id bigint NOT NULL,\r\n CONSTRAINT invoice_pkey PRIMARY KEY (id)\r\n)\r\nwith ~ 3'000'000 rows\r\n\r\nvisit_invoice\r\n( invoice_id bigint NOT NULL,\r\n visit_nb bigint NOT NULL,\r\n CONSTRAINT visit_invoice_pkey PRIMARY KEY (visit_nb, invoice_id),\r\n CONSTRAINT fk_vis_inv_inv FOREIGN KEY (invoice_id)\r\n REFERENCES invoice (id) MATCH SIMPLE\r\n ON UPDATE NO ACTION ON DELETE CASCADE,\r\n CONSTRAINT fk_vis_inv_vis FOREIGN KEY (visit_nb)\r\n REFERENCES visit (nb) MATCH SIMPLE\r\n ON UPDATE NO ACTION ON DELETE CASCADE\r\n)\r\nwith ~ 3'000'000 rows\r\n\r\nWhen I delete a row in visit table, it runs the trigger for constraint fk_vis_inv_vis and it seems to use the primary key index on visit_invoice:\r\nexplain analyze DELETE FROM visit WHERE nb = 2000013;\r\n------------------------------------------------------------------------------------------------------------------------\r\n Delete on visit (cost=0.42..8.44 rows=1 width=6) (actual time=2.225..2.225 rows=0 loops=1)\r\n -> Index Scan using visit_pkey on visit (cost=0.42..8.44 rows=1 width=6) (actual time=2.084..2.088 rows=1 loops=1)\r\n Index Cond: (nb = 2000013)\r\n Planning time: 0.201 ms\r\n Trigger for constraint fk_vis_inv_vis: time=0.673 calls=1\r\n\r\nBut when I delete a record in the table invoice, it runs the trigger for constraint fk_vis_inv_vis and it doesn't seem to use the primary key index on visit_invoice:\r\n\r\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.109..0.109 rows=0 loops=1)\r\n -> Index Scan using invoice_pkey on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.060..0.060 rows=1 loops=1)\r\n Index Cond: (id = 30140470)\r\n Planning time: 0.156 ms\r\n Trigger for constraint fk_vis_inv_inv: time=219.122 calls=1\r\nSo, if I create explicitly an index for the second column (which is already part of the primary key), it seems to use it because the trigger execution is really faster:\r\n\r\nCREATE INDEX fki_vis_inv_inv\r\n ON visit_invoice\r\n USING btree\r\n (invoice_id);\r\n\r\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.057..0.057 rows=0 loops=1)\r\n -> Index Scan using invoice_pkey on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.039..0.040 rows=1 loops=1)\r\n Index Cond: (id = 120043571)\r\n Planning time: 0.074 ms\r\n Trigger for constraint fk_vis_inv_inv: time=0.349 calls=1\r\nSo I have tried to create the primary key differently, like PRIMARY KEY (invoice_id, visit_nb), and in that case it is the opposite, the deletion of the invoice is very fast and the deletion of the visit is really slower, unless I create a specific index as above.\r\nSo my question is: why is my index on the primary key not used by both triggers and why should I always create an explicit index on the second column ?\r\nThanks.\r\nFlorian\r\n\r\nFirst, It’s a god (for performance) practice to create indexes on FK columns in “child” table.\r\nSecond, PG is using index only if the first column in concatenated index is used in WHERE clause. 
That is exactly what you observe.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Florian Gossin\nSent: Tuesday, January 26, 2016 10:52 AM\nTo: [email protected]\nSubject: [PERFORM] Primary key index partially used\n \n\n\n\n\n\n\nHi all,\n\nI'm using PostgreSQL 9.4.5 and I have a weird issue.\n\nI have the following three tables:\n\r\nvisit\r\n( nb bigint NOT NULL,\r\n  CONSTRAINT visit_pkey PRIMARY KEY (nb)\r\n)\n\nwith ~ 750'000 rows\n\r\ninvoice\r\n( id bigint NOT NULL,\r\n  CONSTRAINT invoice_pkey PRIMARY KEY (id)\r\n)\n\nwith ~ 3'000'000 rows\n\r\nvisit_invoice\r\n( invoice_id bigint NOT NULL,\r\n  visit_nb bigint NOT NULL,\r\n  CONSTRAINT visit_invoice_pkey PRIMARY KEY (visit_nb, invoice_id),\r\n  CONSTRAINT fk_vis_inv_inv FOREIGN KEY (invoice_id)\r\n      REFERENCES invoice (id) MATCH SIMPLE\r\n      ON UPDATE NO ACTION ON DELETE CASCADE,\r\n  CONSTRAINT fk_vis_inv_vis FOREIGN KEY (visit_nb)\r\n      REFERENCES visit (nb) MATCH SIMPLE\r\n      ON UPDATE NO ACTION ON DELETE CASCADE\r\n)\n\n\nwith ~ 3'000'000 rows\n\n\n \n\nWhen I delete a row in visit table, it runs the trigger for constraint fk_vis_inv_vis and it seems to use the primary key index on visit_invoice:\n\n\n\nexplain analyze DELETE FROM visit WHERE nb = 2000013;\r\n------------------------------------------------------------------------------------------------------------------------\r\n Delete on visit  (cost=0.42..8.44 rows=1 width=6) (actual time=2.225..2.225 rows=0 loops=1)\r\n   ->  Index Scan using visit_pkey on visit  (cost=0.42..8.44 rows=1 width=6) (actual time=2.084..2.088 rows=1 loops=1)\r\n         Index Cond: (nb = 2000013)\r\n Planning time: 0.201 ms\r\n Trigger for constraint fk_vis_inv_vis: time=0.673 calls=1\r\n \n\n\nBut when I delete a record in the table invoice, it runs the trigger for constraint fk_vis_inv_vis and it doesn't seem to use the primary key index on visit_invoice:\n\r\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.109..0.109 rows=0 loops=1)\r\n   ->  Index Scan using invoice_pkey on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.060..0.060 rows=1 loops=1)\r\n         Index Cond: (id = 30140470)\r\n Planning time: 0.156 ms\r\n Trigger for constraint fk_vis_inv_inv: time=219.122 calls=1\n\n\nSo, if I create explicitly an index for the second column (which is already part of the primary key), it seems to use it because the trigger execution is really faster:\n\r\nCREATE INDEX fki_vis_inv_inv\r\n  ON visit_invoice\r\n  USING btree\r\n  (invoice_id);\n\r\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.057..0.057 rows=0 loops=1)\r\n   ->  Index Scan using invoice_pkey on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.039..0.040 rows=1 loops=1)\r\n         Index Cond: (id = 120043571)\r\n Planning time: 0.074 ms\r\n Trigger for constraint fk_vis_inv_inv: time=0.349 calls=1\n\n\nSo I have tried to create the primary key differently, like PRIMARY KEY (invoice_id, visit_nb), and in that case it is the opposite, the deletion of the invoice is very fast and the deletion of the visit 
is\r\n really slower, unless I create a specific index as above.\n\n\nSo my question is: why is my index on the primary key not used by both triggers and why should I always create an explicit index on the second column ?\n\n\nThanks.\n\n\n\nFlorian\n\n \nFirst, It’s a god (for performance) practice to create indexes on FK columns in “child” table.\nSecond, PG is using index only if the first column in concatenated index is used in WHERE clause.  That is exactly what you observe.\n \nRegards,\nIgor Neyman", "msg_date": "Tue, 26 Jan 2016 16:01:26 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Primary key index partially used" }, { "msg_contents": "From: Igor Neyman\r\nSent: Tuesday, January 26, 2016 11:01 AM\r\nTo: 'Florian Gossin' <[email protected]>; [email protected]\r\nSubject: RE: [PERFORM] Primary key index partially used\r\n\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]] On Behalf Of Florian Gossin\r\nSent: Tuesday, January 26, 2016 10:52 AM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: [PERFORM] Primary key index partially used\r\n\r\nHi all,\r\nI'm using PostgreSQL 9.4.5 and I have a weird issue.\r\nI have the following three tables:\r\n\r\nvisit\r\n( nb bigint NOT NULL,\r\n CONSTRAINT visit_pkey PRIMARY KEY (nb)\r\n)\r\nwith ~ 750'000 rows\r\n\r\ninvoice\r\n( id bigint NOT NULL,\r\n CONSTRAINT invoice_pkey PRIMARY KEY (id)\r\n)\r\nwith ~ 3'000'000 rows\r\n\r\nvisit_invoice\r\n( invoice_id bigint NOT NULL,\r\n visit_nb bigint NOT NULL,\r\n CONSTRAINT visit_invoice_pkey PRIMARY KEY (visit_nb, invoice_id),\r\n CONSTRAINT fk_vis_inv_inv FOREIGN KEY (invoice_id)\r\n REFERENCES invoice (id) MATCH SIMPLE\r\n ON UPDATE NO ACTION ON DELETE CASCADE,\r\n CONSTRAINT fk_vis_inv_vis FOREIGN KEY (visit_nb)\r\n REFERENCES visit (nb) MATCH SIMPLE\r\n ON UPDATE NO ACTION ON DELETE CASCADE\r\n)\r\nwith ~ 3'000'000 rows\r\n\r\nWhen I delete a row in visit table, it runs the trigger for constraint fk_vis_inv_vis and it seems to use the primary key index on visit_invoice:\r\nexplain analyze DELETE FROM visit WHERE nb = 2000013;\r\n------------------------------------------------------------------------------------------------------------------------\r\n Delete on visit (cost=0.42..8.44 rows=1 width=6) (actual time=2.225..2.225 rows=0 loops=1)\r\n -> Index Scan using visit_pkey on visit (cost=0.42..8.44 rows=1 width=6) (actual time=2.084..2.088 rows=1 loops=1)\r\n Index Cond: (nb = 2000013)\r\n Planning time: 0.201 ms\r\n Trigger for constraint fk_vis_inv_vis: time=0.673 calls=1\r\n\r\nBut when I delete a record in the table invoice, it runs the trigger for constraint fk_vis_inv_vis and it doesn't seem to use the primary key index on visit_invoice:\r\n\r\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.109..0.109 rows=0 loops=1)\r\n -> Index Scan using invoice_pkey on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.060..0.060 rows=1 loops=1)\r\n Index Cond: (id = 30140470)\r\n Planning time: 0.156 ms\r\n Trigger for constraint fk_vis_inv_inv: time=219.122 calls=1\r\nSo, if I create explicitly an index for the second column (which is already part of the primary key), it seems to use it because the trigger execution is really faster:\r\n\r\nCREATE INDEX fki_vis_inv_inv\r\n ON 
visit_invoice\r\n USING btree\r\n (invoice_id);\r\n\r\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.057..0.057 rows=0 loops=1)\r\n -> Index Scan using invoice_pkey on invoice (cost=0.43..8.45 rows=1 width=6) (actual time=0.039..0.040 rows=1 loops=1)\r\n Index Cond: (id = 120043571)\r\n Planning time: 0.074 ms\r\n Trigger for constraint fk_vis_inv_inv: time=0.349 calls=1\r\nSo I have tried to create the primary key differently, like PRIMARY KEY (invoice_id, visit_nb), and in that case it is the opposite, the deletion of the invoice is very fast and the deletion of the visit is really slower, unless I create a specific index as above.\r\nSo my question is: why is my index on the primary key not used by both triggers and why should I always create an explicit index on the second column ?\r\nThanks.\r\nFlorian\r\n\r\nFirst, It’s a god (for performance) practice to create indexes on FK columns in “child” table.\r\nSecond, PG is using index only if the first column in concatenated index is used in WHERE clause. That is exactly what you observe.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n“god” -> good ☺\r\n\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: Igor Neyman\r\n\nSent: Tuesday, January 26, 2016 11:01 AM\nTo: 'Florian Gossin' <[email protected]>; [email protected]\nSubject: RE: [PERFORM] Primary key index partially used\n\n\n \n \nFrom:\[email protected] [mailto:[email protected]]\r\nOn Behalf Of Florian Gossin\nSent: Tuesday, January 26, 2016 10:52 AM\nTo: [email protected]\nSubject: [PERFORM] Primary key index partially used\n \n\n\n\n\n\n\nHi all,\n\nI'm using PostgreSQL 9.4.5 and I have a weird issue.\n\nI have the following three tables:\n\r\nvisit\r\n( nb bigint NOT NULL,\r\n  CONSTRAINT visit_pkey PRIMARY KEY (nb)\r\n)\n\nwith ~ 750'000 rows\n\r\ninvoice\r\n( id bigint NOT NULL,\r\n  CONSTRAINT invoice_pkey PRIMARY KEY (id)\r\n)\n\nwith ~ 3'000'000 rows\n\r\nvisit_invoice\r\n( invoice_id bigint NOT NULL,\r\n  visit_nb bigint NOT NULL,\r\n  CONSTRAINT visit_invoice_pkey PRIMARY KEY (visit_nb, invoice_id),\r\n  CONSTRAINT fk_vis_inv_inv FOREIGN KEY (invoice_id)\r\n      REFERENCES invoice (id) MATCH SIMPLE\r\n      ON UPDATE NO ACTION ON DELETE CASCADE,\r\n  CONSTRAINT fk_vis_inv_vis FOREIGN KEY (visit_nb)\r\n      REFERENCES visit (nb) MATCH SIMPLE\r\n      ON UPDATE NO ACTION ON DELETE CASCADE\r\n)\n\n\nwith ~ 3'000'000 rows\n\n\n \n\nWhen I delete a row in visit table, it runs the trigger for constraint fk_vis_inv_vis and it seems to use the primary key index on visit_invoice:\n\n\n\nexplain analyze DELETE FROM visit WHERE nb = 2000013;\r\n------------------------------------------------------------------------------------------------------------------------\r\n Delete on visit  (cost=0.42..8.44 rows=1 width=6) (actual time=2.225..2.225 rows=0 loops=1)\r\n   ->  Index Scan using visit_pkey on visit  (cost=0.42..8.44 rows=1 width=6) (actual time=2.084..2.088 rows=1 loops=1)\r\n         Index Cond: (nb = 2000013)\r\n Planning time: 0.201 ms\r\n Trigger for constraint fk_vis_inv_vis: time=0.673 calls=1\r\n \n\n\nBut when I delete a record in the table invoice, it runs the trigger for constraint fk_vis_inv_vis and it doesn't seem to use the primary key index on visit_invoice:\n\r\nexplain analyze DELETE FROM invoice WHERE id = 
30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.109..0.109 rows=0 loops=1)\r\n   ->  Index Scan using invoice_pkey on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.060..0.060 rows=1 loops=1)\r\n         Index Cond: (id = 30140470)\r\n Planning time: 0.156 ms\r\n Trigger for constraint fk_vis_inv_inv: time=219.122 calls=1\n\n\nSo, if I create explicitly an index for the second column (which is already part of the primary key), it seems to use it because the trigger execution is really faster:\n\r\nCREATE INDEX fki_vis_inv_inv\r\n  ON visit_invoice\r\n  USING btree\r\n  (invoice_id);\n\r\nexplain analyze DELETE FROM invoice WHERE id = 30140470;\r\n----------------------------------------------------------------------------------------------------------------------------\r\n Delete on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.057..0.057 rows=0 loops=1)\r\n   ->  Index Scan using invoice_pkey on invoice  (cost=0.43..8.45 rows=1 width=6) (actual time=0.039..0.040 rows=1 loops=1)\r\n         Index Cond: (id = 120043571)\r\n Planning time: 0.074 ms\r\n Trigger for constraint fk_vis_inv_inv: time=0.349 calls=1\n\n\nSo I have tried to create the primary key differently, like PRIMARY KEY (invoice_id, visit_nb), and in that case it is the opposite, the deletion of the invoice is very fast and the deletion of the visit is\r\n really slower, unless I create a specific index as above.\n\n\nSo my question is: why is my index on the primary key not used by both triggers and why should I always create an explicit index on the second column ?\n\n\nThanks.\n\n\n\nFlorian\n\n \nFirst, It’s a god (for performance) practice to create indexes on FK columns in “child” table.\nSecond, PG is using index only if the first column in concatenated index is used in WHERE clause.  That is exactly what you observe.\n \nRegards,\n\nIgor Neyman\n\n \n“god” -> good\r\nJ", "msg_date": "Tue, 26 Jan 2016 16:02:25 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Primary key index partially used" } ]
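The pattern discussed in this thread, spelled out for a link table with two foreign keys: as noted above, the composite primary key index only helps lookups that constrain its leading column, so the other referencing column needs its own index if deletes on the referenced table are to stay fast. A sketch using the thread's table names (the extra index name follows the one created in the thread):

CREATE TABLE visit_invoice (
    visit_nb   bigint NOT NULL REFERENCES visit (nb) ON DELETE CASCADE,
    invoice_id bigint NOT NULL REFERENCES invoice (id) ON DELETE CASCADE,
    PRIMARY KEY (visit_nb, invoice_id)   -- backs the cascade from visit
);

CREATE INDEX fki_vis_inv_inv ON visit_invoice (invoice_id);   -- backs the cascade from invoice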
[ { "msg_contents": "Hi I have a master table and the inherited tables broken up by month.\n\n/e.g. CONSTRAINT transactions_january_log_date_check CHECK\n(date_part('month'::text, log_date) = 1::double precision);/\n\n So transactions_master is the master table, and then transactions_january,\ntransactions_february, etc. I have the rules in place and an index on the\ndate field in each child table. Currently i only have data in the january\ntable. But when I query the master table. \n\n/explain select * from transactions_master where log_tstamp='1/23/2016' \n/\n\nI see that it goes through all the tables. Should it be querying the january\ntable first? And not do the others once its comes across the data in\njanuary?\n\n'Append (cost=0.00..82.88 rows=37 width=165)'\n' -> Seq Scan on transactions_master (cost=0.00..0.00 rows=1 width=176)'\n' Filter: (log_logdate = '2016-01-23 00:00:00'::timestamp without\ntime zone)'\n' -> Bitmap Heap Scan on transactions_february (cost=2.16..5.29 rows=2\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_feb_logdate (cost=0.00..2.16\nrows=2 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_january (cost=2.16..5.29 rows=2\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_jan_logdate (cost=0.00..2.16\nrows=2 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_march (cost=2.16..5.29 rows=2\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_mar_system (cost=0.00..2.16\nrows=2 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_april (cost=2.16..5.29 rows=2\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_apr_logdate (cost=0.00..2.16\nrows=2 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_may (cost=2.16..5.29 rows=2\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_may_logdate (cost=0.00..2.16\nrows=2 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_june (cost=2.16..5.34 rows=2\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_jun_logdate (cost=0.00..2.16\nrows=2 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_july (cost=2.31..8.82 rows=4\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_jul_logdate (cost=0.00..2.30\nrows=4 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Index Scan using idx_trans_aug_logdate on transactions_august \n(cost=0.29..9.97 rows=5 width=96)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp without\ntime zone)'\n' -> Bitmap Heap Scan on transactions_september (cost=2.31..8.79 
rows=4\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_sep_logdate (cost=0.00..2.30\nrows=4 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_november (cost=2.31..8.14 rows=4\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_nov_logdate (cost=0.00..2.30\nrows=4 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_december (cost=2.30..7.14 rows=3\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_dec_logdate (cost=0.00..2.30\nrows=3 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Heap Scan on transactions_october (cost=2.31..8.22 rows=4\nwidth=176)'\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n' -> Bitmap Index Scan on idx_trans_oct_logdate (cost=0.00..2.30\nrows=4 width=0)'\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Postgres-partitions-query-scanning-all-child-tables-tp5884497.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 27 Jan 2016 15:09:50 -0700 (MST)", "msg_from": "rverghese <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres partitions-query scanning all child tables" }, { "msg_contents": "On Wed, Jan 27, 2016 at 5:09 PM, rverghese <[email protected]> wrote:\n\n> Hi I have a master table and the inherited tables broken up by month.\n>\n> /e.g. CONSTRAINT transactions_january_log_date_check CHECK\n> (date_part('month'::text, log_date) = 1::double precision);/\n>\n> So transactions_master is the master table, and then transactions_january,\n> transactions_february, etc. I have the rules in place and an index on the\n> date field in each child table. Currently i only have data in the january\n> table. But when I query the master table.\n>\n> /explain select * from transactions_master where log_tstamp='1/23/2016'\n> /\n>\n> I see that it goes through all the tables. Should it be querying the\n> january\n> table first? 
And not do the others once its comes across the data in\n> january?\n>\n> 'Append (cost=0.00..82.88 rows=37 width=165)'\n> ' -> Seq Scan on transactions_master (cost=0.00..0.00 rows=1 width=176)'\n> ' Filter: (log_logdate = '2016-01-23 00:00:00'::timestamp without\n> time zone)'\n> ' -> Bitmap Heap Scan on transactions_february (cost=2.16..5.29 rows=2\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_feb_logdate (cost=0.00..2.16\n> rows=2 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_january (cost=2.16..5.29 rows=2\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_jan_logdate (cost=0.00..2.16\n> rows=2 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_march (cost=2.16..5.29 rows=2\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_mar_system (cost=0.00..2.16\n> rows=2 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_april (cost=2.16..5.29 rows=2\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_apr_logdate (cost=0.00..2.16\n> rows=2 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_may (cost=2.16..5.29 rows=2\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_may_logdate (cost=0.00..2.16\n> rows=2 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_june (cost=2.16..5.34 rows=2\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_jun_logdate (cost=0.00..2.16\n> rows=2 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_july (cost=2.31..8.82 rows=4\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_jul_logdate (cost=0.00..2.30\n> rows=4 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Index Scan using idx_trans_aug_logdate on transactions_august\n> (cost=0.29..9.97 rows=5 width=96)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without\n> time zone)'\n> ' -> Bitmap Heap Scan on transactions_september (cost=2.31..8.79 rows=4\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_sep_logdate (cost=0.00..2.30\n> rows=4 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_november (cost=2.31..8.14 rows=4\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_nov_logdate (cost=0.00..2.30\n> 
rows=4 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_december (cost=2.30..7.14 rows=3\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_dec_logdate (cost=0.00..2.30\n> rows=3 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Heap Scan on transactions_october (cost=2.31..8.22 rows=4\n> width=176)'\n> ' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n> ' -> Bitmap Index Scan on idx_trans_oct_logdate (cost=0.00..2.30\n> rows=4 width=0)'\n> ' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n> without time zone)'\n>\n\n\nhttp://www.postgresql.org/message-id/[email protected]\n\ntl;dr - constraint exclusion only works with IN, BETWEEN, =, <, <=, >, >=,\n<> and only where values are immutable.\n\nI ran into this when attempting to use <@ operators for my range\npartitioning extension.\n\nSo date_part() won't work because constraint exclusion can't see into it.\n\nYou'll have better luck with something like\n CHECK(log_date >= '2016-01-01'::timestamp and log_date <\n'2016-02-01'::timestamp)\n\nOn Wed, Jan 27, 2016 at 5:09 PM, rverghese <[email protected]> wrote:Hi I have a master table and the inherited tables broken up by month.\n\n/e.g. CONSTRAINT transactions_january_log_date_check CHECK\n(date_part('month'::text, log_date) = 1::double precision);/\n\n So transactions_master is the master table, and then transactions_january,\ntransactions_february, etc. I have the rules in place and an index on the\ndate field in each child table. Currently i only have data in the january\ntable. But when I query the master table.\n\n/explain select * from transactions_master  where log_tstamp='1/23/2016'\n/\n\nI see that it goes through all the tables. Should it be querying the january\ntable first? 
And not do the others once its comes across the data in\njanuary?\n\n'Append  (cost=0.00..82.88 rows=37 width=165)'\n'  ->  Seq Scan on transactions_master  (cost=0.00..0.00 rows=1 width=176)'\n'        Filter: (log_logdate = '2016-01-23 00:00:00'::timestamp without\ntime zone)'\n'  ->  Bitmap Heap Scan on transactions_february  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_feb_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_january  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_jan_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_march  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_mar_system  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_april  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_apr_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_may  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_may_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_june  (cost=2.16..5.34 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_jun_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_july  (cost=2.31..8.82 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_jul_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Index Scan using idx_trans_aug_logdate on transactions_august\n(cost=0.29..9.97 rows=5 width=96)'\n'        Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp without\ntime zone)'\n'  ->  Bitmap Heap Scan on transactions_september  (cost=2.31..8.79 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_sep_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_november  (cost=2.31..8.14 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = 
'2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_nov_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_december  (cost=2.30..7.14 rows=3\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_dec_logdate  (cost=0.00..2.30\nrows=3 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_october  (cost=2.31..8.22 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_oct_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'http://www.postgresql.org/message-id/[email protected];dr - constraint exclusion only works with IN, BETWEEN, =, <, <=, >, >=, <> and only where values are immutable.I ran into this when attempting to use <@ operators for my range partitioning extension.So date_part() won't work because constraint exclusion can't see into it.You'll have better luck with something like      CHECK(log_date >= '2016-01-01'::timestamp and log_date < '2016-02-01'::timestamp)", "msg_date": "Wed, 27 Jan 2016 23:30:46 -0500", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres partitions-query scanning all child tables" }, { "msg_contents": "Ok, thanks. Thats a bummer though. That means I need a table for every month/year combination. I was hoping to limit it to 12 tables.\nRiya\n\nDate: Wed, 27 Jan 2016 21:31:35 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Postgres partitions-query scanning all child tables\n\n\n\n\tOn Wed, Jan 27, 2016 at 5:09 PM, rverghese <[hidden email]> wrote:\nHi I have a master table and the inherited tables broken up by month.\n\n\n\n/e.g. CONSTRAINT transactions_january_log_date_check CHECK\n\n(date_part('month'::text, log_date) = 1::double precision);/\n\n\n\n So transactions_master is the master table, and then transactions_january,\n\ntransactions_february, etc. I have the rules in place and an index on the\n\ndate field in each child table. Currently i only have data in the january\n\ntable. But when I query the master table.\n\n\n\n/explain select * from transactions_master where log_tstamp='1/23/2016'\n\n/\n\n\n\nI see that it goes through all the tables. Should it be querying the january\n\ntable first? 
And not do the others once its comes across the data in\n\njanuary?\n\n\n\n'Append (cost=0.00..82.88 rows=37 width=165)'\n\n' -> Seq Scan on transactions_master (cost=0.00..0.00 rows=1 width=176)'\n\n' Filter: (log_logdate = '2016-01-23 00:00:00'::timestamp without\n\ntime zone)'\n\n' -> Bitmap Heap Scan on transactions_february (cost=2.16..5.29 rows=2\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_feb_logdate (cost=0.00..2.16\n\nrows=2 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_january (cost=2.16..5.29 rows=2\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_jan_logdate (cost=0.00..2.16\n\nrows=2 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_march (cost=2.16..5.29 rows=2\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_mar_system (cost=0.00..2.16\n\nrows=2 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_april (cost=2.16..5.29 rows=2\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_apr_logdate (cost=0.00..2.16\n\nrows=2 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_may (cost=2.16..5.29 rows=2\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_may_logdate (cost=0.00..2.16\n\nrows=2 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_june (cost=2.16..5.34 rows=2\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_jun_logdate (cost=0.00..2.16\n\nrows=2 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_july (cost=2.31..8.82 rows=4\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_jul_logdate (cost=0.00..2.30\n\nrows=4 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Index Scan using idx_trans_aug_logdate on transactions_august\n\n(cost=0.29..9.97 rows=5 width=96)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp without\n\ntime zone)'\n\n' -> Bitmap Heap Scan on transactions_september (cost=2.31..8.79 rows=4\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_sep_logdate (cost=0.00..2.30\n\nrows=4 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_november (cost=2.31..8.14 rows=4\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_nov_logdate (cost=0.00..2.30\n\nrows=4 
width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_december (cost=2.30..7.14 rows=3\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_dec_logdate (cost=0.00..2.30\n\nrows=3 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Heap Scan on transactions_october (cost=2.31..8.22 rows=4\n\nwidth=176)'\n\n' Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n' -> Bitmap Index Scan on idx_trans_oct_logdate (cost=0.00..2.30\n\nrows=4 width=0)'\n\n' Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\n\nwithout time zone)'\n\n\nhttp://www.postgresql.org/message-id/1121251997.3970.237.camel@...\n\ntl;dr - constraint exclusion only works with IN, BETWEEN, =, <, <=, >, >=, <> and only where values are immutable.\nI ran into this when attempting to use <@ operators for my range partitioning extension.\n\nSo date_part() won't work because constraint exclusion can't see into it.\nYou'll have better luck with something like CHECK(log_date >= '2016-01-01'::timestamp and log_date < '2016-02-01'::timestamp)\n\n\n\n\t\n\t\n\t\n\t\n\n\t\n\n\t\n\t\n\t\tIf you reply to this email, your message will be added to the discussion below:\n\t\thttp://postgresql.nabble.com/Postgres-partitions-query-scanning-all-child-tables-tp5884497p5884560.html\n\t\n\t\n\t\t\n\t\tTo unsubscribe from Postgres partitions-query scanning all child tables, click here.\n\n\t\tNAML\n\t \t\t \t \t\t \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Postgres-partitions-query-scanning-all-child-tables-tp5884497p5884570.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nOk, thanks. Thats a bummer though. That means I need a table for every month/year combination. I was hoping to limit it to 12 tables.RiyaDate: Wed, 27 Jan 2016 21:31:35 -0700From: [hidden email]To: [hidden email]Subject: Re: Postgres partitions-query scanning all child tables\nOn Wed, Jan 27, 2016 at 5:09 PM, rverghese <[hidden email]> wrote:Hi I have a master table and the inherited tables broken up by month.\n\n/e.g. CONSTRAINT transactions_january_log_date_check CHECK\n(date_part('month'::text, log_date) = 1::double precision);/\n\n So transactions_master is the master table, and then transactions_january,\ntransactions_february, etc. I have the rules in place and an index on the\ndate field in each child table. Currently i only have data in the january\ntable. But when I query the master table.\n\n/explain select * from transactions_master  where log_tstamp='1/23/2016'\n/\n\nI see that it goes through all the tables. Should it be querying the january\ntable first? 
And not do the others once its comes across the data in\njanuary?\n\n'Append  (cost=0.00..82.88 rows=37 width=165)'\n'  ->  Seq Scan on transactions_master  (cost=0.00..0.00 rows=1 width=176)'\n'        Filter: (log_logdate = '2016-01-23 00:00:00'::timestamp without\ntime zone)'\n'  ->  Bitmap Heap Scan on transactions_february  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_feb_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_january  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_jan_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_march  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_mar_system  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_april  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_apr_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_may  (cost=2.16..5.29 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_may_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_june  (cost=2.16..5.34 rows=2\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_jun_logdate  (cost=0.00..2.16\nrows=2 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_july  (cost=2.31..8.82 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_jul_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Index Scan using idx_trans_aug_logdate on transactions_august\n(cost=0.29..9.97 rows=5 width=96)'\n'        Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp without\ntime zone)'\n'  ->  Bitmap Heap Scan on transactions_september  (cost=2.31..8.79 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_sep_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_november  (cost=2.31..8.14 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = 
'2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_nov_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_december  (cost=2.30..7.14 rows=3\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_dec_logdate  (cost=0.00..2.30\nrows=3 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'  ->  Bitmap Heap Scan on transactions_october  (cost=2.31..8.22 rows=4\nwidth=176)'\n'        Recheck Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'\n'        ->  Bitmap Index Scan on idx_trans_oct_logdate  (cost=0.00..2.30\nrows=4 width=0)'\n'              Index Cond: (log_logdate = '2016-01-23 00:00:00'::timestamp\nwithout time zone)'http://www.postgresql.org/message-id/[email protected];dr - constraint exclusion only works with IN, BETWEEN, =, <, <=, >, >=, <> and only where values are immutable.I ran into this when attempting to use <@ operators for my range partitioning extension.So date_part() won't work because constraint exclusion can't see into it.You'll have better luck with something like      CHECK(log_date >= '2016-01-01'::timestamp and log_date < '2016-02-01'::timestamp)\n\n\n\n\nIf you reply to this email, your message will be added to the discussion below:\nhttp://postgresql.nabble.com/Postgres-partitions-query-scanning-all-child-tables-tp5884497p5884560.html\n\n\n\t\t\n\t\tTo unsubscribe from Postgres partitions-query scanning all child tables, click here.\nNAML\n \n\nView this message in context: RE: Postgres partitions-query scanning all child tables\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Wed, 27 Jan 2016 23:10:35 -0700 (MST)", "msg_from": "rverghese <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres partitions-query scanning all child tables" }, { "msg_contents": "On Thu, Jan 28, 2016 at 1:10 AM, rverghese <[email protected]> wrote:\n\n> Ok, thanks. Thats a bummer though. That means I need a table for every\n> month/year combination. I was hoping to limit it to 12 tables.\n>\n> Riya\n>\n>\nIf you wanted to have a column called month_num or something like that, and\nif *all* of your queries extract the month date_part() in every where\nclause, then yes, you could have just 12 tables.\n\nBut you won't like that partitioning scheme for other reasons:\n- queries that don't \"play by the rules\" will be slow\n- very old data will slow down recent-day queries\n- no ability to quickly remove obsolete data by dropping partitions that\nare no longer useful\n\nOn Thu, Jan 28, 2016 at 1:10 AM, rverghese <[email protected]> wrote:\nOk, thanks. Thats a bummer though. That means I need a table for every month/year combination. 
I was hoping to limit it to 12 tables.RiyaIf you wanted to have a column called month_num or something like that, and if *all* of your queries extract the month date_part() in every where clause, then yes, you could have just 12 tables.But you won't like that partitioning scheme for other reasons:- queries that don't \"play by the rules\" will be slow- very old data will slow down recent-day queries- no ability to quickly remove obsolete data by dropping partitions that are no longer useful", "msg_date": "Thu, 28 Jan 2016 02:47:20 -0500", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres partitions-query scanning all child tables" }, { "msg_contents": "Yeah that would be a pain to have the date_part in each query. Thanks for the info!\n\nDate: Thu, 28 Jan 2016 00:48:10 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Postgres partitions-query scanning all child tables\n\n\n\n\tOn Thu, Jan 28, 2016 at 1:10 AM, rverghese <[hidden email]> wrote:\n\n\n\nOk, thanks. Thats a bummer though. That means I need a table for every month/year combination. I was hoping to limit it to 12 tables.\nRiya\n\n\nIf you wanted to have a column called month_num or something like that, and if *all* of your queries extract the month date_part() in every where clause, then yes, you could have just 12 tables.\n\nBut you won't like that partitioning scheme for other reasons:- queries that don't \"play by the rules\" will be slow- very old data will slow down recent-day queries- no ability to quickly remove obsolete data by dropping partitions that are no longer useful\n\n\n\n\n\t\n\t\n\t\n\t\n\n\t\n\n\t\n\t\n\t\tIf you reply to this email, your message will be added to the discussion below:\n\t\thttp://postgresql.nabble.com/Postgres-partitions-query-scanning-all-child-tables-tp5884497p5884581.html\n\t\n\t\n\t\t\n\t\tTo unsubscribe from Postgres partitions-query scanning all child tables, click here.\n\n\t\tNAML\n\t \t\t \t \t\t \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Postgres-partitions-query-scanning-all-child-tables-tp5884497p5884729.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nYeah that would be a pain to have the date_part in each query. Thanks for the info!Date: Thu, 28 Jan 2016 00:48:10 -0700From: [hidden email]To: [hidden email]Subject: Re: Postgres partitions-query scanning all child tables\nOn Thu, Jan 28, 2016 at 1:10 AM, rverghese <[hidden email]> wrote:\nOk, thanks. Thats a bummer though. That means I need a table for every month/year combination. 
I was hoping to limit it to 12 tables.RiyaIf you wanted to have a column called month_num or something like that, and if *all* of your queries extract the month date_part() in every where clause, then yes, you could have just 12 tables.But you won't like that partitioning scheme for other reasons:- queries that don't \"play by the rules\" will be slow- very old data will slow down recent-day queries- no ability to quickly remove obsolete data by dropping partitions that are no longer useful\n\n\n\n\nIf you reply to this email, your message will be added to the discussion below:\nhttp://postgresql.nabble.com/Postgres-partitions-query-scanning-all-child-tables-tp5884497p5884581.html\n\n\n\t\t\n\t\tTo unsubscribe from Postgres partitions-query scanning all child tables, click here.\nNAML\n \n\nView this message in context: RE: Postgres partitions-query scanning all child tables\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Thu, 28 Jan 2016 12:51:38 -0700 (MST)", "msg_from": "rverghese <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres partitions-query scanning all child tables" } ]
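A minimal sketch of the CHECK-constraint layout suggested in the thread above, reusing the log_logdate column from the posted plans; the parent table name (transactions) and the specific month are assumptions, since only the child tables appear in the messages:

CREATE TABLE transactions_2016_01 (
    CHECK (log_logdate >= '2016-01-01'::timestamp
       AND log_logdate <  '2016-02-01'::timestamp)
) INHERITS (transactions);

CREATE INDEX idx_trans_2016_01_logdate
    ON transactions_2016_01 (log_logdate);

-- With constraint_exclusion = partition (the default), a query such as
--     SELECT * FROM transactions
--     WHERE log_logdate = '2016-01-23 00:00:00'::timestamp;
-- can skip every child whose immutable CHECK range cannot contain the
-- literal, which is exactly what a date_part()-based CHECK prevents the
-- planner from proving.

The cost, as discussed above, is one child table per concrete month/year rather than twelve rolling month-number tables.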
[ { "msg_contents": "in postgres 9.5.0 i have partitioned table, that collect data by months, i tried to use new postgres feature foreign table inheritance & pushed one month of data to another postgres server, so i got foreign table. when i am running my query from my primary server, query takes 7x more time to execute query, then on another server where i have foreign table. i am not passing a lot of data by network, my query looking like\n\nexplain analyze SELECT source, global_action, paid, organic, device, count(*) as count, sum(price) as sum FROM \"toys\" WHERE \"toys\".\"container_id\" = 857 AND (toys.created_at >= '2015-12-02 05:00:00.000000') AND (toys.created_at <= '2015-12-30 04:59:59.999999') AND (\"toys\".\"source\" IS NOT NULL) GROUP BY \"toys\".\"source\", \"toys\".\"global_action\", \"toys\".\"paid\", \"toys\".\"organic\", \"toys\".\"device\";\n\nHashAggregate (cost=1143634.94..1143649.10 rows=1133 width=15) (actual time=1556.894..1557.017 rows=372 loops=1) Group Key: toys.source, toys.global_action, toys.paid, toys.organic, toys.device -> Append (cost=0.00..1143585.38 rows=2832 width=15) (actual time=113.420..1507.373 rows=76593 loops=1) -> Seq Scan on toys (cost=0.00..0.00 rows=1 width=242) (actual time=0.001..0.001 rows=0 loops=1) Filter: ((source IS NOT NULL) AND (created_at >= '2015-12-02 05:00:00'::timestamp without time zone) AND (created_at <= '2015-12-30 04:59:59.999999'::timestamp without time zone) AND (container_id = 857)) -> Foreign Scan on job_stats_201512_new (cost=100.00..1143585.38 rows=2831 width=15) (actual time=113.419..1488.445 rows=76593 loops=1) Planning time: 2.990 ms Execution time: 1560.131 ms\n\ndoes postgres use indexes on foreign tables ?(i have indexes defined in foreign table), if i run query directly on that server it takes - 200ms.\n\n\n\n\nin postgres 9.5.0 i have partitioned table, that collect data by months, i tried to use new postgres feature foreign table inheritance & pushed one month of data to another postgres server, so i got foreign table. when i am running my query from my primary server, query takes 7x more time to execute query, then on another server where i have foreign table. 
i am not passing a lot of data by network, my query looking likeexplain analyze SELECT source, global_action, paid, organic, device, count(*) as count, sum(price) as sum FROM \"toys\" WHERE \"toys\".\"container_id\" = 857 AND (toys.created_at >= '2015-12-02 05:00:00.000000') AND (toys.created_at <= '2015-12-30 04:59:59.999999') AND (\"toys\".\"source\" IS NOT NULL) GROUP BY \"toys\".\"source\", \"toys\".\"global_action\", \"toys\".\"paid\", \"toys\".\"organic\", \"toys\".\"device\";HashAggregate (cost=1143634.94..1143649.10 rows=1133 width=15) (actual time=1556.894..1557.017 rows=372 loops=1) Group Key: toys.source, toys.global_action, toys.paid, toys.organic, toys.device -> Append (cost=0.00..1143585.38 rows=2832 width=15) (actual time=113.420..1507.373 rows=76593 loops=1) -> Seq Scan on toys (cost=0.00..0.00 rows=1 width=242) (actual time=0.001..0.001 rows=0 loops=1) Filter: ((source IS NOT NULL) AND (created_at >= '2015-12-02 05:00:00'::timestamp without time zone) AND (created_at <= '2015-12-30 04:59:59.999999'::timestamp without time zone) AND (container_id = 857)) -> Foreign Scan on job_stats_201512_new (cost=100.00..1143585.38 rows=2831 width=15) (actual time=113.419..1488.445 rows=76593 loops=1) Planning time: 2.990 ms Execution time: 1560.131 msdoes postgres use indexes on foreign tables ?(i have indexes defined in foreign table), if i run query directly on that server it takes - 200ms.", "msg_date": "Wed, 27 Jan 2016 18:46:00 -0500", "msg_from": "Dzmitry Nikitsin <[email protected]>", "msg_from_op": true, "msg_subject": "performance issue with inherited foreign table " } ]
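One way to see where the time goes when querying through the parent, sketched under the assumption that the foreign table is served by postgres_fdw (the thread only says it lives on another PostgreSQL server): EXPLAIN VERBOSE prints the "Remote SQL:" actually shipped to the other side, and any index defined on the remote table can only be used by the remote planner for that shipped query, never by the local one.

EXPLAIN (VERBOSE, ANALYZE)
SELECT source, global_action, paid, organic, device,
       count(*) AS count, sum(price) AS sum
FROM   toys
WHERE  toys.container_id = 857
  AND  toys.created_at >= '2015-12-02 05:00:00'
  AND  toys.created_at <= '2015-12-30 04:59:59.999999'
  AND  toys.source IS NOT NULL
GROUP  BY source, global_action, paid, organic, device;

-- If the filters show up in the "Remote SQL:" line, the remote server can use
-- its indexes; the ~76k rows that survive the filter still have to travel back,
-- because on 9.5 the GROUP BY is evaluated locally (visible as the local
-- HashAggregate above the Foreign Scan in the posted plan). Better remote row
-- estimates can also be requested per table, a standard postgres_fdw option:

ALTER FOREIGN TABLE job_stats_201512_new
    OPTIONS (ADD use_remote_estimate 'true');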
[ { "msg_contents": "I have a query that runs *slower* if I increase work_mem.\r\n\r\nThe execution plans are identical in both cases, except that a temp file\r\nis used when work_mem is smaller.\r\n\r\nThe relevant lines of EXPLAIN ANALYZE output are:\r\n\r\nWith work_mem='100MB':\r\n-> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=4296.986..106087.683 rows=187222 loops=1)\r\n Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\r\n Buffers: shared hit=1181177 dirtied=1, temp read=7232 written=7230\r\n\r\nWith work_mem='500MB':\r\n-> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=3802.849..245970.049 rows=187222 loops=1)\r\n Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\r\n Buffers: shared hit=1181175 dirtied=111\r\n\r\nI ran operf on both backends, and they look quite similar, except that the\r\nnumber of samples is different (this is \"opreport -c\" output):\r\n\r\nCPU: Intel Sandy Bridge microarchitecture, speed 2899.8 MHz (estimated)\r\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 90000\r\nsamples % image name symbol name\r\n-------------------------------------------------------------------------------\r\n 112 0.0019 postgres ExecProcNode\r\n 3020116 49.9904 postgres ExecScanHashBucket\r\n 3021162 50.0077 postgres ExecHashJoin\r\n3020116 92.8440 postgres ExecScanHashBucket\r\n 3020116 49.9207 postgres ExecScanHashBucket [self]\r\n 3020116 49.9207 postgres ExecScanHashBucket\r\n 8190 0.1354 vmlinux apic_timer_interrupt\r\n\r\nWhat could be an explanation for this?\r\nIs this known behaviour?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 29 Jan 2016 15:17:49 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": true, "msg_subject": "Hash join gets slower as work_mem increases?" 
}, { "msg_contents": "Hi\n\n\n\n> I ran operf on both backends, and they look quite similar, except that the\n> number of samples is different (this is \"opreport -c\" output):\n>\n> CPU: Intel Sandy Bridge microarchitecture, speed 2899.8 MHz (estimated)\n> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit\n> mask of 0x00 (No unit mask) count 90000\n> samples % image name symbol name\n>\n> -------------------------------------------------------------------------------\n> 112 0.0019 postgres ExecProcNode\n> 3020116 49.9904 postgres ExecScanHashBucket\n> 3021162 50.0077 postgres ExecHashJoin\n> 3020116 92.8440 postgres ExecScanHashBucket\n> 3020116 49.9207 postgres ExecScanHashBucket [self]\n> 3020116 49.9207 postgres ExecScanHashBucket\n> 8190 0.1354 vmlinux apic_timer_interrupt\n>\n> What could be an explanation for this?\n> Is this known behaviour?\n>\n\none issue was fixed in 9.5\n\nlarge hash table can introduce a lot of outs from L1, L2 caches.\n\nPavel\n\n\n>\n> Yours,\n> Laurenz Albe\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi\n\nI ran operf on both backends, and they look quite similar, except that the\nnumber of samples is different (this is \"opreport -c\" output):\n\nCPU: Intel Sandy Bridge microarchitecture, speed 2899.8 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 90000\nsamples  %        image name               symbol name\n-------------------------------------------------------------------------------\n  112       0.0019  postgres                 ExecProcNode\n  3020116  49.9904  postgres                 ExecScanHashBucket\n  3021162  50.0077  postgres                 ExecHashJoin\n3020116  92.8440  postgres                 ExecScanHashBucket\n  3020116  49.9207  postgres                 ExecScanHashBucket [self]\n  3020116  49.9207  postgres                 ExecScanHashBucket\n  8190      0.1354  vmlinux                  apic_timer_interrupt\n\nWhat could be an explanation for this?\nIs this known behaviour?one issue was fixed in 9.5large hash table can introduce a lot of outs from L1, L2 caches.Pavel \n\nYours,\nLaurenz Albe\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 29 Jan 2016 16:21:41 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join gets slower as work_mem increases?" 
}, { "msg_contents": "Hi,\n\nOn 01/29/2016 04:17 PM, Albe Laurenz wrote:\n> I have a query that runs *slower* if I increase work_mem.\n>\n> The execution plans are identical in both cases, except that a temp file\n> is used when work_mem is smaller.\n>\n> The relevant lines of EXPLAIN ANALYZE output are:\n>\n> With work_mem='100MB':\n> -> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=4296.986..106087.683 rows=187222 loops=1)\n> Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\n> Buffers: shared hit=1181177 dirtied=1, temp read=7232 written=7230\n>\n> With work_mem='500MB':\n> -> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=3802.849..245970.049 rows=187222 loops=1)\n> Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\n> Buffers: shared hit=1181175 dirtied=111\n>\n> I ran operf on both backends, and they look quite similar, except that the\n> number of samples is different (this is \"opreport -c\" output):\n>\n> CPU: Intel Sandy Bridge microarchitecture, speed 2899.8 MHz (estimated)\n> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 90000\n> samples % image name symbol name\n> -------------------------------------------------------------------------------\n> 112 0.0019 postgres ExecProcNode\n> 3020116 49.9904 postgres ExecScanHashBucket\n> 3021162 50.0077 postgres ExecHashJoin\n> 3020116 92.8440 postgres ExecScanHashBucket\n> 3020116 49.9207 postgres ExecScanHashBucket [self]\n> 3020116 49.9207 postgres ExecScanHashBucket\n> 8190 0.1354 vmlinux apic_timer_interrupt\n>\n> What could be an explanation for this?\n> Is this known behaviour?\n\nThere is a bunch of possible causes for such behavior, but it's quite \nimpossible to say if this is an example of one of them as you have not \nposted the interesting parts of the explain plan. Also, knowing \nPostgreSQL version would be useful.\n\nI don't think the example you posted is due to exceeding on-CPU cache as \nthat's just a few MBs per socket, so the smaller work_mem is \nsignificantly larger.\n\nWhat I'd expect to be the issue here is under-estimate of the hash table \nsize, resulting in too few buckets and thus long chains of tuples that \nneed to be searched sequentially. Smaller work_mem values usually limit \nthe length of those chains in favor of batching.\n\nPlease, post the whole explain plan - especially the info about number \nof buckets/batches and the Hash node details.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 30 Jan 2016 15:13:31 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join gets slower as work_mem increases?" }, { "msg_contents": "Tomas Vondra wrote:\r\n> On 01/29/2016 04:17 PM, Albe Laurenz wrote:\r\n>> I have a query that runs *slower* if I increase work_mem.\r\n>>\r\n>> The execution plans are identical in both cases, except that a temp file\r\n>> is used when work_mem is smaller.\r\n\r\n>> What could be an explanation for this?\r\n>> Is this known behaviour?\r\n> \r\n> There is a bunch of possible causes for such behavior, but it's quite\r\n> impossible to say if this is an example of one of them as you have not\r\n> posted the interesting parts of the explain plan. 
Also, knowing\r\n> PostgreSQL version would be useful.\r\n> \r\n> I don't think the example you posted is due to exceeding on-CPU cache as\r\n> that's just a few MBs per socket, so the smaller work_mem is\r\n> significantly larger.\r\n> \r\n> What I'd expect to be the issue here is under-estimate of the hash table\r\n> size, resulting in too few buckets and thus long chains of tuples that\r\n> need to be searched sequentially. Smaller work_mem values usually limit\r\n> the length of those chains in favor of batching.\r\n> \r\n> Please, post the whole explain plan - especially the info about number\r\n> of buckets/batches and the Hash node details.\r\n\r\nThanks for looking at this.\r\nSorry, I forgot to mention that this is PostgreSQL 9.3.10.\r\n\r\nI didn't post the whole plan since it is awfully long, I'll include hyperlinks\r\nfor the whole plan.\r\n\r\nwork_mem = '100MB' (http://explain.depesz.com/s/7b6a):\r\n\r\n-> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=4296.986..106087.683 rows=187222 loops=1)\r\n Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\r\n Buffers: shared hit=1181177 dirtied=1, temp read=7232 written=7230\r\n[...]\r\n -> Hash (cost=18044.92..18044.92 rows=4014 width=8) (actual time=4206.892..4206.892 rows=3096362 loops=1)\r\n Buckets: 1024 Batches: 2 (originally 1) Memory Usage: 102401kB\r\n Buffers: shared hit=1134522 dirtied=1, temp written=5296\r\n\r\nwork_mem = '500MB' (http://explain.depesz.com/s/Cgkl):\r\n\r\n-> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=3802.849..245970.049 rows=187222 loops=1)\r\n Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\r\n Buffers: shared hit=1181175 dirtied=111\r\n[...]\r\n -> Hash (cost=18044.92..18044.92 rows=4014 width=8) (actual time=3709.584..3709.584 rows=3096360 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 120952kB\r\n Buffers: shared hit=1134520 dirtied=111\r\n\r\nDoes that support your theory?\r\n\r\nThere is clearly an underestimate here, caused by correlated attributes, but\r\nis that the cause for the bad performance with increased work_mem?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 1 Feb 2016 09:38:31 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join gets slower as work_mem increases?" 
}, { "msg_contents": "On 02/01/2016 10:38 AM, Albe Laurenz wrote:\n> Tomas Vondra wrote:\n...\n> I didn't post the whole plan since it is awfully long, I'll include hyperlinks\n> for the whole plan.\n>\n> work_mem = '100MB' (http://explain.depesz.com/s/7b6a):\n>\n> -> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=4296.986..106087.683 rows=187222 loops=1)\n> Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\n> Buffers: shared hit=1181177 dirtied=1, temp read=7232 written=7230\n> [...]\n> -> Hash (cost=18044.92..18044.92 rows=4014 width=8) (actual time=4206.892..4206.892 rows=3096362 loops=1)\n> Buckets: 1024 Batches: 2 (originally 1) Memory Usage: 102401kB\n> Buffers: shared hit=1134522 dirtied=1, temp written=5296\n>\n> work_mem = '500MB' (http://explain.depesz.com/s/Cgkl):\n>\n> -> Hash Join (cost=46738.74..285400.61 rows=292 width=8) (actual time=3802.849..245970.049 rows=187222 loops=1)\n> Hash Cond: (\"*SELECT* 1_2\".postadresse_id = p.postadresse_id)\n> Buffers: shared hit=1181175 dirtied=111\n> [...]\n> -> Hash (cost=18044.92..18044.92 rows=4014 width=8) (actual time=3709.584..3709.584 rows=3096360 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 120952kB\n> Buffers: shared hit=1134520 dirtied=111\n>\n> Does that support your theory?\n>\n> There is clearly an underestimate here, caused by correlated attributes, but\n> is that the cause for the bad performance with increased work_mem?\n\nYes, that's clearly the culprit here. In both cases we estimate here are \nonly ~4000 tuples in the hash, and 9.3 sizes the hash table to have at \nmost ~10 tuples per bucket (in a linked list).\n\nHowever we actually get ~3M rows, so there will be ~3000 tuples per \nbucket, and that's extremely expensive to walk. The reason why 100MB is \nfaster is that it's using 2 batches, thus making the lists \"just\" ~1500 \ntuples long.\n\nThis is pretty much exactly the reason why I reworked hash joins in 9.5. \nI'd bet it's going to be ~20x faster on that version.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 1 Feb 2016 10:59:51 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join gets slower as work_mem increases?" }, { "msg_contents": "Tomas Vondra wrote:\r\n> Yes, that's clearly the culprit here. In both cases we estimate here are\r\n> only ~4000 tuples in the hash, and 9.3 sizes the hash table to have at\r\n> most ~10 tuples per bucket (in a linked list).\r\n> \r\n> However we actually get ~3M rows, so there will be ~3000 tuples per\r\n> bucket, and that's extremely expensive to walk. The reason why 100MB is\r\n> faster is that it's using 2 batches, thus making the lists \"just\" ~1500\r\n> tuples long.\r\n> \r\n> This is pretty much exactly the reason why I reworked hash joins in 9.5.\r\n> I'd bet it's going to be ~20x faster on that version.\r\n\r\nThank you for the explanation!\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 1 Feb 2016 10:08:55 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash join gets slower as work_mem increases?" 
} ]
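A back-of-the-envelope recap of why the larger work_mem loses here, using only the numbers from the two plans above (this restates the effect Tomas describes, it is not an additional finding): the Hash node was sized for the ~4000-row estimate, giving 1024 buckets in both runs, while ~3.1 million rows actually arrived.

    3,096,360 rows / 1024 buckets        ~ 3000 tuples per bucket   (work_mem = 500MB, 1 batch)
    3,096,362 rows / (1024 buckets * 2)  ~ 1500 tuples per bucket   (work_mem = 100MB, 2 batches)

Every probe row has to walk one of those chains in ExecScanHashBucket, so halving the chain length roughly halves the join time (106 s vs 246 s), and 9.5's run-time resizing of the bucket count is what removes the problem entirely.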
[ { "msg_contents": "The jsonb_agg function seems to have significantly worse performance\nthan its json_agg counterpart:\n\n=> explain analyze select pa.product_id, jsonb_agg(attributes) from\nproduct_attributes2 pa group by pa.product_id;\n                                                             \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate  (cost=1127.54..1231.62 rows=3046 width=380) (actual\ntime=28.632..241.647 rows=3046 loops=1)\n   Group Key: product_id\n   ->  Sort  (cost=1127.54..1149.54 rows=8800 width=380) (actual\ntime=28.526..32.826 rows=8800 loops=1)\n         Sort Key: product_id\n         Sort Method: external sort  Disk: 3360kB\n         ->  Seq Scan on product_attributes2 pa \n(cost=0.00..551.00 rows=8800 width=380) (actual time=0.010..7.231\nrows=8800 loops=1)\n Planning time: 0.376 ms\n Execution time: 242.963 ms\n(8 rows)\n\n=> explain analyze select pa.product_id, json_agg(attributes) from\nproduct_attributes3 pa group by pa.product_id;\n                                                             \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate  (cost=1136.54..1240.62 rows=3046 width=387) (actual\ntime=17.731..30.126 rows=3046 loops=1)\n   Group Key: product_id\n   ->  Sort  (cost=1136.54..1158.54 rows=8800 width=387) (actual\ntime=17.707..20.705 rows=8800 loops=1)\n         Sort Key: product_id\n         Sort Method: external sort  Disk: 3416kB\n         ->  Seq Scan on product_attributes3 pa \n(cost=0.00..560.00 rows=8800 width=387) (actual time=0.006.5.568\nrows=8800 loops=1)\n Planning time: 0.181 ms\n Execution time: 31.276 ms\n(8 rows)\n\nThe only difference between the two tables is the type of the\nattributes column (jsonb vs json).  Each table contains the same 8800\nrows.  
Even running json_agg on the jsonb column seems to be faster:\n\n=> explain analyze select pa.product_id, json_agg(attributes) from\nproduct_attributes2 pa group by pa.product_id;\n                                                             \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate  (cost=1127.54..1231.62 rows=3046 width=380) (actual\ntime=30.626..62.943 rows=3046 loops=1)\n   Group Key: product_id\n   ->  Sort  (cost=1127.54..1149.54 rows=8800 width=380) (actual\ntime=30.590..34.157 rows=8800 loops=1)\n         Sort Key: product_id\n         Sort Method: external sort  Disk: 3360kB\n         ->  Seq Scan on product_attributes2 pa \n(cost=0.00..551.00 rows=8800 width=380) (actual time=0.014..7.388\nrows=8800 loops=1)\n Planning time: 0.142 ms\n Execution time: 64.504 ms\n(8 rows)\n\nIs it expected that jsonb_agg performance would be that much worse\nthan json_agg?\n\nThe jsonb_agg function seems to have significantly worse performance than its json_agg counterpart:=> explain analyze select pa.product_id, jsonb_agg(attributes) from product_attributes2 pa group by pa.product_id;                                                              QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate  (cost=1127.54..1231.62 rows=3046 width=380) (actual time=28.632..241.647 rows=3046 loops=1)   Group Key: product_id   ->  Sort  (cost=1127.54..1149.54 rows=8800 width=380) (actual time=28.526..32.826 rows=8800 loops=1)         Sort Key: product_id         Sort Method: external sort  Disk: 3360kB        \n ->  Seq Scan on product_attributes2 pa  (cost=0.00..551.00 rows=8800\n width=380) (actual time=0.010..7.231 rows=8800 loops=1) Planning time: 0.376 ms Execution time: 242.963 ms(8 rows)=> explain analyze select pa.product_id, json_agg(attributes) from product_attributes3 pa group by pa.product_id;                                                              QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate  (cost=1136.54..1240.62 rows=3046 width=387) (actual time=17.731..30.126 rows=3046 loops=1)   Group Key: product_id   ->  Sort  (cost=1136.54..1158.54 rows=8800 width=387) (actual time=17.707..20.705 rows=8800 loops=1)         Sort Key: product_id         Sort Method: external sort  Disk: 3416kB        \n ->  Seq Scan on product_attributes3 pa  (cost=0.00..560.00 rows=8800\n width=387) (actual time=0.006..5.568 rows=8800 loops=1) Planning time: 0.181 ms Execution time: 31.276 ms(8 rows)The\n only difference between the two tables is the type of the attributes \ncolumn (jsonb vs json).  Each table contains the same 8800 rows.  
Even \nrunning json_agg on the jsonb column seems to be faster:=> explain analyze select pa.product_id, json_agg(attributes) from product_attributes2 pa group by pa.product_id;                                                              QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate  (cost=1127.54..1231.62 rows=3046 width=380) (actual time=30.626..62.943 rows=3046 loops=1)   Group Key: product_id   ->  Sort  (cost=1127.54..1149.54 rows=8800 width=380) (actual time=30.590..34.157 rows=8800 loops=1)         Sort Key: product_id         Sort Method: external sort  Disk: 3360kB        \n ->  Seq Scan on product_attributes2 pa  (cost=000..551.00 rows=8800\n width=380) (actual time=0.014..7.388 rows=8800 loops=1) Planning time: 0.142 ms Execution time: 64.504 ms(8 rows)Is it expected that jsonb_agg performance would be that much worse than json_agg?", "msg_date": "Fri, 29 Jan 2016 14:06:23 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "jsonb_agg performance" }, { "msg_contents": "\n\nOn 01/29/2016 05:06 PM, [email protected] wrote:\n> The jsonb_agg function seems to have significantly worse performance \n> than its json_agg counterpart:\n>\n> => explain analyze select pa.product_id, jsonb_agg(attributes) from \n> product_attributes2 pa group by pa.product_id;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=1127.54..1231.62 rows=3046 width=380) (actual \n> time=28.632..241.647 rows=3046 loops=1)\n> Group Key: product_id\n> -> Sort (cost=1127.54..1149.54 rows=8800 width=380) (actual \n> time=28.526..32.826 rows=8800 loops=1)\n> Sort Key: product_id\n> Sort Method: external sort Disk: 3360kB\n> -> Seq Scan on product_attributes2 pa (cost=0.00..551.00 \n> rows=8800 width=380) (actual time=0.010..7.231 rows=8800 loops=1)\n> Planning time: 0.376 ms\n> Execution time: 242.963 ms\n> (8 rows)\n>\n> => explain analyze select pa.product_id, json_agg(attributes) from \n> product_attributes3 pa group by pa.product_id;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=1136.54..1240.62 rows=3046 width=387) (actual \n> time=17.731..30.126 rows=3046 loops=1)\n> Group Key: product_id\n> -> Sort (cost=1136.54..1158.54 rows=8800 width=387) (actual \n> time=17.707..20.705 rows=8800 loops=1)\n> Sort Key: product_id\n> Sort Method: external sort Disk: 3416kB\n> -> Seq Scan on product_attributes3 pa (cost=0.00..560.00 \n> rows=8800 width=387) (actual time=0.006..5.568 rows=8800 loops=1)\n> Planning time: 0.181 ms\n> Execution time: 31.276 ms\n> (8 rows)\n>\n> The only difference between the two tables is the type of the \n> attributes column (jsonb vs json). Each table contains the same 8800 \n> rows. 
Even running json_agg on the jsonb column seems to be faster:\n>\n> => explain analyze select pa.product_id, json_agg(attributes) from \n> product_attributes2 pa group by pa.product_id;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=1127.54..1231.62 rows=3046 width=380) (actual \n> time=30.626..62.943 rows=3046 loops=1)\n> Group Key: product_id\n> -> Sort (cost=1127.54..1149.54 rows=8800 width=380) (actual \n> time=30.590..34.157 rows=8800 loops=1)\n> Sort Key: product_id\n> Sort Method: external sort Disk: 3360kB\n> -> Seq Scan on product_attributes2 pa (cost=000..551.00 \n> rows=8800 width=380) (actual time=0.014..7.388 rows=8800 loops=1)\n> Planning time: 0.142 ms\n> Execution time: 64.504 ms\n> (8 rows)\n>\n> Is it expected that jsonb_agg performance would be that much worse \n> than json_agg? \n\n\nI do expect it to be significantly worse. Constructing jsonb is quite a \nlot more expensive than constructing json, it's the later processing \nthat provides the performance benefit of jsonb. For 99 out of 100 uses \nthat I have seen there is no need to be using jsonb_agg, since the \noutput is almost always fed straight back to the client, not stored or \nprocessed further in the database. Rendering json to the client is \nextremely cheap, since it's already just text. Rendering jsonb as text \nto the client involves a lot more processing.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 29 Jan 2016 18:34:34 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonb_agg performance" } ]
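A short sketch of the split Andrew recommends, reusing the table and column names from the thread (the GIN index and the containment key are illustrative assumptions, not something the original poster showed): keep the stored column jsonb so in-database processing stays cheap, and aggregate with json_agg when the result is only rendered back to the client.

-- Where jsonb earns its construction cost: indexed server-side processing.
CREATE INDEX product_attributes2_attributes_gin
    ON product_attributes2 USING gin (attributes);

SELECT product_id
FROM   product_attributes2
WHERE  attributes @> '{"color": "red"}';

-- Where json is the cheaper aggregate: output that goes straight to the
-- client (this is the third query above, json_agg over the jsonb column).
SELECT pa.product_id, json_agg(pa.attributes)
FROM   product_attributes2 pa
GROUP  BY pa.product_id;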
[ { "msg_contents": "Dear all,\nFirst of all, I should apologize if my email doesn't follow all the guidelines.\nI'm trying to do that though!\n\nIf referencing to links is OK, you can find the full description of\nthe issue at:\nhttp://dba.stackexchange.com/questions/127082/postgresql-seems-to-create-inefficient-plans-in-simple-conditional-joins\n\nIt contains table definitions, queries, explan/explan analyze for them, and\na description of test conditions. But I'll provide a summary of the planning\nissue below.\n\nI'm using postgresql 9.3. I've run VACCUME ANALYZE on DB and it is\nnot modified after that.\n\nConsider these tables:\nCREATE TABLE t1\n(\n id bigint NOT NULL DEFAULT nextval('ids_seq'::regclass),\n total integer NOT NULL,\n price integer NOT NULL,\n CONSTRAINT pk_t1 PRIMARY KEY (id)\n)\n\nCREATE TABLE t2\n(\n id bigint NOT NULL,\n category smallint NOT NULL,\n CONSTRAINT pk_t2 PRIMARY KEY (id),\n CONSTRAINT fk_id FOREIGN KEY (id)\n REFERENCES t1 (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\nPersonally, I expect both queries below to perform exactly the same:\n\nSELECT\n t1.id, *\nFROM\n t1\nINNER JOIN\n t2 ON t1.id = t2.id\n where t1.id > -9223372036513411363;\n\nAnd:\n\nSELECT\n t1.id, *\nFROM\n t1\nINNER JOIN\n t2 ON t1.id = t2.id\n where t1.id > -9223372036513411363 and t2.id > -9223372036513411363;\n\nUnfortunately, they do not. PostgreSQL creates different plans for these\nqueries, which results in very poor performance for the first one compared\nto the second (What I'm testing against is a DB with around 350 million\nrows in t1, and slightly less in t2).\n\nEXPLAIN output:\nFirst query: http://explain.depesz.com/s/uauk\nSecond query: link: http://explain.depesz.com/s/uQd\n\nThe problem with the plan for the first query is that it limits\nindex scan on t1 with the where condition, but doesn't do so for t2.\n\nA similar behavior happens if you replace INNER JOIN with LEFT JOIN,\nand if you use \"USING (id) where id > -9223372036513411363\" instead\nof \"ON ...\".\n\nBut it is important to get the first query right. Consider that I want to create\na view on SELECT statement (without condition) to simplify creating queries on\nthe data. If providing a single id column in the view, a SELECT query\non the view\nwith such a condition on id column will result in a query similar to\nthe first one.\nWith this problem, I should provide both ID columns in the view so that queries\ncan add each condition on ID column for both of them. Now assume what happens\nwhen we are joining many tables together with ID column...\n\nIs there anything wrong with my queries or with me expecting both queries to be\nthe sam? Can I do anything so that PostgreSQL will behave similarly for the\nfirst query? 
Or if this is fixed in newer versions?\n\nThanks in advance,\nHedayat\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 30 Jan 2016 16:00:54 +0330", "msg_from": "Hedayat Vatankhah <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL seems to create inefficient plans in simple conditional\n joins" }, { "msg_contents": "On 31 January 2016 at 01:30, Hedayat Vatankhah <[email protected]> wrote:\n> Personally, I expect both queries below to perform exactly the same:\n>\n> SELECT\n> t1.id, *\n> FROM\n> t1\n> INNER JOIN\n> t2 ON t1.id = t2.id\n> where t1.id > -9223372036513411363;\n>\n> And:\n>\n> SELECT\n> t1.id, *\n> FROM\n> t1\n> INNER JOIN\n> t2 ON t1.id = t2.id\n> where t1.id > -9223372036513411363 and t2.id > -9223372036513411363;\n>\n> Unfortunately, they do not. PostgreSQL creates different plans for these\n> queries, which results in very poor performance for the first one compared\n> to the second (What I'm testing against is a DB with around 350 million\n> rows in t1, and slightly less in t2).\n>\n> EXPLAIN output:\n> First query: http://explain.depesz.com/s/uauk\n> Second query: link: http://explain.depesz.com/s/uQd\n\nYes, unfortunately you've done about the only thing that you can do,\nand that's just include both conditions in the query. Is there some\nspecial reason why you can't just write the t2.id > ... condition in\nthe query too? or is the query generated dynamically by some software\nthat you have no control over?\n\nI'd personally quite like to see improvements in this area, and even\nwrote a patch [1] which fixes this problem too. The problem I had when\nproposing the fix for this was that I was unable to report details\nabout how many people are hit by this planner limitation. The patch I\nproposed caused a very small impact on planning time for many queries,\nand was thought by many not to apply in enough cases for it to be\nworth slowing down queries which cannot possibly benefit. Of course I\nagree with this, I've no interest in slowing down planning on queries,\nbut at the same time understand the annoying poor optimisation in this\narea.\n\nAlthough please remember the patch I proposed was merely a first draft\nproposal. Not for production use.\n\n[1] http://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com#CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 31 Jan 2016 04:57:04 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL seems to create inefficient plans in simple\n conditional joins" }, { "msg_contents": "It may be more for -hackers, but I often hear \"this wont be used because of\nplanning time increase\". Now as I know we have statistics on real query\ntime after few runs that is used to decide if plan should be switched.\nCan this statistics be used to apply advanced planning features for\nrelatively long running queries? E.g. a parameter like\nsophisticated_planning_l1_threshold=500ms. If query runs over this\nthreshold, replan it with more sophisticated features taking few more\nmillis. 
Possibly different levels can be introduced. Also allow to set\nthreshold to 0, saying \"apply to all queries right away\".\nAnother good option is to threshold against cumulative query time. E.g. if\nthere was 10000 runs 0.5 millis each, it may be beneficial to spend few\nmillis to get 0.2 millis each.\n\nBest regards, Vitalii Tymchyshyn\n\nСб, 30 січ. 2016 10:57 David Rowley <[email protected]> пише:\n\n> On 31 January 2016 at 01:30, Hedayat Vatankhah <[email protected]>\n> wrote:\n> > Personally, I expect both queries below to perform exactly the same:\n> >\n> > SELECT\n> > t1.id, *\n> > FROM\n> > t1\n> > INNER JOIN\n> > t2 ON t1.id = t2.id\n> > where t1.id > -9223372036513411363;\n> >\n> > And:\n> >\n> > SELECT\n> > t1.id, *\n> > FROM\n> > t1\n> > INNER JOIN\n> > t2 ON t1.id = t2.id\n> > where t1.id > -9223372036513411363 and t2.id > -9223372036513411363;\n> >\n> > Unfortunately, they do not. PostgreSQL creates different plans for these\n> > queries, which results in very poor performance for the first one\n> compared\n> > to the second (What I'm testing against is a DB with around 350 million\n> > rows in t1, and slightly less in t2).\n> >\n> > EXPLAIN output:\n> > First query: http://explain.depesz.com/s/uauk\n> > Second query: link: http://explain.depesz.com/s/uQd\n>\n> Yes, unfortunately you've done about the only thing that you can do,\n> and that's just include both conditions in the query. Is there some\n> special reason why you can't just write the t2.id > ... condition in\n> the query too? or is the query generated dynamically by some software\n> that you have no control over?\n>\n> I'd personally quite like to see improvements in this area, and even\n> wrote a patch [1] which fixes this problem too. The problem I had when\n> proposing the fix for this was that I was unable to report details\n> about how many people are hit by this planner limitation. The patch I\n> proposed caused a very small impact on planning time for many queries,\n> and was thought by many not to apply in enough cases for it to be\n> worth slowing down queries which cannot possibly benefit. Of course I\n> agree with this, I've no interest in slowing down planning on queries,\n> but at the same time understand the annoying poor optimisation in this\n> area.\n>\n> Although please remember the patch I proposed was merely a first draft\n> proposal. Not for production use.\n>\n> [1]\n> http://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com#CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIt may be more for -hackers, but I often hear \"this wont be used because of planning time increase\". Now as I know we have statistics on real query time after few runs that is used to decide if plan should be switched. \nCan this statistics be used to apply advanced planning features for relatively long running queries? E.g. a parameter like sophisticated_planning_l1_threshold=500ms. If query runs over this threshold, replan it with more sophisticated features taking few more millis. Possibly different levels can be introduced. 
Also allow to set threshold to 0, saying \"apply to all queries right away\".\nAnother good option is to threshold against cumulative query time. E.g. if there was 10000 runs 0.5 millis each, it may be beneficial to spend few millis to get 0.2 millis each.\nBest regards, Vitalii Tymchyshyn\nСб, 30 січ. 2016 10:57 David Rowley <[email protected]> пише:On 31 January 2016 at 01:30, Hedayat Vatankhah <[email protected]> wrote:\n> Personally, I expect both queries below to perform exactly the same:\n>\n> SELECT\n>     t1.id, *\n> FROM\n>     t1\n> INNER JOIN\n>     t2 ON t1.id = t2.id\n>     where t1.id > -9223372036513411363;\n>\n> And:\n>\n> SELECT\n>     t1.id, *\n> FROM\n>     t1\n> INNER JOIN\n>     t2 ON t1.id = t2.id\n>     where t1.id > -9223372036513411363 and t2.id > -9223372036513411363;\n>\n> Unfortunately, they do not. PostgreSQL creates different plans for these\n> queries, which results in very poor performance for the first one compared\n> to the second (What I'm testing against is a DB with around 350 million\n> rows in t1, and slightly less in t2).\n>\n> EXPLAIN output:\n> First query: http://explain.depesz.com/s/uauk\n> Second query: link: http://explain.depesz.com/s/uQd\n\nYes, unfortunately you've done about the only thing that you can do,\nand that's just include both conditions in the query. Is there some\nspecial reason why you can't just write the t2.id > ... condition in\nthe query too? or is the query generated dynamically by some software\nthat you have no control over?\n\nI'd personally quite like to see improvements in this area, and even\nwrote a patch [1] which fixes this problem too. The problem I had when\nproposing the fix for this was that I was unable to report details\nabout how many people are hit by this planner limitation. The patch I\nproposed caused a very small impact on planning time for many queries,\nand was thought by many not to apply in enough cases for it to be\nworth slowing down queries which cannot possibly benefit. Of course I\nagree with this, I've no interest in slowing down planning on queries,\nbut at the same time understand the annoying poor optimisation in this\narea.\n\nAlthough please remember the patch I proposed was merely a first draft\nproposal. Not for production use.\n\n[1] http://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com#CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com\n\n--\n David Rowley                   http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 30 Jan 2016 17:14:47 +0000", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL seems to create inefficient plans in simple\n conditional joins" }, { "msg_contents": "On 31 January 2016 at 06:14, Vitalii Tymchyshyn <[email protected]> wrote:\n> It may be more for -hackers, but I often hear \"this wont be used because of\n> planning time increase\". Now as I know we have statistics on real query time\n> after few runs that is used to decide if plan should be switched.\n> Can this statistics be used to apply advanced planning features for\n> relatively long running queries? E.g. a parameter like\n> sophisticated_planning_l1_threshold=500ms. 
If query runs over this\n> threshold, replan it with more sophisticated features taking few more\n> millis. Possibly different levels can be introduced. Also allow to set\n> threshold to 0, saying \"apply to all queries right away\".\n> Another good option is to threshold against cumulative query time. E.g. if\n> there was 10000 runs 0.5 millis each, it may be beneficial to spend few\n> millis to get 0.2 millis each.\n\nI agree with you. I recently was working with long running queries on\na large 3TB database. I discovered a new optimisation was possible,\nand wrote a patch to implement. On testing the extra work which the\noptimiser performed took 7 micoseconds, and this saved 6 hours of\nexecution time. Now, I've never been much of an investor in my life,\nbut a 3 billion times return on an investment seems quite favourable.\nOf course, that's quite an extreme case, but it's hard to ignore the\nbenefit is still significant in less extreme cases.\n\nThe idea you've mentioned here is very similar to what I bought up at\nthe developer meeting a few days ago, see AOB section in [1]\n\nUnfortunately I didn't really get many of the correct people on my\nside with it, and some wanted examples of specific patches, which is\ncompletely not what I wanted to talk about. I was more aiming for some\nagreement for generic infrastructure to do exactly as you describe.\n\n[1] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2016_Developer_Meeting\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 31 Jan 2016 06:31:05 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL seems to create inefficient plans in simple\n conditional joins" }, { "msg_contents": "Well, as I can see it was just few phrases unless I miss something. May be\nit's worth to bring it to -hackers for a wider discussion?\n\nBest regards, Vitalii Tymchyshyn\n\nСб, 30 січ. 2016 12:31 David Rowley <[email protected]> пише:\n\n> On 31 January 2016 at 06:14, Vitalii Tymchyshyn <[email protected]> wrote:\n> > It may be more for -hackers, but I often hear \"this wont be used because\n> of\n> > planning time increase\". Now as I know we have statistics on real query\n> time\n> > after few runs that is used to decide if plan should be switched.\n> > Can this statistics be used to apply advanced planning features for\n> > relatively long running queries? E.g. a parameter like\n> > sophisticated_planning_l1_threshold=500ms. If query runs over this\n> > threshold, replan it with more sophisticated features taking few more\n> > millis. Possibly different levels can be introduced. Also allow to set\n> > threshold to 0, saying \"apply to all queries right away\".\n> > Another good option is to threshold against cumulative query time. E.g.\n> if\n> > there was 10000 runs 0.5 millis each, it may be beneficial to spend few\n> > millis to get 0.2 millis each.\n>\n> I agree with you. I recently was working with long running queries on\n> a large 3TB database. I discovered a new optimisation was possible,\n> and wrote a patch to implement. On testing the extra work which the\n> optimiser performed took 7 micoseconds, and this saved 6 hours of\n> execution time. 
Now, I've never been much of an investor in my life,\n> but a 3 billion times return on an investment seems quite favourable.\n> Of course, that's quite an extreme case, but it's hard to ignore the\n> benefit is still significant in less extreme cases.\n>\n> The idea you've mentioned here is very similar to what I bought up at\n> the developer meeting a few days ago, see AOB section in [1]\n>\n> Unfortunately I didn't really get many of the correct people on my\n> side with it, and some wanted examples of specific patches, which is\n> completely not what I wanted to talk about. I was more aiming for some\n> agreement for generic infrastructure to do exactly as you describe.\n>\n> [1] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2016_Developer_Meeting\n>\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWell, as I can see it was just few phrases unless I miss something. May be it's worth to bring it to -hackers for a wider discussion?\nBest regards, Vitalii Tymchyshyn\nСб, 30 січ. 2016 12:31 David Rowley <[email protected]> пише:On 31 January 2016 at 06:14, Vitalii Tymchyshyn <[email protected]> wrote:\n> It may be more for -hackers, but I often hear \"this wont be used because of\n> planning time increase\". Now as I know we have statistics on real query time\n> after few runs that is used to decide if plan should be switched.\n> Can this statistics be used to apply advanced planning features for\n> relatively long running queries? E.g. a parameter like\n> sophisticated_planning_l1_threshold=500ms. If query runs over this\n> threshold, replan it with more sophisticated features taking few more\n> millis. Possibly different levels can be introduced. Also allow to set\n> threshold to 0, saying \"apply to all queries right away\".\n> Another good option is to threshold against cumulative query time. E.g. if\n> there was 10000 runs 0.5 millis each, it may be beneficial to spend few\n> millis to get 0.2 millis each.\n\nI agree with you. I recently was working with long running queries on\na large 3TB database. I discovered a new optimisation was possible,\nand wrote a patch to implement. On testing the extra work which the\noptimiser performed took 7 micoseconds, and this saved 6 hours of\nexecution time. Now, I've never been much of an investor in my life,\nbut a 3 billion times return on an investment seems quite favourable.\nOf course, that's quite an extreme case, but it's hard to ignore the\nbenefit is still significant in less extreme cases.\n\nThe idea you've mentioned here is very similar to what I bought up at\nthe developer meeting a few days ago, see AOB section in [1]\n\nUnfortunately I didn't really get many of the correct people on my\nside with it, and some wanted examples of specific patches, which is\ncompletely not what I wanted to talk about. 
I was more aiming for some\nagreement for generic infrastructure to do exactly as you describe.\n\n[1]  https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2016_Developer_Meeting\n\n\n--\n David Rowley                   http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 30 Jan 2016 18:52:09 +0000", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL seems to create inefficient plans in simple\n conditional joins" }, { "msg_contents": "Hi,\n\n/*David Rowley*/ wrote on Sun, 31 Jan 2016 04:57:04 +1300:\n> On 31 January 2016 at 01:30, Hedayat Vatankhah <[email protected]> wrote:\n>> Personally, I expect both queries below to perform exactly the same:\n>>\n>> SELECT\n>> t1.id, *\n>> FROM\n>> t1\n>> INNER JOIN\n>> t2 ON t1.id = t2.id\n>> where t1.id > -9223372036513411363;\n>>\n>> And:\n>>\n>> SELECT\n>> t1.id, *\n>> FROM\n>> t1\n>> INNER JOIN\n>> t2 ON t1.id = t2.id\n>> where t1.id > -9223372036513411363 and t2.id > -9223372036513411363;\n>>\n>> Unfortunately, they do not. PostgreSQL creates different plans for these\n>> queries, which results in very poor performance for the first one compared\n>> to the second (What I'm testing against is a DB with around 350 million\n>> rows in t1, and slightly less in t2).\n>>\n>> EXPLAIN output:\n>> First query: http://explain.depesz.com/s/uauk\n>> Second query: link: http://explain.depesz.com/s/uQd\n> Yes, unfortunately you've done about the only thing that you can do,\n> and that's just include both conditions in the query. Is there some\n> special reason why you can't just write the t2.id > ... condition in\n> the query too? or is the query generated dynamically by some software\n> that you have no control over?\nI can, but it would make my application code much more complex. I was \nhoping to be able to hide the complexity of DB data model in DB itself \nusing views, triggers etc. If I want to add such conditions, the query \ngenerator in my application code would be more complex, and certainly \nthe internal structure of DB will be visible to it.\n\nI'm working to re-design a DB which can grow large and slow, as I guess \nthat we can find a more optimal design before trying optimizations like \nusing materialized views and other common optimizations. I've found two \ncompletely different approaches for such problems: de-normalizing data, \nhighly normalizing data (6NF) like Anchor Modeling approach. I decided \nto experiment with something similar to the latter one (not that \nextreme!) specially since our current design was not that normalized, \nand it performs poorly. I'm investigating why it should perform so bad \nwith my queries, and this problem was one of the reasons. In such a \ndesign, views are used to present the JOIN of many tables as a single \ntable, so that using the model is easy and transparent. But usually a \nsingle table doesn't have 10 ID columns (which can change as the model \nchanges) for which you should repeat any conditions to get acceptable \nresults!\nWhile it can be done, it is so annoying: the application should know how \nmany tables are joined together, and repeat the condition for all such \ncolumns. And the problem become worse when you are going to create a \nrelation between two different IDs of different data, e.g. 
relating \ncustomer info (composed of joining 5 tables) with info about items (s)he \nbought (composed of joining 3 tables).\n\nAnyway, it seems that this is what I should implement in my application \ncode. I just hope that adding explicit conditions for each joined table \nwill not turn off any other optimizations!\n\nSuch an optimization seemed so natural to me that I didn't believe that \nPostgreSQL doesn't understand that a condition on ID applies to all id \ncolumns in a JOINed query, that I simplified my query step by step until \nI reached the minimum problematic query which is very similar to the one \nI posted here. It was at this point that I finally realized that maybe \nPostgreSQL really doesn't understand it, and I was ... shocked!\n\n\n> I'd personally quite like to see improvements in this area, and even\n> wrote a patch [1] which fixes this problem too. The problem I had when\n> proposing the fix for this was that I was unable to report details\n> about how many people are hit by this planner limitation. The patch I\n> proposed caused a very small impact on planning time for many queries,\n> and was thought by many not to apply in enough cases for it to be\n> worth slowing down queries which cannot possibly benefit. Of course I\n> agree with this, I've no interest in slowing down planning on queries,\n> but at the same time understand the annoying poor optimisation in this\n> area.\n>\n> Although please remember the patch I proposed was merely a first draft\n> proposal. Not for production use.\n>\n> [1] http://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com#CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A@mail.gmail.com\n>\nThat's great, I might consider experimenting with this too.\n\nRegards,\nHedayat\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 31 Jan 2016 01:20:53 +0330", "msg_from": "Hedayat Vatankhah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL seems to create inefficient plans in simple\n conditional joins" }, { "msg_contents": "Hi again,\n\n/*Hedayat Vatankhah*/ wrote on Sun, 31 Jan 2016 01:20:53 +0330:\n> Hi,\n>\n> /*David Rowley*/ wrote on Sun, 31 Jan 2016 04:57:04 +1300:\n>> On 31 January 2016 at 01:30, Hedayat Vatankhah\n>> <[email protected]> wrote:\n>>> Personally, I expect both queries below to perform exactly the same:\n>>>\n>>> SELECT\n>>> t1.id, *\n>>> FROM\n>>> t1\n>>> INNER JOIN\n>>> t2 ON t1.id = t2.id\n>>> where t1.id > -9223372036513411363;\n>>>\n>>> And:\n>>>\n>>> SELECT\n>>> t1.id, *\n>>> FROM\n>>> t1\n>>> INNER JOIN\n>>> t2 ON t1.id = t2.id\n>>> where t1.id > -9223372036513411363 and t2.id >\n>>> -9223372036513411363;\n>>>\n>>> Unfortunately, they do not. PostgreSQL creates different plans for\n>>> these\n>>> queries, which results in very poor performance for the first one\n>>> compared\n>>> to the second (What I'm testing against is a DB with around 350 million\n>>> rows in t1, and slightly less in t2).\n>>>\n>>> EXPLAIN output:\n>>> First query: http://explain.depesz.com/s/uauk\n>>> Second query: link: http://explain.depesz.com/s/uQd\n>> Yes, unfortunately you've done about the only thing that you can do,\n>> and that's just include both conditions in the query. Is there some\n>> special reason why you can't just write the t2.id > ... condition in\n>> the query too? 
or is the query generated dynamically by some software\n>> that you have no control over?\n\nI just found another issue with using a query like the second one (using \nLEFT JOINs instead of INNER JOINs): referencing id columns of joined \ntables explicitly disables PostgreSQL join removal optimization when you \nonly select column(s) from t1! :(\nI should forget about creating views on top of JOIN queries, and build \nappropriate JOIN queries with referenced table and appropriate \nconditions manually, so the whole data model should be exposed to the \napplication.\n\nIf I'm not wrong, PostgreSQL should understand that ANY condition on t2 \ndoesn't change the LEFT JOIN output when t2 columns are not SELECTed.\n\nRegards,\nHedayat\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 8 Feb 2016 12:20:49 +0330", "msg_from": "Hedayat Vatankhah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL seems to create inefficient plans in simple\n conditional joins" } ]
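A sketch of the two trade-offs discussed in this thread, using the t1/t2 definitions from the first message: the planner only propagates equality conditions through the join's equivalence class, so the range predicate has to be written against both id columns by hand, and doing exactly that is what later defeats LEFT JOIN removal.

-- Workaround for the first problem: duplicate the range condition.
SELECT t1.id, *
FROM   t1
JOIN   t2 ON t1.id = t2.id
WHERE  t1.id > -9223372036513411363
  AND  t2.id > -9223372036513411363;

-- Join removal, by contrast, only fires while t2 is provably irrelevant:
-- t2.id is t2's primary key and no t2 column is selected or filtered on,
-- so here the LEFT JOIN can be dropped entirely...
SELECT t1.id, t1.total, t1.price
FROM   t1
LEFT JOIN t2 ON t1.id = t2.id
WHERE  t1.id > -9223372036513411363;

-- ...whereas adding "AND t2.id > -9223372036513411363" keeps t2 in the plan,
-- which is the conflict described in the last message above.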
[ { "msg_contents": "Hi all,\n\nI have a recursive part in my database logic that I want to isolate and\nreuse as a view. I had found a blog that explained how move a function\nparameter into a view. The SQL is in attachment.\nWhen I write a query based on that view with a fixed value (or values) for\nthe (input) parameter, the planner does fine and only evaluates the\nfunction once.\nHowever, when the value of the parameter should be deduced from something\nelse, the planner doesn't understand that and will evaluate the function\nfor each possible value.\n\nAny pointers to what I'm doing wrong or on how to optimize it?\n\nAttachment contains the queries and explain plans.\n\nThanks!\n\nKind regards,\nMathieu\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 01 Feb 2016 07:23:40 +0000", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "View containing a recursive function" }, { "msg_contents": "Mathieu De Zutter <[email protected]> writes:\n> I have a recursive part in my database logic that I want to isolate and\n> reuse as a view. I had found a blog that explained how move a function\n> parameter into a view. The SQL is in attachment.\n> When I write a query based on that view with a fixed value (or values) for\n> the (input) parameter, the planner does fine and only evaluates the\n> function once.\n> However, when the value of the parameter should be deduced from something\n> else, the planner doesn't understand that and will evaluate the function\n> for each possible value.\n\nI do not think this has anything to do with whether the query inside the\nfunction is recursive. Rather, the problem is that the view has a\nset-returning function in its targetlist, which prevents the view from\nbeing flattened into the outer query, per this bit in is_simple_subquery():\n\n /*\n * Don't pull up a subquery that has any set-returning functions in its\n * targetlist. Otherwise we might well wind up inserting set-returning\n * functions into places where they mustn't go, such as quals of higher\n * queries. This also ensures deletion of an empty jointree is valid.\n */\n if (expression_returns_set((Node *) subquery->targetList))\n return false;\n\nLack of flattening disables a lot of join optimizations, including the\none you want.\n\nAssuming you have a reasonably late-model PG, you could rewrite the\nview with a lateral function call:\n\nCREATE OR REPLACE VIEW covering_works_r AS\n SELECT\n w.id AS work_id,\n fn.f AS covering_work_id\n FROM work w, fn_covering_works(w.id) as fn(f);\n\nwhich puts the SRF into FROM where the planner can deal with it much\nbetter.\n\nAnother problem is that you let the function default to being VOLATILE,\nwhich would have disabled view flattening even if this didn't. 
I see\nno reason for this function not to be marked STABLE.\n\nDoing both of those things gives me a plan like this:\n\n Nested Loop (cost=448.24..509.53 rows=1131 width=4)\n -> Nested Loop (cost=0.31..16.36 rows=1 width=8)\n -> Index Scan using work_first_release_id_idx on work w (cost=0.15..8.17 rows=1 width=4)\n Index Cond: (first_release_id = 4249)\n -> Index Only Scan using work_pkey on work w_1 (cost=0.15..8.17 rows=1 width=4)\n Index Cond: (id = w.id)\n -> CTE Scan on func (cost=447.93..470.55 rows=1131 width=0)\n CTE func\n -> Recursive Union (cost=0.00..447.93 rows=1131 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Hash Join (cost=0.33..42.53 rows=113 width=4)\n Hash Cond: (ad.original_id = f.work_id)\n -> Seq Scan on adaptation ad (cost=0.00..32.60 rows=2260 width=8)\n -> Hash (cost=0.20..0.20 rows=10 width=4)\n -> WorkTable Scan on func f (cost=0.00..0.20 rows=10 width=4)\n\nwhich looks hairier, but that's because the function has been inlined\nwhich is usually what you want for a SQL-language function. The join\nis happening the way you want.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 01 Feb 2016 10:44:36 +0100", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View containing a recursive function" }, { "msg_contents": "On Mon, 1 Feb 2016 at 10:45 Tom Lane <[email protected]> wrote:\n\n> Mathieu De Zutter <[email protected]> writes:\n> Assuming you have a reasonably late-model PG, you could rewrite the\n> view with a lateral function call:\n>\n> CREATE OR REPLACE VIEW covering_works_r AS\n> SELECT\n> w.id AS work_id,\n> fn.f AS covering_work_id\n> FROM work w, fn_covering_works(w.id) as fn(f);\n>\n> which puts the SRF into FROM where the planner can deal with it much\n> better.\n>\n\nThanks a lot. That fixes it!\n\nAnother problem is that you let the function default to being VOLATILE,\n> which would have disabled view flattening even if this didn't. I see\n> no reason for this function not to be marked STABLE.\n>\n\nBy marking it STABLE, it ignores my row estimate of 1 - I guess because of\nthe inlining. The number of results is usually just 1, though the number\ncan go up to 10 in exceptional cases. 
That's still a lot better than the\ninexplicable estimate of the planner (101) when marked STABLE, which often\nleads to triggering a hash join instead of a nested loop in complex queries:\n\n-> Recursive Union (cost=0.00..795.53 rows=*101* width=4) (actual\ntime=0.001..0.009 rows=1 loops=4)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.001..0.001 rows=1 loops=4)\n -> Nested Loop (cost=0.29..79.35 rows=10 width=4) (actual\ntime=0.005..0.005 rows=0 loops=5)\n -> WorkTable Scan on func f (cost=0.00..0.20 rows=10 width=4)\n(actual time=0.000..0.000 rows=1 loops=5)\n -> Index Scan using adaptation_adapted_idx on adaptation ad\n (cost=0.29..7.91 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=5)\n Index Cond: (adapted_id = f.work_id)\n\n\nThanks again,\n\nMathieu\n\nOn Mon, 1 Feb 2016 at 10:45 Tom Lane <[email protected]> wrote:Mathieu De Zutter <[email protected]> writes:Assuming you have a reasonably late-model PG, you could rewrite the\nview with a lateral function call:\n\nCREATE OR REPLACE VIEW covering_works_r AS\n  SELECT\n    w.id                    AS work_id,\n    fn.f                    AS covering_work_id\n  FROM work w, fn_covering_works(w.id) as fn(f);\n\nwhich puts the SRF into FROM where the planner can deal with it much\nbetter. Thanks a lot. That fixes it! \nAnother problem is that you let the function default to being VOLATILE,\nwhich would have disabled view flattening even if this didn't.  I see\nno reason for this function not to be marked STABLE.By marking it STABLE, it ignores my row estimate of 1 - I guess because of the inlining. The number of results is usually just 1, though the number can go up to 10 in exceptional cases. That's still a lot better than the inexplicable estimate of the planner (101) when marked STABLE, which often leads to triggering a hash join instead of a nested loop in complex queries:->  Recursive Union  (cost=0.00..795.53 rows=101 width=4) (actual time=0.001..0.009 rows=1 loops=4)      ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=4)      ->  Nested Loop  (cost=0.29..79.35 rows=10 width=4) (actual time=0.005..0.005 rows=0 loops=5)            ->  WorkTable Scan on func f  (cost=0.00..0.20 rows=10 width=4) (actual time=0.000..0.000 rows=1 loops=5)            ->  Index Scan using adaptation_adapted_idx on adaptation ad  (cost=0.29..7.91 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=5)                  Index Cond: (adapted_id = f.work_id)Thanks again,Mathieu", "msg_date": "Fri, 05 Feb 2016 15:43:44 +0000", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View containing a recursive function" } ]
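For reference, a self-contained sketch of the pattern that ends up working in this thread: the set-returning SQL function marked STABLE and called laterally from the view. The function body below is assumed (the original attachment is not reproduced here); the adaptation(original_id, adapted_id) columns are taken from the plans above:

    CREATE OR REPLACE FUNCTION fn_covering_works(p_work_id integer)
    RETURNS SETOF integer
    LANGUAGE sql STABLE ROWS 1 AS $$
      WITH RECURSIVE func(work_id) AS (
        SELECT p_work_id
        UNION ALL
        SELECT ad.original_id
        FROM func f
        JOIN adaptation ad ON ad.adapted_id = f.work_id
      )
      SELECT work_id FROM func;
    $$;

    CREATE OR REPLACE VIEW covering_works_r AS
      SELECT w.id AS work_id,
             fn.f AS covering_work_id
      FROM work w, fn_covering_works(w.id) AS fn(f);

Note that once the planner inlines the SQL function, the ROWS 1 hint is ignored and the estimate comes from the planner's generic guess for the inlined recursive CTE (the 101 seen in the last plan), not from the hint.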
[ { "msg_contents": "Hi,\n\nI'm trying to understand how to estimate and minimize memory\nconsumption of ANALYZE operations on \"big\" tsvector columns.\n\nContext:\n\nPostgresql server is 9.1.19 (Ubuntu package 9.1.19-0ubuntu0.12.04).\n\nI have a database on which I noticed that autovacuum operations could\nconsume up to 2 GB of resident memory (observed with top's RSS\ncolumn).\n\nThis is sometime problematic because with 3 autovacuum processes\n(default value on Ubuntu), this leads to a peak usage of 6 GB, and\nsends the server into swapin/swapout madness.\n\nThis typically happens during restoration of dumps or massive updates\nin the database, which triggers the autovacuum processes, and slows\ndown the server during the execution of these operations due to the\nswapin/swapout.\n\nUp to today we addressed this behavior by either disabling autovacuum\nor temporarily bumping the VM's memory limit for the duration of the\noperation.\n\nNow, I think I managed to replicate and isolate the problem.\n\nMy analysis:\n\nI have a table with ~20k tuples, and 350 columns with type int, text\nand tsvector.\n\nI created a copy of this table and iteratively dropped some columns to\nsee if a specific column was the cause of this spike in memory usage.\nAnd I came to the simple case of a table with a single tsvector column\nthat causes ANALYZE to consume up to 2 GB or memory.\n\nSo, this table has a single column of type tsvector, and this column\nis quite big because as it is originally the concatenation of all the\nother tsvector columns from the table (and this tsvector columns also\nhas a GIST index).\n\nHere is the top 10 length for this column :\n\n--8<--\n# SELECT length(fulltext) FROM test ORDER BY length DESC LIMIT 10;\nlength\n--------\n87449\n87449\n87401\n87272\n87261\n87259\n87259\n87259\n87257\n87257\n(10 rows)\n-->8--\n\nI tried playing with \"default_statistics_target\" (which is set to\n100): if I reduce it to 5, then the ANALYZE is almost immediate and\nconsumes less than ~200 MB. At 10, the process starts to consume up to\n~440 MB.\n\nI see no difference in Postgresql's planning selection between\n\"default_statistics_target\" 1 and 100: EXPLAIN ANALYZE shows the same\nplan being executed using the GIST index (for a simple \"SELECT\ncount(ctid) FROM test WHERE fulltext @@ 'Hello'\").\n\nSo:\n- Is there a way to estimate or reduce ANALYZE's peak memory usage on\nthis kind of tables?\n- Is it \"safe\" to set STATISTICS = 1 on this particular \"big\" tsvector\ncolumns? Or could it have an adverse effect on query plan selection?\n\nI'm currently in the process of upgrading to Postgresql 9.5, so I'll\nsee if the behavior changes or not on this version.\n\nThanks,\nJérôme\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 4 Feb 2016 10:12:48 +0100", "msg_from": "=?UTF-8?B?SsOpcsO0bWUgQXVnw6k=?= <[email protected]>", "msg_from_op": true, "msg_subject": "Understanding ANALYZE memory usage with \"big\" tsvector columns" } ]
[ { "msg_contents": "Hello all,\n\n\nI've been trying to get a query use indexes and it has raised a doubt \nwhether pgsql supports bitmap and-ing between a multi-column btree index \nand a gin index.\n\nThe idea is to do a full-text search on a tsvector that is indexed with \ngin. Then there are typical extra filters like is_active that you would \nput in a btree index. Instead of using OFFSET I use a > operation on the \nid. Finally, to make sure the results listing always appear in the same \norder, I do an ORDER BY the id of the row. So something like this:\n\nCREATE INDEX idx_gin_page ON page USING gin(search_vector);\n\nCREATE INDEX idx_btree_active_iddesc ON page USING btree(is_active, id DESC);\n\nSELECT * FROM page WHERE (( (page.search_vector) @@ (plainto_tsquery('pg_catalog.english', 'myquery'))) AND page.is_active = 1 AND page.id > 100) ORDER BY page.id DESC LIMIT 100;\n\n\nSome options I considered:\n- One big multi-column index with the btree_gin module, but that didn't \nwork. I suppose it's because just like gin, it doesn't support sorting.\n- Seperate indexes as above, but that didn't work. The planner would \nalways choose the btree index to do the is_active=1 and id>100 filter \nand the sorting, and within those results do a manual filter on the \ntsvector, being extremely slow.\n\nBUT: when I remove the ORDER BY statement, the query runs really fast. \nIt uses the 2 indexes seperately and bitmap-ands them together, \nresulting in a fast executing query.\n\nSo my question is whether there is something wrong with my query or \nindexes, or does pgsql not support sorting and bitmap and-ing?\n\n\nThanks and have a nice day\nJordi\n\n\n\n\n\n\n\n Hello all,\n\n\n I've been trying to get a query use indexes and it has raised a\n doubt whether pgsql supports bitmap and-ing between a multi-column\n btree index and a gin index.\n\n The idea is to do a full-text search on a tsvector that is indexed\n with gin. Then there are typical extra filters like is_active that\n you would put in a btree index. Instead of using OFFSET I use a >\n operation on the id. Finally, to make sure the results listing\n always appear in the same order, I do an ORDER BY the id of the row.\n So something like this:\n\n\nCREATE INDEX idx_gin_page ON page USING gin(search_vector);\nCREATE INDEX idx_btree_active_iddesc ON page USING btree(is_active, id DESC);\n\nSELECT * FROM page WHERE (( (page.search_vector) @@ (plainto_tsquery('pg_catalog.english', 'myquery'))) AND page.is_active = 1 AND page.id > 100) ORDER BY page.id DESC LIMIT 100;\n\n Some options I considered:\n - One big multi-column index with the btree_gin module, but that\n didn't work. I suppose it's because just like gin, it doesn't\n support sorting.\n - Seperate indexes as above, but that didn't work. The planner would\n always choose the btree index to do the is_active=1 and id>100\n filter and the sorting, and within those results do a manual filter\n on the tsvector, being extremely slow.\n\n BUT: when I remove the ORDER BY statement, the query runs really\n fast. It uses the 2 indexes seperately and bitmap-ands them\n together, resulting in a fast executing query.\n\n So my question is whether there is something wrong with my query or\n indexes, or does pgsql not support sorting and bitmap and-ing?\n\n\n Thanks and have a nice day\n Jordi", "msg_date": "Thu, 4 Feb 2016 12:15:02 +0100", "msg_from": "Jordi <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap and-ing between btree and gin?" 
}, { "msg_contents": "Jordi <[email protected]> writes:\n> I've been trying to get a query use indexes and it has raised a doubt \n> whether pgsql supports bitmap and-ing between a multi-column btree index \n> and a gin index.\n\nSure. But such a plan would give an unordered result, meaning that we'd\nhave to process the whole table before doing the ORDER BY/LIMIT. The\nplanner evidently thinks that it's better to try to process the rows in\nID order so it can stop as soon as it's got 100. If it's wrong about\nthat, that's likely because it's got a bad estimate of the selectivity of\nthe other WHERE conditions. You might see if you can improve the\nstatistics for the search_vector column.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 04 Feb 2016 11:08:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap and-ing between btree and gin?" }, { "msg_contents": "Hi Tom, thanks for your reply, much appreciated.\n\nSo basically you're saying it's hard to do sorting in any way when a gin \nindex is involved? Neither with a complete multi-column btree_gin index \nbecause it doesn't support sorting per definition, nor with a seperate \ngin and btree because there would be an extra post-sorting step involved \nover the FULL resultset (because of the LIMIT).\n\nThen would you have any hint on how to implement pagination when doing \nfull text search?\nCause in theory, if I gave it a id>100 LIMIT 100, it might just as well \nreturn me results 150 to 250, instead of 100 to 200...\n\nPS: I already tried maxing the statistics target setting and running \nANALYSE after, with no change.\n\nRegards,\nJordi\n\n\nOn 04-02-16 17:08, Tom Lane wrote:\n> Jordi <[email protected]> writes:\n>> I've been trying to get a query use indexes and it has raised a doubt\n>> whether pgsql supports bitmap and-ing between a multi-column btree index\n>> and a gin index.\n> Sure. But such a plan would give an unordered result, meaning that we'd\n> have to process the whole table before doing the ORDER BY/LIMIT. The\n> planner evidently thinks that it's better to try to process the rows in\n> ID order so it can stop as soon as it's got 100. If it's wrong about\n> that, that's likely because it's got a bad estimate of the selectivity of\n> the other WHERE conditions. You might see if you can improve the\n> statistics for the search_vector column.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 4 Feb 2016 18:19:31 +0100", "msg_from": "Jordi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap and-ing between btree and gin?" }, { "msg_contents": "On Thu, Feb 4, 2016 at 9:19 AM, Jordi <[email protected]> wrote:\n\nThe custom here is to respond in line, not to top-post. Thanks.\n\n>\n> So basically you're saying it's hard to do sorting in any way when a gin\n> index is involved? 
Neither with a complete multi-column btree_gin index\n> because it doesn't support sorting per definition, nor with a seperate gin\n> and btree because there would be an extra post-sorting step involved over\n> the FULL resultset (because of the LIMIT).\n\nIn principle there is no reason (that I can think of) that a normal\nbtree index range scan couldn't accept a bitmap as an optional input,\nand then use that as a filter which would allow it to walk the index\nin order while throwing out tuples that can't match the other\nconditions. You are not the first person who would benefit from such\na feature. But it would certainly not be trivial to implement. It is\nnot on anyone's to-do list as far as I know.\n\n From your earlier email:\n\n> BUT: when I remove the ORDER BY statement, the query runs really fast. It uses the 2 indexes seperately and bitmap-ands them together, resulting in a fast executing query.\n\nWhen you removed the ORDER BY, did you also remove the LIMIT? If you\nremoved the ORDER BY and kept the LIMIT, that is pretty much a\nmeaningless comparison. You are asking a much easier question at that\npoint.\n\n> Then would you have any hint on how to implement pagination when doing full\n> text search?\n> Cause in theory, if I gave it a id>100 LIMIT 100, it might just as well\n> return me results 150 to 250, instead of 100 to 200...\n\nCan you use a method that maintains state (cursor with fetching, or\ntemporary storage) so that it doesn't have to recalculate the query\nfor each page?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 8 Feb 2016 10:13:46 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap and-ing between btree and gin?" } ]
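A sketch of the cursor-based approach suggested in the last message, reusing the page table and query from the top of the thread; the point is that the expensive full-text plan runs once and its state is kept between pages instead of being re-run per page:

    BEGIN;
    DECLARE page_cur CURSOR FOR
      SELECT *
      FROM page
      WHERE page.search_vector @@ plainto_tsquery('pg_catalog.english', 'myquery')
        AND page.is_active = 1
      ORDER BY page.id DESC;

    FETCH 100 FROM page_cur;   -- page 1
    FETCH 100 FROM page_cur;   -- page 2, continues where the last FETCH stopped
    CLOSE page_cur;
    COMMIT;

The trade-off is keeping the transaction open between page requests; a cursor declared WITH HOLD can outlive the transaction, at the cost of materializing the remaining result set at COMMIT.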
[ { "msg_contents": "Hi.\n\nA table has a trigger.\nThe trigger sends a NOTIFY.\n\nTest with COPY FROM shows non-linear correlation between number of inserted\nrows and COPY duration.\n\n Table \"test.tab\"\n Column | Type | Modifiers\n---------+---------+-------------------------------------------------------\n id | integer | not null default nextval('test.tab_id_seq'::regclass)\n payload | text |\nIndexes:\n \"tab_pkey\" PRIMARY KEY, btree (id)\nTriggers:\n trg AFTER INSERT ON test.tab FOR EACH ROW EXECUTE PROCEDURE test.fun()\n\n\nTest Series 1. Trigger code:\nBEGIN RETURN NULL; END\nYou can see linear scaling.\n\nrows=40000\n 191ms COPY test.tab FROM '/tmp/test.dat'\n 201ms COPY test.tab FROM '/tmp/test.dat'\nrows=80000\n 426ms COPY test.tab FROM '/tmp/test.dat'\n 415ms COPY test.tab FROM '/tmp/test.dat'\nrows=120000\n 634ms COPY test.tab FROM '/tmp/test.dat'\n 616ms COPY test.tab FROM '/tmp/test.dat'\nrows=160000\n 843ms COPY test.tab FROM '/tmp/test.dat'\n 834ms COPY test.tab FROM '/tmp/test.dat'\nrows=200000\n 1101ms COPY test.tab FROM '/tmp/test.dat'\n 1094ms COPY test.tab FROM '/tmp/test.dat'\n\n\nTest Series 2. Trigger code:\nBEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL;\nYou can see non-linear scaling.\n\nrows=40000\n 9767ms COPY test.tab FROM '/tmp/test.dat'\n 8901ms COPY test.tab FROM '/tmp/test.dat'\nrows=80000\n 37409ms COPY test.tab FROM '/tmp/test.dat'\n 38015ms COPY test.tab FROM '/tmp/test.dat'\nrows=120000\n 90227ms COPY test.tab FROM '/tmp/test.dat'\n 87838ms COPY test.tab FROM '/tmp/test.dat'\nrows=160000\n 160080ms COPY test.tab FROM '/tmp/test.dat'\n 159801ms COPY test.tab FROM '/tmp/test.dat'\nrows=200000\n 247330ms COPY test.tab FROM '/tmp/test.dat'\n 251191ms COPY test.tab FROM '/tmp/test.dat'\n\n\nO(N^2) ????\n\nHi.A table has a trigger.The trigger sends a NOTIFY.Test with COPY FROM shows non-linear correlation between number of inserted rows and COPY duration.                             Table \"test.tab\" Column  |  Type   |                       Modifiers                       ---------+---------+------------------------------------------------------- id      | integer | not null default nextval('test.tab_id_seq'::regclass) payload | text    | Indexes:    \"tab_pkey\" PRIMARY KEY, btree (id)Triggers:    trg AFTER INSERT ON test.tab FOR EACH ROW EXECUTE PROCEDURE test.fun()Test Series 1. Trigger code: BEGIN RETURN NULL; END You can see linear scaling.rows=40000      191ms COPY test.tab FROM '/tmp/test.dat'      201ms COPY test.tab FROM '/tmp/test.dat'rows=80000      426ms COPY test.tab FROM '/tmp/test.dat'      415ms COPY test.tab FROM '/tmp/test.dat'rows=120000      634ms COPY test.tab FROM '/tmp/test.dat'      616ms COPY test.tab FROM '/tmp/test.dat'rows=160000      843ms COPY test.tab FROM '/tmp/test.dat'      834ms COPY test.tab FROM '/tmp/test.dat'rows=200000     1101ms COPY test.tab FROM '/tmp/test.dat'     1094ms COPY test.tab FROM '/tmp/test.dat'Test Series 2. 
Trigger code:BEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL; You can see non-linear scaling.rows=40000     9767ms COPY test.tab FROM '/tmp/test.dat'     8901ms COPY test.tab FROM '/tmp/test.dat'rows=80000    37409ms COPY test.tab FROM '/tmp/test.dat'    38015ms COPY test.tab FROM '/tmp/test.dat'rows=120000    90227ms COPY test.tab FROM '/tmp/test.dat'    87838ms COPY test.tab FROM '/tmp/test.dat'rows=160000   160080ms COPY test.tab FROM '/tmp/test.dat'   159801ms COPY test.tab FROM '/tmp/test.dat'rows=200000   247330ms COPY test.tab FROM '/tmp/test.dat'   251191ms COPY test.tab FROM '/tmp/test.dat'O(N^2) ????", "msg_date": "Thu, 4 Feb 2016 22:12:39 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]> writes:\n> A table has a trigger.\n> The trigger sends a NOTIFY.\n> Test with COPY FROM shows non-linear correlation between number of inserted\n> rows and COPY duration.\n\nNo surprise, see AsyncExistsPendingNotify. You would have a lot of other\nperformance issues with sending hundreds of thousands of distinct notify\nevents from one transaction anyway, so I can't get terribly excited about\nthis.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 04 Feb 2016 17:41:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> No surprise, see AsyncExistsPendingNotify. You would have a lot of other\n> performance issues with sending hundreds of thousands of distinct notify\n> events from one transaction anyway, so I can't get terribly excited about\n> this.\n\n@Filip: you probably want a per-statement trigger rather than a per-row\ntrigger: insert all rows with COPY, then send one notification.\n\nYou have to mark the new rows somehow yourself; unfortunately PostgreSQL\nhas no way to tell them in a statement trigger.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 05 Feb 2016 09:40:44 +0100", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "On Thu, Feb 4, 2016 at 11:41 PM, Tom Lane <[email protected]> wrote:\n\n> =?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>\n> writes:\n> > A table has a trigger.\n> > The trigger sends a NOTIFY.\n> > Test with COPY FROM shows non-linear correlation between number of\n> inserted\n> > rows and COPY duration.\n>\n> No surprise, see AsyncExistsPendingNotify. You would have a lot of other\n> performance issues with sending hundreds of thousands of distinct notify\n> events from one transaction anyway, so I can't get terribly excited about\n> this.\n>\n\n\nWhat kind of issues? 
Do you mean, problems in postgres or problems in\nclient?\n\nIs there an additional non-linear cost on COMMIT (extra to the cost I\nalready showed)?\n\nThe 8GB internal queue (referenced in a Note at\nhttp://www.postgresql.org/docs/current/static/sql-notify.html) should be\nable to keep ~ 1E8 such notifications (assumed one notification will fit in\n80 bytes).\n\nOn client side, this seems legit - the LISTENer deamon will collect these\nnotifications and process them in line.\nThere might be no LISTENer running at all.\n\nStill, the main problem I get with this approach is quadratic cost on big\ninsert transactions.\nI wonder if this behavior is possible to change in future postgres\nversions. And how much programming work does it require.\n\nIs duplicate-elimination a fundamental, non-negotiable requirement?\n\n\n\nThank you,\nFilip\n\nOn Thu, Feb 4, 2016 at 11:41 PM, Tom Lane <[email protected]> wrote:=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]> writes:\n> A table has a trigger.\n> The trigger sends a NOTIFY.\n> Test with COPY FROM shows non-linear correlation between number of inserted\n> rows and COPY duration.\n\nNo surprise, see AsyncExistsPendingNotify.  You would have a lot of other\nperformance issues with sending hundreds of thousands of distinct notify\nevents from one transaction anyway, so I can't get terribly excited about\nthis.What kind of issues? Do you mean, problems in postgres or problems in client?Is there an additional non-linear cost on COMMIT (extra to the cost I already showed)? The 8GB internal queue (referenced in a Note at http://www.postgresql.org/docs/current/static/sql-notify.html) should be able to keep ~ 1E8 such notifications (assumed one notification will fit in 80 bytes).On client side, this seems legit - the LISTENer deamon will collect these notifications and process them in line. There might be no LISTENer running at all. Still, the main problem I get with this approach is quadratic cost on big insert transactions. I wonder if this behavior is possible to change in future postgres versions. 
And how much programming work does it require.Is duplicate-elimination a fundamental, non-negotiable requirement?Thank you,Filip", "msg_date": "Fri, 5 Feb 2016 13:45:10 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "patch submitted on -hackers list.\nhttp://www.postgresql.org/message-id/CAP_rwwn2z0gPOn8GuQ3qDVS5+HgEcG2EzEOyiJtcA=vpDEhoCg@mail.gmail.com\n\nresults after the patch:\n\ntrigger= BEGIN RETURN NULL; END\nrows=40000\n 228ms COPY test.tab FROM '/tmp/test.dat'\n 205ms COPY test.tab FROM '/tmp/test.dat'\nrows=80000\n 494ms COPY test.tab FROM '/tmp/test.dat'\n 395ms COPY test.tab FROM '/tmp/test.dat'\nrows=120000\n 678ms COPY test.tab FROM '/tmp/test.dat'\n 652ms COPY test.tab FROM '/tmp/test.dat'\nrows=160000\n 956ms COPY test.tab FROM '/tmp/test.dat'\n 822ms COPY test.tab FROM '/tmp/test.dat'\nrows=200000\n 1184ms COPY test.tab FROM '/tmp/test.dat'\n 1072ms COPY test.tab FROM '/tmp/test.dat'\ntrigger= BEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL; END\nrows=40000\n 440ms COPY test.tab FROM '/tmp/test.dat'\n 406ms COPY test.tab FROM '/tmp/test.dat'\nrows=80000\n 887ms COPY test.tab FROM '/tmp/test.dat'\n 769ms COPY test.tab FROM '/tmp/test.dat'\nrows=120000\n 1346ms COPY test.tab FROM '/tmp/test.dat'\n 1171ms COPY test.tab FROM '/tmp/test.dat'\nrows=160000\n 1710ms COPY test.tab FROM '/tmp/test.dat'\n 1709ms COPY test.tab FROM '/tmp/test.dat'\nrows=200000\n 2189ms COPY test.tab FROM '/tmp/test.dat'\n 2206ms COPY test.tab FROM '/tmp/test.dat'\n\n\n\nOn Fri, Feb 5, 2016 at 1:45 PM, Filip Rembiałkowski <\[email protected]> wrote:\n\n> On Thu, Feb 4, 2016 at 11:41 PM, Tom Lane <[email protected]> wrote:\n>\n>> =?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>\n>> writes:\n>> > A table has a trigger.\n>> > The trigger sends a NOTIFY.\n>> > Test with COPY FROM shows non-linear correlation between number of\n>> inserted\n>> > rows and COPY duration.\n>>\n>> No surprise, see AsyncExistsPendingNotify. You would have a lot of other\n>> performance issues with sending hundreds of thousands of distinct notify\n>> events from one transaction anyway, so I can't get terribly excited about\n>> this.\n>>\n>\n>\n> What kind of issues? Do you mean, problems in postgres or problems in\n> client?\n>\n> Is there an additional non-linear cost on COMMIT (extra to the cost I\n> already showed)?\n>\n> The 8GB internal queue (referenced in a Note at\n> http://www.postgresql.org/docs/current/static/sql-notify.html) should be\n> able to keep ~ 1E8 such notifications (assumed one notification will fit in\n> 80 bytes).\n>\n> On client side, this seems legit - the LISTENer deamon will collect these\n> notifications and process them in line.\n> There might be no LISTENer running at all.\n>\n> Still, the main problem I get with this approach is quadratic cost on big\n> insert transactions.\n> I wonder if this behavior is possible to change in future postgres\n> versions. And how much programming work does it require.\n>\n> Is duplicate-elimination a fundamental, non-negotiable requirement?\n>\n>\n>\n> Thank you,\n> Filip\n>\n>\n\npatch submitted on -hackers list. 
http://www.postgresql.org/message-id/CAP_rwwn2z0gPOn8GuQ3qDVS5+HgEcG2EzEOyiJtcA=vpDEhoCg@mail.gmail.comresults after the patch:trigger= BEGIN RETURN NULL; END rows=40000      228ms COPY test.tab FROM '/tmp/test.dat'      205ms COPY test.tab FROM '/tmp/test.dat'rows=80000      494ms COPY test.tab FROM '/tmp/test.dat'      395ms COPY test.tab FROM '/tmp/test.dat'rows=120000      678ms COPY test.tab FROM '/tmp/test.dat'      652ms COPY test.tab FROM '/tmp/test.dat'rows=160000      956ms COPY test.tab FROM '/tmp/test.dat'      822ms COPY test.tab FROM '/tmp/test.dat'rows=200000     1184ms COPY test.tab FROM '/tmp/test.dat'     1072ms COPY test.tab FROM '/tmp/test.dat'trigger= BEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL; END rows=40000      440ms COPY test.tab FROM '/tmp/test.dat'      406ms COPY test.tab FROM '/tmp/test.dat'rows=80000      887ms COPY test.tab FROM '/tmp/test.dat'      769ms COPY test.tab FROM '/tmp/test.dat'rows=120000     1346ms COPY test.tab FROM '/tmp/test.dat'     1171ms COPY test.tab FROM '/tmp/test.dat'rows=160000     1710ms COPY test.tab FROM '/tmp/test.dat'     1709ms COPY test.tab FROM '/tmp/test.dat'rows=200000     2189ms COPY test.tab FROM '/tmp/test.dat'     2206ms COPY test.tab FROM '/tmp/test.dat'On Fri, Feb 5, 2016 at 1:45 PM, Filip Rembiałkowski <[email protected]> wrote:On Thu, Feb 4, 2016 at 11:41 PM, Tom Lane <[email protected]> wrote:=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]> writes:\n> A table has a trigger.\n> The trigger sends a NOTIFY.\n> Test with COPY FROM shows non-linear correlation between number of inserted\n> rows and COPY duration.\n\nNo surprise, see AsyncExistsPendingNotify.  You would have a lot of other\nperformance issues with sending hundreds of thousands of distinct notify\nevents from one transaction anyway, so I can't get terribly excited about\nthis.What kind of issues? Do you mean, problems in postgres or problems in client?Is there an additional non-linear cost on COMMIT (extra to the cost I already showed)? The 8GB internal queue (referenced in a Note at http://www.postgresql.org/docs/current/static/sql-notify.html) should be able to keep ~ 1E8 such notifications (assumed one notification will fit in 80 bytes).On client side, this seems legit - the LISTENer deamon will collect these notifications and process them in line. There might be no LISTENer running at all. Still, the main problem I get with this approach is quadratic cost on big insert transactions. I wonder if this behavior is possible to change in future postgres versions. 
And how much programming work does it require.Is duplicate-elimination a fundamental, non-negotiable requirement?Thank you,Filip", "msg_date": "Fri, 5 Feb 2016 16:33:28 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "On Fri, Feb 5, 2016 at 9:33 AM, Filip Rembiałkowski\n<[email protected]> wrote:\n> patch submitted on -hackers list.\n> http://www.postgresql.org/message-id/CAP_rwwn2z0gPOn8GuQ3qDVS5+HgEcG2EzEOyiJtcA=vpDEhoCg@mail.gmail.com\n>\n> results after the patch:\n>\n> trigger= BEGIN RETURN NULL; END\n> rows=40000\n> 228ms COPY test.tab FROM '/tmp/test.dat'\n> 205ms COPY test.tab FROM '/tmp/test.dat'\n> rows=80000\n> 494ms COPY test.tab FROM '/tmp/test.dat'\n> 395ms COPY test.tab FROM '/tmp/test.dat'\n> rows=120000\n> 678ms COPY test.tab FROM '/tmp/test.dat'\n> 652ms COPY test.tab FROM '/tmp/test.dat'\n> rows=160000\n> 956ms COPY test.tab FROM '/tmp/test.dat'\n> 822ms COPY test.tab FROM '/tmp/test.dat'\n> rows=200000\n> 1184ms COPY test.tab FROM '/tmp/test.dat'\n> 1072ms COPY test.tab FROM '/tmp/test.dat'\n> trigger= BEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL; END\n> rows=40000\n> 440ms COPY test.tab FROM '/tmp/test.dat'\n> 406ms COPY test.tab FROM '/tmp/test.dat'\n> rows=80000\n> 887ms COPY test.tab FROM '/tmp/test.dat'\n> 769ms COPY test.tab FROM '/tmp/test.dat'\n> rows=120000\n> 1346ms COPY test.tab FROM '/tmp/test.dat'\n> 1171ms COPY test.tab FROM '/tmp/test.dat'\n> rows=160000\n> 1710ms COPY test.tab FROM '/tmp/test.dat'\n> 1709ms COPY test.tab FROM '/tmp/test.dat'\n> rows=200000\n> 2189ms COPY test.tab FROM '/tmp/test.dat'\n> 2206ms COPY test.tab FROM '/tmp/test.dat'\n\nI'm not so sure that this is a great idea. Generally, we tend to\ndiscourage GUCs that control behavior at the SQL level. Are you 100%\ncertain that there is no path to optimizing this case without changing\nbehvior?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 5 Feb 2016 13:52:51 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "Thanks for the feedback.\n\nThis patch is my first and obvious approach.\n\n@Merlin, I'm not sure if I get your idea:\n- keep previous behaviour as obligatory? (which is: automatic\nde-duplicating of incoming messages by channel+payload),\n- instead of trivial search (sorting by browsing) use some kind of\nfaster lookups?\n\nI'm not sure if this statement in async.c is carved in stone:\n\n* Duplicate notifications from the same transaction are sent out as one\n* notification only. This is done to save work when for example a trigger\n* on a 2 million row table fires a notification for each row that has been\n* changed. 
If the application needs to receive every single notification\n* that has been sent, it can easily add some unique string into the extra\n* payload parameter.\n\n1) \"work-saving\" is disputable in some cases\n\n2) an idea to \"add some unique string\" is OK logical-wise but it's not\nOK performance-wise.\n\nCurrent search code is a sequential search:\nhttps://github.com/filiprem/postgres/blob/master/src/backend/commands/async.c#L2139\n\nI'm not that smart to devise an algorithm for faster lookups -\nprobably you guys can give some advice.\n\nAgain, my rationale is... This feature can burn a lot of CPU for\nnothing. I was hoping to use NOTIFY/LISTEN as superfast notification\nmechanism. Superfast regardless on whether you insert 100, 10k or 1m\nrows.\n\n\n\n\nOn Fri, Feb 5, 2016 at 8:52 PM, Merlin Moncure <[email protected]> wrote:\n> On Fri, Feb 5, 2016 at 9:33 AM, Filip Rembiałkowski\n> <[email protected]> wrote:\n>> patch submitted on -hackers list.\n>> http://www.postgresql.org/message-id/CAP_rwwn2z0gPOn8GuQ3qDVS5+HgEcG2EzEOyiJtcA=vpDEhoCg@mail.gmail.com\n>>\n>> results after the patch:\n>>\n>> trigger= BEGIN RETURN NULL; END\n>> rows=40000\n>> 228ms COPY test.tab FROM '/tmp/test.dat'\n>> 205ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=80000\n>> 494ms COPY test.tab FROM '/tmp/test.dat'\n>> 395ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=120000\n>> 678ms COPY test.tab FROM '/tmp/test.dat'\n>> 652ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=160000\n>> 956ms COPY test.tab FROM '/tmp/test.dat'\n>> 822ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=200000\n>> 1184ms COPY test.tab FROM '/tmp/test.dat'\n>> 1072ms COPY test.tab FROM '/tmp/test.dat'\n>> trigger= BEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL; END\n>> rows=40000\n>> 440ms COPY test.tab FROM '/tmp/test.dat'\n>> 406ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=80000\n>> 887ms COPY test.tab FROM '/tmp/test.dat'\n>> 769ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=120000\n>> 1346ms COPY test.tab FROM '/tmp/test.dat'\n>> 1171ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=160000\n>> 1710ms COPY test.tab FROM '/tmp/test.dat'\n>> 1709ms COPY test.tab FROM '/tmp/test.dat'\n>> rows=200000\n>> 2189ms COPY test.tab FROM '/tmp/test.dat'\n>> 2206ms COPY test.tab FROM '/tmp/test.dat'\n>\n> I'm not so sure that this is a great idea. Generally, we tend to\n> discourage GUCs that control behavior at the SQL level. Are you 100%\n> certain that there is no path to optimizing this case without changing\n> behvior?\n>\n> merlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 6 Feb 2016 13:03:53 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "On Sat, Feb 6, 2016 at 6:03 AM, Filip Rembiałkowski\n<[email protected]> wrote:\n> Thanks for the feedback.\n>\n> This patch is my first and obvious approach.\n>\n> @Merlin, I'm not sure if I get your idea:\n> - keep previous behaviour as obligatory? (which is: automatic\n> de-duplicating of incoming messages by channel+payload),\n> - instead of trivial search (sorting by browsing) use some kind of\n> faster lookups?\n>\n> I'm not sure if this statement in async.c is carved in stone:\n>\n> * Duplicate notifications from the same transaction are sent out as one\n> * notification only. 
This is done to save work when for example a trigger\n> * on a 2 million row table fires a notification for each row that has been\n> * changed. If the application needs to receive every single notification\n> * that has been sent, it can easily add some unique string into the extra\n> * payload parameter.\n>\n> 1) \"work-saving\" is disputable in some cases\n>\n> 2) an idea to \"add some unique string\" is OK logical-wise but it's not\n> OK performance-wise.\n>\n> Current search code is a sequential search:\n> https://github.com/filiprem/postgres/blob/master/src/backend/commands/async.c#L2139\n>\n> I'm not that smart to devise an algorithm for faster lookups -\n> probably you guys can give some advice.\n>\n> Again, my rationale is... This feature can burn a lot of CPU for\n> nothing. I was hoping to use NOTIFY/LISTEN as superfast notification\n> mechanism. Superfast regardless on whether you insert 100, 10k or 1m\n> rows.\n\nSure, I get it -- you want to have fast notification events -- this is\na good thing to want to have. However, a GUC is probably not the best\nway to do that in this particular case. It's way to fringey and the\nbar for behavior controlling GUC is incredibly high (short version:\nmost modern introductions were to manage security issues). I'm far\nfrom the last word on this thoug, but it's better to get this all\nsorted out now.\n\nAnyways, it should be possible to micro-optimize that path. Perhaps\nusing a hash table? I'm not sure.\n\nAnother possible way to work things out here is to expose your switch\nin the syntax of the command itself, or perhaps via the pg_notify\nfunction to avoid syntax issues.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 8 Feb 2016 08:35:58 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" }, { "msg_contents": "On Mon, Feb 8, 2016 at 8:35 AM, Merlin Moncure <[email protected]> wrote:\n> On Sat, Feb 6, 2016 at 6:03 AM, Filip Rembiałkowski\n> <[email protected]> wrote:\n>> Thanks for the feedback.\n>>\n>> This patch is my first and obvious approach.\n>>\n>> @Merlin, I'm not sure if I get your idea:\n>> - keep previous behaviour as obligatory? (which is: automatic\n>> de-duplicating of incoming messages by channel+payload),\n>> - instead of trivial search (sorting by browsing) use some kind of\n>> faster lookups?\n>>\n>> I'm not sure if this statement in async.c is carved in stone:\n>>\n>> * Duplicate notifications from the same transaction are sent out as one\n>> * notification only. This is done to save work when for example a trigger\n>> * on a 2 million row table fires a notification for each row that has been\n>> * changed. If the application needs to receive every single notification\n>> * that has been sent, it can easily add some unique string into the extra\n>> * payload parameter.\n>>\n>> 1) \"work-saving\" is disputable in some cases\n>>\n>> 2) an idea to \"add some unique string\" is OK logical-wise but it's not\n>> OK performance-wise.\n>>\n>> Current search code is a sequential search:\n>> https://github.com/filiprem/postgres/blob/master/src/backend/commands/async.c#L2139\n>>\n>> I'm not that smart to devise an algorithm for faster lookups -\n>> probably you guys can give some advice.\n>>\n>> Again, my rationale is... This feature can burn a lot of CPU for\n>> nothing. 
I was hoping to use NOTIFY/LISTEN as superfast notification\n>> mechanism. Superfast regardless on whether you insert 100, 10k or 1m\n>> rows.\n>\n> Sure, I get it -- you want to have fast notification events -- this is\n> a good thing to want to have. However, a GUC is probably not the best\n> way to do that in this particular case. It's way to fringey and the\n> bar for behavior controlling GUC is incredibly high (short version:\n> most modern introductions were to manage security issues). I'm far\n> from the last word on this thoug, but it's better to get this all\n> sorted out now.\n>\n> Anyways, it should be possible to micro-optimize that path. Perhaps\n> using a hash table? I'm not sure.\n>\n> Another possible way to work things out here is to expose your switch\n> in the syntax of the command itself, or perhaps via the pg_notify\n> function to avoid syntax issues.\n\nwhoops, I just noticed this thread moved to -hackers -- so please respond there.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 8 Feb 2016 16:58:10 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad COPY performance with NOTIFY in a trigger" } ]
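For completeness, a sketch of the statement-level variant Harald suggested earlier in the thread (function and payload are made up; on 9.x a statement trigger cannot see the inserted rows, so the payload is just a generic signal and the listener re-reads the table):

    CREATE OR REPLACE FUNCTION test.notify_tab_changed() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
      -- one notification per COPY / multi-row INSERT, not one per row
      PERFORM pg_notify('test', 'tab changed');
      RETURN NULL;
    END;
    $$;

    DROP TRIGGER IF EXISTS trg ON test.tab;
    CREATE TRIGGER trg_stmt
      AFTER INSERT ON test.tab
      FOR EACH STATEMENT
      EXECUTE PROCEDURE test.notify_tab_changed();

This sidesteps the quadratic pending-notification list entirely, since each COPY contributes a single entry.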
[ { "msg_contents": "http://explain.depesz.com/s/wKv7\nPostgres Version 9.3.10 (Linux)\n\nHello,\nthis is a large daily table that only get bulk inserts (200-400 /days) with no update.\nAfter rebuilding the whole table, the Bitmap Index Scan on r_20160204_ix_toprid falls under 1 second (from 800)\n\nFastupdate is using the default, but autovacuum is disabled on that table which contains 30 Mio rows.\nAnother specificity is that the cardinality of the indexed column is very high. The average count per distinct values is only 2.7\n\nI'm not sure what the problem is. Does the missing vacuum affect the gin index sanity further than not cleaning the pending list?\nAs I understand it, this list will be merged into the index automatically when it get full, independently from the vaccum setting.\n\nCan it be an index bloating issue ?\n\nand last but not least, can I reduce the problem by configuration ?\n\nregards,\n\nMarc Mamin\n\n\n\n\n\n\n\n\n\n\n\nhttp://explain.depesz.com/s/wKv7 \nPostgres Version 9.3.10 (Linux)\n \nHello,\nthis is a large daily table that only get bulk inserts (200-400 /days) with no update.\nAfter rebuilding the whole table, the Bitmap Index Scan on r_20160204_ix_toprid falls under 1 second (from 800)\n \nFastupdate is using the default, but autovacuum is disabled on that table which contains 30 Mio rows.\nAnother specificity is that the cardinality of the indexed column is very high. The average count per distinct values is only 2.7\n \nI'm not sure what the problem is. Does the missing vacuum affect the gin index sanity further than not cleaning the pending list?\nAs I understand it, this list will be merged into the index automatically when it get full, independently from the vaccum setting.\n \nCan it be an index bloating issue ?\n \nand last but not least, can I reduce the problem by configuration ?\n \nregards,\n \nMarc Mamin", "msg_date": "Fri, 5 Feb 2016 11:28:08 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": true, "msg_subject": "gin performance issue." }, { "msg_contents": "Marc Mamin <[email protected]> writes:\n> http://explain.depesz.com/s/wKv7\n> Postgres Version 9.3.10 (Linux)\n\n> Hello,\n> this is a large daily table that only get bulk inserts (200-400 /days) with no update.\n> After rebuilding the whole table, the Bitmap Index Scan on r_20160204_ix_toprid falls under 1 second (from 800)\n\n> Fastupdate is using the default, but autovacuum is disabled on that table which contains 30 Mio rows.\n\nPre-9.5, it's a pretty bad idea to disable autovacuum on a GIN index,\nbecause then the \"pending list\" only gets flushed when it exceeds\nwork_mem. (Obviously, using a large work_mem setting makes this worse.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 05 Feb 2016 10:07:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gin performance issue." }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Freitag, 5. 
Februar 2016 16:07\n \n\n> > http://explain.depesz.com/s/wKv7\n> > Postgres Version 9.3.10 (Linux)\n> > \n> > Hello,\n> > this is a large daily table that only get bulk inserts (200-400 /days) with no update.\n> > After rebuilding the whole table, the Bitmap Index Scan on\n> > r_20160204_ix_toprid falls under 1 second (from 800)\n> >\n> > Fastupdate is using the default, but autovacuum is disabled on that\n> > table which contains 30 Mio rows.\n\n\n> Pre-9.5, it's a pretty bad idea to disable autovacuum on a GIN index,\n> because then the \"pending list\" only gets flushed when it exceeds\n> work_mem. (Obviously, using a large work_mem setting makes this\n> worse.)\n> \n> \t\t\tregards, tom lane\n\n\nHello,\nknowing what the problem is don't really help here:\n\n- auto vacuum will not run as these are insert only tables\n- according to this post, auto analyze would also do the job:\n http://postgresql.nabble.com/Performance-problem-with-gin-index-td5867870.html\n It seems that this information is missing in the doc\n \n but it sadly neither triggers in our case as we have manual analyzes called during the dataprocesssing just following the imports.\n Manual vacuum is just too expensive here.\n \n Hence disabling fast update seems to be our only option. \n \n I hope this problem will help push up the 9.5 upgrade on our todo list :)\n \n Ideally, we would then like to flush the pending list inconditionally after the imports. \n I guess we could achieve something approaching while modifying the analyze scale factor and gin_pending_list_limit\n before/after the (bulk) imports, but having the possibility to flush it per SQL would be better. \n Is this a reasonable feature wish?\n \n And a last question: how does the index update work with bulk (COPY) inserts:\n without pending list: is it like a per row trigger or will the index be cared of afterwards ?\n with small pending lists : is there a concurrency problem, or can both tasks cleanly work in parallel ?\n \n best regards,\n \n Marc mamin\n \n \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 8 Feb 2016 10:21:53 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gin performance issue." }, { "msg_contents": "On Mon, Feb 8, 2016 at 2:21 AM, Marc Mamin <[email protected]> wrote:\n>\n> - auto vacuum will not run as these are insert only tables\n> - according to this post, auto analyze would also do the job:\n> http://postgresql.nabble.com/Performance-problem-with-gin-index-td5867870.html\n> It seems that this information is missing in the doc\n>\n> but it sadly neither triggers in our case as we have manual analyzes called during the dataprocesssing just following the imports.\n> Manual vacuum is just too expensive here.\n>\n> Hence disabling fast update seems to be our only option.\n\nDoes disabling fast update cause problems? I always start with\nfastupdate disabled, and only turn on if it I have a demonstrable\nproblem with it being off.\n\nI would think \"off\" is likely to be better for you. You say each\ndistinct key only appears in 2.7 rows. So you won't get much benefit\nfrom aggregating together all the new rows for each key before\nupdating the index for that key, as there is very little to aggregate.\n\nAlso, you say the inserts come in bulk. 
It is generally a good thing\nto slow down bulk operations by making them clean up their own messes,\nfor the sake of everyone else.\n\n\n> I hope this problem will help push up the 9.5 upgrade on our todo list :)\n>\n> Ideally, we would then like to flush the pending list inconditionally after the imports.\n> I guess we could achieve something approaching while modifying the analyze scale factor and gin_pending_list_limit\n> before/after the (bulk) imports, but having the possibility to flush it per SQL would be better.\n> Is this a reasonable feature wish?\n\nThat feature has already been committed for the 9.6 branch.\n\n> And a last question: how does the index update work with bulk (COPY) inserts:\n> without pending list: is it like a per row trigger or will the index be cared of afterwards ?\n\nDone for each row.\n\n> with small pending lists : is there a concurrency problem, or can both tasks cleanly work in parallel ?\n\nI don't understand the question. What are the two tasks you are\nreferring to? Do you have multiple COPY running at the same time in\ndifferent processes?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 8 Feb 2016 11:16:00 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gin performance issue." } ]
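A short sketch of the options discussed above, using the index name from the plan (the 9.6 function is the on-demand flush Jeff refers to; everything below is illustrative, not tested against this schema):

    -- Per-row index maintenance during bulk loads, no pending list at all:
    ALTER INDEX r_20160204_ix_toprid SET (fastupdate = off);

    -- 9.5+: cap the pending list for this index (value in kB) instead of work_mem:
    ALTER INDEX r_20160204_ix_toprid SET (gin_pending_list_limit = 1024);

    -- 9.6+: flush the pending list explicitly right after an import:
    SELECT gin_clean_pending_list('r_20160204_ix_toprid'::regclass);

Note that turning fastupdate off only affects future inserts; entries already sitting in the pending list still need a flush (VACUUM, or the 9.6 function above) before they are merged into the main index.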
[ { "msg_contents": "Hi,\r\n\r\nQuestion:\r\n\r\nWhat may cause a primary key index to suddenly become very slow? Index scan for single row taking 2-3 seconds. A manual vacuum resolved the problem.\r\n\r\n\r\nBackground:\r\n\r\nWe have a simple table ‘KONTO’ with about 600k rows.\r\n\r\n\r\n Column | Type | Modifiers\r\n------------------------------+-----------------------------+---------------\r\n id | bigint | not null\r\n...\r\n\r\nIndexes:\r\n \"konto_pk\" PRIMARY KEY, btree (id)\r\n...\r\n\r\n\r\nOver the weekend we experienced that lookups using the primary key index (‘konto_pk’) became very slow, in the region 2-3s for fetching a single record:\r\n\r\nQUERY PLAN\r\nIndex Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)\r\n Index Cond: (id = 2121172829)\r\nPlanning time: 0.376 ms\r\nExecution time: 2094.585 ms\r\n\r\n\r\nAfter a manual Vacuum the execution time is OK:\r\n\r\nQUERY PLAN\r\nIndex Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)\r\n Index Cond: (id = 2121172829)\r\nPlanning time: 0.793 ms\r\nExecution time: 2.971 ms\r\n\r\n\r\nSo things are working OK again, but we would like to know what may cause such a degradation of the index scan, to avoid this happening again? (We are using Postgresql version 9.4.4)\r\n\r\n\r\n\r\nRegards,\r\nGustav\r\n\n\n\n\n\n\r\nHi,\r\n\n\nQuestion:\n\n\nWhat may cause a primary key index to suddenly become very slow? Index scan for single row taking 2-3 seconds. A manual vacuum resolved the problem.\n\n\n\n\nBackground:\n\n\nWe have a simple table ‘KONTO’ with about 600k rows. \n\n\n\n\n\n            Column            |            Type             |   Modifiers\n------------------------------+-----------------------------+---------------\n id                           | bigint                      | not null\n\n...\n\n\n\nIndexes:\n    \"konto_pk\" PRIMARY KEY, btree (id)\n\n...\n\n\n\n\nOver the weekend we experienced that lookups using the primary key index (‘konto_pk’) became very slow, in the region 2-3s for fetching a single record:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.376 ms\nExecution time: 2094.585 ms\n\n\n\n\n\nAfter a manual Vacuum the execution time is OK:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.793 ms\nExecution time: 2.971 ms\n\n\n\n\n\nSo things are working OK again, but we would like to know what may cause such a degradation of the index scan, to avoid this happening again? (We are using Postgresql version 9.4.4)\n\n\n\n\n\n\nRegards,\nGustav", "msg_date": "Mon, 8 Feb 2016 09:45:24 +0000", "msg_from": "Gustav Karlsson <[email protected]>", "msg_from_op": true, "msg_subject": "Primary key index suddenly became very slow" }, { "msg_contents": "Additional information:\r\n\r\nThe problematic row has likely received many hot updates (100k+). Could this be a likely explanation for the high execution time?\r\n\r\n\r\nRegards,\r\nGustav\r\n\r\n\r\n\r\nOn Feb 8, 2016, at 10:45 AM, Gustav Karlsson <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nHi,\r\n\r\nQuestion:\r\n\r\nWhat may cause a primary key index to suddenly become very slow? Index scan for single row taking 2-3 seconds. 
A manual vacuum resolved the problem.\r\n\r\n\r\nBackground:\r\n\r\nWe have a simple table ‘KONTO’ with about 600k rows.\r\n\r\n\r\n Column | Type | Modifiers\r\n------------------------------+-----------------------------+---------------\r\n id | bigint | not null\r\n...\r\n\r\nIndexes:\r\n \"konto_pk\" PRIMARY KEY, btree (id)\r\n...\r\n\r\n\r\nOver the weekend we experienced that lookups using the primary key index (‘konto_pk’) became very slow, in the region 2-3s for fetching a single record:\r\n\r\nQUERY PLAN\r\nIndex Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)\r\n Index Cond: (id = 2121172829)\r\nPlanning time: 0.376 ms\r\nExecution time: 2094.585 ms\r\n\r\n\r\nAfter a manual Vacuum the execution time is OK:\r\n\r\nQUERY PLAN\r\nIndex Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)\r\n Index Cond: (id = 2121172829)\r\nPlanning time: 0.793 ms\r\nExecution time: 2.971 ms\r\n\r\n\r\nSo things are working OK again, but we would like to know what may cause such a degradation of the index scan, to avoid this happening again? (We are using Postgresql version 9.4.4)\r\n\r\n\r\n\r\nRegards,\r\nGustav\r\n\r\n\n\n\n\n\n\r\nAdditional information:\r\n\n\nThe problematic row has likely received many hot updates (100k+). Could this be a likely explanation for the high execution time?\n\n\n\n\nRegards,\nGustav\n\n\n\n\n\n\n\n\n\nOn Feb 8, 2016, at 10:45 AM, Gustav Karlsson <[email protected]> wrote:\n\n\n\r\nHi,\r\n\n\nQuestion:\n\n\nWhat may cause a primary key index to suddenly become very slow? Index scan for single row taking 2-3 seconds. A manual vacuum resolved the problem.\n\n\n\n\nBackground:\n\n\nWe have a simple table ‘KONTO’ with about 600k rows. \n\n\n\n\n\n            Column            |            Type             |   Modifiers\n------------------------------+-----------------------------+---------------\n id                           | bigint                      | not null\n\n...\n\n\n\nIndexes:\n    \"konto_pk\" PRIMARY KEY, btree (id)\n\n...\n\n\n\n\nOver the weekend we experienced that lookups using the primary key index (‘konto_pk’) became very slow, in the region 2-3s for fetching a single record:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.376 ms\nExecution time: 2094.585 ms\n\n\n\n\n\nAfter a manual Vacuum the execution time is OK:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.793 ms\nExecution time: 2.971 ms\n\n\n\n\n\nSo things are working OK again, but we would like to know what may cause such a degradation of the index scan, to avoid this happening again? 
(We are using Postgresql version 9.4.4)\n\n\n\n\n\n\nRegards,\nGustav", "msg_date": "Mon, 8 Feb 2016 10:04:58 +0000", "msg_from": "Gustav Karlsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Primary key index suddenly became very slow" }, { "msg_contents": "El lun, 08-02-2016 a las 10:04 +0000, Gustav Karlsson escribió:\n> Additional information:\n> \n> The problematic row has likely received many hot updates (100k+).\n> Could this be a likely explanation for the high execution time?\n> \n> \nCould you check if autovacuum is doing its job with this query:\nselect * from pg_stat_user_tables where relname='konto' , is it last_autovaccum and last_autoanalyze recent ?\nif you don't reduce n_dead_tup in a short time after the bulk process of hot update, it will be a explanation, and also a \"idle in transaction\" connection could cause it.\nThis link: https://brandur.org/postgres-queues could help you.\n\n> \n\n> \n> Regards,\n> \n> Gustav\n> \n\n> \n\n> \n\n> \n\n> \n> \n> \n\n> \n> \n> > \n> > On Feb 8, 2016, at 10:45 AM, Gustav Karlsson <[email protected]> wrote:\n> > \n\n> > \n> > \n> > \n> > Hi,\n> > \n\n> > \n\n> > \n> > Question:\n> > \n\n> > \n\n> > \n> > What may cause a primary key index to suddenly become very slow? Index scan for single row taking 2-3 seconds. A manual vacuum resolved the problem.\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n> > Background:\n> > \n\n> > \n\n> > \n> > We have a simple table ‘KONTO’ with about 600k rows. \n> > \n\n> > \n\n> > \n\n> > \n\n> > \n> > \n> > Column | Type | Modifiers\n> > \n> > ------------------------------+-----------------------------+---------------\n> > \n> > id | bigint | not null\n> > \n\n> > \n> > ...\n> > \n\n> > \n\n> > \n> > \n> > Indexes:\n> > \n> > \"konto_pk\" PRIMARY KEY, btree (id)\n> > \n\n> > \n> > ...\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n> > Over the weekend we experienced that lookups using the primary key index (‘konto_pk’) became very slow, in the region 2-3s for fetching a single record:\n> > \n\n> > \n\n> > \n> > \n> > QUERY PLAN\n> > \n> > Index Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)\n> > \n> > Index Cond: (id = 2121172829)\n> > \n> > Planning time: 0.376 ms\n> > \n> > Execution time: 2094.585 ms\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n> > After a manual Vacuum the execution time is OK:\n> > \n\n> > \n\n> > \n> > \n> > QUERY PLAN\n> > \n> > Index Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)\n> > \n> > Index Cond: (id = 2121172829)\n> > \n> > Planning time: 0.793 ms\n> > \n> > Execution time: 2.971 ms\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n> > So things are working OK again, but we would like to know what may cause such a degradation of the index scan, to avoid this happening again? (We are using Postgresql version 9.4.4)\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n\n> > \n> > Regards,\n> > \n> > Gustav\n> > \n\n> > \n\n> > \n\n> > \n\n> \n\n> \n\n> \n\n> \n> \n> \n> \n\n\n\n\nEl lun, 08-02-2016 a las 10:04 +0000, Gustav Karlsson escribió:\nAdditional information:\n\n\nThe problematic row has likely received many hot updates (100k+). 
Could this be a likely explanation for the high execution time?\n\n\nCould you check if autovacuum is doing its job with this query:select * from pg_stat_user_tables where relname='konto' , is it last_autovaccum and last_autoanalyze recent ?if you don't reduce n_dead_tup in a short time after the bulk process of hot update, it will  be a explanation, and also a \"idle in transaction\" connection could cause it.This link: https://brandur.org/postgres-queues could help you.\n\nRegards,\nGustav\n\n\n\n\n\n\n\n\n\nOn Feb 8, 2016, at 10:45 AM, Gustav Karlsson <[email protected]> wrote:\n\n\n\nHi,\n\n\nQuestion:\n\n\nWhat may cause a primary key index to suddenly become very slow? Index scan for single row taking 2-3 seconds. A manual vacuum resolved the problem.\n\n\n\n\nBackground:\n\n\nWe have a simple table ‘KONTO’ with about 600k rows. \n\n\n\n\n\n            Column            |            Type             |   Modifiers\n------------------------------+-----------------------------+---------------\n id                           | bigint                      | not null\n\n...\n\n\n\nIndexes:\n    \"konto_pk\" PRIMARY KEY, btree (id)\n\n...\n\n\n\n\nOver the weekend we experienced that lookups using the primary key index (‘konto_pk’) became very slow, in the region 2-3s for fetching a single record:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.376 ms\nExecution time: 2094.585 ms\n\n\n\n\n\nAfter a manual Vacuum the execution time is OK:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.793 ms\nExecution time: 2.971 ms\n\n\n\n\n\nSo things are working OK again, but we would like to know what may cause such a degradation of the index scan, to avoid this happening again? (We are using Postgresql version 9.4.4)\n\n\n\n\n\n\nRegards,\nGustav", "msg_date": "Tue, 16 Feb 2016 13:52:49 +0100", "msg_from": "jaime soler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Primary key index suddenly became very slow" }, { "msg_contents": "On Mon, Feb 8, 2016 at 9:04 PM, Gustav Karlsson <[email protected]>\nwrote:\n\n> Additional information:\n>\n> The problematic row has likely received many hot updates (100k+). Could\n> this be a likely explanation for the high execution time?\n>\n\nQuery immediately after the bulk updates before VACUUM will take longer\ntime. Since the VACUUM might have cleared the dead tuples and might have\nupdated the hint-bits, the query's execution time has become much better.\n\nAs the updates are hot, you may not need to consider other factors like,\ntable size growth and if the indexes have grown in size.\n\nRegards,\nVenkata B N\n\nFujitsu Australia\n\n\n>\n>\n> On Feb 8, 2016, at 10:45 AM, Gustav Karlsson <[email protected]>\n> wrote:\n>\n> Hi,\n>\n> Question:\n>\n> What may cause a primary key index to suddenly become very slow? Index\n> scan for single row taking 2-3 seconds. 
A manual vacuum resolved the\n> problem.\n>\n>\n> Background:\n>\n> We have a simple table ‘KONTO’ with about 600k rows.\n>\n>\n> Column | Type | Modifiers\n>\n> ------------------------------+-----------------------------+---------------\n> id | bigint | not null\n> ...\n>\n> Indexes:\n> \"konto_pk\" PRIMARY KEY, btree (id)\n> ...\n>\n>\n> Over the weekend we experienced that lookups using the primary key index\n> (‘konto_pk’) became very slow, in the region 2-3s for fetching a single\n> record:\n>\n> QUERY PLAN\n> Index Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164)\n> (actual time=0.052..2094.549 rows=1 loops=1)\n> Index Cond: (id = 2121172829)\n> Planning time: 0.376 ms\n> Execution time: 2094.585 ms\n>\n>\n> After a manual Vacuum the execution time is OK:\n>\n> QUERY PLAN\n> Index Scan using konto_pk on konto (cost=0.42..6.44 rows=1 width=164)\n> (actual time=0.037..2.876 rows=1 loops=1)\n> Index Cond: (id = 2121172829)\n> Planning time: 0.793 ms\n> Execution time: 2.971 ms\n>\n>\n> So things are working OK again, but we would like to know what may cause\n> such a degradation of the index scan, to avoid this happening again? (We\n> are using Postgresql version 9.4.4)\n>\n>\n>\n> Regards,\n> Gustav\n>\n>\n>\n\nOn Mon, Feb 8, 2016 at 9:04 PM, Gustav Karlsson <[email protected]> wrote:\n\nAdditional information:\n\n\nThe problematic row has likely received many hot updates (100k+). Could this be a likely explanation for the high execution time?Query immediately after the bulk updates before VACUUM will take longer time. Since the VACUUM might have cleared the dead tuples and might have updated the hint-bits, the query's execution time has become much better.As the updates are hot, you may not need to consider other factors like, table size growth and if the indexes have grown in size.Regards,Venkata B NFujitsu Australia \n\n\n\n\n\n\n\nOn Feb 8, 2016, at 10:45 AM, Gustav Karlsson <[email protected]> wrote:\n\n\n\nHi,\n\n\nQuestion:\n\n\nWhat may cause a primary key index to suddenly become very slow? Index scan for single row taking 2-3 seconds. A manual vacuum resolved the problem.\n\n\n\n\nBackground:\n\n\nWe have a simple table ‘KONTO’ with about 600k rows. \n\n\n\n\n\n            Column            |            Type             |   Modifiers\n------------------------------+-----------------------------+---------------\n id                           | bigint                      | not null\n\n...\n\n\n\nIndexes:\n    \"konto_pk\" PRIMARY KEY, btree (id)\n\n...\n\n\n\n\nOver the weekend we experienced that lookups using the primary key index (‘konto_pk’) became very slow, in the region 2-3s for fetching a single record:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.376 ms\nExecution time: 2094.585 ms\n\n\n\n\n\nAfter a manual Vacuum the execution time is OK:\n\n\n\nQUERY PLAN\nIndex Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)\n  Index Cond: (id = 2121172829)\nPlanning time: 0.793 ms\nExecution time: 2.971 ms\n\n\n\n\n\nSo things are working OK again, but we would like to know what may cause such a degradation of the index scan, to avoid this happening again? 
(We are using Postgresql version 9.4.4)\n\n\n\n\n\n\nRegards,\nGustav", "msg_date": "Wed, 17 Feb 2016 07:06:45 +1100", "msg_from": "Venkata Balaji N <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Primary key index suddenly became very slow" } ]
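A minimal sketch of the checks suggested in this thread, tying together the pg_stat_user_tables advice and the HOT-update theory; the table name comes from the thread, while the per-table autovacuum settings at the end are illustrative values rather than a recommendation taken from the discussion:

-- dead tuples, HOT updates and last (auto)vacuum / (auto)analyze times for the table
SELECT relname,
       n_live_tup,
       n_dead_tup,
       n_tup_hot_upd,
       last_vacuum,
       last_autovacuum,
       last_autoanalyze
FROM   pg_stat_user_tables
WHERE  relname = 'konto';

-- if autovacuum lags behind bursts of updates on this one table,
-- it can be made more aggressive per table (thresholds are illustrative)
ALTER TABLE konto SET (autovacuum_vacuum_scale_factor = 0.02,
                       autovacuum_vacuum_threshold    = 1000);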
[ { "msg_contents": "I have a wee database server which regularly tries to insert 1.5 million or even 15 million new rows into a 400 million row table. Sometimes these inserts take hours.\n\nThe actual query to produces the join is fast. It's the insert which is slow.\n\nINSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq)\n SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId, Filename.FilenameId, batch_testing.LStat, batch_testing.MD5, batch_testing.DeltaSeq\n FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)\n JOIN Filename ON (batch_testing.Name = Filename.Name);\n\nThis is part of the plan: http://img.ly/images/9374145/full <http://img.ly/images/9374145/full> created via http://tatiyants.com/pev/#/plans <http://tatiyants.com/pev/#/plans>\n\nThis gist contains postgresql.conf, zfs settings, slog, disk partitions.\n\n https://gist.github.com/dlangille/33331a8c8cc62fa13b9f <https://gist.github.com/dlangille/33331a8c8cc62fa13b9f>\n\nI'm tempted to move it to faster hardware, but in case I've missed something basic...\n\nThank you.\n\n--\nDan Langille - BSDCan / PGCon\[email protected]", "msg_date": "Tue, 9 Feb 2016 19:09:25 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": true, "msg_subject": "Running lots of inserts from selects on 9.4.5" }, { "msg_contents": "On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille <[email protected]> wrote:\n> I have a wee database server which regularly tries to insert 1.5 million or\n> even 15 million new rows into a 400 million row table. Sometimes these\n> inserts take hours.\n>\n> The actual query to produces the join is fast. It's the insert which is\n> slow.\n>\n> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,\n> DeltaSeq)\n> SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,\n> Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,\n> batch_testing.DeltaSeq\n> FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)\n> JOIN Filename ON (batch_testing.Name =\n> Filename.Name);\n>\n> This is part of the plan: http://img.ly/images/9374145/full created via\n> http://tatiyants.com/pev/#/plans\n>\n> This gist contains postgresql.conf, zfs settings, slog, disk partitions.\n>\n> https://gist.github.com/dlangille/33331a8c8cc62fa13b9f\n\nThe table you are inserting into has 7 indexes, all of which have to\nbe maintained. The index on the sequence column should be efficient\nto maintain. But for the rest, if the inserted rows are not naturally\nordered by any of the indexed columns then it would end up reading 6\nrandom scattered leaf pages in order to insert row pointers. If none\nthose pages are in memory, that is going to be slow to read off from\nhdd in single-file. 
Also, you are going dirty all of those scattered\npages, and they will be slow to write back to hdd because there\nprobably won't be much opportunity for write-combining.\n\nDo you really need all of those indexes?\n\nWon't the index on (jobid, pathid, filenameid) service any query that\n(jobid) does, so you can get rid of the latter?\n\nAnd unless you have range queries on fileindex, like \"where jobid = 12\nand fileindex between 4 and 24\" then you should be able to replace\n(jobid, fileindex) with (fileindex,jobid) and then get rid of the\nstand-alone index on (fileindex).\n\nIf you add an \"order by\" to the select statement which order by the\nfields of one of the remaining indexes, than you could make the\nmaintenance of that index become much cheaper.\n\nCould you move the indexes for this table to SSD?\n\nSSD is probably wasted on your WAL. If your main concern is bulk\ninsertions, then WAL is going to written sequentially with few fsyncs.\nThat is ideal for HDD. Even if you also have smaller transactions,\nWAL is still sequentially written as long as you have a non-volatile\ncache on your RAID controller which can absorb fsyncs efficiently.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 9 Feb 2016 23:47:02 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running lots of inserts from selects on 9.4.5" }, { "msg_contents": "> On Feb 10, 2016, at 2:47 AM, Jeff Janes <[email protected]> wrote:\n> \n> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille <[email protected]> wrote:\n>> I have a wee database server which regularly tries to insert 1.5 million or\n>> even 15 million new rows into a 400 million row table. Sometimes these\n>> inserts take hours.\n>> \n>> The actual query to produces the join is fast. It's the insert which is\n>> slow.\n>> \n>> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,\n>> DeltaSeq)\n>> SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,\n>> Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,\n>> batch_testing.DeltaSeq\n>> FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)\n>> JOIN Filename ON (batch_testing.Name =\n>> Filename.Name);\n>> \n>> This is part of the plan: http://img.ly/images/9374145/full created via\n>> http://tatiyants.com/pev/#/plans\n>> \n>> This gist contains postgresql.conf, zfs settings, slog, disk partitions.\n>> \n>> https://gist.github.com/dlangille/33331a8c8cc62fa13b9f\n> \n> The table you are inserting into has 7 indexes, all of which have to\n> be maintained. The index on the sequence column should be efficient\n> to maintain. But for the rest, if the inserted rows are not naturally\n> ordered by any of the indexed columns then it would end up reading 6\n> random scattered leaf pages in order to insert row pointers. If none\n> those pages are in memory, that is going to be slow to read off from\n> hdd in single-file. 
Also, you are going dirty all of those scattered\n> pages, and they will be slow to write back to hdd because there\n> probably won't be much opportunity for write-combining.\n> \n> Do you really need all of those indexes?\n> \n> Won't the index on (jobid, pathid, filenameid) service any query that\n> (jobid) does, so you can get rid of the latter?\n> \n> And unless you have range queries on fileindex, like \"where jobid = 12\n> and fileindex between 4 and 24\" then you should be able to replace\n> (jobid, fileindex) with (fileindex,jobid) and then get rid of the\n> stand-alone index on (fileindex).\n> \n> If you add an \"order by\" to the select statement which order by the\n> fields of one of the remaining indexes, than you could make the\n> maintenance of that index become much cheaper.\n\nI will make these changes one-by-one and test each. This will be interesting.\n\n> Could you move the indexes for this table to SSD?\n\nNow that's a clever idea.\n\nbacula=# select pg_size_pretty(pg_indexes_size('file'));\n pg_size_pretty \n----------------\n 100 GB\n(1 row)\n\nbacula=# select pg_size_pretty(pg_table_size('file'));\n pg_size_pretty \n----------------\n 63 GB\n(1 row)\n\nbacula=# \n\nNo suprising that the indexes are larger than the data.\n\nThe SSD is 30GB. I don't have enough space. Buying 2x500GB SSDs\nwould allow me to put all the data onto SSD. I'm using about 306G for the \ndatabases now.\n\n> SSD is probably wasted on your WAL. If your main concern is bulk\n> insertions, then WAL is going to written sequentially with few fsyncs.\n> That is ideal for HDD. Even if you also have smaller transactions,\n\nOK.\n\n> WAL is still sequentially written as long as you have a non-volatile\n> cache on your RAID controller which can absorb fsyncs efficiently.\n\nOf note, no RAID controller or non-volatile cache here. I'm running ZFS with plain HBA controllers.\n\nThank you. I have some interesting changes to test.\n\n-- \nDan Langille - BSDCan / PGCon\[email protected]\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 10 Feb 2016 05:13:12 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running lots of inserts from selects on 9.4.5" }, { "msg_contents": "> On Feb 10, 2016, at 5:13 AM, Dan Langille <[email protected]> wrote:\n> \n>> On Feb 10, 2016, at 2:47 AM, Jeff Janes <[email protected]> wrote:\n>> \n>> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille <[email protected]> wrote:\n>>> I have a wee database server which regularly tries to insert 1.5 million or\n>>> even 15 million new rows into a 400 million row table. Sometimes these\n>>> inserts take hours.\n>>> \n>>> The actual query to produces the join is fast. 
It's the insert which is\n>>> slow.\n>>> \n>>> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,\n>>> DeltaSeq)\n>>> SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,\n>>> Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,\n>>> batch_testing.DeltaSeq\n>>> FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)\n>>> JOIN Filename ON (batch_testing.Name =\n>>> Filename.Name);\n>>> \n>>> This is part of the plan: http://img.ly/images/9374145/full created via\n>>> http://tatiyants.com/pev/#/plans\n>>> \n>>> This gist contains postgresql.conf, zfs settings, slog, disk partitions.\n>>> \n>>> https://gist.github.com/dlangille/33331a8c8cc62fa13b9f\n>> \n>> The table you are inserting into has 7 indexes, all of which have to\n>> be maintained. The index on the sequence column should be efficient\n>> to maintain. But for the rest, if the inserted rows are not naturally\n>> ordered by any of the indexed columns then it would end up reading 6\n>> random scattered leaf pages in order to insert row pointers. If none\n>> those pages are in memory, that is going to be slow to read off from\n>> hdd in single-file. Also, you are going dirty all of those scattered\n>> pages, and they will be slow to write back to hdd because there\n>> probably won't be much opportunity for write-combining.\n>> \n>> Do you really need all of those indexes?\n>> \n>> Won't the index on (jobid, pathid, filenameid) service any query that\n>> (jobid) does, so you can get rid of the latter?\n>> \n>> And unless you have range queries on fileindex, like \"where jobid = 12\n>> and fileindex between 4 and 24\" then you should be able to replace\n>> (jobid, fileindex) with (fileindex,jobid) and then get rid of the\n>> stand-alone index on (fileindex).\n>> \n>> If you add an \"order by\" to the select statement which order by the\n>> fields of one of the remaining indexes, than you could make the\n>> maintenance of that index become much cheaper.\n> \n> I will make these changes one-by-one and test each. This will be interesting.\n\nOn a test server, the original insert takes about 45 minutes. I removed all indexes. 25 minutes.\n\nThank you.\n\n-- \nDan Langille - BSDCan / PGCon\[email protected]\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 11 Feb 2016 16:41:22 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running lots of inserts from selects on 9.4.5" }, { "msg_contents": "> On Feb 11, 2016, at 4:41 PM, Dan Langille <[email protected]> wrote:\n> \n>> On Feb 10, 2016, at 5:13 AM, Dan Langille <[email protected]> wrote:\n>> \n>>> On Feb 10, 2016, at 2:47 AM, Jeff Janes <[email protected]> wrote:\n>>> \n>>> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille <[email protected]> wrote:\n>>>> I have a wee database server which regularly tries to insert 1.5 million or\n>>>> even 15 million new rows into a 400 million row table. Sometimes these\n>>>> inserts take hours.\n>>>> \n>>>> The actual query to produces the join is fast. 
It's the insert which is\n>>>> slow.\n>>>> \n>>>> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,\n>>>> DeltaSeq)\n>>>> SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,\n>>>> Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,\n>>>> batch_testing.DeltaSeq\n>>>> FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)\n>>>> JOIN Filename ON (batch_testing.Name =\n>>>> Filename.Name);\n>>>> \n>>>> This is part of the plan: http://img.ly/images/9374145/full created via\n>>>> http://tatiyants.com/pev/#/plans\n>>>> \n>>>> This gist contains postgresql.conf, zfs settings, slog, disk partitions.\n>>>> \n>>>> https://gist.github.com/dlangille/33331a8c8cc62fa13b9f\n>>> \n>>> The table you are inserting into has 7 indexes, all of which have to\n>>> be maintained. The index on the sequence column should be efficient\n>>> to maintain. But for the rest, if the inserted rows are not naturally\n>>> ordered by any of the indexed columns then it would end up reading 6\n>>> random scattered leaf pages in order to insert row pointers. If none\n>>> those pages are in memory, that is going to be slow to read off from\n>>> hdd in single-file. Also, you are going dirty all of those scattered\n>>> pages, and they will be slow to write back to hdd because there\n>>> probably won't be much opportunity for write-combining.\n>>> \n>>> Do you really need all of those indexes?\n>>> \n>>> Won't the index on (jobid, pathid, filenameid) service any query that\n>>> (jobid) does, so you can get rid of the latter?\n>>> \n>>> And unless you have range queries on fileindex, like \"where jobid = 12\n>>> and fileindex between 4 and 24\" then you should be able to replace\n>>> (jobid, fileindex) with (fileindex,jobid) and then get rid of the\n>>> stand-alone index on (fileindex).\n>>> \n>>> If you add an \"order by\" to the select statement which order by the\n>>> fields of one of the remaining indexes, than you could make the\n>>> maintenance of that index become much cheaper.\n>> \n>> I will make these changes one-by-one and test each. This will be interesting.\n> \n> On a test server, the original insert takes about 45 minutes. I removed all indexes. 25 minutes.\n> \n> Thank you.\n\nToday I tackled the production server. After discussion on the Bacula devel mailing list (http://marc.info/?l=bacula-devel&m=145537742804482&w=2 <http://marc.info/?l=bacula-devel&m=145537742804482&w=2>)\nI compared my schema to the stock schema provided with Bacula. Yes, I found\nextra indexes. I saved the existing schema and proceeded to remove the indexes\nfrom prod not found in the default.\n\nThe query time went from 223 minute to 4.5 minutes. That is 50 times faster.\n\nI think I can live with that. :)\n\nJeff: if you show up at PGCon, dinner is on me. Thank you.\n\n--\nDan Langille - BSDCan / PGCon\[email protected]", "msg_date": "Sat, 13 Feb 2016 10:43:30 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running lots of inserts from selects on 9.4.5" }, { "msg_contents": "> On Feb 13, 2016, at 10:43 AM, Dan Langille <[email protected]> wrote:\n> \n> Today I tackled the production server. After discussion on the Bacula devel mailing list (http://marc.info/?l=bacula-devel&m=145537742804482&w=2 <http://marc.info/?l=bacula-devel&m=145537742804482&w=2>)\n> I compared my schema to the stock schema provided with Bacula. Yes, I found\n> extra indexes. 
I saved the existing schema and proceeded to remove the indexes\n> from prod not found in the default.\n> \n> The query time went from 223 minute to 4.5 minutes. That is 50 times faster.\n\nThe query plans: https://twitter.com/DLangille/status/698528182383804416\n\n--\nDan Langille - BSDCan / PGCon\[email protected]", "msg_date": "Sat, 13 Feb 2016 10:45:46 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running lots of inserts from selects on 9.4.5" } ]
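Two sketches that follow from the advice in this thread: a query to spot rarely used or oversized indexes on the file table before dropping them, and the batch insert from the first message with the suggested ORDER BY added so that one of the remaining indexes is maintained in key order. The choice of (JobId, PathId, FilenameId) as the sort key is only an assumption based on the index mentioned in the thread:

-- index usage and size, to find candidates for removal
SELECT indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM   pg_stat_user_indexes
WHERE  relname = 'file'
ORDER  BY pg_relation_size(indexrelid) DESC;

-- the same insert, ordered to match a (jobid, pathid, filenameid) index
INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq)
SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,
       Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,
       batch_testing.DeltaSeq
FROM   batch_testing
       JOIN Path     ON (batch_testing.Path = Path.Path)
       JOIN Filename ON (batch_testing.Name = Filename.Name)
ORDER  BY batch_testing.JobId, Path.PathId, Filename.FilenameId;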
[ { "msg_contents": "Hi everyone,\n I have a question that I hope fits in this discussion group.\n\nI'm migrating my actual server into a new, more powerful architecture on \nGoogle Cloud Platform.\nATM the server is a VM with 4 vCPUs (the host has 4 Xeon E2xx 3,1 GHZ, \nif I remember) and 32 GB RAM, just running Ubuntu Server 12.04 and \nPostgreSQL 9.1\nThe server contains about 350 DBs and the same number of roles (every \nrole has its own database). Databases are made of about 75 tables that \ncan contain blobs (one table is peculiar in containing blobs) and single \nblob size can grow up to 20-25 megabytes. ATM our biggest DB is about 30 \nGB, 95% made of blobs.\n\nApart from user growth, that means more resource consumption, we are \nstarting new services, that will have more and more impact on databases.\nI read about how blobs are SLOW and I'm a bit worried on how to manage them.\n\nNow, the actual question, is:\nHaving a VM that can be upgraded with a click on a panel and a reboot, \nand that the server fault is only related to a OS failure, should I keep \na single-server solution (but I fear that I/O throughput will become \neven more inadequate) or is it convenient to migrate in a 2-server \nsystem? And, in case of 2-server configuration, what would you recommend?\n\nScenario 1:\nGiven 350 databases, I split them in 2, 175 on server 1 and 175 on \nserver 2, having pgBouncer to resolve the connections and each server \nhas its own workload\n\nScenario 2:\nServer 1 -> Master, Server 2 -> Slave (Replicated with Slony or...?), \nServer 1 for writes, Server 2 for reads\n\nLast thing: should blobs (or the whole database directory itself) go in \na different partition, to optimize performance, or in VM environment \nthis is not a concern anymore?\n\nI tried to be as brief as possible, if you need some more details.... \njust ask :-)\n\nThanks in advance,\nMoreno.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 11 Feb 2016 19:06:42 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": true, "msg_subject": "Architectural question" }, { "msg_contents": "On 2/11/16 12:06 PM, Moreno Andreo wrote:\n> Now, the actual question, is:\n> Having a VM that can be upgraded with a click on a panel and a reboot,\n> and that the server fault is only related to a OS failure, should I keep\n> a single-server solution (but I fear that I/O throughput will become\n> even more inadequate) or is it convenient to migrate in a 2-server\n> system? And, in case of 2-server configuration, what would you recommend?\n\nMuch of that depends on your disaster recovery strategy.\n\n> Scenario 1:\n> Given 350 databases, I split them in 2, 175 on server 1 and 175 on\n> server 2, having pgBouncer to resolve the connections and each server\n> has its own workload\n>\n> Scenario 2:\n> Server 1 -> Master, Server 2 -> Slave (Replicated with Slony or...?),\n> Server 1 for writes, Server 2 for reads\n\nPersonally I'd do kind of a hybrid at this point.\n\nFirst, I'd split the masters across both servers, with a way to easily \nfail over if one of the servers dies.\n\nNext, I'd get streaming replication setup so that the half with masters \non A have replicas on B and vice-versa. That way you can easily recover \nfrom one server or the other failing.\n\nDepending on your needs, could could use synchronous replication as part \nof that setup. 
You can even do that at a per-transaction level, so maybe \nyou use sync rep most of the time, and just turn it off when inserting \nor updating BLOBS.\n\n> Last thing: should blobs (or the whole database directory itself) go in\n> a different partition, to optimize performance, or in VM environment\n> this is not a concern anymore?\n\nFirst: IMO concerns about blobs in the database are almost always \noverblown. 30GB of blobs on modern hardware really isn't a big deal, and \nthere's a *lot* to be said for not having to write the extra code to \nmanage all that by hand.\n\nWhen it comes to your disk layout, the first things I'd look at would be:\n\n- Move the temporary statistics directory to a RAM disk\n- Move pg_xlog to it's own partition\n\nThose don't always help, but frequently they do. And when they do, it \nusually makes a big difference.\n\nBeyond that, there might be some advantage to putting blobs on their own \ntablespace. Hard to say without trying it.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 18 Feb 2016 14:33:48 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architectural question" }, { "msg_contents": "Il 18/02/2016 21:33, Jim Nasby ha scritto:\n\nJust before we go on, I have to say that I'm working on PostgreSQL for \nabout 10 years now, but while in the past \"leave everything as it is\" \nworked, in the last 15 months I began to research and study how to \nimprove my server performance, so I'm quite a bit of novice in being a \nDBA (but a novice that when needed reads a lot of documentation :-) )\nSo, if some questions may sound \"strange\", \"noobish\" to you, that's the \nreason.\n\n> On 2/11/16 12:06 PM, Moreno Andreo wrote:\n>> Now, the actual question, is:\n>> Having a VM that can be upgraded with a click on a panel and a reboot,\n>> and that the server fault is only related to a OS failure, should I keep\n>> a single-server solution (but I fear that I/O throughput will become\n>> even more inadequate) or is it convenient to migrate in a 2-server\n>> system? And, in case of 2-server configuration, what would you \n>> recommend?\n>\n> Much of that depends on your disaster recovery strategy.\nI'm planning to have a cron job that backups data (only data) overnight \n(I was thinking something like pg_dumpall) and takes a snapshot of the \nwhole server over the weekend (If I'm not wrong, VMWare allows live \nsnapshots), so if something bad happens, I'll recover the snapshot from \nlast save and restore all databases from latest backup.\n\n>\n>> Scenario 1:\n>> Given 350 databases, I split them in 2, 175 on server 1 and 175 on\n>> server 2, having pgBouncer to resolve the connections and each server\n>> has its own workload\n>>\n>> Scenario 2:\n>> Server 1 -> Master, Server 2 -> Slave (Replicated with Slony or...?),\n>> Server 1 for writes, Server 2 for reads\n>\n> Personally I'd do kind of a hybrid at this point.\n>\n> First, I'd split the masters across both servers, with a way to easily \n> fail over if one of the servers dies.\n>\n> Next, I'd get streaming replication setup so that the half with \n> masters on A have replicas on B and vice-versa. 
That way you can \n> easily recover from one server or the other failing.\n>\n> Depending on your needs, could could use synchronous replication as \n> part of that setup. You can even do that at a per-transaction level, \n> so maybe you use sync rep most of the time, and just turn it off when \n> inserting or updating BLOBS.\nThis sounds good, and when everything is OK we have I/O operation split \nacross the two servers; a small delay in synchronizing blobs should not \nbe a big deal, even if something bad happens (because of XLOG), right?\n\n>\n>> Last thing: should blobs (or the whole database directory itself) go in\n>> a different partition, to optimize performance, or in VM environment\n>> this is not a concern anymore?\n>\n> First: IMO concerns about blobs in the database are almost always \n> overblown. \nIn many places I've been they say, at last, \"BLOBs are slow\". So I \nconsidered this as another point to analyze while designing server \narchitecture. If you say \"don't mind\", then I won't.\n\n> 30GB of blobs on modern hardware really isn't a big deal, and there's \n> a *lot* to be said for not having to write the extra code to manage \n> all that by hand.\nWhat do you mean? Extra code?\n\n>\n> When it comes to your disk layout, the first things I'd look at would be:\n>\n> - Move the temporary statistics directory to a RAM disk\n> - Move pg_xlog to it's own partition\nSo I need another vDisk, not that big, for pg_xlog?\n\n> Those don't always help, but frequently they do. And when they do, it \n> usually makes a big difference.\n>\n> Beyond that, there might be some advantage to putting blobs on their \n> own tablespace. Hard to say without trying it.\nI'm thinking about it, because while the most of the blobs are < 1MB, \nthere are some that reach 20, 50 and even 100 megabytes, and I'm quite \nconcerned in overall performance of the whole system (even if it's on \nmodern hardware, 100 megs to extract are not that fast...) when these \nhave to be sent to whom is requesting them...\n\nSo, my ideas are clearer now, but the first step is to decide if there's \nneed for only one server (my budget will be happier, because they seem \nvery good, but quite expensive, at GCP...) or it's best with two, using \npgBouncer, and where to put pgBouncer... :-)\n\nThanks\nMoreno\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 22 Feb 2016 15:40:39 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SPAM] Re: Architectural question" }, { "msg_contents": "On 2/22/16 8:40 AM, Moreno Andreo wrote:\n> Il 18/02/2016 21:33, Jim Nasby ha scritto:\n>> Depending on your needs, could could use synchronous replication as\n>> part of that setup. You can even do that at a per-transaction level,\n>> so maybe you use sync rep most of the time, and just turn it off when\n>> inserting or updating BLOBS.\n> This sounds good, and when everything is OK we have I/O operation split\n> across the two servers; a small delay in synchronizing blobs should not\n> be a big deal, even if something bad happens (because of XLOG), right?\n\nIt all depends on what you can tolerate. 
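A minimal sketch of the per-transaction toggle described above, assuming synchronous replication is already configured through synchronous_standby_names on the primary; blob_table is a placeholder name, not a table from the thread:

BEGIN;
-- skip waiting for the synchronous standby for this transaction only;
-- 'local' still waits for the local WAL flush, 'off' does not even do that
SET LOCAL synchronous_commit = local;
UPDATE blob_table SET content = '\x00'::bytea WHERE id = 42;
COMMIT;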
You also don't have to use \nsynchronous replication; normal streaming replication is async, so if \nyou can stand to lose some data if one of the servers dies then you can \ndo that.\n\n>>> Last thing: should blobs (or the whole database directory itself) go in\n>>> a different partition, to optimize performance, or in VM environment\n>>> this is not a concern anymore?\n>>\n>> First: IMO concerns about blobs in the database are almost always\n>> overblown.\n> In many places I've been they say, at last, \"BLOBs are slow\". So I\n> considered this as another point to analyze while designing server\n> architecture. If you say \"don't mind\", then I won't.\n\nIt all depends. They're certainly a lot slower than handling a single \nint, but in many cases the difference just doesn't matter.\n\n>> 30GB of blobs on modern hardware really isn't a big deal, and there's\n>> a *lot* to be said for not having to write the extra code to manage\n>> all that by hand.\n> What do you mean? Extra code?\n\nIf the blob is in the database then you have nothing extra to do. It's \nhandled just like all your other data.\n\nIf it's a file in a file system then you need to:\n\n- Have application code that knows how and where to get at the file\n- Have a way to make those files available on all your webservers\n- Have completely separate backup and recovery plans for those files\n\nThat's a lot of extra work. Sometimes it's necessary, but many times \nit's not.\n\n>> When it comes to your disk layout, the first things I'd look at would be:\n>>\n>> - Move the temporary statistics directory to a RAM disk\n>> - Move pg_xlog to it's own partition\n> So I need another vDisk, not that big, for pg_xlog?\n\nYeah, but note that with virtualization that may or may not help.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Mar 2016 10:37:35 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM] Re: Architectural question" }, { "msg_contents": "Il 11/03/2016 17:37, Jim Nasby ha scritto:\n> On 2/22/16 8:40 AM, Moreno Andreo wrote:\n>> Il 18/02/2016 21:33, Jim Nasby ha scritto:\n>>> Depending on your needs, could could use synchronous replication as\n>>> part of that setup. You can even do that at a per-transaction level,\n>>> so maybe you use sync rep most of the time, and just turn it off when\n>>> inserting or updating BLOBS.\n>> This sounds good, and when everything is OK we have I/O operation split\n>> across the two servers; a small delay in synchronizing blobs should not\n>> be a big deal, even if something bad happens (because of XLOG), right?\n>\n> It all depends on what you can tolerate. You also don't have to use \n> synchronous replication; normal streaming replication is async, so if \n> you can stand to lose some data if one of the servers dies then you \n> can do that.\nI can't tolerate data loss, so synchronous replication is mandatory (I \nhad a case this week of a customer asking for an old document that I \ncouldn't find in the database, either if the \"attach present\" flag was \ntrue... and I had a bit of a hard time trying to convince the customer \nit was his fault... 
:-) )\n>\n>>>> Last thing: should blobs (or the whole database directory itself) \n>>>> go in\n>>>> a different partition, to optimize performance, or in VM environment\n>>>> this is not a concern anymore?\n>>>\n>>> First: IMO concerns about blobs in the database are almost always\n>>> overblown.\n>> In many places I've been they say, at last, \"BLOBs are slow\". So I\n>> considered this as another point to analyze while designing server\n>> architecture. If you say \"don't mind\", then I won't.\n>\n> It all depends. They're certainly a lot slower than handling a single \n> int, but in many cases the difference just doesn't matter.\nThe main goal is to be *quick*. A doctor with a patient on the other \nside of his desk does not want to wait, say, 30 seconds for a clinical \nrecord to open.\nLet me explain what is the main problem (actually there are 2 problems).\n1. I'm handling health data, and sometines they store large images (say \nan hi-res image of an x-ray). When their team mates (spread all over the \ncity, not in the same building) ask for that bitmap (that is, 20 \nmegabytes), surely it can't be cached (images are loaded only if \nrequested by user) and searching a 35k rows, 22 GB table for the \nmatching image should not be that fast, even with proper indexing \n(patient record number)\n2. When I load patient list, their photo must be loaded as well, because \nwhen I click on the table row, a small preview is shown (including a \nsmall thumbnail of the patient's photo). Obviously I can't load all \nthumbs while loading the whole patient list (the list can be up to \n4-5000 records and photo size is about 4-500kBytes, so it would be an \nenormous piece of data to be downloaded.\n>\n>>> 30GB of blobs on modern hardware really isn't a big deal, and there's\n>>> a *lot* to be said for not having to write the extra code to manage\n>>> all that by hand.\n>> What do you mean? Extra code?\n>\n> If the blob is in the database then you have nothing extra to do. It's \n> handled just like all your other data.\n>\n> If it's a file in a file system then you need to:\n>\n> - Have application code that knows how and where to get at the file\n> - Have a way to make those files available on all your webservers\n> - Have completely separate backup and recovery plans for those files\n>\n> That's a lot of extra work. Sometimes it's necessary, but many times \n> it's not.\nIn my case I think it's not necessary, since all blobs go into a bytea \nfield in a table that's just for them. It's an approach that helps us \nkeeping up with privacy, since all blobs are encrypted, and can be \naccessed only by application.\n>\n>>> When it comes to your disk layout, the first things I'd look at \n>>> would be:\n>>>\n>>> - Move the temporary statistics directory to a RAM disk\n>>> - Move pg_xlog to it's own partition\n>> So I need another vDisk, not that big, for pg_xlog?\n>\n> Yeah, but note that with virtualization that may or may not help.\nI was afraid of that. With virtualization we are bound to that hardware \nlying behind us, and that we can't see nor control. Even if we create 2 \nvDisk, they should be bound to the same host spindles, and so having two \nvDisk is completely useless.\nI'm thinking of increase checkpoint_segments interval, so\nIn the next two week I should have the VM deployed, so I'll see what \nI'll have in terms of speed and response (looking at the amount we are \npaying, I hope it will be a very FAST machine... 
:-D)\n\nThanks\nMoreno.-\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Mar 2016 10:14:06 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SPAM] Re: Architectural question" }, { "msg_contents": "Jim Nasby schrieb am 11.03.2016 um 17:37:\n> If the blob is in the database then you have nothing extra to do. It's handled just like all your other data.\n> \n> If it's a file in a file system then you need to:\n> \n> - Have application code that knows how and where to get at the file\n> - Have a way to make those files available on all your webservers\n> - Have completely separate backup and recovery plans for those files\n> \n> That's a lot of extra work. Sometimes it's necessary, but many times it's not.\n\nDon't forget the code you need to write to properly handle transactional access (writing, deleting) to the files\n\nYou usually also need to distribute the files over many directories. \nHaving millions of files in a single directory is usually not such a good idea. \n\nIn my experience you also need some cleanup job that removes orphaned files from the file system. \nBecause no matter how hard you try, to get updates/writes to the file system right, at some point this fails.\n\nAlso from a security point of view having this in the database is more robust then in the file system.\n\nThe downside of bytea is that you can't stream them to the client. The application always needs to read the whole blob into memory before it can be used. This might put some memory pressure on the application server. \n\nThomas\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Mar 2016 10:50:46 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architectural question" }, { "msg_contents": "Il 23/03/2016 10:50, Thomas Kellerer ha scritto:\n> Jim Nasby schrieb am 11.03.2016 um 17:37:\n>> If the blob is in the database then you have nothing extra to do. It's handled just like all your other data.\n>>\n>> If it's a file in a file system then you need to:\n>>\n>> - Have application code that knows how and where to get at the file\n>> - Have a way to make those files available on all your webservers\n>> - Have completely separate backup and recovery plans for those files\n>>\n>> That's a lot of extra work. Sometimes it's necessary, but many times it's not.\n> Don't forget the code you need to write to properly handle transactional access (writing, deleting) to the files\n>\n> You usually also need to distribute the files over many directories.\n> Having millions of files in a single directory is usually not such a good idea.\n>\n> In my experience you also need some cleanup job that removes orphaned files from the file system.\n> Because no matter how hard you try, to get updates/writes to the file system right, at some point this fails.\n>\n> Also from a security point of view having this in the database is more robust then in the file system.\n>\n> The downside of bytea is that you can't stream them to the client. The application always needs to read the whole blob into memory before it can be used. 
This might put some memory pressure on the application server.\n>\n> Thomas\n>\n>\n>\n>\nI just wrote about it in my last message that I sent a few minutes ago\nWe have blobs in a reserved table in each customer database, so we can \nkeep up with privacy, since every blob is encrypted... so no extra work :-)\n\nThanks\nMoreno.-\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Mar 2016 11:44:46 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Architectural question" }, { "msg_contents": "> -----Original Message-----\n> Thomas Kellerer Wednesday, March 23, 2016 2:51 AM\n> \n> Jim Nasby schrieb am 11.03.2016 um 17:37:\n> > If the blob is in the database then you have nothing extra to do. It's handled\n> just like all your other data.\n> >\n> > If it's a file in a file system then you need to:\n> >\n> > - Have application code that knows how and where to get at the file\n> > - Have a way to make those files available on all your webservers\n> > - Have completely separate backup and recovery plans for those files\n> >\n> > That's a lot of extra work. Sometimes it's necessary, but many times it's not.\n> \n> Don't forget the code you need to write to properly handle transactional access\n> (writing, deleting) to the files\n> \n> You usually also need to distribute the files over many directories.\n> Having millions of files in a single directory is usually not such a good idea.\n> \n> In my experience you also need some cleanup job that removes orphaned files\n> from the file system.\n> Because no matter how hard you try, to get updates/writes to the file system\n> right, at some point this fails.\n> \n> Also from a security point of view having this in the database is more robust\n> then in the file system.\n> \n> The downside of bytea is that you can't stream them to the client. The\n> application always needs to read the whole blob into memory before it can be\n> used. This might put some memory pressure on the application server.\n> \n> Thomas\n\nThis is really an excellent conversation, and highlights the never-ending contemplation\nof blob storage. I've had to go through this dialog in two different industries - healthcare\nand now genomics, creating a new EMR (electronic medical record) system and storing\nand manipulating huge genomic data sets.\n\nI have, in both cases, ended up leaving the blob-type data outside of the database. Even\nthough, as Thomas mentioned, it requires more database and app code to manage, it\nends up allowing for both systems to be optimized for their respective duties.\n\nIn addition, the vastly smaller database sizes result in far faster backups and restores, \ntransactional replication maintains it's speed, and in general, I find the fault tolerant\nbehaviors to be excellent. \n\nYes, losing track of a file would be very bad, and...we're only storing things like xray photos\nor ct scans (healthcare), or genomic processing results. In both cases, usually, the results\ncan be recreated. That said, I've never lost a file so haven't needed to pull on that lever.\n\nMy latest model is placing large genomic data onto the AWS S3 file system, keeping all of\nthe metadata inside the database. 
It's working very well so far, but we're still in development.\n\nMike\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Mar 2016 05:29:12 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architectural question" }, { "msg_contents": "I have another suggestion. How about putting the images in RethinkDB?\n\nRethinkDB is easy to set up and manage, and is scalable and easy (almost\ntrivial) to cluster. Many of the filesystem disadvantages you mention\nwould be much more easily managed by RethinkDB.\n\nA while back I wrote a Foreign Data Wrapper for RethinkDB. I haven't\nupdated it to the latest version, but it wouldn't be hard to bring it up to\ndate. (It might even work as-is.) By leveraging the FDW, you could have\nall of the awesome Relational Power and performance of PostgreSQL combined\nwith the scalable, easily clustered, NoSQL powers of RethinkDB, yet still\nhave a common interface - if you need it.\n\n\n\nOn Wed, Mar 23, 2016 at 8:29 AM, Mike Sofen <[email protected]> wrote:\n\n> > -----Original Message-----\n> > Thomas Kellerer Wednesday, March 23, 2016 2:51 AM\n> >\n> > Jim Nasby schrieb am 11.03.2016 um 17:37:\n> > > If the blob is in the database then you have nothing extra to do. It's\n> handled\n> > just like all your other data.\n> > >\n> > > If it's a file in a file system then you need to:\n> > >\n> > > - Have application code that knows how and where to get at the file\n> > > - Have a way to make those files available on all your webservers\n> > > - Have completely separate backup and recovery plans for those files\n> > >\n> > > That's a lot of extra work. Sometimes it's necessary, but many times\n> it's not.\n> >\n> > Don't forget the code you need to write to properly handle transactional\n> access\n> > (writing, deleting) to the files\n> >\n> > You usually also need to distribute the files over many directories.\n> > Having millions of files in a single directory is usually not such a\n> good idea.\n> >\n> > In my experience you also need some cleanup job that removes orphaned\n> files\n> > from the file system.\n> > Because no matter how hard you try, to get updates/writes to the file\n> system\n> > right, at some point this fails.\n> >\n> > Also from a security point of view having this in the database is more\n> robust\n> > then in the file system.\n> >\n> > The downside of bytea is that you can't stream them to the client. The\n> > application always needs to read the whole blob into memory before it\n> can be\n> > used. This might put some memory pressure on the application server.\n> >\n> > Thomas\n>\n> This is really an excellent conversation, and highlights the never-ending\n> contemplation\n> of blob storage. I've had to go through this dialog in two different\n> industries - healthcare\n> and now genomics, creating a new EMR (electronic medical record) system\n> and storing\n> and manipulating huge genomic data sets.\n>\n> I have, in both cases, ended up leaving the blob-type data outside of the\n> database. 
Even\n> though, as Thomas mentioned, it requires more database and app code to\n> manage, it\n> ends up allowing for both systems to be optimized for their respective\n> duties.\n>\n> In addition, the vastly smaller database sizes result in far faster\n> backups and restores,\n> transactional replication maintains it's speed, and in general, I find the\n> fault tolerant\n> behaviors to be excellent.\n>\n> Yes, losing track of a file would be very bad, and...we're only storing\n> things like xray photos\n> or ct scans (healthcare), or genomic processing results. In both cases,\n> usually, the results\n> can be recreated. That said, I've never lost a file so haven't needed to\n> pull on that lever.\n>\n> My latest model is placing large genomic data onto the AWS S3 file system,\n> keeping all of\n> the metadata inside the database. It's working very well so far, but\n> we're still in development.\n>\n> Mike\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI have another suggestion.  How about putting the images in RethinkDB?RethinkDB is easy to set up and manage, and is scalable and easy (almost trivial) to cluster.  Many of the filesystem disadvantages you mention would be much more easily managed by RethinkDB.A while back I wrote a Foreign Data Wrapper for RethinkDB.  I haven't updated it to the latest version, but it wouldn't be hard to bring it up to date.  (It might even work as-is.)   By leveraging the FDW, you could have all of the awesome Relational Power and performance of PostgreSQL combined with the scalable, easily clustered, NoSQL powers of RethinkDB, yet still have a common interface - if you need it.  On Wed, Mar 23, 2016 at 8:29 AM, Mike Sofen <[email protected]> wrote:> -----Original Message-----\n> Thomas Kellerer Wednesday, March 23, 2016 2:51 AM\n>\n> Jim Nasby schrieb am 11.03.2016 um 17:37:\n> > If the blob is in the database then you have nothing extra to do. It's handled\n> just like all your other data.\n> >\n> > If it's a file in a file system then you need to:\n> >\n> > - Have application code that knows how and where to get at the file\n> > - Have a way to make those files available on all your webservers\n> > - Have completely separate backup and recovery plans for those files\n> >\n> > That's a lot of extra work. Sometimes it's necessary, but many times it's not.\n>\n> Don't forget the code you need to write to properly handle transactional access\n> (writing, deleting) to the files\n>\n> You usually also need to distribute the files over many directories.\n> Having millions of files in a single directory is usually not such a good idea.\n>\n> In my experience you also need some cleanup job that removes orphaned files\n> from the file system.\n> Because no matter how hard you try, to get updates/writes to the file system\n> right, at some point this fails.\n>\n> Also from a security point of view having this in the database is more robust\n> then in the file system.\n>\n> The downside of bytea is that you can't stream them to the client. The\n> application always needs to read the whole blob into memory before it can be\n> used. This might put some memory pressure on the application server.\n>\n> Thomas\n\nThis is really an excellent conversation, and highlights the never-ending contemplation\nof blob storage.  
I've had to go through this dialog in two different industries - healthcare\nand now genomics, creating a new EMR (electronic medical record) system and storing\nand manipulating huge genomic data sets.\n\nI have, in both cases, ended up leaving the blob-type data outside of the database.  Even\nthough, as Thomas mentioned, it requires more database and app code to manage, it\nends up allowing for both systems to be optimized for their respective duties.\n\nIn addition, the vastly smaller database sizes result in far faster backups and restores,\ntransactional replication maintains it's speed, and in general, I find the fault tolerant\nbehaviors to be excellent.\n\nYes, losing track of a file would be very bad, and...we're only storing things like xray photos\nor ct scans (healthcare), or genomic processing results.  In both cases, usually, the results\ncan be recreated.  That said, I've never lost a file so haven't needed to pull on that lever.\n\nMy latest model is placing large genomic data onto the AWS S3 file system, keeping all of\nthe metadata inside the database.  It's working very well so far, but we're still in development.\n\nMike\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 23 Mar 2016 09:18:35 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Architectural question" }, { "msg_contents": "Il 23/03/2016 13:29, Mike Sofen ha scritto:\n>> -----Original Message-----\n>> Thomas Kellerer Wednesday, March 23, 2016 2:51 AM\n>>\n>> Jim Nasby schrieb am 11.03.2016 um 17:37:\n>>> If the blob is in the database then you have nothing extra to do. It's handled\n>> just like all your other data.\n>>> If it's a file in a file system then you need to:\n>>>\n>>> - Have application code that knows how and where to get at the file\n>>> - Have a way to make those files available on all your webservers\n>>> - Have completely separate backup and recovery plans for those files\n>>>\n>>> That's a lot of extra work. Sometimes it's necessary, but many times it's not.\n>> Don't forget the code you need to write to properly handle transactional access\n>> (writing, deleting) to the files\n>>\n>> You usually also need to distribute the files over many directories.\n>> Having millions of files in a single directory is usually not such a good idea.\n>>\n>> In my experience you also need some cleanup job that removes orphaned files\n>> from the file system.\n>> Because no matter how hard you try, to get updates/writes to the file system\n>> right, at some point this fails.\n>>\n>> Also from a security point of view having this in the database is more robust\n>> then in the file system.\n>>\n>> The downside of bytea is that you can't stream them to the client. The\n>> application always needs to read the whole blob into memory before it can be\n>> used. This might put some memory pressure on the application server.\n>>\n>> Thomas\n> This is really an excellent conversation, and highlights the never-ending contemplation\n> of blob storage.\nThat seems like discussing about politics or religion :-)\n> I've had to go through this dialog in two different industries - healthcare\n> and now genomics, creating a new EMR (electronic medical record) system and storing\n> and manipulating huge genomic data sets.\n>\n> I have, in both cases, ended up leaving the blob-type data outside of the database. 
Even\n> though, as Thomas mentioned, it requires more database and app code to manage, it\n> ends up allowing for both systems to be optimized for their respective duties.\nOur approach, still mantaining BLOBs in databases, is quite an hybrid, \nbecause BLOBs are not spread among DB tables, but we have a dedicated \ntable, with an appropriate indexing, where 95% of our blobs (and 99% of \nblob storage) reside, so if we need to have a quick dump, we can exclude \nBLOBs table or treat it in a separate way (i.e. backup util in our app \nis made of two separate steps, clinical data and blobs).\n\nAs I wrote in a previous post, we have our blobs encrypted, so it's more \nhandy keeping them in DB rather than saving to a file (and, I think, \nquicker when the user request for any of these)\n> In addition, the vastly smaller database sizes result in far faster backups and restores,\n> transactional replication maintains it's speed, and in general, I find the fault tolerant\n> behaviors to be excellent.\n>\n> Yes, losing track of a file would be very bad, and...we're only storing things like xray photos\n> or ct scans (healthcare), or genomic processing results. In both cases, usually, the results\n> can be recreated. That said, I've never lost a file so haven't needed to pull on that lever.\nIn our case we have to assume that blob contents cannot be recreated. \nPatients can change family doctor... if a trial arise and a critical \ndocument is lost, he's on his own. That's why we have a daily-based \nautomatic backup policy on the customer local server.\n> My latest model is placing large genomic data onto the AWS S3 file system, keeping all of\n> the metadata inside the database. It's working very well so far, but we're still in development.\n>\n> Mike\n>\n>\n>\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Mar 2016 19:06:20 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Architectural question" }, { "msg_contents": "On 3/23/16 4:14 AM, Moreno Andreo wrote:\n> The main goal is to be *quick*. A doctor with a patient on the other\n> side of his desk does not want to wait, say, 30 seconds for a clinical\n> record to open.\n> Let me explain what is the main problem (actually there are 2 problems).\n> 1. I'm handling health data, and sometines they store large images (say\n> an hi-res image of an x-ray). When their team mates (spread all over the\n> city, not in the same building) ask for that bitmap (that is, 20\n> megabytes), surely it can't be cached (images are loaded only if\n> requested by user) and searching a 35k rows, 22 GB table for the\n> matching image should not be that fast, even with proper indexing\n> (patient record number)\n\nWhy wouldn't that be fast? Unless the TOAST table for that particular \ntable is pretty fragmented, pulling up thumbnails should be very fast. \nI'd expect it to be the cost of reading a few pages sequentially.\n\nIf you're mixing all your blobs together, then you might end up with a \nproblem. It might be worth partitioning the blob table based on the size \nof what you're storing.\n\n> 2. When I load patient list, their photo must be loaded as well, because\n> when I click on the table row, a small preview is shown (including a\n> small thumbnail of the patient's photo). 
Obviously I can't load all\n> thumbs while loading the whole patient list (the list can be up to\n> 4-5000 records and photo size is about 4-500kBytes, so it would be an\n> enormous piece of data to be downloaded.\n\nI would think a thumbnail would be 30-40k or less, not 500k. It sounds \nlike part of the problem is you should keep the thumbnails separate from \nthe high-res file. But really you should probably do that for \neverything... I suspect there's parts of the UI when you want to display \na fairly low-res version of something like an xray, only pulling the raw \nimage if someone actually needs it.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Mar 2016 13:51:17 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM] Re: Architectural question" }, { "msg_contents": "Il 23/03/2016 19:51, Jim Nasby ha scritto:\n> On 3/23/16 4:14 AM, Moreno Andreo wrote:\n>> The main goal is to be *quick*. A doctor with a patient on the other\n>> side of his desk does not want to wait, say, 30 seconds for a clinical\n>> record to open.\n>> Let me explain what is the main problem (actually there are 2 problems).\n>> 1. I'm handling health data, and sometines they store large images (say\n>> an hi-res image of an x-ray). When their team mates (spread all over the\n>> city, not in the same building) ask for that bitmap (that is, 20\n>> megabytes), surely it can't be cached (images are loaded only if\n>> requested by user) and searching a 35k rows, 22 GB table for the\n>> matching image should not be that fast, even with proper indexing\n>> (patient record number)\n>\n> Why wouldn't that be fast? Unless the TOAST table for that particular \n> table is pretty fragmented, \nI'm running on Debian with ext4 file system. I'm not expecting \nfragmentation. Am I wrong?\n> pulling up thumbnails should be very fast. I'd expect it to be the \n> cost of reading a few pages sequentially.\nI'm not extracting thumbnails. I have a layout that is similar to an \nemail client, with all rows with data and, in a column, a clip, that \nlets user to load the real image, not its thumbnail.\n\n> If you're mixing all your blobs together, then you might end up with a \n> problem. It might be worth partitioning the blob table based on the \n> size of what you're storing.\nOK, I went to documentation and read about partitioning :-) I knew about \ninheritance, but I was totally unaware of partitioning. Today it's a \ngood day, because I've learned something new.\nYou're saying that it would be better creating, for example, a table for \nblobs < 1 MB, another for blobs between 1 and 5 MB and another for blobs \n > 5 MB? And what about the master table? Should it be one of these three?\nBlobs data and size are unpredictable (from 2k RTF to 20 MB JPG),\n>\n>> 2. When I load patient list, their photo must be loaded as well, because\n>> when I click on the table row, a small preview is shown (including a\n>> small thumbnail of the patient's photo). 
Obviously I can't load all\n>> thumbs while loading the whole patient list (the list can be up to\n>> 4-5000 records and photo size is about 4-500kBytes, so it would be an\n>> enormous piece of data to be downloaded.\n>\n> I would think a thumbnail would be 30-40k or less, not 500k. \nYou have a point. We adviced of that the users, but they don't care, or \nsimply don't know what they are doing. We need to change the application \nto accept max 50k files.\n> It sounds like part of the problem is you should keep the thumbnails \n> separate from the high-res file. But really you should probably do \n> that for everything... I suspect there's parts of the UI when you want \n> to display a fairly low-res version of something like an xray, only \n> pulling the raw image if someone actually needs it.\nThat's what we are doing. thumbnails are only patient portraits, while \nno other blob (clinical scans) is read until someone asks for it\n\nThanks\nMoreno.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 24 Mar 2016 11:56:52 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SPAM] Re: Architectural question" } ]
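Jim's size-based partitioning suggestion above maps onto the inheritance-style partitioning described in the 9.x documentation (the feature Moreno says he just read about). The sketch below shows roughly what that could look like; it is an illustration only -- the schema (blobs, patient_id, byte_size, content) is invented, and the 1 MB / 5 MB boundaries are just the ones floated in the thread, not anything from Moreno's real database.

-- Parent table stays empty; one child per size class, routed by a trigger.
CREATE TABLE blobs (
    blob_id     bigserial,
    patient_id  integer NOT NULL,
    byte_size   integer NOT NULL,   -- length in bytes, supplied by the application
    content     bytea   NOT NULL
);

CREATE TABLE blobs_small  (CHECK (byte_size <  1048576)) INHERITS (blobs);
CREATE TABLE blobs_medium (CHECK (byte_size >= 1048576 AND byte_size < 5242880)) INHERITS (blobs);
CREATE TABLE blobs_large  (CHECK (byte_size >= 5242880)) INHERITS (blobs);

-- Indexes and primary keys are not inherited, so create them on each child.
CREATE INDEX ON blobs_small  (patient_id);
CREATE INDEX ON blobs_medium (patient_id);
CREATE INDEX ON blobs_large  (patient_id);

CREATE OR REPLACE FUNCTION blobs_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.byte_size < 1048576 THEN
        INSERT INTO blobs_small  VALUES (NEW.*);
    ELSIF NEW.byte_size < 5242880 THEN
        INSERT INTO blobs_medium VALUES (NEW.*);
    ELSE
        INSERT INTO blobs_large  VALUES (NEW.*);
    END IF;
    RETURN NULL;   -- row already stored in a child; the parent stays empty
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER blobs_insert_route
    BEFORE INSERT ON blobs
    FOR EACH ROW EXECUTE PROCEDURE blobs_insert_router();

With constraint_exclusion = partition (the default), queries that filter on byte_size only touch the matching child, so fetching the small items never has to crawl past the multi-megabyte scans, while queries on patient_id alone still use the per-child indexes.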
[ { "msg_contents": "Hello\n\nI have a bad query on PostgreSQL 9.0.23 - 64bit - Windows 2012 R2 - 48GB Ram\n\nexplain analyze select d.data_id, d.table_name, d.event_type, d.row_data,\nd.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,\nd.transaction_id, d.source_node_id, d.external_data, '' from sym_data d\ninner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id\nand g.end_id where d.channel_id='sale_transaction' order by d.data_id asc;\n\nHere is result\n\nNested Loop (cost=319.42..4879348246.58 rows=32820035265 width=1525)\n(actual time=64656.747..5594654.189 rows=3617090 loops=1)\n -> Index Scan using sym_data_pkey on sym_data d\n(cost=0.00..3671742.82 rows=3867095 width=1525) (actual\ntime=9.775..12465.153 rows=3866359 loops=1)\n Filter: ((channel_id)::text = 'sale_transaction'::text)\n -> Bitmap Heap Scan on sym_data_gap g (cost=319.42..1133.51\nrows=8487 width=8) (actual time=1.438..1.439 rows=1 loops=3866359)\n Recheck Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n Filter: (g.status = 'GP'::bpchar)\n -> Bitmap Index Scan on sym_data_gap_pkey (cost=0.00..317.30\nrows=8487 width=0) (actual time=1.436..1.436 rows=1 loops=3866359)\n Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <=\ng.end_id))\n\nhttp://explain.depesz.com/s/c3DT\n\n\nI have run vaccum full. Here is my PostgreSQL config\n\nshared_buffers = 2GB\nwork_mem = 64MB\nmaintenance_work_mem = 1GB\nwal_buffers = 256\neffective_cache_size = 4GB\ncheckpoint_segments = 256\nwal_level = hot_standby\nmax_wal_senders = 5\nwal_keep_segments = 256\nrandom_page_cost = 3.5\nautovacuum_vacuum_threshold = 1000\nautovacuum_analyze_threshold = 250\nmax_locks_per_transaction = 2000\n\nWhen I check taskmanager, I found postgres process is user 4-5MB\n\nWhat happened with my PostgreSQL. Please help me\n\nThank you in advance.\n\nTuan Hoang Anh\n\nHelloI have a bad query on PostgreSQL 9.0.23 - 64bit - Windows 2012 R2 - 48GB Ramexplain analyze select d.data_id, d.table_name, d.event_type, d.row_data, d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id, d.transaction_id, d.source_node_id, d.external_data, '' from sym_data d inner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id and g.end_id where d.channel_id='sale_transaction' order by d.data_id asc;Here is resultNested Loop (cost=319.42..4879348246.58 rows=32820035265 width=1525) (actual time=64656.747..5594654.189 rows=3617090 loops=1)\n -> Index Scan using sym_data_pkey on sym_data d (cost=0.00..3671742.82 rows=3867095 width=1525) (actual time=9.775..12465.153 rows=3866359 loops=1)\n Filter: ((channel_id)::text = 'sale_transaction'::text)\n -> Bitmap Heap Scan on sym_data_gap g (cost=319.42..1133.51 rows=8487 width=8) (actual time=1.438..1.439 rows=1 loops=3866359)\n Recheck Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n Filter: (g.status = 'GP'::bpchar)\n -> Bitmap Index Scan on sym_data_gap_pkey (cost=0.00..317.30 rows=8487 width=0) (actual time=1.436..1.436 rows=1 loops=3866359)\n Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))http://explain.depesz.com/s/c3DTI have run vaccum full. 
Here is my PostgreSQL configshared_buffers = 2GBwork_mem = 64MBmaintenance_work_mem = 1GBwal_buffers = 256effective_cache_size = 4GBcheckpoint_segments = 256wal_level = hot_standbymax_wal_senders = 5wal_keep_segments = 256random_page_cost = 3.5autovacuum_vacuum_threshold = 1000autovacuum_analyze_threshold = 250max_locks_per_transaction = 2000When I check taskmanager, I found postgres process is user 4-5MBWhat happened with my PostgreSQL. Please help meThank you in advance.Tuan Hoang Anh", "msg_date": "Sat, 20 Feb 2016 23:46:38 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Why Postgres use a little memory on Windows." }, { "msg_contents": "On 02/20/2016 08:46 AM, tuanhoanganh wrote:\n> Hello\n>\n> I have a bad query on PostgreSQL 9.0.23 - 64bit - Windows 2012 R2 - 48GB Ram\n>\n> explain analyze select d.data_id, d.table_name, d.event_type,\n> d.row_data, d.pk_data, d.old_data, d.create_time, d.trigger_hist_id,\n> d.channel_id, d.transaction_id, d.source_node_id, d.external_data, ''\n> from sym_data d inner join sym_data_gap g on g.status='GP' and d.data_id\n> between g.start_id and g.end_id where d.channel_id='sale_transaction'\n> order by d.data_id asc;\n\nTook liberty of reformatting the above here:\nhttp://sqlformat.darold.net/\n\nEXPLAIN ANALYZE\nSELECT\n d.data_id,\n d.table_name,\n d.event_type,\n d.row_data,\n d.pk_data,\n d.old_data,\n d.create_time,\n d.trigger_hist_id,\n d.channel_id,\n d.transaction_id,\n d.source_node_id,\n d.external_data,\n ''\nFROM\n sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'\n AND d.data_id BETWEEN g.start_id\n AND g.end_id\nWHERE\n d.channel_id = 'sale_transaction'\nORDER BY\n d.data_id ASC;\n\nThe thing that stands out to me is that I do not see that sym_data and \nsym_data_gp are actually joined on anything.\n\nAlso is it possible to see the schema definitions for the two tables?\n\n\n\n>\n> Here is result\n>\n> Nested Loop (cost=319.42..4879348246.58 rows=32820035265 width=1525) (actual time=64656.747..5594654.189 rows=3617090 loops=1)\n> -> Index Scan using sym_data_pkey on sym_data d (cost=0.00..3671742.82 rows=3867095 width=1525) (actual time=9.775..12465.153 rows=3866359 loops=1)\n> Filter: ((channel_id)::text = 'sale_transaction'::text)\n> -> Bitmap Heap Scan on sym_data_gap g (cost=319.42..1133.51 rows=8487 width=8) (actual time=1.438..1.439 rows=1 loops=3866359)\n> Recheck Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n> Filter: (g.status = 'GP'::bpchar)\n> -> Bitmap Index Scan on sym_data_gap_pkey (cost=0.00..317.30 rows=8487 width=0) (actual time=1.436..1.436 rows=1 loops=3866359)\n> Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))\n>\n> http://explain.depesz.com/s/c3DT\n>\n>\n> I have run vaccum full. Here is my PostgreSQL config\n>\n> shared_buffers = 2GB\n> work_mem = 64MB\n> maintenance_work_mem = 1GB\n> wal_buffers = 256\n> effective_cache_size = 4GB\n> checkpoint_segments = 256\n> wal_level = hot_standby\n> max_wal_senders = 5\n> wal_keep_segments = 256\n> random_page_cost = 3.5\n> autovacuum_vacuum_threshold = 1000\n> autovacuum_analyze_threshold = 250\n> max_locks_per_transaction = 2000\n>\n> When I check taskmanager, I found postgres process is user 4-5MB\n>\n> What happened with my PostgreSQL. 
Please help me\n>\n> Thank you in advance.\n>\n> Tuan Hoang Anh\n>\n>\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 20 Feb 2016 10:13:18 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Why Postgres use a little memory on Windows." }, { "msg_contents": "Adrian Klaver <[email protected]> writes:\n> Took liberty of reformatting the above here:\n> ...\n> FROM\n> sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'\n> AND d.data_id BETWEEN g.start_id\n> AND g.end_id\n> WHERE\n> d.channel_id = 'sale_transaction'\n> ORDER BY\n> d.data_id ASC;\n\n> The thing that stands out to me is that I do not see that sym_data and \n> sym_data_gp are actually joined on anything.\n\nThe \"d.data_id BETWEEN g.start_id AND g.end_id\" part is a join condition\n... but not one that can be handled by either hash or merge join, because\nthose require simple equality join conditions. So the nestloop plan shown\nhere is really about as good as you're going to get without redesigning\nthe query and/or the data representation.\n\nIt looks like the bitmap heap scan generally returns exactly one row for\neach outer row, which makes me wonder if the BETWEEN couldn't be replaced\nwith some sort of equality. But that might take some rethinking of the\ndata.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 20 Feb 2016 13:37:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Why Postgres use a little memory on Windows." }, { "msg_contents": "On Sat, Feb 20, 2016 at 7:13 PM, Adrian Klaver\n<[email protected]> wrote:\n.....\n> FROM\n> sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'\n> AND d.data_id BETWEEN g.start_id\n> AND g.end_id\n.....\n> The thing that stands out to me is that I do not see that sym_data and\n> sym_data_gp are actually joined on anything.\n\nYes they are, although the formatting hid it somehow.\n\nIt is a classic, data_gap defines intervals via start+end id over\ndata, he wants to join every data with the corresponding gap. It is a\nhard optimization problem without knowing more of the data\ndistributions, maybe the interval types and ginindexes can help him.\nWhen faced with this kind of structure, depending on the data\ndistribution, I've solved it via two paralell queries ( gap sorted by\nstart plus end, data sorted by id, sweep them in paralell joining by\ncode, typical tape-update problem, works like a charm for\nnon-overlapping ranges and even for some overlapping ones with a\ncouple of queues ) . 
And he seems to want all of the data ( sometime\nthis goes faster if you can add a couple of range conditions for\ndata.id / gap.start/end_id.\n\n> Also is it possible to see the schema definitions for the two tables?\n\nMy bet is on somethink like data.id ~serial primary key,\ngap.start/end_id foreign key to that.\n\nFrancisco Olarte.\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Sat, 20 Feb 2016 19:39:23 +0100", "msg_from": "Francisco Olarte <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Postgres use a little memory on Windows." }, { "msg_contents": "On Sat, Feb 20, 2016 at 7:37 PM, Tom Lane <[email protected]> wrote:\n> It looks like the bitmap heap scan generally returns exactly one row for\n> each outer row, which makes me wonder if the BETWEEN couldn't be replaced\n> with some sort of equality.\n\nMm, I'm not good reading explains, but that seems to confirm my\nsuspicion that gaps partition the id range in non overlapping ranges.\n\n> But that might take some rethinking of the data.\n\nIf id is a series, gap defines a range, he can do something with an\nauxiliary table, like\n\nselect start as a, 0 as b from gaps where status = 'GP'\nunion all\nselect id as a,1 as b from data\nunion all end-1 as a, 2 as b from gaps where status='gp' -- to end-1\nto make intervals half open.\norder by a,b\n\nwhich would give all the ids in a with b=1 surrounded by (0,2) when\nvalid and by (2,0) when invalid.\n\nand then, with a creative window clause or a small function, filter\nthat and join with data.id. I suppose adding a third c column, null on\nb=1 and =b on b=0/2 and selecting the previous non-null in the\nsequence could do it, but it's somehow above my window-fu, I'm more of\na code gouy and would do it with two nested loops on a function.\n\nFrancisco Olarte.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 20 Feb 2016 19:58:14 +0100", "msg_from": "Francisco Olarte <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Why Postgres use a little memory on Windows." }, { "msg_contents": "On 02/20/2016 10:39 AM, Francisco Olarte wrote:\n> On Sat, Feb 20, 2016 at 7:13 PM, Adrian Klaver\n> <[email protected]> wrote:\n> .....\n>> FROM\n>> sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'\n>> AND d.data_id BETWEEN g.start_id\n>> AND g.end_id\n> .....\n>> The thing that stands out to me is that I do not see that sym_data and\n>> sym_data_gp are actually joined on anything.\n>\n> Yes they are, although the formatting hid it somehow.\n>\n> It is a classic, data_gap defines intervals via start+end id over\n> data, he wants to join every data with the corresponding gap. It is a\n> hard optimization problem without knowing more of the data\n> distributions, maybe the interval types and ginindexes can help him.\n> When faced with this kind of structure, depending on the data\n> distribution, I've solved it via two paralell queries ( gap sorted by\n> start plus end, data sorted by id, sweep them in paralell joining by\n> code, typical tape-update problem, works like a charm for\n> non-overlapping ranges and even for some overlapping ones with a\n> couple of queues ) . 
And he seems to want all of the data ( sometime\n> this goes faster if you can add a couple of range conditions for\n> data.id / gap.start/end_id.\n\nThanks to you and Tom for enlightening me. I am going to have to spend \nsome time puzzling this out to convert what you have shown into \nsomething that I can wrap my head around.\n\n>\n>> Also is it possible to see the schema definitions for the two tables?\n>\n> My bet is on somethink like data.id ~serial primary key,\n> gap.start/end_id foreign key to that.\n>\n> Francisco Olarte.\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 20 Feb 2016 13:49:31 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Why Postgres use a little memory on Windows." }, { "msg_contents": "On Sat, Feb 20, 2016 at 8:46 AM, tuanhoanganh <[email protected]> wrote:\n> Hello\n>\n> I have a bad query on PostgreSQL 9.0.23 - 64bit - Windows 2012 R2 - 48GB Ram\n\n9.0 is no longer supported. You should work toward upgrading to a\nnewer version. It might not solve this problem, but it would give you\nbetter tools for diagnosing the problem. Which is a pretty good step\ntoward solving it.\n\n> When I check taskmanager, I found postgres process is user 4-5MB\n\nOther people have explained the details of how the query is being run\nand why it is being run that way. But I would like to take a step\nback from that, and tell you that the reason that PostgreSQL is not\nusing more memory, is that it doesn't think that using more memory\nwould help.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Sat, 20 Feb 2016 18:21:38 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Why Postgres use a little memory on Windows." }, { "msg_contents": "Thanks for all help of everyone.\n\nI have tried to change effective_cache_size = 24GB and it run well.\n\nTuan Hoang Anh\n\nThanks for all help of everyone.I have tried to change effective_cache_size = 24GB and it run well.Tuan Hoang Anh", "msg_date": "Sun, 21 Feb 2016 23:45:01 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why Postgres use a little memory on Windows." } ]
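As a follow-up to Francisco's "interval types" hint: on 9.2 or later (range types do not exist on the 9.0 install discussed above, which is one more reason to take Jeff's upgrade advice), the BETWEEN join can be backed by a GiST index on a range expression. This is only a sketch under stated assumptions -- it assumes data_id/start_id/end_id are bigint (use int4range instead if they are plain integers) and leaves the table layout unchanged.

CREATE INDEX sym_data_gap_range_idx
    ON sym_data_gap
 USING gist (int8range(start_id, end_id, '[]'))
 WHERE status = 'GP';

SELECT d.data_id, d.table_name, d.event_type, d.row_data, d.pk_data,
       d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,
       d.transaction_id, d.source_node_id, d.external_data, ''
  FROM sym_data d
  JOIN sym_data_gap g
    ON g.status = 'GP'
   AND int8range(g.start_id, g.end_id, '[]') @> d.data_id
 WHERE d.channel_id = 'sale_transaction'
 ORDER BY d.data_id ASC;

The join is still a nested loop, but the idea is that each inner probe becomes a targeted range-containment lookup instead of the bitmap probe on sym_data_gap_pkey that costs about 1.4 ms for each of the 3.8 million outer rows in the posted plan.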
[ { "msg_contents": "I'm about to install a new production server and wanted some advice regarding\nfilesystems and disk partitioning.\n\nThe server is:\n- Dell PowerEdge R430\n- 1 x Intel Xeon E5-2620 2.4GHz\n- 32 GB RAM\n- 4 x 600GB 10k SAS \n- PERC H730P Raid Controller with 2GB cache\n\nThe drives will be set up in one RAID-10 volume and I'll be installing\nUbuntu 14.04 LTS as the OS. The server will be dedicated to running\nPostgreSQL.\n\nI'm trying to decide:\n\n1) Which filesystem to use (most people seem to suggest xfs).\n2) Whether to use LVM (I'm leaning against it because it seems like it adds\nadditional complexity).\n3) How to partition the volume. Should I just create one partition on / and\ncreate a 16-32GB swap partition? Any reason to get fancy with additional\npartitions given it's all on one volume?\n\nI'd like to keep things simple to start, but not shoot myself in the foot at\nthe same time.\n\nThanks!\n\nDave\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 23 Feb 2016 21:28:50 -0700 (MST)", "msg_from": "dstibrany <[email protected]>", "msg_from_op": true, "msg_subject": "Filesystem and Disk Partitioning for New Server Setup" }, { "msg_contents": "1) I'd go with xfs. zfs might be a good alternative, but the last time I\ntried it, it was really unstable (on Linux). I may have gotten a lot\nbetter, but xfs is a safe bet and well understood.\n\n2) An LVM is just an extra couple of commands. These days that is not a\nlot of complexity given what you gain. The main advantage is that you can\nextend or grow the file system on the fly. Over the life of the database\nit is quite possible you'll find yourself pressed for disk space - either\nto drop in more csv files to load with the 'copy' command, to store more\nlogs (because you need to turn up logging verbosity, etc...), you need more\ntransaction logs live on the system, you need to take a quick database\ndump, or simply you collect more data than you expected. It is not always\nconvenient to change the log location, or move tablespaces around to make\nroom. In the cloud you might provision more volumes and attach them to the\nserver. On a SAN you might attach more disk, and with a stand alone\nserver, you might stick more disks on the server. In all those scenarios,\nbeing able to simply merge them into your existing volume can be really\nhandy.\n\n3) The main advantage of partitioning a single volume (these days) is\nsimply that if one partition fills up, it doesn't impact the rest of the\nsystem. Putting things that are likely to fill up the disk on their own\npartition is generally a good practice. User home directories is one\nexample. System logs. That sort of thing. Isolating them on their own\npartition will improve the long term reliability of your database. 
The\nmain disadvantage is those things get boxed into a much smaller amount of\nspace than they would normally have if they could share a partition with\nthe whole system.\n\n\nOn Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]> wrote:\n\n> I'm about to install a new production server and wanted some advice\n> regarding\n> filesystems and disk partitioning.\n>\n> The server is:\n> - Dell PowerEdge R430\n> - 1 x Intel Xeon E5-2620 2.4GHz\n> - 32 GB RAM\n> - 4 x 600GB 10k SAS\n> - PERC H730P Raid Controller with 2GB cache\n>\n> The drives will be set up in one RAID-10 volume and I'll be installing\n> Ubuntu 14.04 LTS as the OS. The server will be dedicated to running\n> PostgreSQL.\n>\n> I'm trying to decide:\n>\n> 1) Which filesystem to use (most people seem to suggest xfs).\n> 2) Whether to use LVM (I'm leaning against it because it seems like it adds\n> additional complexity).\n> 3) How to partition the volume. Should I just create one partition on / and\n> create a 16-32GB swap partition? Any reason to get fancy with additional\n> partitions given it's all on one volume?\n>\n> I'd like to keep things simple to start, but not shoot myself in the foot\n> at\n> the same time.\n>\n> Thanks!\n>\n> Dave\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n1) I'd go with xfs.  zfs might be a good alternative, but the last time I tried it, it was really unstable (on Linux).  I may have gotten a lot better, but xfs is a safe bet and well understood.2) An LVM is just an extra couple of commands.  These days that is not a lot of complexity given what you gain. The main advantage is that you can extend or grow the file system on the fly.  Over the life of the database it is quite possible you'll find yourself pressed for disk space - either to drop in more csv files to load with the 'copy' command, to store more logs (because you need to turn up logging verbosity, etc...), you need more transaction logs live on the system, you need to take a quick database dump, or simply you collect more data than you expected.  It is not always convenient to change the log location, or move tablespaces around to make room.  In the cloud you might provision more volumes and attach them to the server.  On a SAN you might attach more disk, and with a stand alone server, you might stick more disks on the server.  In all those scenarios, being able to simply merge them into your existing volume can be really handy.3) The main advantage of partitioning a single volume (these days) is simply that if one partition fills up, it doesn't impact the rest of the system.  Putting things that are likely to fill up the disk on their own partition is generally a good practice.   User home directories is one example.  System logs.  That sort of thing.  Isolating them on their own partition will improve the long term reliability of your database.   
The main disadvantage is those things get boxed into a much smaller amount of space than they would normally have if they could share a partition with the whole system.On Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]> wrote:I'm about to install a new production server and wanted some advice regarding\nfilesystems and disk partitioning.\n\nThe server is:\n- Dell PowerEdge R430\n- 1 x Intel Xeon E5-2620 2.4GHz\n- 32 GB RAM\n- 4 x 600GB 10k SAS\n- PERC H730P Raid Controller with 2GB cache\n\nThe drives will be set up in one RAID-10 volume and I'll be installing\nUbuntu 14.04 LTS as the OS. The server will be dedicated to running\nPostgreSQL.\n\nI'm trying to decide:\n\n1) Which filesystem to use (most people seem to suggest xfs).\n2) Whether to use LVM (I'm leaning against it because it seems like it adds\nadditional complexity).\n3) How to partition the volume. Should I just create one partition on / and\ncreate a 16-32GB swap partition? Any reason to get fancy with additional\npartitions given it's all on one volume?\n\nI'd like to keep things simple to start, but not shoot myself in the foot at\nthe same time.\n\nThanks!\n\nDave\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 24 Feb 2016 06:08:47 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Fwd: Filesystem and Disk Partitioning for New Server Setup" }, { "msg_contents": "Thanks for the advice, Rick.\n\nI have an 8 disk chassis, so possible extension paths down the line are\nadding raid1 for WALs, adding another RAID10, or creating a 8 disk RAID10.\nWould LVM make this type of addition easier?\n\n\nOn Wed, Feb 24, 2016 at 6:08 AM, Rick Otten <[email protected]>\nwrote:\n\n>\n> 1) I'd go with xfs. zfs might be a good alternative, but the last time I\n> tried it, it was really unstable (on Linux). I may have gotten a lot\n> better, but xfs is a safe bet and well understood.\n>\n> 2) An LVM is just an extra couple of commands. These days that is not a\n> lot of complexity given what you gain. The main advantage is that you can\n> extend or grow the file system on the fly. Over the life of the database\n> it is quite possible you'll find yourself pressed for disk space - either\n> to drop in more csv files to load with the 'copy' command, to store more\n> logs (because you need to turn up logging verbosity, etc...), you need more\n> transaction logs live on the system, you need to take a quick database\n> dump, or simply you collect more data than you expected. It is not always\n> convenient to change the log location, or move tablespaces around to make\n> room. In the cloud you might provision more volumes and attach them to the\n> server. On a SAN you might attach more disk, and with a stand alone\n> server, you might stick more disks on the server. In all those scenarios,\n> being able to simply merge them into your existing volume can be really\n> handy.\n>\n> 3) The main advantage of partitioning a single volume (these days) is\n> simply that if one partition fills up, it doesn't impact the rest of the\n> system. Putting things that are likely to fill up the disk on their own\n> partition is generally a good practice. 
User home directories is one\n> example. System logs. That sort of thing. Isolating them on their own\n> partition will improve the long term reliability of your database. The\n> main disadvantage is those things get boxed into a much smaller amount of\n> space than they would normally have if they could share a partition with\n> the whole system.\n>\n>\n> On Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]> wrote:\n>\n>> I'm about to install a new production server and wanted some advice\n>> regarding\n>> filesystems and disk partitioning.\n>>\n>> The server is:\n>> - Dell PowerEdge R430\n>> - 1 x Intel Xeon E5-2620 2.4GHz\n>> - 32 GB RAM\n>> - 4 x 600GB 10k SAS\n>> - PERC H730P Raid Controller with 2GB cache\n>>\n>> The drives will be set up in one RAID-10 volume and I'll be installing\n>> Ubuntu 14.04 LTS as the OS. The server will be dedicated to running\n>> PostgreSQL.\n>>\n>> I'm trying to decide:\n>>\n>> 1) Which filesystem to use (most people seem to suggest xfs).\n>> 2) Whether to use LVM (I'm leaning against it because it seems like it\n>> adds\n>> additional complexity).\n>> 3) How to partition the volume. Should I just create one partition on /\n>> and\n>> create a 16-32GB swap partition? Any reason to get fancy with additional\n>> partitions given it's all on one volume?\n>>\n>> I'd like to keep things simple to start, but not shoot myself in the foot\n>> at\n>> the same time.\n>>\n>> Thanks!\n>>\n>> Dave\n>>\n>>\n>>\n>> --\n>> View this message in context:\n>> http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\n>> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n\n\n-- \n*THIS IS A TEST*\n\nThanks for the advice, Rick.I have an 8 disk chassis, so possible extension paths down the line are adding raid1 for WALs, adding another RAID10, or creating a 8 disk RAID10. Would LVM make this type of addition easier?On Wed, Feb 24, 2016 at 6:08 AM, Rick Otten <[email protected]> wrote:1) I'd go with xfs.  zfs might be a good alternative, but the last time I tried it, it was really unstable (on Linux).  I may have gotten a lot better, but xfs is a safe bet and well understood.2) An LVM is just an extra couple of commands.  These days that is not a lot of complexity given what you gain. The main advantage is that you can extend or grow the file system on the fly.  Over the life of the database it is quite possible you'll find yourself pressed for disk space - either to drop in more csv files to load with the 'copy' command, to store more logs (because you need to turn up logging verbosity, etc...), you need more transaction logs live on the system, you need to take a quick database dump, or simply you collect more data than you expected.  It is not always convenient to change the log location, or move tablespaces around to make room.  In the cloud you might provision more volumes and attach them to the server.  On a SAN you might attach more disk, and with a stand alone server, you might stick more disks on the server.  In all those scenarios, being able to simply merge them into your existing volume can be really handy.3) The main advantage of partitioning a single volume (these days) is simply that if one partition fills up, it doesn't impact the rest of the system.  
Putting things that are likely to fill up the disk on their own partition is generally a good practice.   User home directories is one example.  System logs.  That sort of thing.  Isolating them on their own partition will improve the long term reliability of your database.   The main disadvantage is those things get boxed into a much smaller amount of space than they would normally have if they could share a partition with the whole system.On Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]> wrote:I'm about to install a new production server and wanted some advice regarding\nfilesystems and disk partitioning.\n\nThe server is:\n- Dell PowerEdge R430\n- 1 x Intel Xeon E5-2620 2.4GHz\n- 32 GB RAM\n- 4 x 600GB 10k SAS\n- PERC H730P Raid Controller with 2GB cache\n\nThe drives will be set up in one RAID-10 volume and I'll be installing\nUbuntu 14.04 LTS as the OS. The server will be dedicated to running\nPostgreSQL.\n\nI'm trying to decide:\n\n1) Which filesystem to use (most people seem to suggest xfs).\n2) Whether to use LVM (I'm leaning against it because it seems like it adds\nadditional complexity).\n3) How to partition the volume. Should I just create one partition on / and\ncreate a 16-32GB swap partition? Any reason to get fancy with additional\npartitions given it's all on one volume?\n\nI'd like to keep things simple to start, but not shoot myself in the foot at\nthe same time.\n\nThanks!\n\nDave\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- THIS IS A TEST", "msg_date": "Wed, 24 Feb 2016 09:25:24 -0500", "msg_from": "Dave Stibrany <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem and Disk Partitioning for New Server Setup" }, { "msg_contents": "An LVM gives you more options.\n\nWithout an LVM you would add a disk to the system, create a tablespace, and\nthen move some of your tables over to the new disk. Or, you'd take a full\nbackup, rebuild your file system, and then restore from backup onto the\nnewer, larger disk configuration. Or you'd make softlinks to pg_log or\npg_xlog or something to stick the extra disk in your system somehow.\n\nYou can do that with an LVM too. However, with an LVM you can add the disk\nto the system, extend the file system, and just keep running. Live. No\nneed to figure out which tables or files should go where.\n\nSometimes it is really nice to have that option.\n\n\n\n\nOn Wed, Feb 24, 2016 at 9:25 AM, Dave Stibrany <[email protected]> wrote:\n\n> Thanks for the advice, Rick.\n>\n> I have an 8 disk chassis, so possible extension paths down the line are\n> adding raid1 for WALs, adding another RAID10, or creating a 8 disk RAID10.\n> Would LVM make this type of addition easier?\n>\n>\n> On Wed, Feb 24, 2016 at 6:08 AM, Rick Otten <[email protected]>\n> wrote:\n>\n>>\n>> 1) I'd go with xfs. zfs might be a good alternative, but the last time I\n>> tried it, it was really unstable (on Linux). I may have gotten a lot\n>> better, but xfs is a safe bet and well understood.\n>>\n>> 2) An LVM is just an extra couple of commands. These days that is not a\n>> lot of complexity given what you gain. The main advantage is that you can\n>> extend or grow the file system on the fly. 
Over the life of the database\n>> it is quite possible you'll find yourself pressed for disk space - either\n>> to drop in more csv files to load with the 'copy' command, to store more\n>> logs (because you need to turn up logging verbosity, etc...), you need more\n>> transaction logs live on the system, you need to take a quick database\n>> dump, or simply you collect more data than you expected. It is not always\n>> convenient to change the log location, or move tablespaces around to make\n>> room. In the cloud you might provision more volumes and attach them to the\n>> server. On a SAN you might attach more disk, and with a stand alone\n>> server, you might stick more disks on the server. In all those scenarios,\n>> being able to simply merge them into your existing volume can be really\n>> handy.\n>>\n>> 3) The main advantage of partitioning a single volume (these days) is\n>> simply that if one partition fills up, it doesn't impact the rest of the\n>> system. Putting things that are likely to fill up the disk on their own\n>> partition is generally a good practice. User home directories is one\n>> example. System logs. That sort of thing. Isolating them on their own\n>> partition will improve the long term reliability of your database. The\n>> main disadvantage is those things get boxed into a much smaller amount of\n>> space than they would normally have if they could share a partition with\n>> the whole system.\n>>\n>>\n>> On Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]> wrote:\n>>\n>>> I'm about to install a new production server and wanted some advice\n>>> regarding\n>>> filesystems and disk partitioning.\n>>>\n>>> The server is:\n>>> - Dell PowerEdge R430\n>>> - 1 x Intel Xeon E5-2620 2.4GHz\n>>> - 32 GB RAM\n>>> - 4 x 600GB 10k SAS\n>>> - PERC H730P Raid Controller with 2GB cache\n>>>\n>>> The drives will be set up in one RAID-10 volume and I'll be installing\n>>> Ubuntu 14.04 LTS as the OS. The server will be dedicated to running\n>>> PostgreSQL.\n>>>\n>>> I'm trying to decide:\n>>>\n>>> 1) Which filesystem to use (most people seem to suggest xfs).\n>>> 2) Whether to use LVM (I'm leaning against it because it seems like it\n>>> adds\n>>> additional complexity).\n>>> 3) How to partition the volume. Should I just create one partition on /\n>>> and\n>>> create a 16-32GB swap partition? Any reason to get fancy with additional\n>>> partitions given it's all on one volume?\n>>>\n>>> I'd like to keep things simple to start, but not shoot myself in the\n>>> foot at\n>>> the same time.\n>>>\n>>> Thanks!\n>>>\n>>> Dave\n>>>\n>>>\n>>>\n>>> --\n>>> View this message in context:\n>>> http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\n>>> Sent from the PostgreSQL - performance mailing list archive at\n>>> Nabble.com.\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (\n>>> [email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>\n>>\n>\n>\n> --\n> *THIS IS A TEST*\n>\n\nAn LVM gives you more options.Without an LVM you would add a disk to the system, create a tablespace, and then move some of your tables over to the new disk.  Or, you'd take a full backup, rebuild your file system, and then restore from backup onto the newer, larger disk configuration.  Or you'd make softlinks to pg_log or pg_xlog or something to stick the extra disk in your system somehow.You can do that with an LVM too.  
However, with an LVM you can add the disk to the system, extend the file system, and just keep running.  Live.  No need to figure out which tables or files should go where.Sometimes it is really nice to have that option.On Wed, Feb 24, 2016 at 9:25 AM, Dave Stibrany <[email protected]> wrote:Thanks for the advice, Rick.I have an 8 disk chassis, so possible extension paths down the line are adding raid1 for WALs, adding another RAID10, or creating a 8 disk RAID10. Would LVM make this type of addition easier?On Wed, Feb 24, 2016 at 6:08 AM, Rick Otten <[email protected]> wrote:1) I'd go with xfs.  zfs might be a good alternative, but the last time I tried it, it was really unstable (on Linux).  I may have gotten a lot better, but xfs is a safe bet and well understood.2) An LVM is just an extra couple of commands.  These days that is not a lot of complexity given what you gain. The main advantage is that you can extend or grow the file system on the fly.  Over the life of the database it is quite possible you'll find yourself pressed for disk space - either to drop in more csv files to load with the 'copy' command, to store more logs (because you need to turn up logging verbosity, etc...), you need more transaction logs live on the system, you need to take a quick database dump, or simply you collect more data than you expected.  It is not always convenient to change the log location, or move tablespaces around to make room.  In the cloud you might provision more volumes and attach them to the server.  On a SAN you might attach more disk, and with a stand alone server, you might stick more disks on the server.  In all those scenarios, being able to simply merge them into your existing volume can be really handy.3) The main advantage of partitioning a single volume (these days) is simply that if one partition fills up, it doesn't impact the rest of the system.  Putting things that are likely to fill up the disk on their own partition is generally a good practice.   User home directories is one example.  System logs.  That sort of thing.  Isolating them on their own partition will improve the long term reliability of your database.   The main disadvantage is those things get boxed into a much smaller amount of space than they would normally have if they could share a partition with the whole system.On Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]> wrote:I'm about to install a new production server and wanted some advice regarding\nfilesystems and disk partitioning.\n\nThe server is:\n- Dell PowerEdge R430\n- 1 x Intel Xeon E5-2620 2.4GHz\n- 32 GB RAM\n- 4 x 600GB 10k SAS\n- PERC H730P Raid Controller with 2GB cache\n\nThe drives will be set up in one RAID-10 volume and I'll be installing\nUbuntu 14.04 LTS as the OS. The server will be dedicated to running\nPostgreSQL.\n\nI'm trying to decide:\n\n1) Which filesystem to use (most people seem to suggest xfs).\n2) Whether to use LVM (I'm leaning against it because it seems like it adds\nadditional complexity).\n3) How to partition the volume. Should I just create one partition on / and\ncreate a 16-32GB swap partition? 
Any reason to get fancy with additional\npartitions given it's all on one volume?\n\nI'd like to keep things simple to start, but not shoot myself in the foot at\nthe same time.\n\nThanks!\n\nDave\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- THIS IS A TEST", "msg_date": "Wed, 24 Feb 2016 10:05:55 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem and Disk Partitioning for New Server Setup" }, { "msg_contents": "FYI, If your volume for pg data is the last partition, you can always add drives to the Dell PERC RAID group, extend the volume, then extend the partition and extend the filesystem.\r\n\r\nAll of this can also be done live.\r\n\r\nWes Vaske\r\n\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Rick Otten\r\nSent: Wednesday, February 24, 2016 9:06 AM\r\nTo: Dave Stibrany\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Filesystem and Disk Partitioning for New Server Setup\r\n\r\nAn LVM gives you more options.\r\n\r\nWithout an LVM you would add a disk to the system, create a tablespace, and then move some of your tables over to the new disk. Or, you'd take a full backup, rebuild your file system, and then restore from backup onto the newer, larger disk configuration. Or you'd make softlinks to pg_log or pg_xlog or something to stick the extra disk in your system somehow.\r\n\r\nYou can do that with an LVM too. However, with an LVM you can add the disk to the system, extend the file system, and just keep running. Live. No need to figure out which tables or files should go where.\r\n\r\nSometimes it is really nice to have that option.\r\n\r\n\r\n\r\n\r\nOn Wed, Feb 24, 2016 at 9:25 AM, Dave Stibrany <[email protected]<mailto:[email protected]>> wrote:\r\nThanks for the advice, Rick.\r\n\r\nI have an 8 disk chassis, so possible extension paths down the line are adding raid1 for WALs, adding another RAID10, or creating a 8 disk RAID10. Would LVM make this type of addition easier?\r\n\r\n\r\nOn Wed, Feb 24, 2016 at 6:08 AM, Rick Otten <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n1) I'd go with xfs. zfs might be a good alternative, but the last time I tried it, it was really unstable (on Linux). I may have gotten a lot better, but xfs is a safe bet and well understood.\r\n\r\n2) An LVM is just an extra couple of commands. These days that is not a lot of complexity given what you gain. The main advantage is that you can extend or grow the file system on the fly. Over the life of the database it is quite possible you'll find yourself pressed for disk space - either to drop in more csv files to load with the 'copy' command, to store more logs (because you need to turn up logging verbosity, etc...), you need more transaction logs live on the system, you need to take a quick database dump, or simply you collect more data than you expected. It is not always convenient to change the log location, or move tablespaces around to make room. In the cloud you might provision more volumes and attach them to the server. On a SAN you might attach more disk, and with a stand alone server, you might stick more disks on the server. 
In all those scenarios, being able to simply merge them into your existing volume can be really handy.\r\n\r\n3) The main advantage of partitioning a single volume (these days) is simply that if one partition fills up, it doesn't impact the rest of the system. Putting things that are likely to fill up the disk on their own partition is generally a good practice. User home directories is one example. System logs. That sort of thing. Isolating them on their own partition will improve the long term reliability of your database. The main disadvantage is those things get boxed into a much smaller amount of space than they would normally have if they could share a partition with the whole system.\r\n\r\n\r\nOn Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]<mailto:[email protected]>> wrote:\r\nI'm about to install a new production server and wanted some advice regarding\r\nfilesystems and disk partitioning.\r\n\r\nThe server is:\r\n- Dell PowerEdge R430\r\n- 1 x Intel Xeon E5-2620 2.4GHz\r\n- 32 GB RAM\r\n- 4 x 600GB 10k SAS\r\n- PERC H730P Raid Controller with 2GB cache\r\n\r\nThe drives will be set up in one RAID-10 volume and I'll be installing\r\nUbuntu 14.04 LTS as the OS. The server will be dedicated to running\r\nPostgreSQL.\r\n\r\nI'm trying to decide:\r\n\r\n1) Which filesystem to use (most people seem to suggest xfs).\r\n2) Whether to use LVM (I'm leaning against it because it seems like it adds\r\nadditional complexity).\r\n3) How to partition the volume. Should I just create one partition on / and\r\ncreate a 16-32GB swap partition? Any reason to get fancy with additional\r\npartitions given it's all on one volume?\r\n\r\nI'd like to keep things simple to start, but not shoot myself in the foot at\r\nthe same time.\r\n\r\nThanks!\r\n\r\nDave\r\n\r\n\r\n\r\n--\r\nView this message in context: http://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\r\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n\r\n\r\n\r\n\r\n--\r\nTHIS IS A TEST\r\n\r\n\n\n\n\n\n\n\n\n\nFYI, If your volume for pg data is the last partition, you can always add drives to the Dell PERC RAID group, extend the volume, then extend the partition and\r\n extend the filesystem.\n \nAll of this can also be done live.\n \nWes Vaske\r\n\n \n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Rick Otten\nSent: Wednesday, February 24, 2016 9:06 AM\nTo: Dave Stibrany\nCc: [email protected]\nSubject: Re: [PERFORM] Filesystem and Disk Partitioning for New Server Setup\n \n\nAn LVM gives you more options.\n\n \n\n\nWithout an LVM you would add a disk to the system, create a tablespace, and then move some of your tables over to the new disk.  Or, you'd take a full backup, rebuild your file system, and then restore from backup onto the newer, larger\r\n disk configuration.  Or you'd make softlinks to pg_log or pg_xlog or something to stick the extra disk in your system somehow.\n\n\n \n\n\nYou can do that with an LVM too.  However, with an LVM you can add the disk to the system, extend the file system, and just keep running.  Live.  
No need to figure out which tables or files should go where.\n\n\n \n\n\nSometimes it is really nice to have that option.\n\n\n \n\n\n \n\n\n \n\n\n\n \n\nOn Wed, Feb 24, 2016 at 9:25 AM, Dave Stibrany <[email protected]> wrote:\n\n\nThanks for the advice, Rick.\n\n \n\n\nI have an 8 disk chassis, so possible extension paths down the line are adding raid1 for WALs, adding another RAID10, or creating a 8 disk RAID10. Would LVM make this type of addition easier?\n\n\n \n\n\n\n\n\n \n\nOn Wed, Feb 24, 2016 at 6:08 AM, Rick Otten <[email protected]> wrote:\n\n\n\n\n\n \n\n1) I'd go with xfs.  zfs might be a good alternative, but the last time I tried it, it was really unstable (on Linux).  I may have gotten a lot better, but xfs is a safe bet and well understood.\n\n \n\n\n2) An LVM is just an extra couple of commands.  These days that is not a lot of complexity given what you gain. The main advantage is that you can extend or grow the file system on the fly.  Over the life of the database it is quite possible\r\n you'll find yourself pressed for disk space - either to drop in more csv files to load with the 'copy' command, to store more logs (because you need to turn up logging verbosity, etc...), you need more transaction logs live on the system, you need to take\r\n a quick database dump, or simply you collect more data than you expected.  It is not always convenient to change the log location, or move tablespaces around to make room.  In the cloud you might provision more volumes and attach them to the server.  On a\r\n SAN you might attach more disk, and with a stand alone server, you might stick more disks on the server.  In all those scenarios, being able to simply merge them into your existing volume can be really handy.\n\n\n \n\n\n3) The main advantage of partitioning a single volume (these days) is simply that if one partition fills up, it doesn't impact the rest of the system.  Putting things that are likely to fill up the disk on their own partition is generally\r\n a good practice.   User home directories is one example.  System logs.  That sort of thing.  Isolating them on their own partition will improve the long term reliability of your database.   The main disadvantage is those things get boxed into a much smaller\r\n amount of space than they would normally have if they could share a partition with the whole system.\n\n\n \n\n\n\n\n\n \n\nOn Tue, Feb 23, 2016 at 11:28 PM, dstibrany <[email protected]> wrote:\n\nI'm about to install a new production server and wanted some advice regarding\r\nfilesystems and disk partitioning.\n\r\nThe server is:\r\n- Dell PowerEdge R430\r\n- 1 x Intel Xeon E5-2620 2.4GHz\r\n- 32 GB RAM\r\n- 4 x 600GB 10k SAS\r\n- PERC H730P Raid Controller with 2GB cache\n\r\nThe drives will be set up in one RAID-10 volume and I'll be installing\r\nUbuntu 14.04 LTS as the OS. The server will be dedicated to running\r\nPostgreSQL.\n\r\nI'm trying to decide:\n\r\n1) Which filesystem to use (most people seem to suggest xfs).\r\n2) Whether to use LVM (I'm leaning against it because it seems like it adds\r\nadditional complexity).\r\n3) How to partition the volume. Should I just create one partition on / and\r\ncreate a 16-32GB swap partition? 
Any reason to get fancy with additional\r\npartitions given it's all on one volume?\n\r\nI'd like to keep things simple to start, but not shoot myself in the foot at\r\nthe same time.\n\r\nThanks!\n\r\nDave\n\n\n\r\n--\r\nView this message in context: \r\nhttp://postgresql.nabble.com/Filesystem-and-Disk-Partitioning-for-New-Server-Setup-tp5889074.html\r\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n \n\n\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\n-- \n\n\nTHIS IS A TEST", "msg_date": "Wed, 24 Feb 2016 15:44:59 +0000", "msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem and Disk Partitioning for New Server Setup" } ]
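For reference, the no-LVM alternative Rick mentions ("create a tablespace, and then move some of your tables over to the new disk") is only a few statements once the new array is mounted. The path and object names below are made up for illustration; the directory must already exist, be empty and be owned by the postgres OS user, and SET TABLESPACE physically rewrites the relation under an exclusive lock, so allow for that on a large table.

CREATE TABLESPACE bulk_ts LOCATION '/mnt/raid10_b/pg_tblspc';

-- Move an existing large table and its index onto the new disks:
ALTER TABLE big_history SET TABLESPACE bulk_ts;
ALTER INDEX big_history_pkey SET TABLESPACE bulk_ts;

-- Or place new objects there from the start:
CREATE TABLE archive_2016 (LIKE big_history INCLUDING ALL) TABLESPACE bulk_ts;

The LVM route Rick prefers avoids this shuffling entirely: extend the volume and the filesystem under the existing data directory and keep running.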
[ { "msg_contents": "At some point in the next year we're going to reconsider our hosting\nenvironment, currently consisting of several medium-sized servers (2x4\nCPUs, 48GB RAM, 12-disk RAID system with 8 in RAID 10 and 2 in RAID 1 for\nWAL). We use barman to keep a hot standby and an archive.\n\nThe last time we dug into this, we were initially excited, but our\nexcitement turned to disappointment when we calculated the real costs of\nhosted services, and the constraints on performance and customizability.\n\nDue to the nature of our business, we need a system where we can install\nplug-ins to Postgres. I expect that alone will limit our choices. In\naddition to our Postgres database, we run a fairly ordinary Apache web site.\n\nThere is constant chatter in this group about buying servers vs. the\nvarious hosted services. Does anyone have any sort of summary comparison of\nthe various solutions out there? Or is it just a matter of researching it\nmyself and maybe doing some benchmarking and price comparisons?\n\nThanks!\nCraig\n\nAt some point in the next year we're going to reconsider our hosting environment, currently consisting of several medium-sized servers (2x4 CPUs, 48GB RAM, 12-disk RAID system with 8 in RAID 10 and 2 in RAID 1 for WAL). We use barman to keep a hot standby and an archive.The last time we dug into this, we were initially excited, but our excitement turned to disappointment when we calculated the real costs of hosted services, and the constraints on performance and customizability.Due to the nature of our business, we need a system where we can install plug-ins to Postgres. I expect that alone will limit our choices. In addition to our Postgres database, we run a fairly ordinary Apache web site.There is constant chatter in this group about buying servers vs. the various hosted services. Does anyone have any sort of summary comparison of the various solutions out there? Or is it just a matter of researching it myself and maybe doing some benchmarking and price comparisons?Thanks!Craig", "msg_date": "Tue, 23 Feb 2016 21:06:53 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Cloud versus buying my own iron" }, { "msg_contents": "Am 24.02.2016 um 06:06 schrieb Craig James:\n> At some point in the next year we're going to reconsider our hosting\n> environment, currently consisting of several medium-sized servers (2x4\n> CPUs, 48GB RAM, 12-disk RAID system with 8 in RAID 10 and 2 in RAID 1\n> for WAL). We use barman to keep a hot standby and an archive.\n> \n> The last time we dug into this, we were initially excited, but our\n> excitement turned to disappointment when we calculated the real costs of\n> hosted services, and the constraints on performance and customizability.\n> \n> Due to the nature of our business, we need a system where we can install\n> plug-ins to Postgres. I expect that alone will limit our choices. In\n> addition to our Postgres database, we run a fairly ordinary Apache web site.\n> \n> There is constant chatter in this group about buying servers vs. the\n> various hosted services. Does anyone have any sort of summary comparison\n> of the various solutions out there? Or is it just a matter of\n> researching it myself and maybe doing some benchmarking and price\n> comparisons?\n\nFor starters, did you see Josh Berkus' presentation on the topic?\n https://www.youtube.com/watch?v=WV5P2DgxPoI\n\nI for myself would probably always go the \"own iron\" road, but alas!\nthat's just the way I feel about control. 
And I'm kind of a Linux\noldshot, so managing a (hosted root) server doesn't scare me off.\n\nOTOH, I do see the advantages of having things like monitoring, backup,\nHDD replacements etc. done for you. Which is essentially what you pay for.\n\nIn essence, there's obviously no silver bullet ;-)\n\nBest regards,\n-- \nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel: +49 911/991-4665\nMobil: +49 172/8853339\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 24 Feb 2016 10:01:30 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cloud versus buying my own iron" }, { "msg_contents": "Having gotten used to using cloud servers over the past few years, but been\na server hugger for more than 20 before that, I have to say the cloud\noffers a number of huge advantages that would make me seriously question\nwhether there are very many good reasons to go back to using local iron at\nall. (Other than maybe running databases on your laptop for development\nand testing purposes.)\n\nRackspace offers 'bare metal' servers if you want consistent performance.\nYou don't have to pay for a managed solution, there are a lot of tiers of\nservice. AWS also offers solutions that are not on shared platforms. (AWS\ntends to be much more expensive and, in spite of the myriad [proprietary]\nindustry leading new features, actually a little less flexible and with\npoorer support.)\n\nThe main advantage of cloud is the ability to be agile. You can upsize,\ndownsize, add storage, move data centers, and adapt to changing business\nrequirements on the fly. Even with overnight shipping and a minimal\nbureaucracy - selecting new hardware, getting approval to purchase it,\nordering it, unboxing it, setting it up and testing it, and then finally\ngetting to installing software - can take days or weeks of your time and\nenergy. In the cloud, you just click a couple of buttons and then get on\nwith doing the stuff that really adds value to your business.\n\nI spent the better part of a couple of decades ordering servers and disks\nand extra cpu boards for big and small companies and getting them in the\nservers and provisioning them. Now that I use the cloud I just reach over\nwith my mouse, provision an volume, attach it to the server, and voila -\nI've averted a disk space issue. I take an image, build a new server,\nswing DNS, and - there you have it - I'm now on a 16 cpu system instead of\nan 8 cpu system. Hours, at most, instead of weeks. I can spend my time\nworrying about business problems and data science.\n\nEvery 6 months to a year both Rackspace and AWS offer new classes of\nservers with new CPU's and faster backplanes and better performance for the\nbuck. With only a little planning, you can jump into the latest hardware\nevery time they do so. If you have your own iron, you are likely to be\nstuck on the same hardware for 3 or more years before you can upgrade again.\n\nIf the platform you are on suffers a catastropic hardware failure, it\nusually only takes a few minutes to bring up a new server on new hardware\nand be back and running again.\n\nYes, there is a premium for the flexibility and convenience. 
Surprisingly\nthough, I think by the time you add in electricity and cooling and labor\nand shipping and switches and racks and cabling, you may find that even\nwith their margin, their economy of scale actually offers a better total\nreal cost advantage. (I've heard some arguments to the contrary, but I'm\nnot sure I believe them if the cloud infrastructure is well managed.)\n Throw in the instant deep technical support you can get from some place\nlike Rackspace when things go wrong, and I find few advantages to being a\nserver hugger any more.\n\n\n\n\n\n\n\nOn Wed, Feb 24, 2016 at 4:01 AM, Gunnar \"Nick\" Bluth <\[email protected]> wrote:\n\n> Am 24.02.2016 um 06:06 schrieb Craig James:\n> > At some point in the next year we're going to reconsider our hosting\n> > environment, currently consisting of several medium-sized servers (2x4\n> > CPUs, 48GB RAM, 12-disk RAID system with 8 in RAID 10 and 2 in RAID 1\n> > for WAL). We use barman to keep a hot standby and an archive.\n> >\n> > The last time we dug into this, we were initially excited, but our\n> > excitement turned to disappointment when we calculated the real costs of\n> > hosted services, and the constraints on performance and customizability.\n> >\n> > Due to the nature of our business, we need a system where we can install\n> > plug-ins to Postgres. I expect that alone will limit our choices. In\n> > addition to our Postgres database, we run a fairly ordinary Apache web\n> site.\n> >\n> > There is constant chatter in this group about buying servers vs. the\n> > various hosted services. Does anyone have any sort of summary comparison\n> > of the various solutions out there? Or is it just a matter of\n> > researching it myself and maybe doing some benchmarking and price\n> > comparisons?\n>\n> For starters, did you see Josh Berkus' presentation on the topic?\n> https://www.youtube.com/watch?v=WV5P2DgxPoI\n>\n> I for myself would probably always go the \"own iron\" road, but alas!\n> that's just the way I feel about control. And I'm kind of a Linux\n> oldshot, so managing a (hosted root) server doesn't scare me off.\n>\n> OTOH, I do see the advantages of having things like monitoring, backup,\n> HDD replacements etc. done for you. Which is essentially what you pay for.\n>\n> In essence, there's obviously no silver bullet ;-)\n>\n> Best regards,\n> --\n> Gunnar \"Nick\" Bluth\n> DBA ELSTER\n>\n> Tel: +49 911/991-4665\n> Mobil: +49 172/8853339\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHaving gotten used to using cloud servers over the past few years, but been a server hugger for more than 20 before that, I have to say the cloud offers a number of huge advantages that would make me seriously question whether there are very many good reasons to go back to using local iron at all.  (Other than maybe running databases on your laptop for development and testing purposes.)Rackspace offers 'bare metal' servers if you want consistent performance.  You don't have to pay for a managed solution, there are a lot of tiers of service.  AWS also offers solutions that are not on shared platforms.  (AWS tends to be much more expensive and, in spite of the myriad [proprietary] industry leading new features, actually a little less flexible and with poorer support.)The main advantage of cloud is the ability to be agile.  
You can upsize, downsize, add storage, move data centers, and adapt to changing business requirements on the fly.   Even with overnight shipping and a minimal bureaucracy - selecting new hardware, getting approval to purchase it, ordering it, unboxing it, setting it up and testing it, and then finally getting to installing software - can take days or weeks of your time and energy.  In the cloud, you just click a couple of buttons and then get on with doing the stuff that really adds value to your business.I spent the better part of a couple of decades ordering servers and disks and extra cpu boards for big and small companies and getting them in the servers and provisioning them.   Now that I use the cloud I just reach over with my mouse, provision an volume, attach it to the server, and voila - I've averted a disk space issue.   I take an image, build a new server, swing DNS, and - there you have it - I'm now on a 16 cpu system instead of an 8 cpu system.  Hours, at most, instead of weeks.   I can spend my time worrying about business problems and data science.Every 6 months to a year both Rackspace and AWS offer new classes of servers with new CPU's and faster backplanes and better performance for the buck.  With only a little planning, you can jump into the latest hardware every time they do so.  If you have your own iron, you are likely to be stuck on the same hardware for 3 or more years before you can upgrade again.If the platform you are on suffers a catastropic hardware failure, it usually only takes a few minutes to bring up a new server on new hardware and be back and running again.Yes, there is a premium for the flexibility and convenience.  Surprisingly though, I think by the time you add in electricity and cooling and labor and shipping and switches and racks and cabling, you may find that even with their margin, their economy of scale actually offers a better total real cost advantage.  (I've heard some arguments to the contrary, but I'm not sure I believe them if the cloud infrastructure is well managed.)  Throw in the instant deep technical support you can get from some place like Rackspace when things go wrong, and I find few advantages to being a server hugger any more.On Wed, Feb 24, 2016 at 4:01 AM, Gunnar \"Nick\" Bluth <[email protected]> wrote:Am 24.02.2016 um 06:06 schrieb Craig James:\n> At some point in the next year we're going to reconsider our hosting\n> environment, currently consisting of several medium-sized servers (2x4\n> CPUs, 48GB RAM, 12-disk RAID system with 8 in RAID 10 and 2 in RAID 1\n> for WAL). We use barman to keep a hot standby and an archive.\n>\n> The last time we dug into this, we were initially excited, but our\n> excitement turned to disappointment when we calculated the real costs of\n> hosted services, and the constraints on performance and customizability.\n>\n> Due to the nature of our business, we need a system where we can install\n> plug-ins to Postgres. I expect that alone will limit our choices. In\n> addition to our Postgres database, we run a fairly ordinary Apache web site.\n>\n> There is constant chatter in this group about buying servers vs. the\n> various hosted services. Does anyone have any sort of summary comparison\n> of the various solutions out there? 
Or is it just a matter of\n> researching it myself and maybe doing some benchmarking and price\n> comparisons?\n\nFor starters, did you see Josh Berkus' presentation on the topic?\n  https://www.youtube.com/watch?v=WV5P2DgxPoI\n\nI for myself would probably always go the \"own iron\" road, but alas!\nthat's just the way I feel about control. And I'm kind of a Linux\noldshot, so managing a (hosted root) server doesn't scare me off.\n\nOTOH, I do see the advantages of having things like monitoring, backup,\nHDD replacements etc. done for you. Which is essentially what you pay for.\n\nIn essence, there's obviously no silver bullet ;-)\n\nBest regards,\n--\nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel:   +49 911/991-4665\nMobil: +49 172/8853339\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 24 Feb 2016 06:08:15 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Fwd: Cloud versus buying my own iron" } ]
[ { "msg_contents": "Also available on S.O.:\n\nhttp://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indices\n\nI've got a datavalue table with ~200M rows or so, with indices on both\nsite_id and parameter_id. I need to execute queries like \"return all sites\nwith data\" and \"return all parameters with data\". The site table has only\n200 rows or so, and the parameter table has only 100 or so rows.\n\nThe site query is fast and uses the index:\n\nEXPLAIN ANALYZEselect *from sitewhere exists (\n select 1 from datavalue\n where datavalue.site_id = site.id limit 1);\n\nSeq Scan on site (cost=0.00..64.47 rows=64 width=113) (actual\ntime=0.046..1.106 rows=89 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 39\n SubPlan 1\n -> Limit (cost=0.44..0.47 rows=1 width=0) (actual\ntime=0.008..0.008 rows=1 loops=128)\n -> Index Only Scan using ix_datavalue_site_id on datavalue\n(cost=0.44..8142.71 rows=248930 width=0) (actual time=0.008..0.008\nrows=1 loops=128)\n Index Cond: (site_id = site.id)\n Heap Fetches: 0\nPlanning time: 0.361 ms\nExecution time: 1.149 ms\n\nThe same query for parameters is rather slow and does NOT use the index:\n\nEXPLAIN ANALYZEselect *from parameterwhere exists (\n select 1 from datavalue\n where datavalue.parameter_id = parameter.id limit 1);\n\nSeq Scan on parameter (cost=0.00..20.50 rows=15 width=2648) (actual\ntime=2895.972..21331.701 rows=15 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 6\n SubPlan 1\n -> Limit (cost=0.00..0.34 rows=1 width=0) (actual\ntime=1015.790..1015.790 rows=1 loops=21)\n -> Seq Scan on datavalue (cost=0.00..502127.10\nrows=1476987 width=0) (actual time=1015.786..1015.786 rows=1 loops=21)\n Filter: (parameter_id = parameter.id)\n Rows Removed by Filter: 7739355\nPlanning time: 0.123 ms\nExecution time: 21331.736 ms\n\nWhat the deuce is going on here? Alternatively, whats a good way to do this?\n\nAny help/guidance appreciated!\n\n\n\nSome of the table description:\n\n\\d datavalue\n\nid BIGINT DEFAULT nextval('datavalue_id_seq'::regclass) NOT NULL,\nvalue DOUBLE PRECISION NOT NULL,\nsite_id INTEGER NOT NULL,\nparameter_id INTEGER NOT NULL,\ndeployment_id INTEGER,\ninstrument_id INTEGER,\ninvalid BOOLEAN,\nIndexes:\n \"datavalue_pkey\" PRIMARY KEY, btree (id)\n \"datavalue_datetime_utc_site_id_parameter_id_instrument_id_key\"\nUNIQUE CONSTRAINT, btree (datetime_utc, site_id, parameter_id,\ninstrument_id)\n \"ix_datavalue_instrument_id\" btree (instrument_id)\n \"ix_datavalue_parameter_id\" btree (parameter_id)\n \"ix_datavalue_site_id\" btree (site_id)\n \"tmp_idx\" btree (site_id, datetime_utc)\nForeign-key constraints:\n \"datavalue_instrument_id_fkey\" FOREIGN KEY (instrument_id)\nREFERENCES instrument(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_parameter_id_fkey\" FOREIGN KEY (parameter_id)\nREFERENCES parameter(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_site_id_fkey\" FOREIGN KEY (site_id) REFERENCES\ncoastal.site(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_statistic_type_id_fkey\"\n\nAlso available on S.O.: http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indicesI've got a datavalue table with ~200M rows or so, with indices on both site_id and parameter_id. I need to execute queries like \"return all sites with data\" and \"return all parameters with data\". 
The site table has only 200 rows or so, and the parameter table has only 100 or so rows.The site query is fast and uses the index:EXPLAIN ANALYZE\nselect *\nfrom site\nwhere exists (\n select 1 from datavalue\n where datavalue.site_id = site.id limit 1\n);\n\nSeq Scan on site (cost=0.00..64.47 rows=64 width=113) (actual time=0.046..1.106 rows=89 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 39\n SubPlan 1\n -> Limit (cost=0.44..0.47 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n -> Index Only Scan using ix_datavalue_site_id on datavalue (cost=0.44..8142.71 rows=248930 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n Index Cond: (site_id = site.id)\n Heap Fetches: 0\nPlanning time: 0.361 ms\nExecution time: 1.149 msThe same query for parameters is rather slow and does NOT use the index:EXPLAIN ANALYZE\nselect *\nfrom parameter\nwhere exists (\n select 1 from datavalue\n where datavalue.parameter_id = parameter.id limit 1\n);\n\nSeq Scan on parameter (cost=0.00..20.50 rows=15 width=2648) (actual time=2895.972..21331.701 rows=15 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 6\n SubPlan 1\n -> Limit (cost=0.00..0.34 rows=1 width=0) (actual time=1015.790..1015.790 rows=1 loops=21)\n -> Seq Scan on datavalue (cost=0.00..502127.10 rows=1476987 width=0) (actual time=1015.786..1015.786 rows=1 loops=21)\n Filter: (parameter_id = parameter.id)\n Rows Removed by Filter: 7739355\nPlanning time: 0.123 ms\nExecution time: 21331.736 msWhat the deuce is going on here? Alternatively, whats a good way to do this?Any help/guidance appreciated!Some of the table description:\\d datavalueid BIGINT DEFAULT nextval('datavalue_id_seq'::regclass) NOT NULL,\nvalue DOUBLE PRECISION NOT NULL,\nsite_id INTEGER NOT NULL,\nparameter_id INTEGER NOT NULL,\ndeployment_id INTEGER,\ninstrument_id INTEGER,\ninvalid BOOLEAN,\nIndexes:\n \"datavalue_pkey\" PRIMARY KEY, btree (id)\n \"datavalue_datetime_utc_site_id_parameter_id_instrument_id_key\" UNIQUE CONSTRAINT, btree (datetime_utc, site_id, parameter_id, instrument_id)\n \"ix_datavalue_instrument_id\" btree (instrument_id)\n \"ix_datavalue_parameter_id\" btree (parameter_id)\n \"ix_datavalue_site_id\" btree (site_id)\n \"tmp_idx\" btree (site_id, datetime_utc)\nForeign-key constraints:\n \"datavalue_instrument_id_fkey\" FOREIGN KEY (instrument_id) REFERENCES instrument(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_parameter_id_fkey\" FOREIGN KEY (parameter_id) REFERENCES parameter(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_site_id_fkey\" FOREIGN KEY (site_id) REFERENCES coastal.site(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_statistic_type_id_fkey\"", "msg_date": "Fri, 26 Feb 2016 13:43:42 -0600", "msg_from": "joe meiring <[email protected]>", "msg_from_op": true, "msg_subject": "Odd behavior with indices" }, { "msg_contents": "On Fri, Feb 26, 2016 at 12:43 PM, joe meiring <[email protected]>\nwrote:\n\n> Also available on S.O.:\n>\n>\n> http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indices\n>\n> I've got a datavalue table with ~200M rows or so, with indices on both\n> site_id and parameter_id. I need to execute queries like \"return all\n> sites with data\" and \"return all parameters with data\". 
The site table\n> has only 200 rows or so, and the parameter table has only 100 or so rows.\n>\n> The site query is fast and uses the index:\n>\n> EXPLAIN ANALYZEselect *from sitewhere exists (\n> select 1 from datavalue\n> where datavalue.site_id = site.id limit 1);\n>\n> Seq Scan on site (cost=0.00..64.47 rows=64 width=113) (actual time=0.046..1.106 rows=89 loops=1)\n> Filter: (SubPlan 1)\n> Rows Removed by Filter: 39\n> SubPlan 1\n> -> Limit (cost=0.44..0.47 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n> -> Index Only Scan using ix_datavalue_site_id on datavalue (cost=0.44..8142.71 rows=248930 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n> Index Cond: (site_id = site.id)\n> Heap Fetches: 0\n> Planning time: 0.361 ms\n> Execution time: 1.149 ms\n>\n> The same query for parameters is rather slow and does NOT use the index:\n>\n> EXPLAIN ANALYZEselect *from parameterwhere exists (\n> select 1 from datavalue\n> where datavalue.parameter_id = parameter.id limit 1);\n>\n> Seq Scan on parameter (cost=0.00..20.50 rows=15 width=2648) (actual time=2895.972..21331.701 rows=15 loops=1)\n> Filter: (SubPlan 1)\n> Rows Removed by Filter: 6\n> SubPlan 1\n> -> Limit (cost=0.00..0.34 rows=1 width=0) (actual time=1015.790..1015.790 rows=1 loops=21)\n> -> Seq Scan on datavalue (cost=0.00..502127.10 rows=1476987 width=0) (actual time=1015.786..1015.786 rows=1 loops=21)\n> Filter: (parameter_id = parameter.id)\n> Rows Removed by Filter: 7739355\n> Planning time: 0.123 ms\n> Execution time: 21331.736 ms\n>\n> What the deuce is going on here? Alternatively, whats a good way to do\n> this?\n>\n> Any help/guidance appreciated!\n>\n>\n>\n> Some of the table description:\n>\n> \\d datavalue\n>\n> id BIGINT DEFAULT nextval('datavalue_id_seq'::regclass) NOT NULL,\n> value DOUBLE PRECISION NOT NULL,\n> site_id INTEGER NOT NULL,\n> parameter_id INTEGER NOT NULL,\n> deployment_id INTEGER,\n> instrument_id INTEGER,\n> invalid BOOLEAN,\n> Indexes:\n> \"datavalue_pkey\" PRIMARY KEY, btree (id)\n> \"datavalue_datetime_utc_site_id_parameter_id_instrument_id_key\" UNIQUE CONSTRAINT, btree (datetime_utc, site_id, parameter_id, instrument_id)\n> \"ix_datavalue_instrument_id\" btree (instrument_id)\n> \"ix_datavalue_parameter_id\" btree (parameter_id)\n> \"ix_datavalue_site_id\" btree (site_id)\n> \"tmp_idx\" btree (site_id, datetime_utc)\n> Foreign-key constraints:\n> \"datavalue_instrument_id_fkey\" FOREIGN KEY (instrument_id) REFERENCES instrument(id) ON UPDATE CASCADE ON DELETE CASCADE\n> \"datavalue_parameter_id_fkey\" FOREIGN KEY (parameter_id) REFERENCES parameter(id) ON UPDATE CASCADE ON DELETE CASCADE\n> \"datavalue_site_id_fkey\" FOREIGN KEY (site_id) REFERENCES coastal.site(id) ON UPDATE CASCADE ON DELETE CASCADE\n> \"datavalue_statistic_type_id_fkey\"\n>\n>\n> ​I'm not great with the details but the short answer - aside from the fact\nthat you should consider increasing the statistics on these columns - is\nthat at a certain point querying the index and then subsequently checking\nthe table for visibility is more expensive than simply scanning and then\ndiscarding ​the extra rows.\n\nThe fact that you could perform an INDEX ONLY scan in the first query makes\nthat cost go away since no subsequent heap check is required. In the\nparameters query the planner thinks it needs 1.5 million of the rows and\nwill have to check each of them for visibility. It decided that scanning\nthe entire table was more efficient.\n\nThe LIMIT 1 in both queries should not be necessary. 
The planner is smart\nenough to stop once it finds what it is looking for. In fact the LIMIT's\npresence may be a contributing factor...but I cannot say for sure.\n\nA better query seems like it would be:\n\nWITH active_sites AS (\nSELECT DISTINCT site_id FROM datavalues;\n)\nSELECT *\nFROM sites\nJOIN active_sites USING (site_id);\n\nDavid J.\n\nOn Fri, Feb 26, 2016 at 12:43 PM, joe meiring <[email protected]> wrote:Also available on S.O.: http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indicesI've got a datavalue table with ~200M rows or so, with indices on both site_id and parameter_id. I need to execute queries like \"return all sites with data\" and \"return all parameters with data\". The site table has only 200 rows or so, and the parameter table has only 100 or so rows.The site query is fast and uses the index:EXPLAIN ANALYZE\nselect *\nfrom site\nwhere exists (\n select 1 from datavalue\n where datavalue.site_id = site.id limit 1\n);\n\nSeq Scan on site (cost=0.00..64.47 rows=64 width=113) (actual time=0.046..1.106 rows=89 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 39\n SubPlan 1\n -> Limit (cost=0.44..0.47 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n -> Index Only Scan using ix_datavalue_site_id on datavalue (cost=0.44..8142.71 rows=248930 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n Index Cond: (site_id = site.id)\n Heap Fetches: 0\nPlanning time: 0.361 ms\nExecution time: 1.149 msThe same query for parameters is rather slow and does NOT use the index:EXPLAIN ANALYZE\nselect *\nfrom parameter\nwhere exists (\n select 1 from datavalue\n where datavalue.parameter_id = parameter.id limit 1\n);\n\nSeq Scan on parameter (cost=0.00..20.50 rows=15 width=2648) (actual time=2895.972..21331.701 rows=15 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 6\n SubPlan 1\n -> Limit (cost=0.00..0.34 rows=1 width=0) (actual time=1015.790..1015.790 rows=1 loops=21)\n -> Seq Scan on datavalue (cost=0.00..502127.10 rows=1476987 width=0) (actual time=1015.786..1015.786 rows=1 loops=21)\n Filter: (parameter_id = parameter.id)\n Rows Removed by Filter: 7739355\nPlanning time: 0.123 ms\nExecution time: 21331.736 msWhat the deuce is going on here? 
Alternatively, whats a good way to do this?Any help/guidance appreciated!Some of the table description:\\d datavalueid BIGINT DEFAULT nextval('datavalue_id_seq'::regclass) NOT NULL,\nvalue DOUBLE PRECISION NOT NULL,\nsite_id INTEGER NOT NULL,\nparameter_id INTEGER NOT NULL,\ndeployment_id INTEGER,\ninstrument_id INTEGER,\ninvalid BOOLEAN,\nIndexes:\n \"datavalue_pkey\" PRIMARY KEY, btree (id)\n \"datavalue_datetime_utc_site_id_parameter_id_instrument_id_key\" UNIQUE CONSTRAINT, btree (datetime_utc, site_id, parameter_id, instrument_id)\n \"ix_datavalue_instrument_id\" btree (instrument_id)\n \"ix_datavalue_parameter_id\" btree (parameter_id)\n \"ix_datavalue_site_id\" btree (site_id)\n \"tmp_idx\" btree (site_id, datetime_utc)\nForeign-key constraints:\n \"datavalue_instrument_id_fkey\" FOREIGN KEY (instrument_id) REFERENCES instrument(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_parameter_id_fkey\" FOREIGN KEY (parameter_id) REFERENCES parameter(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_site_id_fkey\" FOREIGN KEY (site_id) REFERENCES coastal.site(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_statistic_type_id_fkey\"\n​I'm not great with the details but the short answer - aside from the fact that you should consider increasing the statistics on these columns - is that at a certain point querying the index and then subsequently checking the table for visibility is more expensive than simply scanning and then discarding ​the extra rows.The fact that you could perform an INDEX ONLY scan in the first query makes that cost go away since no subsequent heap check is required.  In the parameters query the planner thinks it needs 1.5 million of the rows and will have to check each of them for visibility.  It decided that scanning the entire table was more efficient.The LIMIT 1 in both queries should not be necessary.  The planner is smart enough to stop once it finds what it is looking for.  In fact the LIMIT's presence may be a contributing factor...but I cannot say for sure.A better query seems like it would be:WITH active_sites AS (SELECT DISTINCT site_id FROM datavalues;)SELECT * FROM sitesJOIN active_sites USING (site_id);David J.", "msg_date": "Fri, 26 Feb 2016 13:02:30 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with indices" }, { "msg_contents": "Here's the distribution of parameter_id's\n\nselect count(parameter_id), parameter_id from datavalue group by parameter_id\n88169 142889171 815805 178570 124257262 213947049 151225902\n24091090 3103877 10633764 11994442 1849232 2014935 4563638\n132955919 7\n\n\nOn Fri, Feb 26, 2016 at 2:02 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Fri, Feb 26, 2016 at 12:43 PM, joe meiring <[email protected]>\n> wrote:\n>\n>> Also available on S.O.:\n>>\n>>\n>> http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indices\n>>\n>> I've got a datavalue table with ~200M rows or so, with indices on both\n>> site_id and parameter_id. I need to execute queries like \"return all\n>> sites with data\" and \"return all parameters with data\". 
The site table\n>> has only 200 rows or so, and the parameter table has only 100 or so rows.\n>>\n>> The site query is fast and uses the index:\n>>\n>> EXPLAIN ANALYZEselect *from sitewhere exists (\n>> select 1 from datavalue\n>> where datavalue.site_id = site.id limit 1);\n>>\n>> Seq Scan on site (cost=0.00..64.47 rows=64 width=113) (actual time=0.046..1.106 rows=89 loops=1)\n>> Filter: (SubPlan 1)\n>> Rows Removed by Filter: 39\n>> SubPlan 1\n>> -> Limit (cost=0.44..0.47 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n>> -> Index Only Scan using ix_datavalue_site_id on datavalue (cost=0.44..8142.71 rows=248930 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n>> Index Cond: (site_id = site.id)\n>> Heap Fetches: 0\n>> Planning time: 0.361 ms\n>> Execution time: 1.149 ms\n>>\n>> The same query for parameters is rather slow and does NOT use the index:\n>>\n>> EXPLAIN ANALYZEselect *from parameterwhere exists (\n>> select 1 from datavalue\n>> where datavalue.parameter_id = parameter.id limit 1);\n>>\n>> Seq Scan on parameter (cost=0.00..20.50 rows=15 width=2648) (actual time=2895.972..21331.701 rows=15 loops=1)\n>> Filter: (SubPlan 1)\n>> Rows Removed by Filter: 6\n>> SubPlan 1\n>> -> Limit (cost=0.00..0.34 rows=1 width=0) (actual time=1015.790..1015.790 rows=1 loops=21)\n>> -> Seq Scan on datavalue (cost=0.00..502127.10 rows=1476987 width=0) (actual time=1015.786..1015.786 rows=1 loops=21)\n>> Filter: (parameter_id = parameter.id)\n>> Rows Removed by Filter: 7739355\n>> Planning time: 0.123 ms\n>> Execution time: 21331.736 ms\n>>\n>> What the deuce is going on here? Alternatively, whats a good way to do\n>> this?\n>>\n>> Any help/guidance appreciated!\n>>\n>>\n>>\n>> Some of the table description:\n>>\n>> \\d datavalue\n>>\n>> id BIGINT DEFAULT nextval('datavalue_id_seq'::regclass) NOT NULL,\n>> value DOUBLE PRECISION NOT NULL,\n>> site_id INTEGER NOT NULL,\n>> parameter_id INTEGER NOT NULL,\n>> deployment_id INTEGER,\n>> instrument_id INTEGER,\n>> invalid BOOLEAN,\n>> Indexes:\n>> \"datavalue_pkey\" PRIMARY KEY, btree (id)\n>> \"datavalue_datetime_utc_site_id_parameter_id_instrument_id_key\" UNIQUE CONSTRAINT, btree (datetime_utc, site_id, parameter_id, instrument_id)\n>> \"ix_datavalue_instrument_id\" btree (instrument_id)\n>> \"ix_datavalue_parameter_id\" btree (parameter_id)\n>> \"ix_datavalue_site_id\" btree (site_id)\n>> \"tmp_idx\" btree (site_id, datetime_utc)\n>> Foreign-key constraints:\n>> \"datavalue_instrument_id_fkey\" FOREIGN KEY (instrument_id) REFERENCES instrument(id) ON UPDATE CASCADE ON DELETE CASCADE\n>> \"datavalue_parameter_id_fkey\" FOREIGN KEY (parameter_id) REFERENCES parameter(id) ON UPDATE CASCADE ON DELETE CASCADE\n>> \"datavalue_site_id_fkey\" FOREIGN KEY (site_id) REFERENCES coastal.site(id) ON UPDATE CASCADE ON DELETE CASCADE\n>> \"datavalue_statistic_type_id_fkey\"\n>>\n>>\n>> ​I'm not great with the details but the short answer - aside from the\n> fact that you should consider increasing the statistics on these columns -\n> is that at a certain point querying the index and then subsequently\n> checking the table for visibility is more expensive than simply scanning\n> and then discarding ​the extra rows.\n>\n> The fact that you could perform an INDEX ONLY scan in the first query\n> makes that cost go away since no subsequent heap check is required. In the\n> parameters query the planner thinks it needs 1.5 million of the rows and\n> will have to check each of them for visibility. 
It decided that scanning\n> the entire table was more efficient.\n>\n> The LIMIT 1 in both queries should not be necessary. The planner is smart\n> enough to stop once it finds what it is looking for. In fact the LIMIT's\n> presence may be a contributing factor...but I cannot say for sure.\n>\n> A better query seems like it would be:\n>\n> WITH active_sites AS (\n> SELECT DISTINCT site_id FROM datavalues;\n> )\n> SELECT *\n> FROM sites\n> JOIN active_sites USING (site_id);\n>\n> David J.\n>\n\nHere's the distribution of parameter_id'sselect count(parameter_id), parameter_id from datavalue group by parameter_id\n\n88169 14\n2889171 8\n15805 17\n8570 12\n4257262 21\n3947049 15\n1225902 2\n4091090 3\n103877 10\n633764 11\n994442 18\n49232 20\n14935 4\n563638 13\n2955919 7On Fri, Feb 26, 2016 at 2:02 PM, David G. Johnston <[email protected]> wrote:On Fri, Feb 26, 2016 at 12:43 PM, joe meiring <[email protected]> wrote:Also available on S.O.: http://stackoverflow.com/questions/35658238/postgres-odd-behavior-with-indicesI've got a datavalue table with ~200M rows or so, with indices on both site_id and parameter_id. I need to execute queries like \"return all sites with data\" and \"return all parameters with data\". The site table has only 200 rows or so, and the parameter table has only 100 or so rows.The site query is fast and uses the index:EXPLAIN ANALYZE\nselect *\nfrom site\nwhere exists (\n select 1 from datavalue\n where datavalue.site_id = site.id limit 1\n);\n\nSeq Scan on site (cost=0.00..64.47 rows=64 width=113) (actual time=0.046..1.106 rows=89 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 39\n SubPlan 1\n -> Limit (cost=0.44..0.47 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n -> Index Only Scan using ix_datavalue_site_id on datavalue (cost=0.44..8142.71 rows=248930 width=0) (actual time=0.008..0.008 rows=1 loops=128)\n Index Cond: (site_id = site.id)\n Heap Fetches: 0\nPlanning time: 0.361 ms\nExecution time: 1.149 msThe same query for parameters is rather slow and does NOT use the index:EXPLAIN ANALYZE\nselect *\nfrom parameter\nwhere exists (\n select 1 from datavalue\n where datavalue.parameter_id = parameter.id limit 1\n);\n\nSeq Scan on parameter (cost=0.00..20.50 rows=15 width=2648) (actual time=2895.972..21331.701 rows=15 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 6\n SubPlan 1\n -> Limit (cost=0.00..0.34 rows=1 width=0) (actual time=1015.790..1015.790 rows=1 loops=21)\n -> Seq Scan on datavalue (cost=0.00..502127.10 rows=1476987 width=0) (actual time=1015.786..1015.786 rows=1 loops=21)\n Filter: (parameter_id = parameter.id)\n Rows Removed by Filter: 7739355\nPlanning time: 0.123 ms\nExecution time: 21331.736 msWhat the deuce is going on here? 
Alternatively, whats a good way to do this?Any help/guidance appreciated!Some of the table description:\\d datavalueid BIGINT DEFAULT nextval('datavalue_id_seq'::regclass) NOT NULL,\nvalue DOUBLE PRECISION NOT NULL,\nsite_id INTEGER NOT NULL,\nparameter_id INTEGER NOT NULL,\ndeployment_id INTEGER,\ninstrument_id INTEGER,\ninvalid BOOLEAN,\nIndexes:\n \"datavalue_pkey\" PRIMARY KEY, btree (id)\n \"datavalue_datetime_utc_site_id_parameter_id_instrument_id_key\" UNIQUE CONSTRAINT, btree (datetime_utc, site_id, parameter_id, instrument_id)\n \"ix_datavalue_instrument_id\" btree (instrument_id)\n \"ix_datavalue_parameter_id\" btree (parameter_id)\n \"ix_datavalue_site_id\" btree (site_id)\n \"tmp_idx\" btree (site_id, datetime_utc)\nForeign-key constraints:\n \"datavalue_instrument_id_fkey\" FOREIGN KEY (instrument_id) REFERENCES instrument(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_parameter_id_fkey\" FOREIGN KEY (parameter_id) REFERENCES parameter(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_site_id_fkey\" FOREIGN KEY (site_id) REFERENCES coastal.site(id) ON UPDATE CASCADE ON DELETE CASCADE\n \"datavalue_statistic_type_id_fkey\"\n​I'm not great with the details but the short answer - aside from the fact that you should consider increasing the statistics on these columns - is that at a certain point querying the index and then subsequently checking the table for visibility is more expensive than simply scanning and then discarding ​the extra rows.The fact that you could perform an INDEX ONLY scan in the first query makes that cost go away since no subsequent heap check is required.  In the parameters query the planner thinks it needs 1.5 million of the rows and will have to check each of them for visibility.  It decided that scanning the entire table was more efficient.The LIMIT 1 in both queries should not be necessary.  The planner is smart enough to stop once it finds what it is looking for.  In fact the LIMIT's presence may be a contributing factor...but I cannot say for sure.A better query seems like it would be:WITH active_sites AS (SELECT DISTINCT site_id FROM datavalues;)SELECT * FROM sitesJOIN active_sites USING (site_id);David J.", "msg_date": "Fri, 26 Feb 2016 14:38:09 -0600", "msg_from": "joe meiring <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd behavior with indices" }, { "msg_contents": "On Fri, Feb 26, 2016 at 1:38 PM, joe meiring <[email protected]>\nwrote:\n\n> Here's the distribution of parameter_id's\n>\n> select count(parameter_id), parameter_id from datavalue group by parameter_id\n> 88169 142889171 815805 178570 124257262 213947049 151225902 24091090 3103877 10633764 11994442 1849232 2014935 4563638 132955919 7\n>\n>\n​Ok...again its beyond my present experience ​but its what the planner\nthinks about the distribution, and not what actually is present, that\nmatters.\n\nDavid J.\n\nOn Fri, Feb 26, 2016 at 1:38 PM, joe meiring <[email protected]> wrote:Here's the distribution of parameter_id'sselect count(parameter_id), parameter_id from datavalue group by parameter_id\n\n88169 14\n2889171 8\n15805 17\n8570 12\n4257262 21\n3947049 15\n1225902 2\n4091090 3\n103877 10\n633764 11\n994442 18\n49232 20\n14935 4\n563638 13\n2955919 7​Ok...again its beyond my present experience ​but its what the planner thinks about the distribution, and not what actually is present, that matters.David J.", "msg_date": "Fri, 26 Feb 2016 13:56:19 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with indices" }, { "msg_contents": "Em 26 de fev de 2016 4:44 PM, \"joe meiring\" <[email protected]>\nescreveu:\n>\n> The same query for parameters is rather slow and does NOT use the index:\n>\n> EXPLAIN ANALYZE\n> select *\n> from parameter\n> where exists (\n> select 1 from datavalue\n> where datavalue.parameter_id = parameter.id limit 1\n> );\n>\n\nPlease, could you execute both queries without the LIMIT 1 and show us the\nplans?\n\nLIMIT in the inner query is like a fence and it caps some optimizations\navailable for EXISTS, you'd better avoid it and see if you get a proper\nsemi-join plan then.\n\nRegards.\n\n\nEm 26 de fev de 2016 4:44 PM, \"joe meiring\" <[email protected]> escreveu:\n>\n> The same query for parameters is rather slow and does NOT use the index:\n>\n> EXPLAIN ANALYZE\n> select *\n> from parameter\n> where exists (\n>       select 1 from datavalue\n>       where datavalue.parameter_id = parameter.id limit 1\n> );\n>\nPlease, could you execute both queries without the LIMIT 1 and show us the plans?\nLIMIT in the inner query is like a fence and it caps some optimizations available for EXISTS, you'd better avoid it and see if you get a proper semi-join plan then.\nRegards.", "msg_date": "Sun, 28 Feb 2016 10:26:11 -0300", "msg_from": "Matheus de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with indices" }, { "msg_contents": "Matheus de Oliveira <[email protected]> writes:\n> Em 26 de fev de 2016 4:44 PM, \"joe meiring\" <[email protected]>\n> escreveu:\n>> The same query for parameters is rather slow and does NOT use the index:\n>> \n>> EXPLAIN ANALYZE\n>> select *\n>> from parameter\n>> where exists (\n>> select 1 from datavalue\n>> where datavalue.parameter_id = parameter.id limit 1\n>> );\n\n> Please, could you execute both queries without the LIMIT 1 and show us the\n> plans?\n\n> LIMIT in the inner query is like a fence and it caps some optimizations\n> available for EXISTS, you'd better avoid it and see if you get a proper\n> semi-join plan then.\n\nFWIW, PG >= 9.5 will ignore a LIMIT 1 inside an EXISTS, so that you get\nthe same plan with or without it. But that does act as an optimization\nfence in earlier releases.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 29 Feb 2016 13:47:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with indices" }, { "msg_contents": "On Mon, Feb 29, 2016 at 12:47 PM, Tom Lane <[email protected]> wrote:\n> FWIW, PG >= 9.5 will ignore a LIMIT 1 inside an EXISTS, so that you get\n> the same plan with or without it. 
But that does act as an optimization\n> fence in earlier releases.\n\nDoes 'offset 0' still work as it did?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 4 Mar 2016 16:18:34 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with indices" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Mon, Feb 29, 2016 at 12:47 PM, Tom Lane <[email protected]> wrote:\n>> FWIW, PG >= 9.5 will ignore a LIMIT 1 inside an EXISTS, so that you get\n>> the same plan with or without it. But that does act as an optimization\n>> fence in earlier releases.\n\n> Does 'offset 0' still work as it did?\n\nYes.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 04 Mar 2016 18:53:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with indices" } ]
[ { "msg_contents": "Dear psql-performance,\n\nI'm having issues with a certain query, and I was hoping you could help me\nout.\n\nThe schema:\n\n(start with new database cluster, with either SQL_ASCII or en.us-UTF8\nencoding, using the default server configuration available in the pgdg\nJessie packages).\n\nCREATE TABLE a (id bigint primary key, nonce bigint);\nCREATE TABLE b (id bigint primary key, a_id bigint not null);\nCREATE INDEX a_idx ON b (a_id);\n\nThe query:\n\nSELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = ? ORDER BY b.id\nASC;\n\n(skip down to [1] and [2] to see the query performance)\n\nWhat I know:\n\nIf you force the query planner to use a merge join on the above query, it\ntakes 10+ minutes to complete using the data as per below. If you force the\nquery planner to use a hash join on the same data, it takes ~200\nmilliseconds. This behavior is the same both on Postgresql 9.2.15 and\nPostgresql 9.4.6 (at least as provided by the Debian Jessie repo hosted by\npostgresql.org), and happens both on i386 and x86_64 platforms. Note: the\ndata for these queries is the same as described below. Let me know if you\nwant me to provide the raw csv's or similar.\n\nCreating a new Postgresql 9.4 cluster (64-bit), creating the tables (a) and\n(b), importing the table data into the tables (a) and (b), and then running\nthe above query using EXPLAIN results in a merge join query plan, as in [1].\n\nCreating a new Postgresql 9.2 cluster (32-bit or 64-bit), creating the\ntables (a) and (b), importing the table data into (a) and (b), and then\nrunning the above query results in a hash join query plan, as in [2].\n\nWhen running query [1], the postgres process on the machine consumes 100%\nCPU for a long time (it seems CPU-bound).\n\nWhat I expected:\n\nI expected both of the hash join and merge join implementations of this\nquery to have comparable query times; perhaps within an order of magnitude.\nThis was expected on my part mostly because the cost metrics for each query\nwere very similar. Instead, the \"optimal\" query plan for the query takes\nmore than 1000x longer.\n\nI also expected that the \"Rows Removed by Filter: \" for the index scan on\n(a) would not have such a high count, as the number of rows in table (a)\n(~500,000) is significantly less than the count (2,201,063,696).\n\nWhat I want to know:\n\n- Is this expected behavior? 
Can you describe how the merge join algorithm\nachieves these results?\n- Can I avoid this issue by disabling merge joins in the server\nconfiguration?\n\nConfiguration:\n\nThe configuration of the database is the sample configuration as per the\nDebian Jessie packages of Postgresql available at\nhttp://apt.postgresql.org/pub/repos/apt/ with the exception that the data\ndirectory was explicitly specified.\n\nInformation about the data:\n\nHere are some queries that help describe the data I'm working with:\n\npostgres=# select distinct a_id, count(*) from b group by a_id;\n a_id | count\n--------+-------\n 49872 | 320\n 47994 | 5\n 19084 | 82977\n 53251 | 100\n 109804 | 10\n 51738 | 5\n 49077 | 10\n\npostgres=# select count(*) from b;\n count\n-------\n 83427\n\npostgres=# select count(distinct nonce) from a;\n count\n-------\n 198\n\npostgres=# select count(*) from a;\n count\n--------\n 490166\n\npostgres=# select count(*) from a where nonce = 64;\n count\n-------\n 395\n\nHardware:\n\n2015-era Intel Xeon processors\n> 300 GB of ram (greater than the size of the database with a large margin)\ndatabase on hardware raid 1 array on 2 SSDs\n\nCommentary:\n\nThis is my first bug report to a major open source project, so I apologize\nin advance if I messed up this report. Let me know if I have left out key\ndetails -- I'm happy to provide them.\n\nGiven that there are roughly 500k rows in table a, and given that the\nEXPLAIN output claims that the filter (nonce = 64) caused 2 billion rows to\nbe skipped (refer to [1]) suggests that each row in table b is being\ncompared to a non-negligible number of rows in (a).\n\nI can probably make this data available as a pg_dump file. Let me know if\nyou think that's necessary, and where I should upload it.\n\nRegards,\nJames\n\n[1]\npostgres=# explain (analyze,buffers) select b.* from b join a on b.a_id =\na.id where a.nonce = 64 order by b.id asc;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=7855.25..7855.61 rows=143 width=16) (actual\ntime=752058.415..752080.731 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2096kB\n Buffers: shared hit=2151025721 read=479, temp read=267 written=267\n I/O Timings: read=2.384\n -> Merge Join (cost=869.07..7850.13 rows=143 width=16) (actual\ntime=5.718..751760.637 rows=83427 loops=1)\n Merge Cond: (b.a_id = a.id)\n Buffers: shared hit=2151025721 read=479\n I/O Timings: read=2.384\n -> Index Scan using a_idx on b (cost=0.00..2953.35 rows=83427\nwidth=16) (actual time=0.007..68.165 rows=83427 loops=1)\n Buffers: shared hit=1303 read=139\n I/O Timings: read=1.369\n -> Index Scan using a_pkey on a (cost=0.00..26163.20 rows=843\nwidth=8) (actual time=5.706..751385.306 rows=83658 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 2201063696\n Buffers: shared hit=2151024418 read=340\n I/O Timings: read=1.015\n Total runtime: 752092.206 ms\n\n[2]\npostgres=# explain (analyze,buffers) select b.* from b join a on b.a_id =\na.id where a.nonce = 64 order by b.id asc;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------\n Sort (cost=10392.28..10392.64 rows=143 width=16) (actual\ntime=164.415..186.297 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2112kB\n Buffers: shared hit=2514 read=587, temp read=267 written=267\n I/O Timings: read=1.199\n -> Hash Join (cost=8787.61..10387.16 
rows=143 width=16) (actual\ntime=61.836..113.434 rows=83427 loops=1)\n Hash Cond: (b.a_id = a.id)\n Buffers: shared hit=2514 read=587\n I/O Timings: read=1.199\n -> Seq Scan on b (cost=0.00..1285.27 rows=83427 width=16)\n(actual time=0.011..15.826 rows=83427 loops=1)\n Buffers: shared hit=449 read=2\n I/O Timings: read=0.009\n -> Hash (cost=8777.08..8777.08 rows=843 width=8) (actual\ntime=61.812..61.812 rows=395 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\n Buffers: shared hit=2065 read=585\n I/O Timings: read=1.190\n -> Seq Scan on a (cost=0.00..8777.08 rows=843 width=8)\n(actual time=0.143..61.609 rows=395 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 489771\n Buffers: shared hit=2065 read=585\n I/O Timings: read=1.190\n Total runtime: 198.014 ms\n\nThe above numbers were acquired using this version of Postgresql:\nPostgreSQL 9.2.15 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n4.9.2-10) 4.9.2, 64-bit\n\nDear psql-performance,I'm having issues with a certain query, and I was hoping you could help me out.The schema:(start with new database cluster, with either SQL_ASCII or en.us-UTF8 encoding, using the default server configuration available in the pgdg Jessie packages).CREATE TABLE a (id bigint primary key, nonce bigint);CREATE TABLE b (id bigint primary key, a_id bigint not null);CREATE INDEX a_idx ON b (a_id);The query:SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = ? ORDER BY b.id ASC;(skip down to [1] and [2] to see the query performance)What I know:If you force the query planner to use a merge join on the above query, it takes 10+ minutes to complete using the data as per below. If you force the query planner to use a hash join on the same data, it takes ~200 milliseconds. This behavior is the same both on Postgresql 9.2.15 and Postgresql 9.4.6 (at least as provided by the Debian Jessie repo hosted by postgresql.org), and happens both on i386 and x86_64 platforms. Note: the data for these queries is the same as described below. Let me know if you want me to provide the raw csv's or similar.Creating a new Postgresql 9.4 cluster (64-bit), creating the tables (a) and (b), importing the table data into the tables (a) and (b), and then running the above query using EXPLAIN results in a merge join query plan, as in [1].Creating a new Postgresql 9.2 cluster (32-bit or 64-bit), creating the tables (a) and (b), importing the table data into (a) and (b), and then running the above query results in a hash join query plan, as in [2].When running query [1], the postgres process on the machine consumes 100% CPU for a long time (it seems CPU-bound).What I expected:I expected both of the hash join and merge join implementations of this query to have comparable query times; perhaps within an order of magnitude. This was expected on my part mostly because the cost metrics for each query were very similar. Instead, the \"optimal\" query plan for the query takes more than 1000x longer.I also expected that the \"Rows Removed by Filter: \" for the index scan on (a) would not have such a high count, as the number of rows in table (a) (~500,000) is significantly less than the count (2,201,063,696).What I want to know:- Is this expected behavior? 
Can you describe how the merge join algorithm achieves these results?- Can I avoid this issue by disabling merge joins in the server configuration?Configuration:The configuration of the database is the sample configuration as per the Debian Jessie packages of Postgresql available at http://apt.postgresql.org/pub/repos/apt/ with the exception that the data directory was explicitly specified.Information about the data:Here are some queries that help describe the data I'm working with:postgres=# select distinct a_id, count(*) from b group by a_id;  a_id  | count--------+-------  49872 |   320  47994 |     5  19084 | 82977  53251 |   100 109804 |    10  51738 |     5  49077 |    10postgres=# select count(*) from b; count------- 83427postgres=# select count(distinct nonce) from a; count-------   198postgres=# select count(*) from a; count -------- 490166postgres=# select count(*) from a where nonce = 64; count-------   395Hardware:2015-era Intel Xeon processors> 300 GB of ram (greater than the size of the database with a large margin)database on hardware raid 1 array on 2 SSDsCommentary:This is my first bug report to a major open source project, so I apologize in advance if I messed up this report. Let me know if I have left out key details -- I'm happy to provide them.Given that there are roughly 500k rows in table a, and given that the EXPLAIN output claims that the filter (nonce = 64) caused 2 billion rows to be skipped (refer to [1]) suggests that each row in table b is being compared to a non-negligible number of rows in (a).I can probably make this data available as a pg_dump file. Let me know if you think that's necessary, and where I should upload it.Regards,James[1]postgres=# explain (analyze,buffers) select b.* from b join a on b.a_id = a.id where a.nonce = 64 order by b.id asc;                                                             QUERY PLAN                                                             ------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=7855.25..7855.61 rows=143 width=16) (actual time=752058.415..752080.731 rows=83427 loops=1)   Sort Key: b.id   Sort Method: external merge  Disk: 2096kB   Buffers: shared hit=2151025721 read=479, temp read=267 written=267   I/O Timings: read=2.384   ->  Merge Join  (cost=869.07..7850.13 rows=143 width=16) (actual time=5.718..751760.637 rows=83427 loops=1)         Merge Cond: (b.a_id = a.id)         Buffers: shared hit=2151025721 read=479         I/O Timings: read=2.384         ->  Index Scan using a_idx on b  (cost=0.00..2953.35 rows=83427 width=16) (actual time=0.007..68.165 rows=83427 loops=1)               Buffers: shared hit=1303 read=139               I/O Timings: read=1.369         ->  Index Scan using a_pkey on a  (cost=0.00..26163.20 rows=843 width=8) (actual time=5.706..751385.306 rows=83658 loops=1)               Filter: (nonce = 64)               Rows Removed by Filter: 2201063696               Buffers: shared hit=2151024418 read=340               I/O Timings: read=1.015 Total runtime: 752092.206 ms[2]postgres=# explain (analyze,buffers) select b.* from b join a on b.a_id = a.id where a.nonce = 64 order by b.id asc;                                                     QUERY PLAN                                                     --------------------------------------------------------------------------------------------------------------------- Sort  (cost=10392.28..10392.64 rows=143 width=16) (actual time=164.415..186.297 
rows=83427 loops=1)   Sort Key: b.id   Sort Method: external merge  Disk: 2112kB   Buffers: shared hit=2514 read=587, temp read=267 written=267   I/O Timings: read=1.199   ->  Hash Join  (cost=8787.61..10387.16 rows=143 width=16) (actual time=61.836..113.434 rows=83427 loops=1)         Hash Cond: (b.a_id = a.id)         Buffers: shared hit=2514 read=587         I/O Timings: read=1.199         ->  Seq Scan on b  (cost=0.00..1285.27 rows=83427 width=16) (actual time=0.011..15.826 rows=83427 loops=1)               Buffers: shared hit=449 read=2               I/O Timings: read=0.009         ->  Hash  (cost=8777.08..8777.08 rows=843 width=8) (actual time=61.812..61.812 rows=395 loops=1)               Buckets: 1024  Batches: 1  Memory Usage: 16kB               Buffers: shared hit=2065 read=585               I/O Timings: read=1.190               ->  Seq Scan on a  (cost=0.00..8777.08 rows=843 width=8) (actual time=0.143..61.609 rows=395 loops=1)                     Filter: (nonce = 64)                     Rows Removed by Filter: 489771                     Buffers: shared hit=2065 read=585                     I/O Timings: read=1.190 Total runtime: 198.014 msThe above numbers were acquired using this version of Postgresql:PostgreSQL 9.2.15 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit", "msg_date": "Fri, 26 Feb 2016 14:07:57 -0800", "msg_from": "James Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Merge joins on index scans" }, { "msg_contents": "On 27 February 2016 at 11:07, James Parks <[email protected]> wrote:\n>\n> CREATE TABLE a (id bigint primary key, nonce bigint);\n> CREATE TABLE b (id bigint primary key, a_id bigint not null);\n> CREATE INDEX a_idx ON b (a_id);\n>\n> The query:\n>\n> SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = ? ORDER BY b.id\n> ASC;\n>\n> (skip down to [1] and [2] to see the query performance)\n>\n> What I know:\n>\n> If you force the query planner to use a merge join on the above query, it\n> takes 10+ minutes to complete using the data as per below. If you force the\n> query planner to use a hash join on the same data, it takes ~200\n> milliseconds.\n\nI believe I know what is going on here, but can you please test;\n\nSELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a ON b.a_id = a.id AND\na.nonce = ?) ORDER BY b.id ASC;\n\nusing the merge join plan.\n\nIf this performs much better then the problem is due to the merge join\nmark/restore causing the join to have to transition through many\ntuples which don't match the a.nonce = ? predicate. The mark and\nrestore is not required for the rewritten query, as this use a semi\njoin rather than a regular inner join. With the semi join the executor\nknows that it's only meant to be matching a single tuple in \"a\", so\nonce the first match is found it can move to the next row in the outer\nrelation without having to restore the scan back to where it started\nmatching that inner row again.\n\nIf I'm right, to get around the problem you could; create index on a\n(nonce, id);\n\nIf such an index is out of the question then a patch has been\nsubmitted for review which should fix this problem in (hopefully)\neither 9.6 or 9.7\nhttps://commitfest.postgresql.org/9/129/\nIf you have a test environment handy, it would be nice if you could\ntest the patch on the current git head to see if this fixes your\nproblem. The findings would be quite interesting for me. 
Please note\nthis patch is for test environments only at this stage, not for\nproduction use.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 28 Feb 2016 23:06:35 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Merge joins on index scans" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On 27 February 2016 at 11:07, James Parks <[email protected]> wrote:\n>> If you force the query planner to use a merge join on the above query, it\n>> takes 10+ minutes to complete using the data as per below. If you force the\n>> query planner to use a hash join on the same data, it takes ~200\n>> milliseconds.\n\n> I believe I know what is going on here, but can you please test;\n> SELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a ON b.a_id = a.id AND\n> a.nonce = ?) ORDER BY b.id ASC;\n> using the merge join plan.\n\n> If this performs much better then the problem is due to the merge join\n> mark/restore causing the join to have to transition through many\n> tuples which don't match the a.nonce = ? predicate.\n\nClearly we are rescanning an awful lot of the \"a\" table:\n\n -> Index Scan using a_pkey on a (cost=0.00..26163.20 rows=843 width=8) (actual time=5.706..751385.306 rows=83658 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 2201063696\n Buffers: shared hit=2151024418 read=340\n I/O Timings: read=1.015\n\nThe other explain shows a scan of \"a\" reading about 490k rows and\nreturning 395 of them, so there's a factor of about 200 re-read here.\nI wonder if the planner should have inserted a materialize node to\nreduce that.\n\nHowever, I think the real problem is upstream of that: if that indexscan\nwas estimated at 26163.20 units, how'd the mergejoin above it get costed\nat only 7850.13 units? The answer has to be that the planner thought the\nmerge would stop before reading most of \"a\", as a result of limited range\nof b.a_id. It would be interesting to look into what the actual maximum\nb.a_id value is.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 29 Feb 2016 20:22:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Merge joins on index scans" }, { "msg_contents": "On Sun, Feb 28, 2016 at 2:06 AM, David Rowley <[email protected]>\nwrote:\n\n> On 27 February 2016 at 11:07, James Parks <[email protected]> wrote:\n> >\n> > CREATE TABLE a (id bigint primary key, nonce bigint);\n> > CREATE TABLE b (id bigint primary key, a_id bigint not null);\n> > CREATE INDEX a_idx ON b (a_id);\n> >\n> > The query:\n> >\n> > SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = ? ORDER BY\n> b.id\n> > ASC;\n> >\n> > (skip down to [1] and [2] to see the query performance)\n> >\n> > What I know:\n> >\n> > If you force the query planner to use a merge join on the above query, it\n> > takes 10+ minutes to complete using the data as per below. 
If you force\n> the\n> > query planner to use a hash join on the same data, it takes ~200\n> > milliseconds.\n>\n> I believe I know what is going on here, but can you please test;\n>\n> SELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a ON b.a_id = a.id AND\n> a.nonce = ?) ORDER BY b.id ASC;\n>\n> using the merge join plan.\n>\n>\nHere's the query plan for that query (slight modifications to get it to\nrun):\n\npostgres=# explain (analyze,buffers) SELECT b.* FROM b WHERE EXISTS (SELECT\n1 FROM a WHERE b.a_id = a.id AND a.nonce = 64) ORDER BY b.id ASC;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=16639.19..16855.62 rows=86572 width=16) (actual\ntime=145.117..173.298 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2112kB\n Buffers: shared hit=86193 read=881, temp read=269 written=269\n I/O Timings: read=3.199\n -> Merge Semi Join (cost=795.82..8059.09 rows=86572 width=16) (actual\ntime=6.680..91.862 rows=83427 loops=1)\n Merge Cond: (b.a_id = a.id)\n Buffers: shared hit=86193 read=881\n I/O Timings: read=3.199\n -> Index Scan using a_idx on b (cost=0.00..3036.70 rows=86572\nwidth=16) (actual time=0.005..25.193 rows=83427 loops=1)\n Buffers: shared hit=1064 read=374\n I/O Timings: read=1.549\n -> Index Scan using a_pkey on a (cost=0.00..26259.85 rows=891\nwidth=8) (actual time=6.663..35.177 rows=237 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 87939\n Buffers: shared hit=85129 read=507\n I/O Timings: read=1.650\n Total runtime: 186.825 ms\n\n... so, yes, it does indeed get a lot faster\n\n\n> If this performs much better then the problem is due to the merge join\n> mark/restore causing the join to have to transition through many\n> tuples which don't match the a.nonce = ? predicate. The mark and\n> restore is not required for the rewritten query, as this use a semi\n> join rather than a regular inner join. 
With the semi join the executor\n> knows that it's only meant to be matching a single tuple in \"a\", so\n> once the first match is found it can move to the next row in the outer\n> relation without having to restore the scan back to where it started\n> matching that inner row again.\n>\n> If I'm right, to get around the problem you could; create index on a\n> (nonce, id);\n>\n>\npostgres=# CREATE INDEX a_id_nonce_idx ON a (nonce, id);\nCREATE INDEX\npostgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id =\na.id WHERE a.nonce = 64 ORDER BY b.id ASC;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=3376.57..3376.92 rows=140 width=16) (actual\ntime=144.340..160.875 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2120kB\n Buffers: shared hit=368 read=522, temp read=266 written=266\n -> Merge Join (cost=69.50..3371.58 rows=140 width=16) (actual\ntime=0.056..88.409 rows=83427 loops=1)\n Merge Cond: (b.a_id = a.id)\n Buffers: shared hit=368 read=522\n -> Index Scan using a_idx on b (cost=0.29..2724.58 rows=83427\nwidth=16) (actual time=0.009..23.834 rows=83427 loops=1)\n Buffers: shared hit=171 read=518\n -> Index Only Scan using a_id_nonce_idx on a (cost=0.42..2450.58\nrows=820 width=8) (actual time=0.041..20.776 rows=83658 loops=1)\n Index Cond: (nonce = 64)\n Heap Fetches: 83658\n Buffers: shared hit=197 read=4\n Planning time: 0.241 ms\n Execution time: 172.346 ms\n\nLooks pretty fast to me. That being said, the number of rows returned by\nthe Index Only Scan seems a bit high, as compared to the results below, so\nI added your patch below and got [2].\n\n\n> If such an index is out of the question then a patch has been\n> submitted for review which should fix this problem in (hopefully)\n> either 9.6 or 9.7\n> https://commitfest.postgresql.org/9/129/\n> If you have a test environment handy, it would be nice if you could\n> test the patch on the current git head to see if this fixes your\n> problem. The findings would be quite interesting for me. 
Please note\n> this patch is for test environments only at this stage, not for\n> production use.\n>\n\nCan confirm that your patch there seems to improve performance.\n\nHEAD [1], with patch:\n\npostgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id =\na.id WHERE a.nonce = 64 ORDER BY b.id ASC;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=7320.39..7320.75 rows=144 width=16) (actual\ntime=162.199..181.497 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2120kB\n Buffers: shared hit=85780 read=472, temp read=266 written=266\n -> Merge Join (cost=808.17..7315.23 rows=144 width=16) (actual\ntime=9.719..101.864 rows=83427 loops=1)\n Merge Cond: (b.a_id = a.id)\n Buffers: shared hit=85780 read=472\n -> Index Scan using a_idx on b (cost=0.29..2721.49 rows=83427\nwidth=16) (actual time=0.059..26.346 rows=83427 loops=1)\n Buffers: shared hit=460 read=229\n -> Index Scan using a_pkey on a (cost=0.42..24394.88 rows=846\nwidth=8) (actual time=9.651..41.231 rows=237 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 87939\n Buffers: shared hit=85320 read=243\n Planning time: 0.289 ms\n Execution time: 195.071 ms\n\nHEAD [1], without patch:\n\npostgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id =\na.id WHERE a.nonce = 64 ORDER BY b.id ASC;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=7285.48..7285.83 rows=140 width=16) (actual\ntime=710073.059..710089.924 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2120kB\n Buffers: shared hit=2149282200 read=243, temp read=266 written=266\n -> Merge Join (cost=685.28..7280.49 rows=140 width=16) (actual\ntime=10.022..709878.946 rows=83427 loops=1)\n Merge Cond: (b.a_id = a.id)\n Buffers: shared hit=2149282200 read=243\n -> Index Scan using a_idx on b (cost=0.29..2724.58 rows=83427\nwidth=16) (actual time=0.013..59.103 rows=83427 loops=1)\n Buffers: shared hit=689\n -> Index Scan using a_pkey on a (cost=0.42..24404.69 rows=820\nwidth=8) (actual time=9.998..709595.905 rows=83658 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 2201063696\n Buffers: shared hit=2149281511 read=243\n Planning time: 0.297 ms\n Execution time: 710101.931 ms\n\nThank you for your response! 
This allows us to understand the situations in\nwhich we can run into trouble, and what we can do in most, if not all\ncases, to resolve.\n\nRegards,\nJames\n\n[1] HEAD is \"a892234 Change the format of the VM fork to add a second bit\nper page.\" as found on the Github mirror of postgres\n\n[2]\npostgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id =\na.id WHERE a.nonce = 64 ORDER BY b.id ASC;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=11000.03..11001.07 rows=417 width=16) (actual\ntime=109.739..129.031 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2120kB\n Buffers: shared hit=886, temp read=266 written=266\n -> Merge Join (cost=0.71..10981.88 rows=417 width=16) (actual\ntime=0.050..55.701 rows=83427 loops=1)\n Merge Cond: (b.a_id = a.id)\n Buffers: shared hit=886\n -> Index Scan using a_idx on b (cost=0.29..3975.70 rows=83427\nwidth=16) (actual time=0.017..23.647 rows=83427 loops=1)\n Buffers: shared hit=689\n -> Index Only Scan using a_id_nonce_idx on a (cost=0.42..6787.31\nrows=2451 width=8) (actual time=0.028..0.172 rows=237 loops=1)\n Index Cond: (nonce = 64)\n Heap Fetches: 237\n Buffers: shared hit=197\n Planning time: 0.153 ms\n Execution time: 142.994 ms\n\nOn Sun, Feb 28, 2016 at 2:06 AM, David Rowley <[email protected]> wrote:On 27 February 2016 at 11:07, James Parks <[email protected]> wrote:\r\n>\r\n> CREATE TABLE a (id bigint primary key, nonce bigint);\r\n> CREATE TABLE b (id bigint primary key, a_id bigint not null);\r\n> CREATE INDEX a_idx ON b (a_id);\r\n>\r\n> The query:\r\n>\r\n> SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = ? ORDER BY b.id\r\n> ASC;\r\n>\r\n> (skip down to [1] and [2] to see the query performance)\r\n>\r\n> What I know:\r\n>\r\n> If you force the query planner to use a merge join on the above query, it\r\n> takes 10+ minutes to complete using the data as per below. If you force the\r\n> query planner to use a hash join on the same data, it takes ~200\r\n> milliseconds.\n\nI believe I know what is going on here, but can you please test;\n\r\nSELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a ON b.a_id = a.id AND\r\na.nonce = ?) 
ORDER BY b.id ASC;\n\r\nusing the merge join plan.\nHere's the query plan for that query (slight modifications to get it to run):postgres=# explain (analyze,buffers) SELECT b.* FROM b WHERE EXISTS (SELECT 1 FROM a WHERE b.a_id = a.id AND a.nonce = 64) ORDER BY b.id ASC;                                                            QUERY PLAN                                                            ---------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=16639.19..16855.62 rows=86572 width=16) (actual time=145.117..173.298 rows=83427 loops=1)   Sort Key: b.id   Sort Method: external merge  Disk: 2112kB   Buffers: shared hit=86193 read=881, temp read=269 written=269   I/O Timings: read=3.199   ->  Merge Semi Join  (cost=795.82..8059.09 rows=86572 width=16) (actual time=6.680..91.862 rows=83427 loops=1)         Merge Cond: (b.a_id = a.id)         Buffers: shared hit=86193 read=881         I/O Timings: read=3.199         ->  Index Scan using a_idx on b  (cost=0.00..3036.70 rows=86572 width=16) (actual time=0.005..25.193 rows=83427 loops=1)               Buffers: shared hit=1064 read=374               I/O Timings: read=1.549         ->  Index Scan using a_pkey on a  (cost=0.00..26259.85 rows=891 width=8) (actual time=6.663..35.177 rows=237 loops=1)               Filter: (nonce = 64)               Rows Removed by Filter: 87939               Buffers: shared hit=85129 read=507               I/O Timings: read=1.650 Total runtime: 186.825 ms... so, yes, it does indeed get a lot faster \r\nIf this performs much better then the problem is due to the merge join\r\nmark/restore causing the join to have to transition through many\r\ntuples which don't match the a.nonce = ? predicate. The mark and\r\nrestore is not required for the rewritten query, as this use a semi\r\njoin rather than a regular inner join. 
With the semi join the executor\r\nknows that it's only meant to be matching a single tuple in \"a\", so\r\nonce the first match is found it can move to the next row in the outer\r\nrelation without having to restore the scan back to where it started\r\nmatching that inner row again.\n\r\nIf I'm right, to get around the problem you could; create index on a\r\n(nonce, id);\npostgres=# CREATE INDEX a_id_nonce_idx ON a (nonce, id);CREATE INDEXpostgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = 64 ORDER BY b.id ASC;                                                                 QUERY PLAN                                                                  --------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=3376.57..3376.92 rows=140 width=16) (actual time=144.340..160.875 rows=83427 loops=1)   Sort Key: b.id   Sort Method: external merge  Disk: 2120kB   Buffers: shared hit=368 read=522, temp read=266 written=266   ->  Merge Join  (cost=69.50..3371.58 rows=140 width=16) (actual time=0.056..88.409 rows=83427 loops=1)         Merge Cond: (b.a_id = a.id)         Buffers: shared hit=368 read=522         ->  Index Scan using a_idx on b  (cost=0.29..2724.58 rows=83427 width=16) (actual time=0.009..23.834 rows=83427 loops=1)               Buffers: shared hit=171 read=518         ->  Index Only Scan using a_id_nonce_idx on a  (cost=0.42..2450.58 rows=820 width=8) (actual time=0.041..20.776 rows=83658 loops=1)               Index Cond: (nonce = 64)               Heap Fetches: 83658               Buffers: shared hit=197 read=4 Planning time: 0.241 ms Execution time: 172.346 msLooks pretty fast to me. That being said, the number of rows returned by the Index Only Scan seems a bit high, as compared to the results below, so I added your patch below and got [2]. \r\nIf such an index is out of the question then a patch has been\r\nsubmitted for review which should fix this problem in (hopefully)\r\neither 9.6 or 9.7\nhttps://commitfest.postgresql.org/9/129/\r\nIf you have a test environment handy, it would be nice if you could\r\ntest the patch on the current git head to see if this fixes your\r\nproblem. The findings would be quite interesting for me. 
Please note\r\nthis patch is for test environments only at this stage, not for\r\nproduction use.Can confirm that your patch there seems to improve performance.HEAD [1], with patch:postgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = 64 ORDER BY b.id ASC;                                                            QUERY PLAN                                                            ---------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=7320.39..7320.75 rows=144 width=16) (actual time=162.199..181.497 rows=83427 loops=1)   Sort Key: b.id   Sort Method: external merge  Disk: 2120kB   Buffers: shared hit=85780 read=472, temp read=266 written=266   ->  Merge Join  (cost=808.17..7315.23 rows=144 width=16) (actual time=9.719..101.864 rows=83427 loops=1)         Merge Cond: (b.a_id = a.id)         Buffers: shared hit=85780 read=472         ->  Index Scan using a_idx on b  (cost=0.29..2721.49 rows=83427 width=16) (actual time=0.059..26.346 rows=83427 loops=1)               Buffers: shared hit=460 read=229         ->  Index Scan using a_pkey on a  (cost=0.42..24394.88 rows=846 width=8) (actual time=9.651..41.231 rows=237 loops=1)               Filter: (nonce = 64)               Rows Removed by Filter: 87939               Buffers: shared hit=85320 read=243 Planning time: 0.289 ms Execution time: 195.071 msHEAD [1], without patch:postgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = 64 ORDER BY b.id ASC;                                                             QUERY PLAN                                                              ------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=7285.48..7285.83 rows=140 width=16) (actual time=710073.059..710089.924 rows=83427 loops=1)   Sort Key: b.id   Sort Method: external merge  Disk: 2120kB   Buffers: shared hit=2149282200 read=243, temp read=266 written=266   ->  Merge Join  (cost=685.28..7280.49 rows=140 width=16) (actual time=10.022..709878.946 rows=83427 loops=1)         Merge Cond: (b.a_id = a.id)         Buffers: shared hit=2149282200 read=243         ->  Index Scan using a_idx on b  (cost=0.29..2724.58 rows=83427 width=16) (actual time=0.013..59.103 rows=83427 loops=1)               Buffers: shared hit=689         ->  Index Scan using a_pkey on a  (cost=0.42..24404.69 rows=820 width=8) (actual time=9.998..709595.905 rows=83658 loops=1)               Filter: (nonce = 64)               Rows Removed by Filter: 2201063696               Buffers: shared hit=2149281511 read=243 Planning time: 0.297 ms Execution time: 710101.931 msThank you for your response! 
This allows us to understand the situations in which we can run into trouble, and what we can do in most, if not all cases, to resolve.Regards,James[1] HEAD is \"a892234 Change the format of the VM fork to add a second bit per page.\" as found on the Github mirror of postgres[2]postgres=# explain (analyze,buffers) SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = 64 ORDER BY b.id ASC;                                                                QUERY PLAN                                                                 ------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=11000.03..11001.07 rows=417 width=16) (actual time=109.739..129.031 rows=83427 loops=1)   Sort Key: b.id   Sort Method: external merge  Disk: 2120kB   Buffers: shared hit=886, temp read=266 written=266   ->  Merge Join  (cost=0.71..10981.88 rows=417 width=16) (actual time=0.050..55.701 rows=83427 loops=1)         Merge Cond: (b.a_id = a.id)         Buffers: shared hit=886        \r\n ->  Index Scan using a_idx on b  (cost=0.29..3975.70 rows=83427 \r\nwidth=16) (actual time=0.017..23.647 rows=83427 loops=1)               Buffers: shared hit=689        \r\n ->  Index Only Scan using a_id_nonce_idx on a  (cost=0.42..6787.31 \r\nrows=2451 width=8) (actual time=0.028..0.172 rows=237 loops=1)               Index Cond: (nonce = 64)               Heap Fetches: 237               Buffers: shared hit=197 Planning time: 0.153 ms Execution time: 142.994 ms", "msg_date": "Tue, 1 Mar 2016 23:36:20 -0800", "msg_from": "James Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Merge joins on index scans" }, { "msg_contents": "James Parks <[email protected]> writes:\n> On Mon, Feb 29, 2016 at 5:22 PM, Tom Lane <[email protected]> wrote:\n>> The other explain shows a scan of \"a\" reading about 490k rows and\n>> returning 395 of them, so there's a factor of about 200 re-read here.\n>> I wonder if the planner should have inserted a materialize node to\n>> reduce that.\n>> \n>> However, I think the real problem is upstream of that: if that indexscan\n>> was estimated at 26163.20 units, how'd the mergejoin above it get costed\n>> at only 7850.13 units? The answer has to be that the planner thought the\n>> merge would stop before reading most of \"a\", as a result of limited range\n>> of b.a_id. It would be interesting to look into what the actual maximum\n>> b.a_id value is.\n\n> I've attached a pg_dump of a database that contains all of the data, in the\n> event you (or others) would like to look at it. The attachment is ~1.8MB\n> (gzipped), and you can replay the pg_dump file on a database that has just\n> been created with initdb.\n\nThanks for sending the test data --- I got a chance to look at this\nfinally. 
It looks like my first suspicion was right and the second one\nwrong: the planner is badly underestimating the amount of rescan required\nand thus not putting in a Materialize buffer node where needed.\n\nIf I force a Materialize node to be put in, without changing any cost\nestimates (basically, set path->materialize_inner = 1 at the end of\nfinal_cost_mergejoin), then I get this:\n\n Sort (cost=7343.28..7343.62 rows=137 width=16) (actual time=218.342..232.817 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2120kB\n -> Merge Join (cost=696.93..7338.41 rows=137 width=16) (actual time=18.627..136.549 rows=83427 loops=1)\n Merge Cond: (b.a_id = a.id)\n -> Index Scan using a_idx on b (cost=0.29..2708.59 rows=83427 width=16) (actual time=0.048..28.876 rows=83427 loops=1)\n -> Materialize (cost=0.42..24368.01 rows=805 width=8) (actual time=18.568..74.447 rows=83658 loops=1)\n -> Index Scan using a_pkey on a (cost=0.42..24366.00 rows=805 width=8) (actual time=18.560..66.186 rows=238 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 87955\n Execution time: 239.195 ms\n\nwhich is a pretty satisfactory result in terms of runtime, though\nstill a bit slower than the hash alternative.\n\nNow, there are 490166 \"a\" rows, but we can see that the inner indexscan\nstopped after reading 87955+238 = 88193 of them. So there really is an\nearly-stop effect and the planner seems to have gotten that about right.\nThe problem is the rescan effect, which we can now quantify at 83658/238\nor about 350:1. Despite which, stepping through final_cost_mergejoin\nfinds that it estimates *zero* rescan effect. If I force the\nrescannedtuples estimate to be the correct value of 83658 - 238 = 83420,\nit properly decides that a materialize node ought to be put in; but that\nalso increases its overall cost estimate for this mergejoin to 7600, which\ncauses it to prefer doing the join in the other direction:\n\n Sort (cost=7343.28..7343.62 rows=137 width=16) (actual time=169.579..184.184 rows=83427 loops=1)\n Sort Key: b.id\n Sort Method: external merge Disk: 2120kB\n -> Merge Join (cost=696.93..7338.41 rows=137 width=16) (actual time=16.460..108.562 rows=83427 loops=1)\n Merge Cond: (a.id = b.a_id)\n -> Index Scan using a_pkey on a (cost=0.42..24366.00 rows=805 width=8) (actual time=16.442..63.197 rows=238 loops=1)\n Filter: (nonce = 64)\n Rows Removed by Filter: 87955\n -> Index Scan using a_idx on b (cost=0.29..2708.59 rows=83427 width=16) (actual time=0.013..26.811 rows=83427 loops=1)\n Execution time: 190.441 ms\n\nwhich is an even more satisfactory outcome; the cheaper merge direction\nis properly estimated as cheaper, and this is actually a tad faster than\nthe hash join for me.\n\nSo the entire problem here is a bogus rescannedtuples estimate. The code\ncomment about that estimate is\n\n * When there are equal merge keys in the outer relation, the mergejoin\n * must rescan any matching tuples in the inner relation. This means\n * re-fetching inner tuples; we have to estimate how often that happens.\n *\n * For regular inner and outer joins, the number of re-fetches can be\n * estimated approximately as size of merge join output minus size of\n * inner relation. Assume that the distinct key values are 1, 2, ..., and\n * denote the number of values of each key in the outer relation as m1,\n * m2, ...; in the inner relation, n1, n2, ... Then we have\n *\n * size of join = m1 * n1 + m2 * n2 + ...\n *\n * number of rescanned tuples = (m1 - 1) * n1 + (m2 - 1) * n2 + ... 
= m1 *\n * n1 + m2 * n2 + ... - (n1 + n2 + ...) = size of join - size of inner\n * relation\n *\n * This equation works correctly for outer tuples having no inner match\n * (nk = 0), but not for inner tuples having no outer match (mk = 0); we\n * are effectively subtracting those from the number of rescanned tuples,\n * when we should not. Can we do better without expensive selectivity\n * computations?\n\nSome investigation of the actual keys values says that indeed there are a\nwhole lot of a.id values that have no match in b.a_id, so I think the\ncomment at the end is telling us what the problem is. Is there a better\nway to make this estimate?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 13 Mar 2016 20:57:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Merge joins on index scans" } ]
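A minimal way to reproduce the plan comparison made in this thread is to disable the competing join strategy for a single transaction. The a and b tables and the nonce = 64 predicate are the ones defined earlier in the thread; the enable_* settings are the stock planner toggles (the posters did not show how they forced each plan), so treat this as a test-session sketch rather than something to leave set in production:

-- steer the planner toward the merge join for one timing run
BEGIN;
SET LOCAL enable_hashjoin = off;    -- takes the hash join out of consideration, for this transaction only
EXPLAIN (ANALYZE, BUFFERS)
SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = 64 ORDER BY b.id ASC;
ROLLBACK;

-- steer the planner toward the hash join for the comparison run
BEGIN;
SET LOCAL enable_mergejoin = off;   -- now the planner falls back to the hash join
EXPLAIN (ANALYZE, BUFFERS)
SELECT b.* FROM b JOIN a ON b.a_id = a.id WHERE a.nonce = 64 ORDER BY b.id ASC;
ROLLBACK;

For the actual fix, the two remedies that worked in the thread remain the EXISTS (semi join) rewrite and the index on a (nonce, id); the toggles above are only for reproducing the timings.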
[ { "msg_contents": "Hi.\n\nI've noticed that autovac. process worked more than 10 minutes, during this\nzabbix logged more than 90% IO disk utilization on db volume....\n\n===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG:\nautomatic vacuum of table \"lb_upr.public._reference32\": index scans: 1\n\tpages: 0 removed, 263307 remain\n\ttuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n\tbuffer usage: 67814 hits, 265465 misses, 15647 dirtied\n\tavg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n\t*system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec*\n\nIs it possible to log autovac. io impact during it execution?\nIs there any way to limit or \"nice\" autovac. process?\n\nThanks to all for any help.\n\nHi. I've noticed that autovac. process worked more than 10 minutes, during this zabbix logged more than 90% IO disk utilization on db volume....===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic vacuum of table \"lb_upr.public._reference32\": index scans: 1\n\tpages: 0 removed, 263307 remain\n\ttuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n\tbuffer usage: 67814 hits, 265465 misses, 15647 dirtied\n\tavg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n\tsystem usage: CPU 5.34s/6.27u sec elapsed 651.57 secIs it possible to log autovac. io impact during it execution?Is there any way to limit or \"nice\" autovac. process?Thanks to all for any help.", "msg_date": "Wed, 2 Mar 2016 17:25:10 +0200", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum disk IO" }, { "msg_contents": "Hi\n\n2016-03-02 16:25 GMT+01:00 Artem Tomyuk <[email protected]>:\n\n> Hi.\n>\n> I've noticed that autovac. process worked more than 10 minutes, during\n> this zabbix logged more than 90% IO disk utilization on db volume....\n>\n> ===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic vacuum of table \"lb_upr.public._reference32\": index scans: 1\n> \tpages: 0 removed, 263307 remain\n> \ttuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n> \tbuffer usage: 67814 hits, 265465 misses, 15647 dirtied\n> \tavg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n> \t*system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec*\n>\n> Is it possible to log autovac. io impact during it execution?\n> Is there any way to limit or \"nice\" autovac. process?\n>\n> Thanks to all for any help.\n>\n>\nmaybe offtopic - there is known problem of Zabbix. Any limits for vacuum\nare usually way to hell.\n\nBut more times the partitioning helps to Zabbix\n\nhttps://www.zabbix.org/wiki/Higher_performant_partitioning_in_PostgreSQL\n\nRegards\n\nPavel\n\nHi2016-03-02 16:25 GMT+01:00 Artem Tomyuk <[email protected]>:Hi. I've noticed that autovac. process worked more than 10 minutes, during this zabbix logged more than 90% IO disk utilization on db volume....===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic vacuum of table \"lb_upr.public._reference32\": index scans: 1\n\tpages: 0 removed, 263307 remain\n\ttuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n\tbuffer usage: 67814 hits, 265465 misses, 15647 dirtied\n\tavg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n\tsystem usage: CPU 5.34s/6.27u sec elapsed 651.57 secIs it possible to log autovac. io impact during it execution?Is there any way to limit or \"nice\" autovac. process?Thanks to all for any help. \nmaybe offtopic - there is known problem of Zabbix. 
Any limits for vacuum are usually way to hell.But more times the partitioning helps to Zabbixhttps://www.zabbix.org/wiki/Higher_performant_partitioning_in_PostgreSQLRegardsPavel", "msg_date": "Wed, 2 Mar 2016 16:31:15 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] autovacuum disk IO" }, { "msg_contents": "On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk <[email protected]> wrote:\n> Hi.\n>\n> I've noticed that autovac. process worked more than 10 minutes, during this\n> zabbix logged more than 90% IO disk utilization on db volume....\n>\n> ===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic\n> vacuum of table \"lb_upr.public._reference32\": index scans: 1\n> pages: 0 removed, 263307 remain\n> tuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n> buffer usage: 67814 hits, 265465 misses, 15647 dirtied\n> avg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n> system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec\n>\n> Is it possible to log autovac. io impact during it execution?\n> Is there any way to limit or \"nice\" autovac. process?\n\nI'll assume you're running a fairly recent version of postgresql.\n\nThere are a few settings that adjust how hard autovacuum works when\nit's working.\n\nautovacuum_max_workers tells autovacuum how many threads to vacuum\nwith. Lowering this will limit the impact of autovacuum, but generally\nthe default setting of 3 is reasonable on most machines.\n\nautovacuum_vacuum_cost_delay sets how to wail between internal rounds.\nRaising this makes autovacuum take bigger pauses internally. The\ndefault of 20ms is usually large enough to keep you out of trouble,\nbut feel free to raise it and see if your IO utilization lowers.\n\nautovacuum_vacuum_cost_limit sets a limit to how much work to do\nbetween the pauses set by the cost delay above. Lowering this will\ncause autovac to do less work between pauses.\n\nMost of the time I'm adjusting these I'm making vacuum more\naggressive, not less aggressive because vacuum falling behind is a\nproblem on the large, fast production systems I work on. In your case\nyou want to watch for when autovacuum IS running, and using a tool\nlike vmstat or iostat or iotop, watch it for % utilization. You can\nthen adjust cost delay and cost limit to make it less aggressive and\nsee if your io util goes down.\n\nNote though that 90% utilization isn't 100% so it's not likely\nflooding the IO. But if you say raise cost delay from 20 to 40ms, it\nmight drop to 75% or so. The primary goal here is to arrive at numbers\nthat left autovacuum keep up with reclaiming the discarded tuples in\nthe database without getting in the way of the workload.\n\nIf your workload isn't slowing down, or isn't slowing down very much,\nduring autobvacuum then you're OK.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Mar 2016 08:45:31 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum disk IO" }, { "msg_contents": "On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk <[email protected]> wrote:\n>> Hi.\n>>\n>> I've noticed that autovac. 
process worked more than 10 minutes, during this\n>> zabbix logged more than 90% IO disk utilization on db volume....\n>>\n>> ===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic\n>> vacuum of table \"lb_upr.public._reference32\": index scans: 1\n>> pages: 0 removed, 263307 remain\n>> tuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n>> buffer usage: 67814 hits, 265465 misses, 15647 dirtied\n>> avg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n>> system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec\n>>\n>> Is it possible to log autovac. io impact during it execution?\n>> Is there any way to limit or \"nice\" autovac. process?\n>\n> I'll assume you're running a fairly recent version of postgresql.\n>\n> There are a few settings that adjust how hard autovacuum works when\n> it's working.\n>\n> autovacuum_max_workers tells autovacuum how many threads to vacuum\n> with. Lowering this will limit the impact of autovacuum, but generally\n> the default setting of 3 is reasonable on most machines.\n>\n> autovacuum_vacuum_cost_delay sets how to wail between internal rounds.\n> Raising this makes autovacuum take bigger pauses internally. The\n> default of 20ms is usually large enough to keep you out of trouble,\n> but feel free to raise it and see if your IO utilization lowers.\n>\n> autovacuum_vacuum_cost_limit sets a limit to how much work to do\n> between the pauses set by the cost delay above. Lowering this will\n> cause autovac to do less work between pauses.\n>\n> Most of the time I'm adjusting these I'm making vacuum more\n> aggressive, not less aggressive because vacuum falling behind is a\n> problem on the large, fast production systems I work on. In your case\n> you want to watch for when autovacuum IS running, and using a tool\n> like vmstat or iostat or iotop, watch it for % utilization. You can\n> then adjust cost delay and cost limit to make it less aggressive and\n> see if your io util goes down.\n>\n> Note though that 90% utilization isn't 100% so it's not likely\n> flooding the IO. But if you say raise cost delay from 20 to 40ms, it\n> might drop to 75% or so. The primary goal here is to arrive at numbers\n> that left autovacuum keep up with reclaiming the discarded tuples in\n> the database without getting in the way of the workload.\n>\n> If your workload isn't slowing down, or isn't slowing down very much,\n> during autobvacuum then you're OK.\n\nJust to add a point here. If you're machine can't keep up with\nproduction load AND the job of vacuuming, then your IO subsystem is\ntoo slow and needs upgrading. The difference between a pair of\nspinning 7200RPM drives and a pair of enterprise class SSDs (always\nwith power off safe writing etc, consumer SSDs can eat your data on\npower off) can be truly huge. I've seen improvements from a few\nhundred transactions per second to thousands of transactions per\nsecond by a simple upgrade like that.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Mar 2016 08:49:00 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum disk IO" }, { "msg_contents": "Il 02/03/2016 16:49, Scott Marlowe ha scritto:\n> On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk <[email protected]> wrote:\n>>> Hi.\n>>>\n>>> I've noticed that autovac. 
process worked more than 10 minutes, during this\n>>> zabbix logged more than 90% IO disk utilization on db volume....\n>>>\n>>> ===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic\n>>> vacuum of table \"lb_upr.public._reference32\": index scans: 1\n>>> pages: 0 removed, 263307 remain\n>>> tuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n>>> buffer usage: 67814 hits, 265465 misses, 15647 dirtied\n>>> avg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n>>> system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec\n>>>\n>>> Is it possible to log autovac. io impact during it execution?\n>>> Is there any way to limit or \"nice\" autovac. process?\n>> I'll assume you're running a fairly recent version of postgresql.\n>>\n>> There are a few settings that adjust how hard autovacuum works when\n>> it's working.\n>>\n>> autovacuum_max_workers tells autovacuum how many threads to vacuum\n>> with. Lowering this will limit the impact of autovacuum, but generally\n>> the default setting of 3 is reasonable on most machines.\n>>\n>> autovacuum_vacuum_cost_delay sets how to wail between internal rounds.\n>> Raising this makes autovacuum take bigger pauses internally. The\n>> default of 20ms is usually large enough to keep you out of trouble,\n>> but feel free to raise it and see if your IO utilization lowers.\n>>\n>> autovacuum_vacuum_cost_limit sets a limit to how much work to do\n>> between the pauses set by the cost delay above. Lowering this will\n>> cause autovac to do less work between pauses.\n>>\n>> Most of the time I'm adjusting these I'm making vacuum more\n>> aggressive, not less aggressive because vacuum falling behind is a\n>> problem on the large, fast production systems I work on. In your case\n>> you want to watch for when autovacuum IS running, and using a tool\n>> like vmstat or iostat or iotop, watch it for % utilization. You can\n>> then adjust cost delay and cost limit to make it less aggressive and\n>> see if your io util goes down.\n>>\n>> Note though that 90% utilization isn't 100% so it's not likely\n>> flooding the IO. But if you say raise cost delay from 20 to 40ms, it\n>> might drop to 75% or so. The primary goal here is to arrive at numbers\n>> that left autovacuum keep up with reclaiming the discarded tuples in\n>> the database without getting in the way of the workload.\n>>\n>> If your workload isn't slowing down, or isn't slowing down very much,\n>> during autobvacuum then you're OK.\n> Just to add a point here. If you're machine can't keep up with\n> production load AND the job of vacuuming, then your IO subsystem is\n> too slow and needs upgrading. The difference between a pair of\n> spinning 7200RPM drives and a pair of enterprise class SSDs (always\n> with power off safe writing etc, consumer SSDs can eat your data on\n> power off) can be truly huge. I've seen improvements from a few\n> hundred transactions per second to thousands of transactions per\n> second by a simple upgrade like that.\n>\n>\n... or maybe add some more RAM to have more disk caching (if you're on \n*nix).... this worked for me in the past... 
even if IMHO it's more a \ntemporary \"patch\" while upgrading (if it can't be done in a hurry) than \na real solution...\n\nCheers\nMoreno.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Mar 2016 17:11:12 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM] Re: autovacuum disk IO" }, { "msg_contents": "On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo <[email protected]> wrote:\n> Il 02/03/2016 16:49, Scott Marlowe ha scritto:\n>>\n>> On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe <[email protected]>\n>> wrote:\n>>>\n>>> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk <[email protected]>\n>>> wrote:\n>>>>\n>>>> Hi.\n>>>>\n>>>> I've noticed that autovac. process worked more than 10 minutes, during\n>>>> this\n>>>> zabbix logged more than 90% IO disk utilization on db volume....\n>>>>\n>>>> ===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG: automatic\n>>>> vacuum of table \"lb_upr.public._reference32\": index scans: 1\n>>>> pages: 0 removed, 263307 remain\n>>>> tuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n>>>> buffer usage: 67814 hits, 265465 misses, 15647 dirtied\n>>>> avg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n>>>> system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec\n>>>>\n>>>> Is it possible to log autovac. io impact during it execution?\n>>>> Is there any way to limit or \"nice\" autovac. process?\n>>>\n>>> I'll assume you're running a fairly recent version of postgresql.\n>>>\n>>> There are a few settings that adjust how hard autovacuum works when\n>>> it's working.\n>>>\n>>> autovacuum_max_workers tells autovacuum how many threads to vacuum\n>>> with. Lowering this will limit the impact of autovacuum, but generally\n>>> the default setting of 3 is reasonable on most machines.\n>>>\n>>> autovacuum_vacuum_cost_delay sets how to wail between internal rounds.\n>>> Raising this makes autovacuum take bigger pauses internally. The\n>>> default of 20ms is usually large enough to keep you out of trouble,\n>>> but feel free to raise it and see if your IO utilization lowers.\n>>>\n>>> autovacuum_vacuum_cost_limit sets a limit to how much work to do\n>>> between the pauses set by the cost delay above. Lowering this will\n>>> cause autovac to do less work between pauses.\n>>>\n>>> Most of the time I'm adjusting these I'm making vacuum more\n>>> aggressive, not less aggressive because vacuum falling behind is a\n>>> problem on the large, fast production systems I work on. In your case\n>>> you want to watch for when autovacuum IS running, and using a tool\n>>> like vmstat or iostat or iotop, watch it for % utilization. You can\n>>> then adjust cost delay and cost limit to make it less aggressive and\n>>> see if your io util goes down.\n>>>\n>>> Note though that 90% utilization isn't 100% so it's not likely\n>>> flooding the IO. But if you say raise cost delay from 20 to 40ms, it\n>>> might drop to 75% or so. The primary goal here is to arrive at numbers\n>>> that left autovacuum keep up with reclaiming the discarded tuples in\n>>> the database without getting in the way of the workload.\n>>>\n>>> If your workload isn't slowing down, or isn't slowing down very much,\n>>> during autobvacuum then you're OK.\n>>\n>> Just to add a point here. 
If you're machine can't keep up with\n>> production load AND the job of vacuuming, then your IO subsystem is\n>> too slow and needs upgrading. The difference between a pair of\n>> spinning 7200RPM drives and a pair of enterprise class SSDs (always\n>> with power off safe writing etc, consumer SSDs can eat your data on\n>> power off) can be truly huge. I've seen improvements from a few\n>> hundred transactions per second to thousands of transactions per\n>> second by a simple upgrade like that.\n>>\n>>\n> ... or maybe add some more RAM to have more disk caching (if you're on\n> *nix).... this worked for me in the past... even if IMHO it's more a\n> temporary \"patch\" while upgrading (if it can't be done in a hurry) than a\n> real solution...\n\nOh yeah, definitely worth looking at. But RAM can't speed up writes,\njust reads, so it's very workload dependent. If you're IO subsystem is\nmaxing out on writes, faster drives / IO. If it's maxing out on reads,\nmore memory. But if your dataset is much bigger than memory (say 64GB\nRAM and a 1TB data store) then more RAM isn't going to be the answer.\n\nSo as usual, to help out OP we might want to know more about his\nsystem. There's a lot of helpful tips for reporting slow queries /\nperformance issues here:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Mar 2016 09:21:02 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM] Re: autovacuum disk IO" }, { "msg_contents": "Scott Marlowe wrote:\n> On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo <[email protected]> wrote:\n\n> > ... or maybe add some more RAM to have more disk caching (if you're on\n> > *nix).... this worked for me in the past... even if IMHO it's more a\n> > temporary \"patch\" while upgrading (if it can't be done in a hurry) than a\n> > real solution...\n> \n> Oh yeah, definitely worth looking at. But RAM can't speed up writes,\n> just reads, so it's very workload dependent. If you're IO subsystem is\n> maxing out on writes, faster drives / IO. If it's maxing out on reads,\n> more memory. But if your dataset is much bigger than memory (say 64GB\n> RAM and a 1TB data store) then more RAM isn't going to be the answer.\n\nIn the particular case of autovacuum, it may be helpful to create a\n\"ramdisk\" and put the stats temp file in it.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Mar 2016 15:40:26 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM] Re: autovacuum disk IO" }, { "msg_contents": "Il 02/03/2016 19:40, Alvaro Herrera ha scritto:\n> Scott Marlowe wrote:\n>> On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo <[email protected]> wrote:\n>>> ... or maybe add some more RAM to have more disk caching (if you're on\n>>> *nix).... this worked for me in the past... even if IMHO it's more a\n>>> temporary \"patch\" while upgrading (if it can't be done in a hurry) than a\n>>> real solution...\n>> Oh yeah, definitely worth looking at. But RAM can't speed up writes,\n>> just reads, so it's very workload dependent. 
If you're IO subsystem is\n>> maxing out on writes, faster drives / IO. If it's maxing out on reads,\n>> more memory. But if your dataset is much bigger than memory (say 64GB\n>> RAM and a 1TB data store) then more RAM isn't going to be the answer.\n> In the particular case of autovacuum, it may be helpful to create a\n> \"ramdisk\" and put the stats temp file in it.\n>\nDefinitely. I my new server (as I've been taught here :-) ) I'm going to \nput stats in a ramdisk and pg_xlog in another partition.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Mar 2016 19:54:07 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SPAM] Re: autovacuum disk IO" }, { "msg_contents": "\n\nLe 2 mars 2016 16:25:10 GMT+01:00, Artem Tomyuk <[email protected]> a écrit :\n>Hi.\n>\n>I've noticed that autovac. process worked more than 10 minutes, during\n>this\n>zabbix logged more than 90% IO disk utilization on db volume....\n>\n>===========>29237 2016-03-02 15:17:23 EET 00000 [24-1]LOG:\n>automatic vacuum of table \"lb_upr.public._reference32\": index scans: 1\n>\tpages: 0 removed, 263307 remain\n>\ttuples: 298 removed, 1944753 remain, 0 are dead but not yet removable\n>\tbuffer usage: 67814 hits, 265465 misses, 15647 dirtied\n>\tavg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s\n\nAccording to what I am reading here, your autovacumm avg read activity was less than 4MB/s...and 188kB/s for writes.\n\nI seriously doubt these numbers are high enough to saturate your disks bandwidth.\n\n>\t*system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec*\n\nThis tells that the autovacumm tooks 651s to complete...but only 5s and 6s of cpu in kernel and userland in total.\n\nAutovacuum are slowed down thanks to autovacumm_vacuum_cost_delay param, which explains the 10min run and such a low disk activity.\n\n>Is it possible to log autovac. io impact during it execution?\n\nI would bet the problem is elsewhere...\n\n/Ioguix\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 04 Mar 2016 18:28:02 +0100", "msg_from": "Jehan-Guillaume de Rorthais <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] autovacuum disk IO" } ]
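The knobs discussed in this thread, gathered into a single postgresql.conf sketch. The values are illustrative starting points -- the 40ms delay is the 20-to-40ms experiment suggested above, not a measured recommendation -- and autovacuum_max_workers needs a server restart while the two cost settings take effect on a reload:

# throttling autovacuum I/O (illustrative values)
autovacuum_max_workers = 3             # default; number of tables vacuumed concurrently (restart to change)
autovacuum_vacuum_cost_delay = 40ms    # default 20ms; raising it makes each worker pause longer between rounds
autovacuum_vacuum_cost_limit = -1      # -1 means use vacuum_cost_limit (200 by default); lower it for less work per round

Watching iostat/iotop or vmstat while a worker is running, as described above, is still the way to confirm that the chosen values keep disk utilization acceptable without letting vacuum fall behind.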
[ { "msg_contents": "Howdy\n\nPostgres9.2 going to 9.4\nCentOS 6.5\n\nSo in most of my environments, I use slony and thus use slony replication\nfor my upgrades (Drop/add nodes etc).\n\nBut I've got a pretty big DB just shy of a TB that is on a single node. A\ndump restore would take over 48 hours because of index creations etc, so\nthought maybe I would look at doing a upgrade via pg_upgrade.\n\nThere are some challenges, since I build my rpm's to a standard directory\nfor binaries and then the data directory. So I will have to move/rename\ndirectories, but when that's done, I'm slightly confused on the pg_upgrade\nusing link options.\n\nIf my data is located in /data\n\nand I link to a new dir in /data1, what actually happens. do I end up with\n2 file systems and links and thus am not able to delete or cleanup any old\ndata, or how does this work?\n\nAlso will the reindex creation still happen with this type of in-place\nupgrade, as if so, then it may not save too much time vs a dump/import.\n\nI'm nervous about using pg_upgrade but it's really tough to recover from\nthe jobs that backup during a dump/restore process (2-3 days), so really\ntrying to wrap my head around pg_upgrade..\n\nSuggestions, opinions on pg_upgrade vs dump/restore, the filesystem/mount\nbelow is what I'm working with.\n\nFilesystem Size Used Avail Use% Mounted on\n\n/dev/sda6 4.0T 1.1T 2.8T 29% /data\n\nThanks\nTory\n\nHowdyPostgres9.2 going to 9.4CentOS 6.5So in most of my environments, I use slony and thus use slony replication for my upgrades (Drop/add nodes etc).But I've got a pretty big DB  just shy of a TB that is on a single node. A dump restore would take over 48 hours because of index creations etc, so thought maybe I would look at doing a upgrade via pg_upgrade.There are some challenges, since I build my rpm's to a standard directory for binaries and then the data directory. So I will have to move/rename directories, but when that's done, I'm slightly confused on the pg_upgrade using link options.If my data is located in /dataand I link to a new dir in /data1,  what actually happens. do I end up with 2 file systems and links and thus am not able to delete or cleanup any old data, or how does this work?Also will the reindex creation still happen with this type of in-place upgrade, as if so, then it may not save too much time vs a dump/import.I'm nervous about using pg_upgrade but it's really tough to recover from the jobs that backup during a dump/restore process (2-3 days), so really trying to wrap my head around pg_upgrade..Suggestions, opinions on pg_upgrade vs dump/restore, the filesystem/mount below is what I'm working with.\nFilesystem                       Size  Used Avail Use% Mounted on/dev/sda6                        4.0T  1.1T  2.8T  29% /dataThanksTory", "msg_date": "Fri, 4 Mar 2016 14:27:59 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Clarification on using pg_upgrade" }, { "msg_contents": "On Fri, Mar 04, 2016 at 02:27:59PM -0800, Tory M Blue wrote:\n> If my data is located in /data\n> \n> and I link to a new dir in /data1, what actually happens. 
do I end up with\n> 2 file systems and links and thus am not able to delete or cleanup any old\n> data, or how does this work?\n> \n> Also will the reindex creation still happen with this type of in-place\n> upgrade, as if so, then it may not save too much time vs a dump/import.\n\nSince you have the space, you can do a test upgrade; make a dump of the\nessential tables (or the entire thing) and restore it to another instance,\nperhaps even something run from your /home.\n\npg_upgrade --link makes hardlinks for tables and indices (same as cp -l), so\nuses very little additional space. Note, that means that both must be within\nthe filesystem (/data). You should understand about hardinks and inodes\notherwise this will lead to confusion and mistakes.\n\nIndexes don't need to be rebuilt afterwards. I've upgraded ~35 customers to\n9.5 already, some as big as 5TB. So far the disruption has been at most 30min\n(not counting ANALYZE afterwards).\n\nWhen I use pg_upgrade, after stopping the old instance, I rename the data dir\n(under centos, /var/lib/pgsql/9.4~). Then pg_upgrade makes links in 9.5/.\nRenaming has the advantage that the old instances can't be accidentally\nstarted; and, makes it much easier to believe that it's safe to remove the 9.4~\nafterwards.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 4 Mar 2016 16:58:11 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on using pg_upgrade" }, { "msg_contents": "On 3/4/16 4:58 PM, Justin Pryzby wrote:\n> On Fri, Mar 04, 2016 at 02:27:59PM -0800, Tory M Blue wrote:\n>> >If my data is located in /data\n>> >\n>> >and I link to a new dir in /data1, what actually happens. do I end up with\n>> >2 file systems and links and thus am not able to delete or cleanup any old\n>> >data, or how does this work?\n>> >\n>> >Also will the reindex creation still happen with this type of in-place\n>> >upgrade, as if so, then it may not save too much time vs a dump/import.\n> Since you have the space, you can do a test upgrade; make a dump of the\n> essential tables (or the entire thing) and restore it to another instance,\n> perhaps even something run from your /home.\n\nSince pg_upgrade operates at a binary level, if you want to test it I'd \nrecommend using a PITR backup and not pg_dump. It's theoretically \npossible to have a database that will pg_dump correctly but that \npg_upgrade chokes on.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Mar 2016 10:46:42 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on using pg_upgrade" }, { "msg_contents": "Thanks to all that responded\n\nI successfully upgraded 800GB DB with pg_upgrade in about 2 hours.\nThis would have taken 2 days to dump/restore.\n\nSlon is also starting to not be viable as it takes some indexes over 7\nhours to complete. 
So this upgrade path seemed to really be nice.\n\nNot sure how I can incorporate with my slon cluster, I guess that will\nbe the next thing I research.\n\nAppreciate the responses and assistance.\n\nTory\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 24 Mar 2016 10:43:27 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clarification on using pg_upgrade" }, { "msg_contents": "On 3/24/16 12:43 PM, Tory M Blue wrote:\n> Slon is also starting to not be viable as it takes some indexes over 7\n> hours to complete. So this upgrade path seemed to really be nice.\n\nIf you're standing up a new replica from scratch on the latest version, \nI'm not really sure why that matters?\n\n> Not sure how I can incorporate with my slon cluster, I guess that will\n> be the next thing I research.\n\nNot sure I'm following, but you can pg_upgrade your replicas at the same \ntime as you do the master... or you can do them after the fact.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 3 Apr 2016 12:13:56 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on using pg_upgrade" }, { "msg_contents": "In line Jim\n\nOn Sun, Apr 3, 2016 at 10:13 AM, Jim Nasby <[email protected]> wrote:\n> On 3/24/16 12:43 PM, Tory M Blue wrote:\n>>\n>> Slon is also starting to not be viable as it takes some indexes over 7\n>> hours to complete. So this upgrade path seemed to really be nice.\n>\n>\n> If you're standing up a new replica from scratch on the latest version, I'm\n> not really sure why that matters?\n\nNot sure why the 7-13 hours causes an issue? Because if I'm upgrading\nvia slon process, I have to add and drop a node. If I'm dropping my\nsecondary (slave) I have to move reporting to the master, so now the\nmaster is handing normal inserts and reports. Next item, I'm without\na replica for 13+ hours, that's not good either.\n\n>> Not sure how I can incorporate with my slon cluster, I guess that will\n>> be the next thing I research.\n>\n>\n> Not sure I'm following, but you can pg_upgrade your replicas at the same\n> time as you do the master... or you can do them after the fact.\n> --\n\nI'm not sure how that statement is true. I'm fundamentally changing\nthe data in the master. My gut says you are thinking, just shut\neverything down until you have upgraded all 4-5 servers. I'm hoping\nthat's not what you are thinking here.\n\nIf I update my Master, my slave and query slaves are going to be\nwondering what the heck is going on. Now I can stop slon, upgrade and\nrestart slon (if Postgres upgrade handles the weird pointers and stuff\nthat slon does on the slave nodes (inside the slon schema), but\ndepending on how long this process takes I'm down for a period of\ntime, that's not acceptable. so I have to upgrade my standby unit,\nwhich now fundamentally is different than the master. This is what my\nstatement was referencing, with slon running, how do I use pg_upgrade\nto upgrade the cluster without downtime. 
Again slon requires a drop\nadd if I'm rebuilding via slon but as I stated that's almost\nunbearable at this juncture with how long indexes take..\n\nThanks\nTory\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Apr 2016 21:01:27 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clarification on using pg_upgrade" }, { "msg_contents": "On 4/19/16 11:01 PM, Tory M Blue wrote:\n>>> >> Slon is also starting to not be viable as it takes some indexes over 7\n>>> >> hours to complete. So this upgrade path seemed to really be nice.\n>> >\n>> >\n>> > If you're standing up a new replica from scratch on the latest version, I'm\n>> > not really sure why that matters?\n> Not sure why the 7-13 hours causes an issue? Because if I'm upgrading\n> via slon process, I have to add and drop a node. If I'm dropping my\n> secondary (slave) I have to move reporting to the master, so now the\n> master is handing normal inserts and reports. Next item, I'm without\n> a replica for 13+ hours, that's not good either.\n\nDon't drop and add a node, just do a master switchover. AFAIK that's \nnearly instant as long as things are in sync.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 14 Jun 2016 16:03:19 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on using pg_upgrade" }, { "msg_contents": "On Tue, Jun 14, 2016 at 2:03 PM, Jim Nasby <[email protected]> wrote:\n> On 4/19/16 11:01 PM, Tory M Blue wrote:\n>>>>\n>>>> >> Slon is also starting to not be viable as it takes some indexes over\n>>>> >> 7\n>>>> >> hours to complete. So this upgrade path seemed to really be nice.\n>>>\n>>> >\n>>> >\n>>> > If you're standing up a new replica from scratch on the latest version,\n>>> > I'm\n>>> > not really sure why that matters?\n>>\n>> Not sure why the 7-13 hours causes an issue? Because if I'm upgrading\n>> via slon process, I have to add and drop a node. If I'm dropping my\n>> secondary (slave) I have to move reporting to the master, so now the\n>> master is handing normal inserts and reports. Next item, I'm without\n>> a replica for 13+ hours, that's not good either.\n>\n>\n> Don't drop and add a node, just do a master switchover. AFAIK that's nearly\n> instant as long as things are in sync.\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\n> Experts in Analytics, Data Architecture and PostgreSQL\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n> 855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\nRight, that's what we do, but then to upgrade, we have to drop/add the\nnode, because it's being upgraded. If I'm updating the underlying OS,\nI have to kill it all. 
If I'm doing a postgres upgrade, using an old\nversion of slon, without using pg_upgrade, I have to drop the db,\nrecreate it, which requires a drop/add.\n\nI'm trying to figure out how to best do it using pg_upgrade instead\nof the entire drop/add for postgres upgrades (which are needed if you\nare using slon as an upgrade engine for your db).\n\nTory\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 14 Jun 2016 14:08:15 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clarification on using pg_upgrade" }, { "msg_contents": "----- Original Message -----\n\n> From: Tory M Blue <[email protected]>\n> To: Jim Nasby <[email protected]>\n> Cc: \"[email protected]\" <[email protected]>\n> Sent: Tuesday, 14 June 2016, 22:08\n\n> Subject: Re: [PERFORM] Clarification on using pg_upgrade\n>\n> Right, that's what we do, but then to upgrade, we have to drop/add the\n> node, because it's being upgraded. If I'm updating the underlying OS,\n> I have to kill it all. If I'm doing a postgres upgrade, using an old\n> version of slon, without using pg_upgrade, I have to drop the db,\n> recreate it, which requires a drop/add.\n> \n> I'm trying to figure out how to best do it using pg_upgrade instead\n> of the entire drop/add for postgres upgrades (which are needed if you\n> are using slon as an upgrade engine for your db).\n> \n\n\nI've just skimmed through this thread, but I can't quite gather what it is you're trying to achieve. Are you looking to move away from Slony? Upgrade by any means with or without Slony? Or just find a \"fast\" way of doing a major upgrade whilst keeping Slony in-place as your replication method?\n\nIf it's the latter, the easiest way is to have 2 or more subscribers subscribed to the same sets and one at a time; drop a subscriber node, upgrade and re-initdb, then use clone node to recreate it from another subscriber. If you're intent on using pg_upgrade you might be able to fudge it as long as you can bump up current txid to be greater than what it was before the upgrade; in fact I've done similar before with a slony subscriber, but only as a test on a small database.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 10:14:27 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on using pg_upgrade" } ]
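Following up on the switchover advice in the thread above: before doing a Slony MOVE SET style switchover it is worth confirming that the subscriber really is in sync. A minimal sketch, assuming a Slony-I cluster named "mycluster" (the cluster schema is normally the cluster name prefixed with an underscore, and the column names are taken from the sl_status view; adjust both to your own setup):

    -- run on the origin node; lag should be at or near zero before switching over
    SELECT st_origin,
           st_received,
           st_lag_num_events,
           st_lag_time
    FROM   _mycluster.sl_status
    ORDER  BY st_lag_time DESC;

If the lag stays at zero, the switchover itself is quick, which is what makes the "upgrade one node, then switch over" route attractive compared to a full drop/add with its multi-hour index rebuilds.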
[ { "msg_contents": "No hits on the intratubes on this.\n\nAny idea ? We are doing some massive deletes so was curious as to what\nwould cause this error. The DB is up, not overburdened, just some big\ndeletes and slon replication processes.\n\nCentOS 6.6\nPostgres 9.4.5\n\n\nFirst time I've ever seen this alert/error just curious about it.\n\n2016-03-08 16:17:29 PST 11877 2016-03-08 16:17:29.850 PSTLOG: using\nstale statistics instead of current ones because stats collector is not\nresponding\n\n\nThanks\n\nTory\n\nNo hits on the intratubes on this.Any idea ? We are doing some massive deletes so was curious as to what would cause this error. The DB is up, not overburdened, just some big deletes and slon replication processes.CentOS 6.6 Postgres 9.4.5First time I've ever seen this alert/error just curious about it.\n2016-03-08 16:17:29 PST    11877 2016-03-08 16:17:29.850 PSTLOG:  using stale statistics instead of current ones because stats collector is not respondingThanksTory", "msg_date": "Tue, 8 Mar 2016 16:18:19 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "using stale statistics instead of current ones because stats\n collector is not responding" }, { "msg_contents": "Hi,\n\nOn Tue, 2016-03-08 at 16:18 -0800, Tory M Blue wrote:\n> No hits on the intratubes on this.\n> \n> \n> Any idea ? We are doing some massive deletes so was curious as to what\n> would cause this error. The DB is up, not overburdened, just some big\n> deletes and slon replication processes.\n>\n> CentOS 6.6 \n> Postgres 9.4.5\n> \n> First time I've ever seen this alert/error just curious about it.\n>\n> 2016-03-08 16:17:29 PST 11877 2016-03-08 16:17:29.850 PSTLOG:\n> using stale statistics instead of current ones because stats collector\n> is not responding\n\nPostgreSQL tracks 'runtime statistics' (number of scans of a table,\ntuples fetched from index/table etc.) in a file, maintained by a\nseparate process (collector). When a backed process requests some of the\nstats (e.g. when a monitoring tool selects from pg_stat_all_tables) it\nrequests a recent snapshot of the file from the collector.\n\nThe log message you see means that the collector did not handle such\nrequests fast enough, and the backend decided to read an older snapshot\ninstead. So you may see some stale data in monitoring for example.\n\nThis may easily happen if the I/O system is overloaded, for example. The\nsimplest solution is to move the statistics file to RAM disk (e.g. tmpfs\nmount on Linux) using stats_temp_directory in postgresql.conf.\n\nThe space neede depends on the number of objects (databases, tables,\nindexes), and usually it's a megabyte in total or so.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 09 Mar 2016 12:37:47 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using stale statistics instead of current ones\n because stats collector is not responding" } ]
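A minimal sketch of the stats_temp_directory change suggested above, assuming a tmpfs mount already exists at /run/postgresql_stats_tmp and is writable by the postgres user (the path is only an example):

    ALTER SYSTEM SET stats_temp_directory = '/run/postgresql_stats_tmp';
    SELECT pg_reload_conf();  -- the setting takes effect on reload, no restart needed

The same can be done by editing postgresql.conf directly; either way the statistics files stop competing for I/O with the data directory.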
[ { "msg_contents": "Hi all.\n\nIs there any way of how to retrieve information from pg_stat_activity (its\nnot very comfort to get it from iotop, because its not showing full text of\nquery) which query generates or consumes the most IO load or time.\n\nThanks for any advice.\n\nHi all.Is there any way of how to retrieve information from pg_stat_activity (its not very comfort to get it from iotop, because its not showing full text of query) which query generates or consumes the most IO load or time. Thanks for any advice.", "msg_date": "Sun, 13 Mar 2016 19:39:38 +0200", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": true, "msg_subject": "DIsk I/O from pg_stat_activity" }, { "msg_contents": "> 13 марта 2016 г., в 20:39, Artem Tomyuk <[email protected]> написал(а):\n> \n> Hi all.\n> \n> Is there any way of how to retrieve information from pg_stat_activity (its not very comfort to get it from iotop, because its not showing full text of query) which query generates or consumes the most IO load or time. \n\nProbably this can be done with pg_stat_kcache. Installing it with pg_stat_statements and querying it something like below will give stats per query:\n\nrpopdb01d/postgres R # SELECT rolname, queryid, round(total_time::numeric, 2) AS total_time, calls,\n pg_size_pretty(shared_blks_hit*8192) AS shared_hit,\n pg_size_pretty(int8larger(0, (shared_blks_read*8192 - reads))) AS page_cache_hit,\n pg_size_pretty(reads) AS physical_read,\n round(blk_read_time::numeric, 2) AS blk_read_time,\n round(user_time::numeric, 2) AS user_time,\n round(system_time::numeric, 2) AS system_time\nFROM pg_stat_statements s\n JOIN pg_stat_kcache() k USING (userid, dbid, queryid)\n JOIN pg_database d ON s.dbid = d.oid\n JOIN pg_roles r ON r.oid = userid\nWHERE datname != 'postgres' AND datname NOT LIKE 'template%'\nORDER BY reads DESC LIMIT 1;\n rolname | queryid | total_time | calls | shared_hit | page_cache_hit | physical_read | blk_read_time | user_time | system_time\n---------+------------+--------------+----------+------------+----------------+---------------+---------------+-----------+-------------\n rpop | 3183006759 | 309049021.97 | 38098195 | 276 TB | 27 TB | 22 TB | 75485646.81 | 269508.98 | 35635.96\n(1 row)\n\nTime: 18.605 ms\nrpopdb01d/postgres R #\n\nQuery text may be resolved by queryid something like SELECT query FROM pg_stat_statements WHERE queryid = 3183006759.\n\nWorks only with 9.4+ and gives you statistics per query for all the time, not the current state.\n\n> \n> Thanks for any advice.\n\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n13 марта 2016 г., в 20:39, Artem Tomyuk <[email protected]> написал(а):Hi all.Is there any way of how to retrieve information from pg_stat_activity (its not very comfort to get it from iotop, because its not showing full text of query) which query generates or consumes the most IO load or time. Probably this can be done with pg_stat_kcache. 
Installing it with pg_stat_statements and querying it something like below will give stats per query:rpopdb01d/postgres R # SELECT rolname, queryid, round(total_time::numeric, 2) AS total_time, calls,    pg_size_pretty(shared_blks_hit*8192) AS shared_hit,    pg_size_pretty(int8larger(0, (shared_blks_read*8192 - reads))) AS page_cache_hit,    pg_size_pretty(reads) AS physical_read,    round(blk_read_time::numeric, 2) AS blk_read_time,    round(user_time::numeric, 2) AS user_time,    round(system_time::numeric, 2) AS system_timeFROM pg_stat_statements s    JOIN pg_stat_kcache() k USING (userid, dbid, queryid)    JOIN pg_database d ON s.dbid = d.oid    JOIN pg_roles r ON r.oid = useridWHERE datname != 'postgres' AND datname NOT LIKE 'template%'ORDER BY reads DESC LIMIT 1; rolname |  queryid   |  total_time  |  calls   | shared_hit | page_cache_hit | physical_read | blk_read_time | user_time | system_time---------+------------+--------------+----------+------------+----------------+---------------+---------------+-----------+------------- rpop    | 3183006759 | 309049021.97 | 38098195 | 276 TB     | 27 TB          | 22 TB         |   75485646.81 | 269508.98 |    35635.96(1 row)Time: 18.605 msrpopdb01d/postgres R #Query text may be resolved by queryid something like SELECT query FROM pg_stat_statements WHERE queryid = 3183006759.Works only with 9.4+ and gives you statistics per query for all the time, not the current state.Thanks for any advice.\n\n--May the force be with you…https://simply.name", "msg_date": "Sun, 13 Mar 2016 20:50:42 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DIsk I/O from pg_stat_activity" } ]
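For completeness, a minimal setup sketch for the pg_stat_kcache approach described above. Both extensions have to be preloaded (the exact line below is an assumption to adapt to your own postgresql.conf) and the server restarted before CREATE EXTENSION will work:

    -- postgresql.conf:
    --   shared_preload_libraries = 'pg_stat_statements, pg_stat_kcache'
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
    CREATE EXTENSION IF NOT EXISTS pg_stat_kcache;

    -- map a queryid from the output above back to its text
    SELECT query FROM pg_stat_statements WHERE queryid = 3183006759;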
[ { "msg_contents": "Hi all.\n \nI'm doing full-text-search and want do display the results in the order the \narticles were received (timestamp). I have millions of articles where the \nnewest are the most interesting, and a search may match many articles so doing \nthe sort using some INDEX would be great.\n \nTake the following example-schema:\n \ncreate extension if not exists btree_gin; \ndrop table if EXISTS delivery; create table delivery( id BIGSERIAL primary key\n, fts_allTSVECTOR not null, folder_id BIGINT NOT NULL, received_timestamp \nTIMESTAMP not null, message varchar not null ); create index fts_idx ON delivery\nusing gin(fts_all, folder_id); CREATE OR REPLACE FUNCTION \nupdate_delivery_tsvector_tf()RETURNS TRIGGER AS $$ BEGIN NEW.fts_all = \nto_tsvector('simple', NEW.message); return NEW; END; $$ LANGUAGE PLPGSQL; \nCREATE TRIGGERupdate_delivery_tsvector_t BEFORE INSERT OR UPDATE ON delivery \nFOR EACH ROW EXECUTE PROCEDUREupdate_delivery_tsvector_tf(); insert into \ndelivery(folder_id, received_timestamp,message) values (1, '2015-01-01', 'Yes \nhit four') , (1, '2014-01-01', 'Hi man') , (2, '2013-01-01', 'Hi man') , (2, \n'2013-01-01', 'fish') ; analyze delivery; set ENABLE_SEQSCAN to off; explain \nanalyze SELECTdel.id , del.received_timestamp FROM delivery del WHERE 1 = 1 AND \ndel.fts_all @@ to_tsquery('simple', 'hi:*') AND del.folder_id = 1 ORDER BY \ndel.received_timestampDESC LIMIT 101 OFFSET 0; \n \nI use btree_gin extention to make folder_id part of index.\n \nI get the following plan (using 9.6 from master):\n                                                         QUERY \nPLAN                                                         \n \n────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n  Limit  (cost=5.23..5.23 rows=1 width=16) (actual time=0.042..0.043 rows=2 \nloops=1)\n    ->  Sort  (cost=5.23..5.23 rows=1 width=16) (actual time=0.040..0.040 \nrows=2 loops=1)\n          Sort Key: received_timestamp DESC\n          Sort Method: quicksort  Memory: 25kB\n          ->  Bitmap Heap Scan on delivery del  (cost=3.90..5.22 rows=1 \nwidth=16) (actual time=0.029..0.030 rows=2 loops=1)\n                Recheck Cond: (fts_all @@ '''hi'':*'::tsquery)\n                Filter: (folder_id = 1)\n                Rows Removed by Filter: 1\n                Heap Blocks: exact=1\n                ->  Bitmap Index Scan on fts_idx  (cost=0.00..3.90 rows=1 \nwidth=0) (actual time=0.018..0.018 rows=3 loops=1)\n                      Index Cond: (fts_all @@ '''hi'':*'::tsquery)\n  Planning time: 0.207 ms\n  Execution time: 0.085 ms\n (13 rows)\n \nHere is the explain from a real-world query:\n  EXPLAIN ANALYZE SELECT del.entity_id , del.received_timestamp FROM \norigo_email_delivery delWHERE 1 = 1 AND del.fts_all @@ to_tsquery('simple', \n'andre:*') AND del.folder_id = 44964 ORDER BY del.received_timestamp DESC LIMIT \n101OFFSET 0; \n                                                                        QUERY \nPLAN                                                                        \n \n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n  Limit  (cost=92260.66..92260.91 rows=101 width=16) (actual \ntime=347.891..347.907 rows=101 loops=1)\n    ->  Sort  (cost=92260.66..92291.08 rows=12167 width=16) (actual \ntime=347.888..347.899 rows=101 loops=1)\n          Sort Key: received_timestamp DESC\n          Sort 
Method: top-N heapsort  Memory: 29kB\n          ->  Bitmap Heap Scan on origo_email_delivery del  \n(cost=2480.95..91794.77 rows=12167 width=16) (actual time=152.568..346.132 \nrows=18257 loops=1)\n                Recheck Cond: (fts_all @@ '''andre'':*'::tsquery)\n                Filter: (folder_id = 44964)\n                Rows Removed by Filter: 264256\n                Heap Blocks: exact=80871\n                ->  Bitmap Index Scan on temp_fts_idx  (cost=0.00..2477.91 \nrows=309588 width=0) (actual time=134.903..134.903 rows=282513 loops=1)\n                      Index Cond: (fts_all @@ '''andre'':*'::tsquery)\n  Planning time: 0.530 ms\n  Execution time: 347.967 ms\n (13 rows)\n \n\n \n1. Why isnt' folder_id part of the index-cond?\n2. Is there a way to make it use the (same) index to sort by \nreceived_timestamp?\n3. Using a GIN-index, is there a way to use the index at all for sorting?\n4. It doesn't seem like ts_rank uses the index for sorting either.\n \nThanks.\n\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 16 Mar 2016 10:00:02 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "Andreas Joseph Krogh <[email protected]> writes:\n> 1. Why isnt' folder_id part of the index-cond?\n\nBecause a GIN index is useless for sorting.\n\n> 2. Is there a way to make it use the (same) index to sort by \n> received_timestamp?\n\nNo.\n\n> 3. Using a GIN-index, is there a way to use the index at all for sorting?\n\nNo.\n\n> 4. It doesn't seem like ts_rank uses the index for sorting either.\n\nSame reason.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 16 Mar 2016 09:37:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "På onsdag 16. mars 2016 kl. 14:37:27, skrev Tom Lane <[email protected] \n<mailto:[email protected]>>:\nAndreas Joseph Krogh <[email protected]> writes:\n > 1. Why isnt' folder_id part of the index-cond?\n\n Because a GIN index is useless for sorting.\n\n > 2. Is there a way to make it use the (same) index to sort by\n > received_timestamp?\n\n No.\n\n > 3. Using a GIN-index, is there a way to use the index at all for sorting?\n\n No.\n\n > 4. It doesn't seem like ts_rank uses the index for sorting either.\n\n Same reason.\n\n regards, tom lane\n \nSo it's basically impossible to use FTS/GIN with sorting on large datasets?\nAre there any plans to improve this situation?\n \nThanks.\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 16 Mar 2016 14:53:04 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching GIN-index (FTS) and sort by\n timestamp-column" }, { "msg_contents": "På onsdag 16. mars 2016 kl. 14:53:04, skrev Andreas Joseph Krogh <\[email protected] <mailto:[email protected]>>:\nPå onsdag 16. mars 2016 kl. 
14:37:27, skrev Tom Lane <[email protected] \n<mailto:[email protected]>>:\nAndreas Joseph Krogh <[email protected]> writes:\n > 1. Why isnt' folder_id part of the index-cond?\n\n Because a GIN index is useless for sorting.\n\n > 2. Is there a way to make it use the (same) index to sort by\n > received_timestamp?\n\n No.\n\n > 3. Using a GIN-index, is there a way to use the index at all for sorting?\n\n No.\n\n > 4. It doesn't seem like ts_rank uses the index for sorting either.\n\n Same reason.\n\n regards, tom lane\n \nSo it's basically impossible to use FTS/GIN with sorting on large datasets?\nAre there any plans to improve this situation?\n \nThanks.\n \nThis paper talks about ORDER BY optimizations for FTS (starting at slide 6 and \n7):\nhttp://www.sai.msu.su/~megera/postgres/talks/Next%20generation%20of%20GIN.pdf\n \nThis indicates some work is being done in this area.\n \nOleg, if you're listening, do you guys have any exiting news regarding this?\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 16 Mar 2016 15:01:23 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching GIN-index (FTS) and sort by\n timestamp-column" }, { "msg_contents": "\n> On 16 Mar 2016, at 16:37, Tom Lane <[email protected]> wrote:\n> \n> Andreas Joseph Krogh <[email protected]> writes:\n>> 1. Why isnt' folder_id part of the index-cond?\n> \n> Because a GIN index is useless for sorting.\n\nI don't see how gin inability to return sorted data relates to index condition.\nIn fact i tried to reproduce the example,\nand if i change folder_id to int from bigint, then index condition with folder_id is used\n\n Index Cond: ((fts_all @@ '''hi'''::tsquery) AND (folder_id = 1))\n\n\n> \n>> 2. Is there a way to make it use the (same) index to sort by \n>> received_timestamp?\n> \n> No.\n> \n>> 3. Using a GIN-index, is there a way to use the index at all for sorting?\n> \n> No.\n> \n>> 4. It doesn't seem like ts_rank uses the index for sorting either.\n> \n> Same reason.\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 16 Mar 2016 17:52:40 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "\n> On 16 Mar 2016, at 17:52, Evgeniy Shishkin <[email protected]> wrote:\n> \n> \n>> On 16 Mar 2016, at 16:37, Tom Lane <[email protected]> wrote:\n>> \n>> Andreas Joseph Krogh <[email protected]> writes:\n>>> 1. Why isnt' folder_id part of the index-cond?\n>> \n>> Because a GIN index is useless for sorting.\n> \n> I don't see how gin inability to return sorted data relates to index condition.\n> In fact i tried to reproduce the example,\n> and if i change folder_id to int from bigint, then index condition with folder_id is used\n> \n> Index Cond: ((fts_all @@ '''hi'''::tsquery) AND (folder_id = 1))\n> \n\nLooks like documentation http://www.postgresql.org/docs/9.5/static/btree-gin.html\nis lying about supporting int8 type\n\n> \n>> \n>>> 2. 
Is there a way to make it use the (same) index to sort by \n>>> received_timestamp?\n>> \n>> No.\n>> \n>>> 3. Using a GIN-index, is there a way to use the index at all for sorting?\n>> \n>> No.\n>> \n>>> 4. It doesn't seem like ts_rank uses the index for sorting either.\n>> \n>> Same reason.\n>> \n>> \t\t\tregards, tom lane\n>> \n>> \n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 16 Mar 2016 18:04:08 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "\n> On 16 Mar 2016, at 18:04, Evgeniy Shishkin <[email protected]> wrote:\n> \n>> \n>> On 16 Mar 2016, at 17:52, Evgeniy Shishkin <[email protected]> wrote:\n>> \n>> \n>>> On 16 Mar 2016, at 16:37, Tom Lane <[email protected]> wrote:\n>>> \n>>> Andreas Joseph Krogh <[email protected]> writes:\n>>>> 1. Why isnt' folder_id part of the index-cond?\n>>> \n>>> Because a GIN index is useless for sorting.\n>> \n>> I don't see how gin inability to return sorted data relates to index condition.\n>> In fact i tried to reproduce the example,\n>> and if i change folder_id to int from bigint, then index condition with folder_id is used\n>> \n>> Index Cond: ((fts_all @@ '''hi'''::tsquery) AND (folder_id = 1))\n>> \n> \n> Looks like documentation http://www.postgresql.org/docs/9.5/static/btree-gin.html\n> is lying about supporting int8 type\n> \n\nUh, it works if i cast to bigint explicitly\n WHERE del.fts_all @@ to_tsquery('simple', 'hi')\n AND del.folder_id = 1::bigint;\nresults in \n Index Cond: ((folder_id = '1'::bigint) AND (fts_all @@ '''hi'''::tsquery))\n\n>> \n>>> \n>>>> 2. Is there a way to make it use the (same) index to sort by \n>>>> received_timestamp?\n>>> \n>>> No.\n>>> \n>>>> 3. Using a GIN-index, is there a way to use the index at all for sorting?\n>>> \n>>> No.\n>>> \n>>>> 4. It doesn't seem like ts_rank uses the index for sorting either.\n>>> \n>>> Same reason.\n>>> \n>>> \t\t\tregards, tom lane\n>>> \n>>> \n>>> -- \n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 16 Mar 2016 18:07:56 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "På onsdag 16. mars 2016 kl. 16:04:08, skrev Evgeniy Shishkin <\[email protected] <mailto:[email protected]>>:\n\n > On 16 Mar 2016, at 17:52, Evgeniy Shishkin <[email protected]> wrote:\n >\n >\n >> On 16 Mar 2016, at 16:37, Tom Lane <[email protected]> wrote:\n >>\n >> Andreas Joseph Krogh <[email protected]> writes:\n >>> 1. 
Why isnt' folder_id part of the index-cond?\n >>\n >> Because a GIN index is useless for sorting.\n >\n > I don't see how gin inability to return sorted data relates to index \ncondition.\n > In fact i tried to reproduce the example,\n > and if i change folder_id to int from bigint, then index condition with \nfolder_id is used\n >\n >         Index Cond: ((fts_all @@ '''hi'''::tsquery) AND (folder_id = 1))\n >\n\n Looks like documentation \nhttp://www.postgresql.org/docs/9.5/static/btree-gin.html\n is lying about supporting int8 type\n \nHm, interesting!\n \n@Tom: Any idea why BIGINT doesn't work, but INTEGER does?\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 16 Mar 2016 16:08:04 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching GIN-index (FTS) and sort by\n timestamp-column" }, { "msg_contents": "På onsdag 16. mars 2016 kl. 16:07:56, skrev Evgeniy Shishkin <\[email protected] <mailto:[email protected]>>:\n\n > On 16 Mar 2016, at 18:04, Evgeniy Shishkin <[email protected]> wrote:\n >\n >>\n >> On 16 Mar 2016, at 17:52, Evgeniy Shishkin <[email protected]> wrote:\n >>\n >>\n >>> On 16 Mar 2016, at 16:37, Tom Lane <[email protected]> wrote:\n >>>\n >>> Andreas Joseph Krogh <[email protected]> writes:\n >>>> 1. Why isnt' folder_id part of the index-cond?\n >>>\n >>> Because a GIN index is useless for sorting.\n >>\n >> I don't see how gin inability to return sorted data relates to index \ncondition.\n >> In fact i tried to reproduce the example,\n >> and if i change folder_id to int from bigint, then index condition with \nfolder_id is used\n >>\n >>        Index Cond: ((fts_all @@ '''hi'''::tsquery) AND (folder_id = 1))\n >>\n >\n > Looks like documentation \nhttp://www.postgresql.org/docs/9.5/static/btree-gin.html\n > is lying about supporting int8 type\n >\n\n Uh, it works if i cast to bigint explicitly\n       WHERE  del.fts_all @@ to_tsquery('simple', 'hi')\n       AND del.folder_id = 1::bigint;\n results in\n          Index Cond: ((folder_id = '1'::bigint) AND (fts_all @@ \n'''hi'''::tsquery))\n \nHm, this is quite cranky, but thanks for the heads-up!\n \nTho it looks like it works if prepared, without explicit cast:\n \nprepare fish AS SELECT del.id , del.received_timestamp FROM delivery del WHERE\n1= 1 AND del.fts_all @@ to_tsquery('simple', $1) AND del.folder_id = $2 ORDER BY\ndel.received_timestampDESC LIMIT 101 OFFSET 0; explain analyze execute fish(\n'hi:*', 1);                                                          QUERY \nPLAN                                                         \n \n────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n  Limit  (cost=9.13..9.13 rows=1 width=16) (actual time=0.047..0.048 rows=2 \nloops=1)\n    ->  Sort  (cost=9.13..9.13 rows=1 width=16) (actual time=0.045..0.045 \nrows=2 loops=1)\n          Sort Key: received_timestamp DESC\n          Sort Method: quicksort  Memory: 25kB\n          ->  Bitmap Heap Scan on delivery del  (cost=7.80..9.12 rows=1 \nwidth=16) (actual time=0.034..0.034 rows=2 loops=1)\n                Recheck Cond: ((fts_all @@ '''hi'':*'::tsquery) AND (folder_id \n= '1'::bigint))\n                Heap Blocks: exact=1\n                ->  Bitmap Index Scan on fts_idx  (cost=0.00..7.80 rows=1 \nwidth=0) (actual 
time=0.023..0.023 rows=2 loops=1)\n                      Index Cond: ((fts_all @@ '''hi'':*'::tsquery) AND \n(folder_id = '1'::bigint))\n  Execution time: 0.103 ms\n (10 rows)\n \n\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 16 Mar 2016 16:17:40 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching GIN-index (FTS) and sort by\n timestamp-column" }, { "msg_contents": "Evgeniy Shishkin <[email protected]> writes:\n> Uh, it works if i cast to bigint explicitly\n\nFWIW, the reason for that is that the int8_ops operator class that\nbtree_gin creates doesn't contain any cross-type operators. Probably\nwouldn't be that hard to fix if somebody wanted to put in the work.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Mar 2016 13:20:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "På torsdag 17. mars 2016 kl. 18:20:23, skrev Tom Lane <[email protected] \n<mailto:[email protected]>>:\nEvgeniy Shishkin <[email protected]> writes:\n > Uh, it works if i cast to bigint explicitly\n\n FWIW, the reason for that is that the int8_ops operator class that\n btree_gin creates doesn't contain any cross-type operators.  Probably\n wouldn't be that hard to fix if somebody wanted to put in the work.\n\n regards, tom lane\n \nThanks for info.\n \nCan you explain why it works when using prepared statement without casting? \nDoes the machinary then know the type so the \"setParameter\"-call uses the \ncorrect type?\n \nThanks.\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Thu, 17 Mar 2016 18:30:12 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching GIN-index (FTS) and sort by\n timestamp-column" }, { "msg_contents": "Andreas Joseph Krogh <[email protected]> writes:\n> På torsdag 17. mars 2016 kl. 18:20:23, skrev Tom Lane <[email protected] \n> FWIW, the reason for that is that the int8_ops operator class that\n> btree_gin creates doesn't contain any cross-type operators.  Probably\n> wouldn't be that hard to fix if somebody wanted to put in the work.\n\n> Can you explain why it works when using prepared statement without casting? \n\nIf you mean the example\n\nprepare fish AS SELECT del.id , del.received_timestamp FROM delivery del\nWHERE 1= 1 AND del.fts_all @@ to_tsquery('simple', $1) AND\ndel.folder_id = $2 ORDER BY del.received_timestamp DESC LIMIT 101 OFFSET 0;\n\nyou didn't provide any type for the parameter $2, so the parser had to\ninfer a type, and the applicable heuristic here is \"same type that's on\nthe other side of the operator\". So you ended up with \"bigint = bigint\"\nwhich is in the btree_gin operator class. 
If you'd specified the\nparameter's type as integer, it would've worked the same as Evgeniy's\nexample.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Mar 2016 13:57:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "On Wed, Mar 16, 2016 at 6:53 AM, Andreas Joseph Krogh <[email protected]>\nwrote:\n\n> På onsdag 16. mars 2016 kl. 14:37:27, skrev Tom Lane <[email protected]>:\n>\n> Andreas Joseph Krogh <[email protected]> writes:\n> > 1. Why isnt' folder_id part of the index-cond?\n>\n> Because a GIN index is useless for sorting.\n>\n> > 2. Is there a way to make it use the (same) index to sort by\n> > received_timestamp?\n>\n> No.\n>\n> > 3. Using a GIN-index, is there a way to use the index at all for sorting?\n>\n> No.\n>\n> > 4. It doesn't seem like ts_rank uses the index for sorting either.\n>\n> Same reason.\n>\n> regards, tom lane\n>\n>\n> So it's basically impossible to use FTS/GIN with sorting on large datasets?\n> Are there any plans to improve this situation?\n>\n\nI don't see why it would not be possible to create a new execution node\ntype that does an index scan to obtain order (or just to satisfy an\nequality or range expression), and takes a bitmap (as produced by the\nFTS/GIN) to apply as a filter. But, I don't know of anyone planning on\ndoing that.\n\nCheers,\n\nJeff\n\nOn Wed, Mar 16, 2016 at 6:53 AM, Andreas Joseph Krogh <[email protected]> wrote:På onsdag 16. mars 2016 kl. 14:37:27, skrev Tom Lane <[email protected]>:\n\nAndreas Joseph Krogh <[email protected]> writes:\n> 1. Why isnt' folder_id part of the index-cond?\n\nBecause a GIN index is useless for sorting.\n\n> 2. Is there a way to make it use the (same) index to sort by\n> received_timestamp?\n\nNo.\n\n> 3. Using a GIN-index, is there a way to use the index at all for sorting?\n\nNo.\n\n> 4. It doesn't seem like ts_rank uses the index for sorting either.\n\nSame reason.\n\nregards, tom lane\n\n \nSo it's basically impossible to use FTS/GIN with sorting on large datasets?\nAre there any plans to improve this situation?I don't see why it would not be possible to create a new execution node type that does an index scan to obtain order (or just to satisfy an equality or range expression), and takes a bitmap (as produced by the FTS/GIN) to apply as a filter.  But, I don't know of anyone planning on doing that.Cheers,Jeff", "msg_date": "Fri, 18 Mar 2016 19:44:55 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "På lørdag 19. mars 2016 kl. 03:44:55, skrev Jeff Janes <[email protected] \n<mailto:[email protected]>>:\nOn Wed, Mar 16, 2016 at 6:53 AM, Andreas Joseph Krogh <[email protected] \n<mailto:[email protected]>> wrote: På onsdag 16. mars 2016 kl. 14:37:27, skrev \nTom Lane <[email protected] <mailto:[email protected]>>:\nAndreas Joseph Krogh <[email protected] <mailto:[email protected]>> writes:\n > 1. Why isnt' folder_id part of the index-cond?\n\n Because a GIN index is useless for sorting.\n\n > 2. Is there a way to make it use the (same) index to sort by\n > received_timestamp?\n\n No.\n\n > 3. Using a GIN-index, is there a way to use the index at all for sorting?\n\n No.\n\n > 4. 
It doesn't seem like ts_rank uses the index for sorting either.\n\n Same reason.\n\n regards, tom lane\n \nSo it's basically impossible to use FTS/GIN with sorting on large datasets?\nAre there any plans to improve this situation?\n \nI don't see why it would not be possible to create a new execution node type \nthat does an index scan to obtain order (or just to satisfy an equality or \nrange expression), and takes a bitmap (as produced by the FTS/GIN) to apply as \na filter.  But, I don't know of anyone planning on doing that.\n\n\n\n \nIsn't this what Postgres Pro are planning? \nhttp://postgrespro.com/roadmap/mssearch\n \n\"Unlike external special-purpose search engines, a full-text search engine \nbuilt in a DBMS is capable of combining full-text and attributive search \ncriteria in SQL query syntax. It is planned to improve the existing PostgreSQL \nfull-text search engine byextending the functionality of Generalized Inverted \nIndex (GIN) to make it capable of storing extra information required for \nranging query results. This search acceleration will allow to go back from \nexternal full-text search engines, thus facilitating system administration and \nuse, reducing technology risks, and improving information security.\"\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Mon, 21 Mar 2016 15:41:42 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching GIN-index (FTS) and sort by\n timestamp-column" }, { "msg_contents": "On Mon, Mar 21, 2016 at 5:41 PM, Andreas Joseph Krogh <[email protected]>\nwrote:\n\n> På lørdag 19. mars 2016 kl. 03:44:55, skrev Jeff Janes <\n> [email protected]>:\n>\n> On Wed, Mar 16, 2016 at 6:53 AM, Andreas Joseph Krogh <[email protected]>\n> wrote:\n>>\n>> På onsdag 16. mars 2016 kl. 14:37:27, skrev Tom Lane <[email protected]>:\n>>\n>> Andreas Joseph Krogh <[email protected]> writes:\n>> > 1. Why isnt' folder_id part of the index-cond?\n>>\n>> Because a GIN index is useless for sorting.\n>>\n>> > 2. Is there a way to make it use the (same) index to sort by\n>> > received_timestamp?\n>>\n>> No.\n>>\n>> > 3. Using a GIN-index, is there a way to use the index at all for\n>> sorting?\n>>\n>> No.\n>>\n>> > 4. It doesn't seem like ts_rank uses the index for sorting either.\n>>\n>> Same reason.\n>>\n>> regards, tom lane\n>>\n>>\n>> So it's basically impossible to use FTS/GIN with sorting on large\n>> datasets?\n>> Are there any plans to improve this situation?\n>>\n>\n> I don't see why it would not be possible to create a new execution node\n> type that does an index scan to obtain order (or just to satisfy an\n> equality or range expression), and takes a bitmap (as produced by the\n> FTS/GIN) to apply as a filter. But, I don't know of anyone planning on\n> doing that.\n>\n>\n> Isn't this what Postgres Pro are planning?\n> http://postgrespro.com/roadmap/mssearch\n>\n> *\"Unlike external special-purpose search engines, a full-text search\n> engine built in a DBMS is capable of combining full-text and attributive\n> search criteria in SQL query syntax. It is planned to improve the existing\n> PostgreSQL full-text search engine by extending the functionality of\n> Generalized Inverted Index (GIN) to make it capable of storing extra\n> information required for ranging query results. 
This search acceleration\n> will allow to go back from external full-text search engines, thus\n> facilitating system administration and use, reducing technology risks, and\n> improving information security.\"*\n>\n\nThis is different feature ! Actually, we already have prototype of what\nJeff suggested, we called it bitmap filtering, but failed to find use case\nwhere it provides benefits. Teodor will comment this idea more detail.\n\n\n>\n> --\n> *Andreas Joseph Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> <https://www.visena.com>\n>\n>", "msg_date": "Mon, 21 Mar 2016 18:13:07 +0300", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "På mandag 21. mars 2016 kl. 16:13:07, skrev Oleg Bartunov <[email protected] \n<mailto:[email protected]>>:\n    On Mon, Mar 21, 2016 at 5:41 PM, Andreas Joseph Krogh <[email protected] \n<mailto:[email protected]>> wrote: På lørdag 19. mars 2016 kl. 03:44:55, skrev \nJeff Janes <[email protected] <mailto:[email protected]>>:\nOn Wed, Mar 16, 2016 at 6:53 AM, Andreas Joseph Krogh <[email protected] \n<mailto:[email protected]>> wrote: På onsdag 16. mars 2016 kl. 14:37:27, skrev \nTom Lane <[email protected] <mailto:[email protected]>>:\nAndreas Joseph Krogh <[email protected] <mailto:[email protected]>> writes:\n > 1. Why isnt' folder_id part of the index-cond?\n\n Because a GIN index is useless for sorting.\n\n > 2. Is there a way to make it use the (same) index to sort by\n > received_timestamp?\n\n No.\n\n > 3. Using a GIN-index, is there a way to use the index at all for sorting?\n\n No.\n\n > 4. It doesn't seem like ts_rank uses the index for sorting either.\n\n Same reason.\n\n regards, tom lane\n \nSo it's basically impossible to use FTS/GIN with sorting on large datasets?\nAre there any plans to improve this situation?\n \nI don't see why it would not be possible to create a new execution node type \nthat does an index scan to obtain order (or just to satisfy an equality or \nrange expression), and takes a bitmap (as produced by the FTS/GIN) to apply as \na filter.  But, I don't know of anyone planning on doing that.\n\n\n\n \nIsn't this what Postgres Pro are planning? \nhttp://postgrespro.com/roadmap/mssearch \n<http://postgrespro.com/roadmap/mssearch>\n \n\"Unlike external special-purpose search engines, a full-text search engine \nbuilt in a DBMS is capable of combining full-text and attributive search \ncriteria in SQL query syntax. It is planned to improve the existing PostgreSQL \nfull-text search engine byextending the functionality of Generalized Inverted \nIndex (GIN) to make it capable of storing extra information required for \nranging query results. This search acceleration will allow to go back from \nexternal full-text search engines, thus facilitating system administration and \nuse, reducing technology risks, and improving information security.\"\n \nThis is different feature ! Actually, we already have prototype of what Jeff \nsuggested, we called it bitmap filtering, but failed to find use case where it \nprovides benefits. 
Teodor will comment this idea more detail.\n\n\n\n \nThe feature I'm missing is the ability to do FTS (or use GIN in general) and \nthen sort on some other column (also indexed by the same GIN-index, using the \nbtree-gin extention), often of type BIGINT or TIMESTAMP.\nAre you planning to work on such a feature for GIN?\n \nThanks.\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Mon, 21 Mar 2016 16:33:12 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching GIN-index (FTS) and sort by\n timestamp-column" }, { "msg_contents": "On Sat, Mar 19, 2016 at 5:44 AM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Mar 16, 2016 at 6:53 AM, Andreas Joseph Krogh <[email protected]>\n> wrote:\n>\n>> På onsdag 16. mars 2016 kl. 14:37:27, skrev Tom Lane <[email protected]>:\n>>\n>> Andreas Joseph Krogh <[email protected]> writes:\n>> > 1. Why isnt' folder_id part of the index-cond?\n>>\n>> Because a GIN index is useless for sorting.\n>>\n>> > 2. Is there a way to make it use the (same) index to sort by\n>> > received_timestamp?\n>>\n>> No.\n>>\n>> > 3. Using a GIN-index, is there a way to use the index at all for\n>> sorting?\n>>\n>> No.\n>>\n>> > 4. It doesn't seem like ts_rank uses the index for sorting either.\n>>\n>> Same reason.\n>>\n>> regards, tom lane\n>>\n>>\n>> So it's basically impossible to use FTS/GIN with sorting on large\n>> datasets?\n>> Are there any plans to improve this situation?\n>>\n>\n> I don't see why it would not be possible to create a new execution node\n> type that does an index scan to obtain order (or just to satisfy an\n> equality or range expression), and takes a bitmap (as produced by the\n> FTS/GIN) to apply as a filter. But, I don't know of anyone planning on\n> doing that.\n>\n\nPlease, find bitmap filtering patch, which we developed several months ago,\nbut failed to find good use case :( Teodor is here now, so he could answer\nthe questions.\n\n\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 22 Mar 2016 19:41:45 +0300", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" }, { "msg_contents": "On Tue, Mar 22, 2016 at 9:41 AM, Oleg Bartunov <[email protected]> wrote:\n>\n>\n> On Sat, Mar 19, 2016 at 5:44 AM, Jeff Janes <[email protected]> wrote:\n>>\n>>\n>> I don't see why it would not be possible to create a new execution node\n>> type that does an index scan to obtain order (or just to satisfy an equality\n>> or range expression), and takes a bitmap (as produced by the FTS/GIN) to\n>> apply as a filter. 
But, I don't know of anyone planning on doing that.\n>\n>\n> Please, find bitmap filtering patch, which we developed several months ago,\n> but failed to find good use case :( Teodor is here now, so he could answer\n> the questions.\n\nI can't find any benefit because I can't get the new node to ever execute.\n\nI set up this:\n\ncreate table foo as select md5(random()::text), random() as y from\ngenerate_series(1,10000000);\ncreate index on foo using gin (md5 gin_trgm_ops);\ncreate index on foo (y);\nvacuum ANALYZE foo ;\n\nThen when I run this:\n\nexplain (analyze,buffers) select y from foo where md5 like '%abcde%'\norder by y limit 1\n\nThe function \"cost_filtered_index(newpath)\" never fires. So the\nplanner is never even considering this feature.\n\nIt seems to be getting short-circuited here:\n\n if (ipath->indexorderbys == NIL && ipath->indexorderbycols == NIL)\n continue;\n\n\n\nI don't know enough about the planner to know where to start on this.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 2 Apr 2016 19:53:27 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching GIN-index (FTS) and sort by timestamp-column" } ]
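To pull the practical workaround out of the thread above into one place, here is a sketch against the example delivery schema defined at the start of the thread. The btree_gin int8 opclass has no cross-type operators, so a bare integer literal against the bigint folder_id column only ends up as a Filter, while an explicit cast (or a prepared-statement parameter that gets inferred as bigint) moves it into the GIN index condition. The ORDER BY received_timestamp still cannot use the GIN index and is sorted separately.

    EXPLAIN ANALYZE
    SELECT del.id, del.received_timestamp
    FROM   delivery del
    WHERE  del.fts_all @@ to_tsquery('simple', 'hi:*')
      AND  del.folder_id = 1::bigint   -- the cast is what makes the difference
    ORDER  BY del.received_timestamp DESC
    LIMIT  101 OFFSET 0;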
[ { "msg_contents": "I have the following queries:\n\nEXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\nselect […]\n from f_calc_service a11,\n d_patient_type a12\n where a11.d_patient_pop_id in (336)\n and a11.d_patient_type_id = a12.id\n and a12.short_name = 'I'\n group by a11.d_rate_schedule_id,\n a11.d_payer_id,\n a11.d_patient_pop_id,\n a11.d_patient_type_id\n;\n\nAnd\n\nEXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\nselect […]\n from f_calc_service a11,\n d_patient_type a12\n where a11.d_patient_pop_id in (336)\n and a11.d_patient_type_id = a12.id\n and a12.short_name = 'O'\n group by a11.d_rate_schedule_id,\n a11.d_payer_id,\n a11.d_patient_pop_id,\n a11.d_patient_type_id\n;\n\nMaking this one change from short_name = ‘I’ to short_name = ‘O’ changes the query execution from 200k ms to 280ms. The first one chooses a Nested Loop, the second chooses a hash join. How do I get them both to choose the same? There are no values for d_patient_pop_id in (336) and short_name = ‘I’.\n\nThanks!\n\nDan\n\n\n\n\n\n\n\n\nI have the following queries:\n\n\n\nEXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\nselect    […]\n                from      f_calc_service   a11,\n                                d_patient_type                a12\n                where   a11.d_patient_pop_id in (336)\n                         and a11.d_patient_type_id = a12.id\n                         and a12.short_name = 'I'\n                group by              a11.d_rate_schedule_id,\n                                a11.d_payer_id,\n                                a11.d_patient_pop_id,\n                                a11.d_patient_type_id\n; \n\n\n\nAnd \n\n\n\nEXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\nselect     […]\n                from      f_calc_service   a11,\n                                d_patient_type                a12\n                where   a11.d_patient_pop_id in (336)\n                         and a11.d_patient_type_id = a12.id\n                         and a12.short_name = 'O'\n                group by              a11.d_rate_schedule_id,\n                                a11.d_payer_id,\n                                a11.d_patient_pop_id,\n                                a11.d_patient_type_id\n;  \n\n\n\nMaking this one change from short_name = ‘I’ to short_name = ‘O’ changes the query execution from 200k ms to 280ms. The first one chooses a Nested Loop, the second chooses a hash join. How do I get them both to choose the same? There are no values for\n d_patient_pop_id in (336) and short_name = ‘I’. \n\n\nThanks!\n\n\nDan", "msg_date": "Wed, 16 Mar 2016 20:23:05 +0000", "msg_from": "\"Doiron, Daniel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested Loop vs Hash Join based on predicate?" 
}, { "msg_contents": "2016-03-16 21:23 GMT+01:00 Doiron, Daniel <[email protected]>:\n\n> I have the following queries:\n>\n> EXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\n> select […]\n> from f_calc_service a11,\n> d_patient_type a12\n> where a11.d_patient_pop_id in (336)\n> and a11.d_patient_type_id = a12.id\n> and a12.short_name = 'I'\n> group by a11.d_rate_schedule_id,\n> a11.d_payer_id,\n> a11.d_patient_pop_id,\n> a11.d_patient_type_id\n> ;\n>\n> And\n>\n> EXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\n> select […]\n> from f_calc_service a11,\n> d_patient_type a12\n> where a11.d_patient_pop_id in (336)\n> and a11.d_patient_type_id = a12.id\n> and a12.short_name = 'O'\n> group by a11.d_rate_schedule_id,\n> a11.d_payer_id,\n> a11.d_patient_pop_id,\n> a11.d_patient_type_id\n> ;\n>\n> Making this one change from short_name = ‘I’ to short_name = ‘O’ changes\n> the query execution from 200k ms to 280ms. The first one chooses a Nested\n> Loop, the second chooses a hash join. How do I get them both to choose the\n> same? There are no values for d_patient_pop_id in (336) and short_name =\n> ‘I’.\n>\n\nwe don't see plans, so it is blind shot,\n\nProbably the estimation for 'I' value is pretty underestimated - so planner\nchoose nested loop. The reasons can be different - possible correlation\ninside data for example.\n\nYou can try:\n\n0) ensure so your statistic are current - run statement ANALYZE\n\na) increase statistic by statement ALTER TABLE xx ALTER COLUMN yyy SET\nSTATISTICS some number\n\nb) penalize nested loop - statement SET enable_nestloop TO off;\n\nRegards\n\nPavel\n\n\n>\n> Thanks!\n>\n> Dan\n>\n>\n>\n\n2016-03-16 21:23 GMT+01:00 Doiron, Daniel <[email protected]>:\n\nI have the following queries:\n\n\n\nEXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\nselect    […]\n                from      f_calc_service   a11,\n                                d_patient_type                a12\n                where   a11.d_patient_pop_id in (336)\n                         and a11.d_patient_type_id = a12.id\n                         and a12.short_name = 'I'\n                group by              a11.d_rate_schedule_id,\n                                a11.d_payer_id,\n                                a11.d_patient_pop_id,\n                                a11.d_patient_type_id\n; \n\n\n\nAnd \n\n\n\nEXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)\nselect     […]\n                from      f_calc_service   a11,\n                                d_patient_type                a12\n                where   a11.d_patient_pop_id in (336)\n                         and a11.d_patient_type_id = a12.id\n                         and a12.short_name = 'O'\n                group by              a11.d_rate_schedule_id,\n                                a11.d_payer_id,\n                                a11.d_patient_pop_id,\n                                a11.d_patient_type_id\n;  \n\n\n\nMaking this one change from short_name = ‘I’ to short_name = ‘O’ changes the query execution from 200k ms to 280ms. The first one chooses a Nested Loop, the second chooses a hash join. How do I get them both to choose the same? There are no values for\n d_patient_pop_id in (336) and short_name = ‘I’. we don't see plans, so it is blind shot,Probably the estimation for 'I' value is pretty underestimated - so planner choose nested loop. The reasons can be different - possible correlation inside data for example. 
You can try:0) ensure so your statistic are current - run statement ANALYZEa) increase statistic by statement ALTER TABLE xx ALTER COLUMN yyy SET STATISTICS some numberb) penalize nested loop - statement SET enable_nestloop TO off;RegardsPavel \n\n\nThanks!\n\n\nDan", "msg_date": "Wed, 16 Mar 2016 22:01:24 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop vs Hash Join based on predicate?" } ]
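A sketch of the three suggestions above spelled out against the poster's tables; which column deserves the higher statistics target is a judgment call, and d_patient_type.short_name is used here purely as an illustration:

    -- 0) make sure the statistics are current
    ANALYZE f_calc_service;
    ANALYZE d_patient_type;

    -- a) raise the per-column statistics target, then re-analyze
    ALTER TABLE d_patient_type ALTER COLUMN short_name SET STATISTICS 1000;
    ANALYZE d_patient_type;

    -- b) as a last resort, and only for the current session
    SET enable_nestloop TO off;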
[ { "msg_contents": "Hi All!\n\nIs Postgres use shared_buffers during seq_scan?\nIn what way i can optimize seq_scan on big tables?\n\nThanks!\n\nHi All!Is  Postgres use shared_buffers during seq_scan? In what way i can optimize seq_scan on big tables?Thanks!", "msg_date": "Thu, 17 Mar 2016 11:57:13 +0200", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": true, "msg_subject": "using shared_buffers during seq_scan" }, { "msg_contents": "There is parallel sequence scanning coming in 9.6 --\nhttp://rhaas.blogspot.com/2015/11/parallel-sequential-scan-is-committed.html\n\n\nAnd there is the GPU extension - https://wiki.postgresql.org/wiki/PGStrom\n\nIf those aren't options, you'll want your table as much in memory as\npossible so your scan doesn't have to to go disk.\n\n\n\nOn Thu, Mar 17, 2016 at 5:57 AM, Artem Tomyuk <[email protected]> wrote:\n\n> Hi All!\n>\n> Is Postgres use shared_buffers during seq_scan?\n> In what way i can optimize seq_scan on big tables?\n>\n> Thanks!\n>\n\nThere is parallel sequence scanning coming in 9.6 -- http://rhaas.blogspot.com/2015/11/parallel-sequential-scan-is-committed.html And there is the GPU extension - https://wiki.postgresql.org/wiki/PGStrom If those aren't options, you'll want your table as much in memory as possible so your scan doesn't have to to go disk.On Thu, Mar 17, 2016 at 5:57 AM, Artem Tomyuk <[email protected]> wrote:Hi All!Is  Postgres use shared_buffers during seq_scan? In what way i can optimize seq_scan on big tables?Thanks!", "msg_date": "Thu, 17 Mar 2016 09:07:08 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using shared_buffers during seq_scan" }, { "msg_contents": "Artem Tomyuk wrote:\r\n> Is Postgres use shared_buffers during seq_scan?\r\n> In what way i can optimize seq_scan on big tables?\r\n\r\nIf the estimated table size is less than a quarter of shared_buffers,\r\nthe whole table will be read to the shared buffers during a sequential scan.\r\n\r\nIf the table is larger than that, it is scanned using a ring\r\nbuffer of 256 KB inside the shared buffers, so only 256 KB of the\r\ntable end up in cache.\r\n\r\nYou can speed up all scans after the first one by having lots of RAM.\r\nEven if you cannot set shared_buffers four times as big as the table,\r\nyou can profit from having a large operating system cache.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 18 Mar 2016 10:30:17 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using shared_buffers during seq_scan" } ]
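To see the behaviour described above (a table smaller than a quarter of shared_buffers is read into shared buffers in full, a larger one only passes through a 256 kB ring buffer), pg_buffercache shows how much of a table actually sits in shared buffers after a scan, and pg_prewarm can load it explicitly. A sketch, assuming the default 8 kB block size and a placeholder table name big_table:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;

    -- how much of big_table is currently in shared_buffers?
    SELECT count(*)                        AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached
    FROM   pg_buffercache
    WHERE  reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
      AND  relfilenode = pg_relation_filenode('big_table');

    -- pull it into shared_buffers explicitly (pg_prewarm ships with 9.4)
    SELECT pg_prewarm('big_table');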
[ { "msg_contents": "I'm pretty new to benchmarking hard disks and I'm looking for some advice\non interpreting the results of some basic tests.\n\nThe server is:\n- Dell PowerEdge R430\n- 1 x Intel Xeon E5-2620 2.4GHz\n- 32 GB RAM\n- 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10\n- PERC H730P Raid Controller with 2GB cache in write back mode.\n\nThe OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and\nan xfs volume for PGDATA.\n\nI ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I\nran 'bonnie++ -n0 -f' on the root volume.\n\nHere's a link to the bonnie test results\nhttps://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0\n\nThe vendor stats say sustained throughput of 215 to 108 MBps, so I guess\nI'd expect around 400-800 MBps read and 200-400 MBps write. In any case,\nI'm pretty confused as to why the read and write sequential speeds are\nalmost identical. Does this look wrong?\n\nThanks,\n\nDave\n\nI'm pretty new to benchmarking hard disks and I'm looking for some advice on interpreting the results of some basic tests.The server is:- Dell PowerEdge R430- 1 x Intel Xeon E5-2620 2.4GHz- 32 GB RAM- 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10- PERC H730P Raid Controller with 2GB cache in write back mode.The OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and an xfs volume for PGDATA.I ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I ran 'bonnie++ -n0 -f' on the root volume.Here's a link to the bonnie test resultshttps://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0The vendor stats say sustained throughput of 215 to 108 MBps, so I guess I'd expect around 400-800 MBps read and 200-400 MBps write. In any case, I'm pretty confused as to why the read and write sequential speeds are almost identical. Does this look wrong?Thanks,Dave", "msg_date": "Thu, 17 Mar 2016 16:45:10 -0400", "msg_from": "Dave Stibrany <[email protected]>", "msg_from_op": true, "msg_subject": "Disk Benchmarking Question" }, { "msg_contents": "Hi Dave,\n\n \n\nDatabase disk performance has to take into account IOPs, and IMO, over MBPs, since it’s the ability of the disk subsystem to write lots of little bits (usually) versus writing giant globs, especially in direct attached storage (like yours, versus a SAN). Most db disk benchmarks revolve around IOPs…and this is where SSDs utterly crush spinning disks.\n\n \n\nYou can get maybe 200 IOPs out of each disk, you have 4 in raid 10 so you get a whopping 400 IOPs. A single quality SSD (like the Samsung 850 pro) will support a minimum of 40k IOPs on reads and 80k IOPs on writes. That’s why SSDs are eliminating spinning disks when performance is critical and budget allows.\n\n \n\nBack to your question – the MBPs is the capacity of interface, so it makes sense that it’s the same for both reads and writes. The perc raid controller will be saving your bacon on writes, with 2gb cache (assuming it’s caching writes), so it becomes the equivalent of an SSD up to the capacity limit of the write cache. 
With only 400 iops of write speed, with a busy server you can easily saturate the cache and then your system will drop to a crawl.\n\n \n\nIf I didn’t answer the intent of your question, feel free to clarify for me.\n\n \n\nMike\n\n \n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Dave Stibrany\nSent: Thursday, March 17, 2016 1:45 PM\nTo: [email protected]\nSubject: [PERFORM] Disk Benchmarking Question\n\n \n\nI'm pretty new to benchmarking hard disks and I'm looking for some advice on interpreting the results of some basic tests.\n\n \n\nThe server is:\n\n- Dell PowerEdge R430\n\n- 1 x Intel Xeon E5-2620 2.4GHz\n\n- 32 GB RAM\n\n- 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10\n\n- PERC H730P Raid Controller with 2GB cache in write back mode.\n\n \n\nThe OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and an xfs volume for PGDATA.\n\n \n\nI ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I ran 'bonnie++ -n0 -f' on the root volume.\n\n \n\nHere's a link to the bonnie test results\n\nhttps://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0\n\n \n\nThe vendor stats say sustained throughput of 215 to 108 MBps, so I guess I'd expect around 400-800 MBps read and 200-400 MBps write. In any case, I'm pretty confused as to why the read and write sequential speeds are almost identical. Does this look wrong?\n\n \n\nThanks,\n\n \n\nDave\n\n \n\n \n\n \n\n\nHi Dave, Database disk performance has to take into account IOPs, and IMO, over MBPs, since it’s the ability of the disk subsystem to write lots of little bits (usually) versus writing giant globs, especially in direct attached storage (like yours, versus a SAN).  Most db disk benchmarks revolve around IOPs…and this is where SSDs utterly crush spinning disks. You can get maybe 200 IOPs out of each disk, you have 4 in raid  10 so you get a whopping 400 IOPs.  A single quality SSD (like the Samsung 850 pro) will support a minimum of 40k IOPs on reads and 80k IOPs on writes.  That’s why SSDs are eliminating spinning disks when performance is critical and budget allows. Back to your question – the MBPs is the capacity of interface, so it makes sense that it’s the same for both reads and writes.  The perc raid controller will be saving your bacon on writes, with 2gb cache (assuming it’s caching writes), so it becomes the equivalent of an SSD up to the capacity limit of the write cache.  With only 400 iops of write speed, with a busy server you can easily saturate the cache and then your system will drop to a crawl. If I didn’t answer the intent of your question, feel free to clarify for me. Mike From: [email protected] [mailto:[email protected]] On Behalf Of Dave StibranySent: Thursday, March 17, 2016 1:45 PMTo: [email protected]: [PERFORM] Disk Benchmarking Question I'm pretty new to benchmarking hard disks and I'm looking for some advice on interpreting the results of some basic tests. The server is:- Dell PowerEdge R430- 1 x Intel Xeon E5-2620 2.4GHz- 32 GB RAM- 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10- PERC H730P Raid Controller with 2GB cache in write back mode. The OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and an xfs volume for PGDATA. I ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I ran 'bonnie++ -n0 -f' on the root volume. 
Here's a link to the bonnie test resultshttps://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0 The vendor stats say sustained throughput of 215 to 108 MBps, so I guess I'd expect around 400-800 MBps read and 200-400 MBps write. In any case, I'm pretty confused as to why the read and write sequential speeds are almost identical. Does this look wrong? Thanks, Dave", "msg_date": "Thu, 17 Mar 2016 14:11:17 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk Benchmarking Question" }, { "msg_contents": "Hey Mike,\n\nThanks for the response. I think where I'm confused is that I thought\nvendor specified MBps was an estimate of sequential read/write speed.\nTherefore if you're in RAID10, you'd have 4x the sequential read speed and\n2x the sequential write speed. Am I misunderstanding something?\n\nAlso, when you mention that MBPs is the capacity of the interface, what do\nyou mean exactly. I've been taking interface speed to be the electronic\ntransfer speed, not the speed from the actual physical medium, and more in\nthe 6-12 gigabit range.\n\nPlease let me know if I'm way off on any of this, I'm hoping to have my\nmental model updated.\n\nThanks!\n\nDave\n\nOn Thu, Mar 17, 2016 at 5:11 PM, Mike Sofen <[email protected]> wrote:\n\n> Hi Dave,\n>\n>\n>\n> Database disk performance has to take into account IOPs, and IMO, over\n> MBPs, since it’s the ability of the disk subsystem to write lots of little\n> bits (usually) versus writing giant globs, especially in direct attached\n> storage (like yours, versus a SAN). Most db disk benchmarks revolve around\n> IOPs…and this is where SSDs utterly crush spinning disks.\n>\n>\n>\n> You can get maybe 200 IOPs out of each disk, you have 4 in raid 10 so you\n> get a whopping 400 IOPs. A single quality SSD (like the Samsung 850 pro)\n> will support a minimum of 40k IOPs on reads and 80k IOPs on writes. That’s\n> why SSDs are eliminating spinning disks when performance is critical and\n> budget allows.\n>\n>\n>\n> Back to your question – the MBPs is the capacity of interface, so it makes\n> sense that it’s the same for both reads and writes. The perc raid\n> controller will be saving your bacon on writes, with 2gb cache (assuming\n> it’s caching writes), so it becomes the equivalent of an SSD up to the\n> capacity limit of the write cache. With only 400 iops of write speed, with\n> a busy server you can easily saturate the cache and then your system will\n> drop to a crawl.\n>\n>\n>\n> If I didn’t answer the intent of your question, feel free to clarify for\n> me.\n>\n>\n>\n> Mike\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Dave Stibrany\n> *Sent:* Thursday, March 17, 2016 1:45 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] Disk Benchmarking Question\n>\n>\n>\n> I'm pretty new to benchmarking hard disks and I'm looking for some advice\n> on interpreting the results of some basic tests.\n>\n>\n>\n> The server is:\n>\n> - Dell PowerEdge R430\n>\n> - 1 x Intel Xeon E5-2620 2.4GHz\n>\n> - 32 GB RAM\n>\n> - 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10\n>\n> - PERC H730P Raid Controller with 2GB cache in write back mode.\n>\n>\n>\n> The OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and\n> an xfs volume for PGDATA.\n>\n>\n>\n> I ran some dd and bonnie++ tests and I'm a bit confused by the numbers. 
I\n> ran 'bonnie++ -n0 -f' on the root volume.\n>\n>\n>\n> Here's a link to the bonnie test results\n>\n> https://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0\n>\n>\n>\n> The vendor stats say sustained throughput of 215 to 108 MBps, so I guess\n> I'd expect around 400-800 MBps read and 200-400 MBps write. In any case,\n> I'm pretty confused as to why the read and write sequential speeds are\n> almost identical. Does this look wrong?\n>\n>\n>\n> Thanks,\n>\n>\n>\n> Dave\n>\n>\n>\n>\n>\n>\n>\n\n\n\n-- \n*THIS IS A TEST*\n\nHey Mike,Thanks for the response. I think where I'm confused is that I thought vendor specified MBps was an estimate of sequential read/write speed. Therefore if you're in RAID10, you'd have 4x the sequential read speed and 2x the sequential write speed. Am I misunderstanding something?Also, when you mention that MBPs is the capacity of the interface, what do you mean exactly. I've been taking interface speed to be the electronic transfer speed, not the speed from the actual physical medium, and more in the 6-12 gigabit range.Please let me know if I'm way off on any of this, I'm hoping to have my mental model updated.Thanks!DaveOn Thu, Mar 17, 2016 at 5:11 PM, Mike Sofen <[email protected]> wrote:Hi Dave, Database disk performance has to take into account IOPs, and IMO, over MBPs, since it’s the ability of the disk subsystem to write lots of little bits (usually) versus writing giant globs, especially in direct attached storage (like yours, versus a SAN).  Most db disk benchmarks revolve around IOPs…and this is where SSDs utterly crush spinning disks. You can get maybe 200 IOPs out of each disk, you have 4 in raid  10 so you get a whopping 400 IOPs.  A single quality SSD (like the Samsung 850 pro) will support a minimum of 40k IOPs on reads and 80k IOPs on writes.  That’s why SSDs are eliminating spinning disks when performance is critical and budget allows. Back to your question – the MBPs is the capacity of interface, so it makes sense that it’s the same for both reads and writes.  The perc raid controller will be saving your bacon on writes, with 2gb cache (assuming it’s caching writes), so it becomes the equivalent of an SSD up to the capacity limit of the write cache.  With only 400 iops of write speed, with a busy server you can easily saturate the cache and then your system will drop to a crawl. If I didn’t answer the intent of your question, feel free to clarify for me. Mike From: [email protected] [mailto:[email protected]] On Behalf Of Dave StibranySent: Thursday, March 17, 2016 1:45 PMTo: [email protected]: [PERFORM] Disk Benchmarking Question I'm pretty new to benchmarking hard disks and I'm looking for some advice on interpreting the results of some basic tests. The server is:- Dell PowerEdge R430- 1 x Intel Xeon E5-2620 2.4GHz- 32 GB RAM- 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10- PERC H730P Raid Controller with 2GB cache in write back mode. The OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and an xfs volume for PGDATA. I ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I ran 'bonnie++ -n0 -f' on the root volume. Here's a link to the bonnie test resultshttps://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0 The vendor stats say sustained throughput of 215 to 108 MBps, so I guess I'd expect around 400-800 MBps read and 200-400 MBps write. In any case, I'm pretty confused as to why the read and write sequential speeds are almost identical. Does this look wrong? 
Thanks, Dave   -- THIS IS A TEST", "msg_date": "Fri, 18 Mar 2016 10:48:06 -0400", "msg_from": "Dave Stibrany <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disk Benchmarking Question" }, { "msg_contents": "Sorry for the delay, long work day!\n\n \n\nOk, I THINK I understand where you’re going. Do it this way:\n\n4 drives in Raid10 = 2 pairs of mirrored drives, aka still 2 active drives (2 are failover). They are sharing the 12gbps SAS interface, but that speed is quite irrelevant…it’s just a giant pipe for filling lots of drives. \n\n \n\nEach of your 2 drives has a max seq read/write spec 200 MBPs (WAY max). When I say max, I mean, under totally edge laboratory conditions, writing to the outer few tracks with purely sequential data (never happens in the real world). With 2 drives running perfectly in raid 10, the theoretical max would be 400mbps. Real world, less than half, on sequential.\n\n \n\nBut random writes are the rulers of most activity in the data world (think of writing a single row to a table – a few thousand bytes that might be plopped anywhere on the disk and then randomly retrieved. So the MBPs throughput number becomes mostly meaningless (because the data chunks are small and random), and IOPs and drive seek times become king (thus my earlier comments).\n\n \n\nSo – if you’re having disk performance issues with a database, you either add more spinning disks (to increase IOPs/distribute them) or switch to SSDs and forget about almost everything…\n\n \n\nMike\n\n \n\n------------------\n\nFrom: Dave Stibrany [mailto:[email protected]] \nSent: Friday, March 18, 2016 7:48 AM\n\n\n\nHey Mike,\n\n \n\nThanks for the response. I think where I'm confused is that I thought vendor specified MBps was an estimate of sequential read/write speed. Therefore if you're in RAID10, you'd have 4x the sequential read speed and 2x the sequential write speed. Am I misunderstanding something?\n\n \n\nAlso, when you mention that MBPs is the capacity of the interface, what do you mean exactly. I've been taking interface speed to be the electronic transfer speed, not the speed from the actual physical medium, and more in the 6-12 gigabit range.\n\n \n\nPlease let me know if I'm way off on any of this, I'm hoping to have my mental model updated.\n\n \n\nThanks!\n\n \n\nDave\n\n \n\nOn Thu, Mar 17, 2016 at 5:11 PM, Mike Sofen <[email protected] <mailto:[email protected]> > wrote:\n\nHi Dave,\n\n \n\nDatabase disk performance has to take into account IOPs, and IMO, over MBPs, since it’s the ability of the disk subsystem to write lots of little bits (usually) versus writing giant globs, especially in direct attached storage (like yours, versus a SAN). Most db disk benchmarks revolve around IOPs…and this is where SSDs utterly crush spinning disks.\n\n \n\nYou can get maybe 200 IOPs out of each disk, you have 4 in raid 10 so you get a whopping 400 IOPs. A single quality SSD (like the Samsung 850 pro) will support a minimum of 40k IOPs on reads and 80k IOPs on writes. That’s why SSDs are eliminating spinning disks when performance is critical and budget allows.\n\n \n\nBack to your question – the MBPs is the capacity of interface, so it makes sense that it’s the same for both reads and writes. The perc raid controller will be saving your bacon on writes, with 2gb cache (assuming it’s caching writes), so it becomes the equivalent of an SSD up to the capacity limit of the write cache. 
With only 400 iops of write speed, with a busy server you can easily saturate the cache and then your system will drop to a crawl.\n\n \n\nIf I didn’t answer the intent of your question, feel free to clarify for me.\n\n \n\nMike\n\n \n\nFrom: [email protected] <mailto:[email protected]> [mailto:[email protected] <mailto:[email protected]> ] On Behalf Of Dave Stibrany\nSent: Thursday, March 17, 2016 1:45 PM\nTo: [email protected] <mailto:[email protected]> \nSubject: [PERFORM] Disk Benchmarking Question\n\n \n\nI'm pretty new to benchmarking hard disks and I'm looking for some advice on interpreting the results of some basic tests.\n\n \n\nThe server is:\n\n- Dell PowerEdge R430\n\n- 1 x Intel Xeon E5-2620 2.4GHz\n\n- 32 GB RAM\n\n- 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10\n\n- PERC H730P Raid Controller with 2GB cache in write back mode.\n\n \n\nThe OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and an xfs volume for PGDATA.\n\n \n\nI ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I ran 'bonnie++ -n0 -f' on the root volume.\n\n \n\nHere's a link to the bonnie test results\n\nhttps://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0\n\n \n\nThe vendor stats say sustained throughput of 215 to 108 MBps, so I guess I'd expect around 400-800 MBps read and 200-400 MBps write. In any case, I'm pretty confused as to why the read and write sequential speeds are almost identical. Does this look wrong?\n\n \n\nThanks,\n\n \n\nDave\n\n \n\n \n\n \n\n\n\n\n\n \n\n-- \n\nTHIS IS A TEST\n\n\nSorry for the delay, long work day! Ok, I THINK I understand where you’re going.  Do it this way:4 drives in Raid10 = 2 pairs of mirrored drives, aka still 2 active drives (2 are failover).  They are sharing the 12gbps SAS interface, but that speed is quite irrelevant…it’s just a giant pipe for filling lots of drives.   Each of your 2 drives has a max seq read/write spec 200 MBPs (WAY max).  When I say max, I mean, under totally edge laboratory conditions, writing to the outer few tracks with purely sequential data (never happens in the real world).  With 2 drives running perfectly in raid 10, the theoretical max would be 400mbps.  Real world, less than half, on sequential. But random writes are the rulers of most activity in the data world (think of writing a single row to a table – a few thousand bytes that might be plopped anywhere on the disk and then randomly retrieved.  So the MBPs throughput number becomes mostly meaningless (because the data chunks are small and random), and IOPs and drive seek times become king (thus my earlier comments). So – if you’re having disk performance issues with a database, you either add more spinning disks (to increase IOPs/distribute them) or switch to SSDs and forget about almost everything… Mike ------------------From: Dave Stibrany [mailto:[email protected]] Sent: Friday, March 18, 2016 7:48 AMHey Mike, Thanks for the response. I think where I'm confused is that I thought vendor specified MBps was an estimate of sequential read/write speed. Therefore if you're in RAID10, you'd have 4x the sequential read speed and 2x the sequential write speed. Am I misunderstanding something? Also, when you mention that MBPs is the capacity of the interface, what do you mean exactly. I've been taking interface speed to be the electronic transfer speed, not the speed from the actual physical medium, and more in the 6-12 gigabit range. Please let me know if I'm way off on any of this, I'm hoping to have my mental model updated. Thanks! 
Dave On Thu, Mar 17, 2016 at 5:11 PM, Mike Sofen <[email protected]> wrote:Hi Dave, Database disk performance has to take into account IOPs, and IMO, over MBPs, since it’s the ability of the disk subsystem to write lots of little bits (usually) versus writing giant globs, especially in direct attached storage (like yours, versus a SAN).  Most db disk benchmarks revolve around IOPs…and this is where SSDs utterly crush spinning disks. You can get maybe 200 IOPs out of each disk, you have 4 in raid  10 so you get a whopping 400 IOPs.  A single quality SSD (like the Samsung 850 pro) will support a minimum of 40k IOPs on reads and 80k IOPs on writes.  That’s why SSDs are eliminating spinning disks when performance is critical and budget allows. Back to your question – the MBPs is the capacity of interface, so it makes sense that it’s the same for both reads and writes.  The perc raid controller will be saving your bacon on writes, with 2gb cache (assuming it’s caching writes), so it becomes the equivalent of an SSD up to the capacity limit of the write cache.  With only 400 iops of write speed, with a busy server you can easily saturate the cache and then your system will drop to a crawl. If I didn’t answer the intent of your question, feel free to clarify for me. Mike From: [email protected] [mailto:[email protected]] On Behalf Of Dave StibranySent: Thursday, March 17, 2016 1:45 PMTo: [email protected]: [PERFORM] Disk Benchmarking Question I'm pretty new to benchmarking hard disks and I'm looking for some advice on interpreting the results of some basic tests. The server is:- Dell PowerEdge R430- 1 x Intel Xeon E5-2620 2.4GHz- 32 GB RAM- 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10- PERC H730P Raid Controller with 2GB cache in write back mode. The OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and an xfs volume for PGDATA. I ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I ran 'bonnie++ -n0 -f' on the root volume. Here's a link to the bonnie test resultshttps://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0 The vendor stats say sustained throughput of 215 to 108 MBps, so I guess I'd expect around 400-800 MBps read and 200-400 MBps write. In any case, I'm pretty confused as to why the read and write sequential speeds are almost identical. Does this look wrong? Thanks, Dave    -- THIS IS A TEST", "msg_date": "Fri, 18 Mar 2016 20:22:25 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk Benchmarking Question" }, { "msg_contents": "On Thu, Mar 17, 2016 at 2:45 PM, Dave Stibrany <[email protected]> wrote:\n> I'm pretty new to benchmarking hard disks and I'm looking for some advice on\n> interpreting the results of some basic tests.\n>\n> The server is:\n> - Dell PowerEdge R430\n> - 1 x Intel Xeon E5-2620 2.4GHz\n> - 32 GB RAM\n> - 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10\n> - PERC H730P Raid Controller with 2GB cache in write back mode.\n>\n> The OS is Ubuntu 14.04, I'm using LVM and I have an ext4 volume for /, and\n> an xfs volume for PGDATA.\n>\n> I ran some dd and bonnie++ tests and I'm a bit confused by the numbers. I\n> ran 'bonnie++ -n0 -f' on the root volume.\n>\n> Here's a link to the bonnie test results\n> https://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0\n>\n> The vendor stats say sustained throughput of 215 to 108 MBps, so I guess I'd\n> expect around 400-800 MBps read and 200-400 MBps write. 
In any case, I'm\n> pretty confused as to why the read and write sequential speeds are almost\n> identical. Does this look wrong?\n\nFor future reference, it's good to include the data you linked to in\nyour post, as in 2, 5 or 10 years the postgresql discussion archives\nwill still be here but your dropbox may or may not, and then people\nwon't know what numbers you are referring to.\n\nGiven the size of your bonnie test set and the fact that you're using\nRAID-10, the cache should make little or no difference. The RAID\ncontroller may or may not interleave reads between all four drives.\nSome do, some don't. It looks to me like yours doesn't. I.e. when\nreading it's not reading all 4 disks at once, but just 2, 1 from each\npair.\n\nBut the important question here is what kind of workload are you\nlooking at throwing at this server? If it's going to be a reporting\ndatabase you may get as good or better read performance from RAID-5 as\nRAID-10, especially if you add more drives. If you're looking at\ntransactional use then as Mike suggested SSDs might be your best\nchoice.\n\nWe run some big transactional dbs at work that are 4 to 6 TB and for\nthose we use 10 800GB SSDs in RAID-5 with the RAID controller cache\nturned off. We can hit ~18k tps in pgbench on ~100GB test sets. With\nthe cache on we drop to 3 to 5k tps. With 512MB cache we overwrite the\ncache every couple of seconds and it just gets in the way.\n\nSSDs win hands down if you need random access speed. It's like a\nStanley Steamer (spinners) versus a Bugatti Veyron (SSDs).\n\nFor sequential throughput like a reporting server often spinners do\nalright, as long as there's only one or two processes accessing your\ndata at a time. As soon as you start to get more accesses going as you\nhave RAID-10 pairs your performance will drop off noticeably.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 19 Mar 2016 04:29:35 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk Benchmarking Question" }, { "msg_contents": "On Sat, Mar 19, 2016 at 4:29 AM, Scott Marlowe <[email protected]> wrote:\n\n> Given the size of your bonnie test set and the fact that you're using\n> RAID-10, the cache should make little or no difference. The RAID\n> controller may or may not interleave reads between all four drives.\n> Some do, some don't. It looks to me like yours doesn't. I.e. when\n> reading it's not reading all 4 disks at once, but just 2, 1 from each\n> pair.\n\nPoint of clarification. It may be that if two processes are reading\nthe data set at once you'd get a sustained individual throughput that\nmatches what a single read can get.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 19 Mar 2016 04:32:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk Benchmarking Question" }, { "msg_contents": "Thanks for the feedback guys. 
I'm looking forward to the day when we\nupgrade to SSDs.\n\nFor future reference, the bonnie++ numbers I was referring to are:\n\nSize: 63G\n\nSequential Output:\n------------------------\n396505 K/sec\n% CPU 21\n\nSequential Input:\n------------------------\n401117 K/sec\n% CPU 21\n\nRandom Seeks:\n----------------------\n650.7 /sec\n% CPU 25\n\nI think a lot of my confusion resulted from expecting sequential reads to\nbe 4x the speed of a single disk because the disks are in RAID10. I'm\nthinking now that the 4x only applies to random reads.\n\nOn Sat, Mar 19, 2016 at 6:32 AM, Scott Marlowe <[email protected]>\nwrote:\n\n> On Sat, Mar 19, 2016 at 4:29 AM, Scott Marlowe <[email protected]>\n> wrote:\n>\n> > Given the size of your bonnie test set and the fact that you're using\n> > RAID-10, the cache should make little or no difference. The RAID\n> > controller may or may not interleave reads between all four drives.\n> > Some do, some don't. It looks to me like yours doesn't. I.e. when\n> > reading it's not reading all 4 disks at once, but just 2, 1 from each\n> > pair.\n>\n> Point of clarification. It may be that if two processes are reading\n> the data set at once you'd get a sustained individual throughput that\n> matches what a single read can get.\n>\n\n\n\n-- \n*THIS IS A TEST*\n\nThanks for the feedback guys. I'm looking forward to the day when we upgrade to SSDs.For future reference, the bonnie++ numbers I was referring to are: Size: 63GSequential Output: ------------------------396505 K/sec% CPU 21Sequential Input: ------------------------401117 K/sec% CPU 21Random Seeks:----------------------650.7 /sec% CPU 25I think a lot of my confusion resulted from expecting sequential reads to be 4x the speed of a single disk because the disks are in RAID10. I'm thinking now that the 4x only applies to random reads.On Sat, Mar 19, 2016 at 6:32 AM, Scott Marlowe <[email protected]> wrote:On Sat, Mar 19, 2016 at 4:29 AM, Scott Marlowe <[email protected]> wrote:\n\n> Given the size of your bonnie test set and the fact that you're using\n> RAID-10, the cache should make little or no difference. The RAID\n> controller may or may not interleave reads between all four drives.\n> Some do, some don't. It looks to me like yours doesn't. I.e. when\n> reading it's not reading all 4 disks at once, but just 2, 1 from each\n> pair.\n\nPoint of clarification. It may be that if two processes are reading\nthe data set at once you'd get a sustained individual throughput that\nmatches what a single read can get.\n-- THIS IS A TEST", "msg_date": "Tue, 22 Mar 2016 10:44:07 -0400", "msg_from": "Dave Stibrany <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disk Benchmarking Question" } ]
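The figures above are raw drive characteristics; how much the limited random IOPS of a small RAID 10 actually hurt a given PostgreSQL workload depends on how often reads fall through to the disks instead of being served from shared_buffers. A rough, hedged check from the database side is the statio counters (blocks served from the OS page cache still count as "read" here, so this only gives an upper bound on real disk traffic):

-- per-table share of reads served from shared_buffers vs. requested from below
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(100.0 * heap_blks_hit
             / nullif(heap_blks_hit + heap_blks_read, 0), 1) AS hit_pct
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;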
[ { "msg_contents": "Hi,\n\nWhile developing a batch processing platform using postgresql as the \nunderlying data store we are seeing a performance decline in our \napplication.\n\nIn this application a job is broken up into chunks where each chunk \ncontains a number of items (typically 10).\n\nCREATE TABLE item (\n id SMALLINT NOT NULL,\n chunkId INTEGER NOT NULL,\n jobId INTEGER NOT NULL,\n -- other attributes omitted for brewity\n PRIMARY KEY (jobId, chunkId, id)\n);\n\nSo a job with 600.000 items results in 600.000 rows in the items table \nwith a fixed jobId, chunkId ranging from 0-59999 and for each chunkId an \nid ranging from 0-9.\n\nAll ten inserts for a particular chunkId are handled in a single \ntransaction, and over time we are seeing an increase in transaction \nexecution time, <100ms for the first 100.000 items, >300ms when we reach \nthe 400.000 mark, and the trend seems to be forever increasing.\n\nNo decline is observed if we instead sequentially submit 6 jobs of \n100.000 items each.\n\nTherefore we are beginning to wonder if we are hitting some sort of \nupper limit with regards to the multi column index? Perhaps something \ncausing it to sort on disk or something like that?\n\nAny suggestions to the cause of this would be very much appreciated.\n\njobstore=> SELECT version();\n version\n----------------------------------------------------------------------------------------------\n PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Debian \n4.7.2-5) 4.7.2, 64-bit\n\njobstore=> SELECT name, current_setting(name), SOURCE\njobstore-> FROM pg_settings\njobstore-> WHERE SOURCE NOT IN ('default', 'override');\n name | current_setting | source\n----------------------------+----------------------------------------+----------------------\n application_name | psql | client\n client_encoding | UTF8 | client\n DateStyle | ISO, YMD | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n lc_messages | en_DK.UTF-8 | configuration file\n lc_monetary | en_DK.UTF-8 | configuration file\n lc_numeric | en_DK.UTF-8 | configuration file\n lc_time | en_DK.UTF-8 | configuration file\n listen_addresses | * | configuration file\n log_line_prefix | %t | configuration file\n log_timezone | localtime | configuration file\n max_connections | 100 | configuration file\n max_stack_depth | 2MB | environment variable\n port | 5432 | configuration file\n shared_buffers | 128MB | configuration file\n ssl | on | configuration file\n ssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | \nconfiguration file\n ssl_key_file | /etc/ssl/private/ssl-cert-snakeoil.key | \nconfiguration file\n TimeZone | localtime | configuration file\n\nKind regards,\n\nJan Bauer Nielsen\nSoftware developer\nDBC as\nhttp://www.dbc.dk/english\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 18 Mar 2016 14:26:14 +0100", "msg_from": "Jan Bauer Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Performance decline maybe caused by multi-column index?" 
}, { "msg_contents": "On Fri, Mar 18, 2016 at 6:26 AM, Jan Bauer Nielsen <[email protected]> wrote:\n> Hi,\n>\n> While developing a batch processing platform using postgresql as the\n> underlying data store we are seeing a performance decline in our\n> application.\n>\n> In this application a job is broken up into chunks where each chunk contains\n> a number of items (typically 10).\n>\n> CREATE TABLE item (\n> id SMALLINT NOT NULL,\n> chunkId INTEGER NOT NULL,\n> jobId INTEGER NOT NULL,\n> -- other attributes omitted for brewity\n> PRIMARY KEY (jobId, chunkId, id)\n> );\n>\n> So a job with 600.000 items results in 600.000 rows in the items table with\n> a fixed jobId, chunkId ranging from 0-59999 and for each chunkId an id\n> ranging from 0-9.\n\nIs it 0-59999 in order, or in some arbitrary order?\n\n>\n> All ten inserts for a particular chunkId are handled in a single\n> transaction, and over time we are seeing an increase in transaction\n> execution time, <100ms for the first 100.000 items, >300ms when we reach the\n> 400.000 mark, and the trend seems to be forever increasing.\n\nWhy such small transactions? Why not do the entire 600.000 in on transaction?\n\nAre you inserting them via COPY, or doing single-valued inserts in a\nloop, or inserts with multiple value lists?\n\n\n>\n> No decline is observed if we instead sequentially submit 6 jobs of 100.000\n> items each.\n>\n> Therefore we are beginning to wonder if we are hitting some sort of upper\n> limit with regards to the multi column index? Perhaps something causing it\n> to sort on disk or something like that?\n\n\nMy gut feeling is that is more about memory management in your client,\nrather than something going on in the database. What does `top`, or\n`perf top`, show you about what is going on?\n\nCan you produce a simple perl or python script that reproduces the problem?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 19 Mar 2016 14:48:08 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance decline maybe caused by multi-column index?" } ]
[ { "msg_contents": "Guys\n Whats the best way to grant select on pg_stat_activity so that non\nsuper user can access this view.\n\nThanks\nAvi\n\nGuys        Whats the best way to grant select on pg_stat_activity so that non super user can access this view. ThanksAvi", "msg_date": "Fri, 18 Mar 2016 13:09:56 -0700", "msg_from": "avi Singh <[email protected]>", "msg_from_op": true, "msg_subject": "grant select on pg_stat_activity" }, { "msg_contents": "On 03/18/2016 01:09 PM, avi Singh wrote:\n> Guys\n> Whats the best way to grant select on pg_stat_activity so that\n> non super user can access this view.\n\nThey should be able to, see below. If that is not your case, then more \ninformation is needed.\n\nguest@test=> select current_user;\n current_user \n \n\n-------------- \n \n\n guest\n(1 row)\n\nguest@test=> \\du guest\n List of roles\n Role name | Attributes | Member of\n-----------+------------+-----------\n guest | | {}\n\n\nguest@test=> select * from pg_stat_activity;\n-[ RECORD 1 ]----+--------------------------------\ndatid | 16385\ndatname | test\npid | 2622\nusesysid | 1289138\nusename | guest\napplication_name | psql\nclient_addr |\nclient_hostname |\nclient_port | -1\nbackend_start | 2016-03-18 14:41:43.906754-07\nxact_start | 2016-03-18 14:44:22.550742-07\nquery_start | 2016-03-18 14:44:22.550742-07\nstate_change | 2016-03-18 14:44:22.550746-07\nwaiting | f\nstate | active\nbackend_xid |\nbackend_xmin | 58635\nquery | select * from pg_stat_activity;\n\n>\n> Thanks\n> Avi\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 18 Mar 2016 14:46:46 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] grant select on pg_stat_activity" }, { "msg_contents": "On Fri, Mar 18, 2016 at 5:46 PM, Adrian Klaver <[email protected]>\nwrote:\n\n> They should be able to, see below. If that is not your case, then more\n> information is needed.\n>\n\nYou can see your own queries, however non-superuser will not see the query\nfor other users. You will be able to see the other info, though.\n\nI do not know what permission is necessary to make that visible. My hunch\nis it will require superuser privileges.\n\nOn Fri, Mar 18, 2016 at 5:46 PM, Adrian Klaver <[email protected]> wrote:They should be able to, see below. If that is not your case, then more information is needed.You can see your own queries, however non-superuser will not see the query for other users. You will be able to see the other info, though.I do not know what permission is necessary to make that visible. My hunch is it will require superuser privileges.", "msg_date": "Mon, 21 Mar 2016 10:15:12 -0400", "msg_from": "Vick Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: grant select on pg_stat_activity" }, { "msg_contents": "On 03/21/2016 07:15 AM, Vick Khera wrote:\n>\n> On Fri, Mar 18, 2016 at 5:46 PM, Adrian Klaver\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> They should be able to, see below. If that is not your case, then\n> more information is needed.\n>\n>\n> You can see your own queries, however non-superuser will not see the\n> query for other users. You will be able to see the other info, though.\n\nDid not think of that.\n\n>\n> I do not know what permission is necessary to make that visible. 
My\n> hunch is it will require superuser privileges\n\nHmm, I would hesitate to mess with permissions on a system view.\n\nA quick and dirty fix as a superuser:\n\nCREATE FUNCTION pg_stat_allusers( )\n RETURNS setof pg_stat_activity\n LANGUAGE sql SECURITY DEFINER\nAS $function$\n SELECT * FROM pg_stat_activity;\n$function$\n\n\ntest=> select current_user;\n-[ RECORD 1 ]+------\ncurrent_user | guest\n\ntest=> select * from pg_stat_allusers();\n-[ RECORD 1 ]----+----------------------------------------------\ndatid | 983301\ndatname | test\npid | 5886\nusesysid | 10\nusename | postgres\napplication_name | psql\nclient_addr |\nclient_hostname |\nclient_port | -1\nbackend_start | 2016-03-21 08:03:43.60797-07\nxact_start |\nquery_start | 2016-03-21 08:14:47.166341-07\nstate_change | 2016-03-21 08:14:47.166953-07\nwaiting | f\nstate | idle\nbackend_xid |\nbackend_xmin |\nquery | SELECT pg_catalog.pg_get_functiondef(1730587)\n-[ RECORD 2 ]----+----------------------------------------------\ndatid | 983301\ndatname | test \n \n\npid | 5889 \n \n\nusesysid | 432800 \n \n\nusename | guest \n \n\napplication_name | psql \n \n\nclient_addr | \n \n\nclient_hostname | \n \n\nclient_port | -1 \n \n\nbackend_start | 2016-03-21 08:03:48.559611-07 \n \n\nxact_start | 2016-03-21 08:18:40.245858-07 \n \n\nquery_start | 2016-03-21 08:18:40.245858-07 \n \n\nstate_change | 2016-03-21 08:18:40.245862-07 \n \n\nwaiting | f \n \n\nstate | active \n \n\nbackend_xid | \n \n\nbackend_xmin | 119564 \n \n\nquery | select * from pg_stat_allusers();\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Mon, 21 Mar 2016 08:19:25 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: grant select on pg_stat_activity" } ]
[ { "msg_contents": "Hi all,\n\nPlease provide some advise on the following query not using the index:\n\npgsql version: 9.2.4\nOS version: RedHat 6.5\nRam: 64 GB\nrows in testdb: 180 million\nshared_buffers: 16GB\neffective_cache_size: 32GB\nwork_mem='32MB'\n\nI have executed the query below after I vaccum analyze the table.\n\nI have 2 questions:\n\n 1. Why does the optimizer chose not to use the index when it will run\n faster?\n 2. How do I ensure the optimizer will use the index without setting\n enable_seqscan='off'\n\n\n*Table structure.*\ntestdb=# \\d testtable\n Table \"public.testtable\"\n Column | Type | Modifiers\n-------------------+---------+-----------\n pk | text | not null\n additionaldetails | text |\n authtoken | text | not null\n customid | text |\n eventstatus | text | not null\n eventtype | text | not null\n module | text | not null\n nodeid | text | not null\n rowprotection | text |\n rowversion | integer | not null\n searchdetail1 | text |\n searchdetail2 | text |\n sequencenumber | bigint | not null\n service | text | not null\n timestamp | bigint | not null\nIndexes:\n \"testtable_pkey\" PRIMARY KEY, btree (pk)\n \"testtable_nodeid_eleanor1_idx\" btree (nodeid) WHERE nodeid =\n'eleanor1'::text, tablespace \"tablespace_index\"\n \"testtable_nodeid_eleanor2_idx\" btree (nodeid) WHERE nodeid =\n'eleanor2'::text, tablespace \"tablespace_index\"\n \"testtable_nodeid_eleanor3_idx\" btree (nodeid) WHERE nodeid =\n'eleanor3'::text, tablespace \"tablespace_index\"\n\n*Explain Plan with enable_seqscan='on'*\ntestdb=# explain analyze select max ( auditrecor0_.sequenceNumber ) AS\ncol_0_0_ From testdb auditrecor0_ where auditrecor0_.nodeid = 'eleanor1';\n QUERY\nPLAN\n\n-----------------------------------------------------------------------------------------------------------------------------\n-------------------------\n Aggregate (cost=18291486.05..18291486.06 rows=1 width=8) (actual\ntime=484907.446..484907.446 rows=1 loops=1)\n -> Seq Scan on testdb auditrecor0_ (cost=0.00..18147465.00\nrows=57608421 width=8) (actual time=0.166..473959.12\n6 rows=57801797 loops=1)\n Filter: (nodeid = 'eleanor1'::text)\n Rows Removed by Filter: 126233820\n Total runtime: 484913.013 ms\n(5 rows)\n\n*Explain Plan with enable_seqscan='off'*\ntestdb=# explain analyze select max ( auditrecor0_.sequenceNumber ) AS\ncol_0_0_ From testdb auditrecor0_ where auditrecor0_.nodeid = 'eleanor3';\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------\n Aggregate (cost=19226040.50..19226040.51 rows=1 width=8) (actual\ntime=388293.245..388293.245 rows=1 loops=1)\n -> Bitmap Heap Scan on testdb auditrecor0_\n (cost=2291521.32..19046381.97 rows=71863412 width=8) (actual\ntime=15626.372..375378.362 rows=71\n412687 loops=1)\n Recheck Cond: (nodeid = 'eleanor3'::text)\n Rows Removed by Index Recheck: 900820\n -> Bitmap Index Scan on testdb_nodeid_eleanor3_idx\n (cost=0.00..2273555.47 rows=71863412 width=0) (actual\ntime=15503.465..15503.465 r\nows=71412687 loops=1)\n Index Cond: (nodeid = 'eleanor3'::text)\n Total runtime: 388294.378 ms\n(7 rows)\n\n\nThanks!\n\n-- \nRegards,\nAng Wei Shan\n\nHi all,Please provide some advise on the following query not using the index:pgsql version: 9.2.4OS version: RedHat 6.5Ram: 64 GBrows in testdb: 180 millionshared_buffers: 16GBeffective_cache_size: 32GBwork_mem='32MB'I have executed the query below after I vaccum analyze the table.I 
have 2 questions:Why does the optimizer chose not to use the index when it will run faster?How do I ensure the optimizer will use the index without setting enable_seqscan='off'Table structure.testdb=# \\d testtable     Table \"public.testtable\"      Column       |  Type   | Modifiers-------------------+---------+----------- pk                | text    | not null additionaldetails | text    | authtoken         | text    | not null customid          | text    | eventstatus       | text    | not null eventtype         | text    | not null module            | text    | not null nodeid            | text    | not null rowprotection     | text    | rowversion        | integer | not null searchdetail1     | text    | searchdetail2     | text    | sequencenumber    | bigint  | not null service           | text    | not null timestamp         | bigint  | not nullIndexes:    \"testtable_pkey\" PRIMARY KEY, btree (pk)    \"testtable_nodeid_eleanor1_idx\" btree (nodeid) WHERE nodeid = 'eleanor1'::text, tablespace \"tablespace_index\"    \"testtable_nodeid_eleanor2_idx\" btree (nodeid) WHERE nodeid = 'eleanor2'::text, tablespace \"tablespace_index\"    \"testtable_nodeid_eleanor3_idx\" btree (nodeid) WHERE nodeid = 'eleanor3'::text, tablespace \"tablespace_index\"Explain Plan with enable_seqscan='on'testdb=# explain analyze select max ( auditrecor0_.sequenceNumber ) AS col_0_0_ From testdb auditrecor0_ where auditrecor0_.nodeid = 'eleanor1';                                                                      QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=18291486.05..18291486.06 rows=1 width=8) (actual time=484907.446..484907.446 rows=1 loops=1)   ->  Seq Scan on testdb auditrecor0_  (cost=0.00..18147465.00 rows=57608421 width=8) (actual time=0.166..473959.126 rows=57801797 loops=1)         Filter: (nodeid = 'eleanor1'::text)         Rows Removed by Filter: 126233820 Total runtime: 484913.013 ms(5 rows)Explain Plan with enable_seqscan='off'testdb=# explain analyze select max ( auditrecor0_.sequenceNumber ) AS col_0_0_ From testdb auditrecor0_ where auditrecor0_.nodeid = 'eleanor3';                                                                                  QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=19226040.50..19226040.51 rows=1 width=8) (actual time=388293.245..388293.245 rows=1 loops=1)   ->  Bitmap Heap Scan on testdb auditrecor0_  (cost=2291521.32..19046381.97 rows=71863412 width=8) (actual time=15626.372..375378.362 rows=71412687 loops=1)         Recheck Cond: (nodeid = 'eleanor3'::text)         Rows Removed by Index Recheck: 900820         ->  Bitmap Index Scan on testdb_nodeid_eleanor3_idx  (cost=0.00..2273555.47 rows=71863412 width=0) (actual time=15503.465..15503.465 rows=71412687 loops=1)               Index Cond: (nodeid = 'eleanor3'::text) Total runtime: 388294.378 ms(7 rows)Thanks!-- Regards,Ang Wei Shan", "msg_date": "Sat, 26 Mar 2016 21:14:15 +0800", "msg_from": "Wei Shan <[email protected]>", "msg_from_op": true, "msg_subject": "Query not using Index" }, { "msg_contents": "Wei Shan <[email protected]> wrote:\n\n> Hi all,\n> \n> Please provide some advise on the following query not using the index:\n> I have 2 questions:\n> \n> 1. 
Why does the optimizer chose not to use the index when it will run faster?\n\nbecause of the estimated costs.:\n\nSeq Scan on testdb auditrecor0_ �(cost=0.00..18147465.00\nBitmap Heap Scan on testdb auditrecor0_ �(cost=2291521.32..19046381.97\n\nThe estimated costs for the index-scan are higher. \n\n\n> 2. How do I ensure the optimizer will use the index without setting\n> enable_seqscan='off'\n\nYou have a dedicated tablespace for indexes, is this a SSD? You can try\nto reduce the random_page_cost, from default 4 to maybe 2.(depends on\nhardware) This would reduce the estimated costs for the Index-scan and\nprefer the index-scan.\n\n\n\nRegards, Andreas Kretschmer\n-- \nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 26 Mar 2016 15:13:46 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query not using Index" }, { "msg_contents": "Hi Andreas,\n\nThe tablespace is not on SSD although I intend to do it within the next\nweek. I actually tried reducing the random_page_cost to 0.2 but it doesn't\nhelp.\n\nOn 26 March 2016 at 22:13, Andreas Kretschmer <[email protected]>\nwrote:\n\n> Wei Shan <[email protected]> wrote:\n>\n> > Hi all,\n> >\n> > Please provide some advise on the following query not using the index:\n> > I have 2 questions:\n> >\n> > 1. Why does the optimizer chose not to use the index when it will run\n> faster?\n>\n> because of the estimated costs.:\n>\n> Seq Scan on testdb auditrecor0_ (cost=0.00..18147465.00\n> Bitmap Heap Scan on testdb auditrecor0_ (cost=2291521.32..19046381.97\n>\n> The estimated costs for the index-scan are higher.\n>\n>\n> > 2. How do I ensure the optimizer will use the index without setting\n> > enable_seqscan='off'\n>\n> You have a dedicated tablespace for indexes, is this a SSD? You can try\n> to reduce the random_page_cost, from default 4 to maybe 2.(depends on\n> hardware) This would reduce the estimated costs for the Index-scan and\n> prefer the index-scan.\n>\n>\n>\n> Regards, Andreas Kretschmer\n> --\n> Andreas Kretschmer\n> http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards,\nAng Wei Shan\n\nHi Andreas,The tablespace is not on SSD although I intend to do it within the next week. I actually tried reducing the random_page_cost to 0.2 but it doesn't help.On 26 March 2016 at 22:13, Andreas Kretschmer <[email protected]> wrote:Wei Shan <[email protected]> wrote:\n\n> Hi all,\n>\n> Please provide some advise on the following query not using the index:\n> I have 2 questions:\n>\n>  1. Why does the optimizer chose not to use the index when it will run faster?\n\nbecause of the estimated costs.:\n\nSeq Scan on testdb auditrecor0_  (cost=0.00..18147465.00\nBitmap Heap Scan on testdb auditrecor0_  (cost=2291521.32..19046381.97\n\nThe estimated costs for the index-scan are higher.\n\n\n>  2. How do I ensure the optimizer will use the index without setting\n>     enable_seqscan='off'\n\nYou have a dedicated tablespace for indexes, is this a SSD? 
You can try\nto reduce the random_page_cost, from default 4 to maybe 2.(depends on\nhardware) This would reduce the estimated costs for the Index-scan and\nprefer the index-scan.\n\n\n\nRegards, Andreas Kretschmer\n--\nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Regards,Ang Wei Shan", "msg_date": "Mon, 28 Mar 2016 00:12:43 +0800", "msg_from": "Wei Shan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query not using Index" }, { "msg_contents": "On Sun, Mar 27, 2016 at 9:12 AM, Wei Shan <[email protected]> wrote:\n> Hi Andreas,\n>\n> The tablespace is not on SSD although I intend to do it within the next\n> week. I actually tried reducing the random_page_cost to 0.2 but it doesn't\n> help.\n\nSetting random_page_cost to less than seq_page_cost is nonsensical.\n\nYou could try to increase cpu_tuple_cost to 0.015 or 0.02\n\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 27 Mar 2016 12:20:36 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query not using Index" } ]
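A low-risk way to try the cost-parameter advice from the two replies is to change the settings only for the current session and compare plans; the values below are the ones suggested in the thread and still have to be validated against the real hardware (names follow the \d output in the first post):

SET random_page_cost = 2;     -- Andreas' suggestion; depends on storage
SET cpu_tuple_cost = 0.02;    -- Jeff's suggestion

EXPLAIN ANALYZE
SELECT max(sequencenumber)
FROM testtable
WHERE nodeid = 'eleanor1';

RESET random_page_cost;
RESET cpu_tuple_cost;

As an aside that goes beyond the advice actually given in the thread: for this particular max() under a fixed nodeid, a partial index that contains sequencenumber (for example ON testtable (sequencenumber) WHERE nodeid = 'eleanor1') should allow the aggregate to be answered from one end of the index instead of a large scan.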
[ { "msg_contents": "Hi folks! I’ve a query where adding a rollup to the group by switches to GroupAggregate unexpectedly, where the standard GROUP BY uses HashAggregate. Since the rollup should only add one additional bucket, the switch to having to sort (and thus a to-disk temporary file) is very puzzling. This reads like a query optimiser bug to me. This is the first I’ve posted to the list, please forgive me if I’ve omitted any “before bugging the list” homework.\n\n\nDescription: Adding a summary row by changing “GROUP BY x” into “GROUP BY ROLLUP (x)” should not cause a switch from HashAggregate to GroupAggregate\n\n\nHere’s the “explain” from the simple GROUP BY:\n\nprojectdb=> explain analyze verbose SELECT error_code, count ( * ) FROM api_activities GROUP BY error_code;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3456930.11..3456930.16 rows=5 width=2) (actual time=26016.222..26016.223 rows=5 loops=1)\n Output: error_code, count(*)\n Group Key: api_activities.error_code\n -> Seq Scan on public.api_activities (cost=0.00..3317425.74 rows=27900874 width=2) (actual time=0.018..16232.608 rows=36224844 loops=1)\n Output: id, client_id, date_added, kind, activity, error_code\n Planning time: 0.098 ms\n Execution time: 26016.337 ms\n(7 rows)\n\nChanging this to a GROUP BY ROLLUP switches to GroupAggregate (with the corresponding to-disk temporary table being created):\n\nprojectdb=> explain analyze verbose SELECT error_code, count ( * ) FROM api_activities GROUP BY rollup (error_code);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=7149357.90..7358614.52 rows=6 width=2) (actual time=54271.725..82354.144 rows=6 loops=1)\n Output: error_code, count(*)\n Group Key: api_activities.error_code\n Group Key: ()\n -> Sort (cost=7149357.90..7219110.09 rows=27900874 width=2) (actual time=54270.636..76651.121 rows=36222428 loops=1)\n Output: error_code\n Sort Key: api_activities.error_code\n Sort Method: external merge Disk: 424864kB\n -> Seq Scan on public.api_activities (cost=0.00..3317425.74 rows=27900874 width=2) (actual time=0.053..34282.239 rows=36222428 loops=1)\n Output: error_code\n Planning time: 2.611 ms\n Execution time: 82437.416 ms\n(12 rows)\n\n\nI’ve given the output of “EXPLAIN ANAYLZE VERBOSE” rather than non-analyze, but there was no difference in the plan.\n\nRunning VACUUM FULL ANALYZE on this table makes no difference. Switching to Count(error_code) makes no difference. Using GROUP BY GROUPING SETS ((), error_code) makes no difference.\n\nI understand that a HashAggregate is possible only if it can fit all the aggregates into work_mem. There are 5 different error codes, and the statistics (from pg_stats) are showing that PG knows this. Adding just one more bucket for the “()” case should not cause a fallback to GroupAggregate.\n\n\nPostgreSQL version: 9.5.2 (just upgraded today, Thank you! 
<3 )\n\n(Was exhibiting same problem under 9.5.0)\n\n\nHow installed: apt-get package from apt.postgresql.org <http://apt.postgresql.org/>\n\n\nSettings differences:\n\n application_name: psql\n client_encoding: UTF8\n DateStyle: ISO, MDY\n default_text_search_config: pg_catalog.english\n dynamic_shared_memory_type: posix\n lc_messages: en_US.UTF-8\n lc_monetary: en_US.UTF-8\n lc_numeric: en_US.UTF-8\n lc_time: en_US.UTF-8\n listen_addresses: *\n log_line_prefix: %t [%p-%c-%l][%a][%i][%e][%s][%x-%v] %q%u@%d \n log_timezone: UTC\n logging_collector: on\n max_connections: 100\n max_stack_depth: 2MB\n port: 5432\n shared_buffers: 1GB\n ssl: on\n ssl_cert_file: /etc/ssl/certs/ssl-cert-snakeoil.pem\n ssl_key_file: /etc/ssl/private/ssl-cert-snakeoil.key\n TimeZone: UTC\n work_mem: 128MB\n\n\nOS and Version: Ubuntu Trusty: Linux 3.13.0-66-generic #108-Ubuntu SMP Wed Oct 7 15:20:27 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux\n\n\nProgram used to connect: psql\n\n\nNothing unusual in the logs, apart from the query indicating that it took a while to run.\n\n\nI know that there’s several workarounds I can use for this simple case, such as using a CTE, then doing a rollup on that, but I’m simply reporting what I think is a bug in the query optimizer.\n\n\nThank you for your attention! Please let me know if there’s any additional information you need, or additional tests you’d like to run.\n\n\n— Chris Cogdon <[email protected] <mailto:[email protected]>>\n— Using PostgreSQL since 6.2! \n\n\n\n\n\nHi folks! I’ve a query where adding a rollup to the group by switches to GroupAggregate unexpectedly, where the standard GROUP BY uses HashAggregate. Since the rollup should only add one additional bucket, the switch to having to sort (and thus a to-disk temporary file) is very puzzling. This reads like a query optimiser bug to me. 
This is the first I’ve posted to the list, please forgive me if I’ve omitted any “before bugging the list” homework.Description: Adding a summary row by changing “GROUP BY x” into “GROUP BY ROLLUP (x)” should not cause a switch from HashAggregate to GroupAggregateHere’s the “explain” from the simple GROUP BY:projectdb=> explain analyze verbose SELECT error_code, count ( * ) FROM api_activities GROUP BY error_code;                                                                 QUERY PLAN                                                                  --------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=3456930.11..3456930.16 rows=5 width=2) (actual time=26016.222..26016.223 rows=5 loops=1)   Output: error_code, count(*)   Group Key: api_activities.error_code   ->  Seq Scan on public.api_activities  (cost=0.00..3317425.74 rows=27900874 width=2) (actual time=0.018..16232.608 rows=36224844 loops=1)         Output: id, client_id, date_added, kind, activity, error_code Planning time: 0.098 ms Execution time: 26016.337 ms(7 rows)Changing this to a GROUP BY ROLLUP switches to GroupAggregate (with the corresponding to-disk temporary table being created):projectdb=> explain analyze verbose SELECT error_code, count ( * ) FROM api_activities GROUP BY rollup (error_code);                                                                    QUERY PLAN                                                                     --------------------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate  (cost=7149357.90..7358614.52 rows=6 width=2) (actual time=54271.725..82354.144 rows=6 loops=1)   Output: error_code, count(*)   Group Key: api_activities.error_code   Group Key: ()   ->  Sort  (cost=7149357.90..7219110.09 rows=27900874 width=2) (actual time=54270.636..76651.121 rows=36222428 loops=1)         Output: error_code         Sort Key: api_activities.error_code         Sort Method: external merge  Disk: 424864kB         ->  Seq Scan on public.api_activities  (cost=0.00..3317425.74 rows=27900874 width=2) (actual time=0.053..34282.239 rows=36222428 loops=1)               Output: error_code Planning time: 2.611 ms Execution time: 82437.416 ms(12 rows)I’ve given the output of “EXPLAIN ANAYLZE VERBOSE” rather than non-analyze, but there was no difference in the plan.Running VACUUM FULL ANALYZE on this table makes no difference. Switching to Count(error_code) makes no difference. Using GROUP BY GROUPING SETS ((), error_code) makes no difference.I understand that a HashAggregate is possible only if it can fit all the aggregates into work_mem. There are 5 different error codes, and the statistics (from pg_stats) are showing that PG knows this. Adding just one more bucket for the “()” case should not cause a fallback to GroupAggregate.PostgreSQL version: 9.5.2 (just upgraded today, Thank you! 
<3 )(Was exhibiting same problem under 9.5.0)How installed: apt-get package from apt.postgresql.orgSettings differences: application_name: psql client_encoding: UTF8 DateStyle: ISO, MDY default_text_search_config: pg_catalog.english dynamic_shared_memory_type: posix lc_messages: en_US.UTF-8 lc_monetary: en_US.UTF-8 lc_numeric: en_US.UTF-8 lc_time: en_US.UTF-8 listen_addresses: * log_line_prefix: %t [%p-%c-%l][%a][%i][%e][%s][%x-%v] %q%u@%d  log_timezone: UTC logging_collector: on max_connections: 100 max_stack_depth: 2MB port: 5432 shared_buffers: 1GB ssl: on ssl_cert_file: /etc/ssl/certs/ssl-cert-snakeoil.pem ssl_key_file: /etc/ssl/private/ssl-cert-snakeoil.key TimeZone: UTC work_mem: 128MBOS and Version: Ubuntu Trusty: Linux 3.13.0-66-generic #108-Ubuntu SMP Wed Oct 7 15:20:27 UTC 2015 x86_64 x86_64 x86_64 GNU/LinuxProgram used to connect: psqlNothing unusual in the logs, apart from the query indicating that it took a while to run.I know that there’s several workarounds I can use for this simple case, such as using a CTE, then doing a rollup on that, but I’m simply reporting what I think is a bug in the query optimizer.Thank you for your attention! Please let me know if there’s any additional information you need, or additional tests you’d like to run.— Chris Cogdon <[email protected]>— Using PostgreSQL since 6.2!", "msg_date": "Thu, 31 Mar 2016 10:03:27 -0700", "msg_from": "Chris Cogdon <[email protected]>", "msg_from_op": true, "msg_subject": "Adding a ROLLUP switches to GroupAggregate unexpectedly" }, { "msg_contents": "Chris Cogdon <[email protected]> writes:\n> Hi folks! I’ve a query where adding a rollup to the group by switches to\n> GroupAggregate unexpectedly, where the standard GROUP BY uses\n> HashAggregate.\n\nThe current implementation of rollup doesn't support using hashed\naggregation. I don't know if that's for lack of round tuits or because\nit's actually hard, but it's not the planner's fault.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Mar 2016 14:56:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding a ROLLUP switches to GroupAggregate unexpectedly" }, { "msg_contents": "On Thu, Mar 31, 2016 at 02:56:48PM -0400, Tom Lane wrote:\n> Chris Cogdon <[email protected]> writes:\n> > Hi folks! I’ve a query where adding a rollup to the group by switches to\n> > GroupAggregate unexpectedly, where the standard GROUP BY uses\n> > HashAggregate.\n> \n> The current implementation of rollup doesn't support using hashed\n> aggregation. I don't know if that's for lack of round tuits or because\n> it's actually hard, but it's not the planner's fault.\n> \n> \t\t\tregards, tom lane\n> \n\nHi,\n\nCribbed from the mailing list:\n\nhttp://www.postgresql.org/message-id/[email protected]\n\nThe current implementation of grouping sets only supports using sorting\nfor input. Individual sets that share a sort order are computed in one\npass. If there are sets that don't share a sort order, additional sort &\naggregation steps are performed. These additional passes are sourced by\nthe previous sort step; thus avoiding repeated scans of the source data.\n\nThe code is structured in a way that adding support for purely using\nhash aggregation or a mix of hashing and sorting is possible. 
Sorting\nwas chosen to be supported first, as it is the most generic method of\nimplementation.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Mar 2016 14:08:03 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding a ROLLUP switches to GroupAggregate unexpectedly" }, { "msg_contents": "On Thu, Mar 31, 2016 at 10:03 AM, Chris Cogdon <[email protected]> wrote:\n> Description: Adding a summary row by changing “GROUP BY x” into “GROUP BY\n> ROLLUP (x)” should not cause a switch from HashAggregate to GroupAggregate\n\nWhile this restriction has not been lifted for PostgreSQL 9.6,\nexternal sorting will be much faster in 9.6. During benchmarking,\nthere were 2x-3x speedups in overall query runtime for many common\ncases. This new performance optimization should ameliorate your ROLLUP\nproblem on 9.6, simply because the sort operation will be so much\nfaster.\n\nSimilarly, we have yet to make HashAggregates spill when they exceed\nwork_mem, which is another restriction on their use that we should get\naround to fixing. As you point out, this restriction continues to be a\nmajor consideration during planning, sometimes resulting in a\nGroupAggregate where a HashAggregate would have been faster (even with\nspilling of the hash table). However, simply having significantly\nfaster external sorts once again makes that restriction less of a\nproblem.\n\nI have noticed that the old replacement selection algorithm that the\nexternal sort would have used here does quite badly on low cardinality\ninputs, too. I bet that was a factor here.\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 9 Apr 2016 17:41:14 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding a ROLLUP switches to GroupAggregate unexpectedly" } ]
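For anyone who lands on this thread on 9.4/9.5, the CTE workaround mentioned in the report can be sketched roughly like this. It is only a sketch: the table and column names (api_activities, error_code) come from the plans above, everything else is an assumption. The idea is to let the plain GROUP BY use HashAggregate and apply the ROLLUP only to the handful of pre-aggregated rows:

-- Hypothetical sketch of the CTE workaround: pre-aggregate with a plain
-- GROUP BY (HashAggregate-eligible), then roll up the tiny intermediate
-- result so the sort never has to touch the ~36M-row table.
WITH per_code AS (
    SELECT error_code, count(*) AS n
    FROM api_activities
    GROUP BY error_code               -- ~5 groups, easily fits in work_mem
)
SELECT error_code, sum(n) AS count
FROM per_code
GROUP BY ROLLUP (error_code);         -- summary row computed over ~5 rows

The external merge sort disappears because only the few aggregated rows are sorted for the rollup step.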
[ { "msg_contents": "Hello!\n\n\tWe are going to build system based on PostgreSQL database for huge\nnumber of individual users (few thousands). Each user will have his own\naccount, for authorization we will use Kerberos (MIT or Windows). \nMost of users will have low activity, but for various reasons,\nconnection should be open all the time.\nI'd like to know what potential problems and limitations we can expect\nwith such deployment.\n\tDuring preliminary testing we have found that for each connection we\nneed ~1MB RAM. Is there any way to decrease this ? Is there any risk,\nthat such number of users will degrade performance ?\n\tI'll be happy to hear any remarks and suggestions related to design,\nadministration and handling of such installation.\n\nbest regards\nJarek\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Mar 2016 21:08:06 +0200", "msg_from": "Jarek <[email protected]>", "msg_from_op": true, "msg_subject": "Big number of connections" }, { "msg_contents": "\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Jarek\r\nSent: Thursday, March 31, 2016 3:08 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Big number of connections\r\n\r\nHello!\r\n\r\n\tWe are going to build system based on PostgreSQL database for huge number of individual users (few thousands). Each user will have his own account, for authorization we will use Kerberos (MIT or Windows). \r\nMost of users will have low activity, but for various reasons, connection should be open all the time.\r\nI'd like to know what potential problems and limitations we can expect with such deployment.\r\n\tDuring preliminary testing we have found that for each connection we need ~1MB RAM. Is there any way to decrease this ? Is there any risk, that such number of users will degrade performance ?\r\n\tI'll be happy to hear any remarks and suggestions related to design, administration and handling of such installation.\r\n\r\nbest regards\r\nJarek\r\n\r\n_______________________________________________________________________________\r\n\r\nTake a look at PgBouncer.\r\nIt should solve your problems.\r\n\r\nRegards,\r\nIgor Neyman\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Mar 2016 19:12:44 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "\n\nOn 03/31/2016 03:12 PM, Igor Neyman wrote:\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Jarek\n> Sent: Thursday, March 31, 2016 3:08 PM\n> To: [email protected]\n> Subject: [PERFORM] Big number of connections\n>\n> Hello!\n>\n> \tWe are going to build system based on PostgreSQL database for huge number of individual users (few thousands). Each user will have his own account, for authorization we will use Kerberos (MIT or Windows).\n> Most of users will have low activity, but for various reasons, connection should be open all the time.\n> I'd like to know what potential problems and limitations we can expect with such deployment.\n> \tDuring preliminary testing we have found that for each connection we need ~1MB RAM. Is there any way to decrease this ? 
Is there any risk, that such number of users will degrade performance ?\n> \tI'll be happy to hear any remarks and suggestions related to design, administration and handling of such installation.\n>\n> best regards\n> Jarek\n>\n> _______________________________________________________________________________\n>\n> Take a look at PgBouncer.\n> It should solve your problems.\n>\n\n\nIf they are going to keep the client connections open, they would need \nto run pgbouncer in statement or transaction mode.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Mar 2016 15:51:23 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "Andrew Dunstan wrote:\n\n> On 03/31/2016 03:12 PM, Igor Neyman wrote:\n\n> > >\tWe are going to build system based on PostgreSQL database for huge number of individual users (few thousands). Each user will have his own account, for authorization we will use Kerberos (MIT or Windows).\n> > >Most of users will have low activity, but for various reasons, connection should be open all the time.\n> > >I'd like to know what potential problems and limitations we can expect with such deployment.\n> > >\tDuring preliminary testing we have found that for each connection we need ~1MB RAM. Is there any way to decrease this ? Is there any risk, that such number of users will degrade performance ?\n> > >\tI'll be happy to hear any remarks and suggestions related to design, administration and handling of such installation.\n\n> >Take a look at PgBouncer.\n> >It should solve your problems.\n> \n> If they are going to keep the client connections open, they would need to\n> run pgbouncer in statement or transaction mode.\n\nAs I understand, in pgbouncer you cannot have connections that serve\ndifferent users. If each individual requires its own database-level\nuser, pgbouncer would not help at all.\n\nI would look seriously into getting rid of the always-open requirement\nfor connections.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Mar 2016 19:47:12 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "Although somewhat academic, since pgboucer doesn’t support it (and might not ever), have a look at this ticket which, if it was ever supported, would give you what you needed:\n\nhttps://github.com/pgbouncer/pgbouncer/issues/75 <https://github.com/pgbouncer/pgbouncer/issues/75>\n\n\n> On Mar 31, 2016, at 15:47, Alvaro Herrera <[email protected]> wrote:\n> \n>> If they are going to keep the client connections open, they would need to\n>> run pgbouncer in statement or transaction mode.\n> \n> As I understand, in pgbouncer you cannot have connections that serve\n> different users. 
If each individual requires its own database-level\n> user, pgbouncer would not help at all.\n> \n> I would look seriously into getting rid of the always-open requirement\n> for connections.\n\n— Chris Cogdon\nAlthough somewhat academic, since pgboucer doesn’t support it (and might not ever), have a look at this ticket which, if it was ever supported, would give you what you needed:https://github.com/pgbouncer/pgbouncer/issues/75On Mar 31, 2016, at 15:47, Alvaro Herrera <[email protected]> wrote:If they are going to keep the client connections open, they would need torun pgbouncer in statement or transaction mode.As I understand, in pgbouncer you cannot have connections that servedifferent users.  If each individual requires its own database-leveluser, pgbouncer would not help at all.I would look seriously into getting rid of the always-open requirementfor connections.— Chris Cogdon", "msg_date": "Thu, 31 Mar 2016 16:12:51 -0700", "msg_from": "Chris Cogdon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "On 3/31/2016 17:47, Alvaro Herrera wrote:\n> Andrew Dunstan wrote:\n>\n>> On 03/31/2016 03:12 PM, Igor Neyman wrote:\n>>>> \tWe are going to build system based on PostgreSQL database for huge number of individual users (few thousands). Each user will have his own account, for authorization we will use Kerberos (MIT or Windows).\n>>>> Most of users will have low activity, but for various reasons, connection should be open all the time.\n>>>> I'd like to know what potential problems and limitations we can expect with such deployment.\n>>>> \tDuring preliminary testing we have found that for each connection we need ~1MB RAM. Is there any way to decrease this ? Is there any risk, that such number of users will degrade performance ?\n>>>> \tI'll be happy to hear any remarks and suggestions related to design, administration and handling of such installation.\n>>> Take a look at PgBouncer.\n>>> It should solve your problems.\n>> If they are going to keep the client connections open, they would need to\n>> run pgbouncer in statement or transaction mode.\n> As I understand, in pgbouncer you cannot have connections that serve\n> different users. 
If each individual requires its own database-level\n> user, pgbouncer would not help at all.\n>\n> I would look seriously into getting rid of the always-open requirement\n> for connections.\nI'm trying to figure out where the \"always open\" requirement comes from;\nthere are very, very few instances where that's real, when you get down\nto it.\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/", "msg_date": "Thu, 31 Mar 2016 18:27:07 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "Hello!\n\nDnia 2016-03-31, czw o godzinie 19:12 +0000, Igor Neyman pisze:\n\n> Take a look at PgBouncer.\n> It should solve your problems.\n\nWell, we don't have problems yet :), but we are looking for possible\nthreats.\nI'll be happy to hear form users of big PostgreSQL installations, how\nmany users do you have and what kind of problems we may expect.\nIs there any risk, that huge number of roles will slowdown overall\nperformance ?\n\nbest regards\nJarek\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 01 Apr 2016 09:54:03 +0200", "msg_from": "jarek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "On 4/1/16 2:54 AM, jarek wrote:\n> I'll be happy to hear form users of big PostgreSQL installations, how\n> many users do you have and what kind of problems we may expect.\n> Is there any risk, that huge number of roles will slowdown overall\n> performance ?\n\nThe red flag from your original email was concern over each connection \nconsuming 1MB of memory. If you're so tight on memory that you can't \nafford 4GB of backend-local data, then I don't think you'll be happy \nwith any method of trying to handle 4000 concurrent connections.\n\nAssuming you're on decent sized hardware though, 3000-4000 open \nconnections shouldn't be much of an issue *as long as very few are \nactive at once*. If you get into a situation where there's a surge of \nactivity and you suddenly have 2x more active connections than cores, \nyou won't be happy. I've seen that push servers into a state where the \nonly way to recover was to disconnect everyone.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 3 Apr 2016 12:18:59 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "From: Jim Nasby Sent: Sunday, April 03, 2016 10:19 AM\n\n>>On 4/1/16 2:54 AM, jarek wrote:\n>> I'll be happy to hear form users of big PostgreSQL installations, how \n>> many users do you have and what kind of problems we may expect.\n>> Is there any risk, that huge number of roles will slowdown overall \n>> performance ?\n\n>Assuming you're on decent sized hardware though, 3000-4000 open connections shouldn't be much of an >issue *as long as very few are active at once*. 
If you get into a situation where there's a surge of activity >and you suddenly have 2x more active connections than cores, you won't be happy. I've seen that push >servers into a state where the only way to recover was to disconnect everyone.\n>--\n>Jim Nasby\n\nJim - I don't quite understand the math here: on a server with 20 cores, it can only support 40 active users?\n\nI come from the SQL Server world where a single 20 core server could support hundreds/thousands of active users and/or many dozens of background/foreground data processes. Is there something fundamentally different between the two platforms relative to active user loads? How would we be able to use Postgres for larger web apps?\n\nMike Sofen\n\n \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Apr 2016 06:14:36 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "Hi\n\n2016-04-04 15:14 GMT+02:00 Mike Sofen <[email protected]>:\n\n> From: Jim Nasby Sent: Sunday, April 03, 2016 10:19 AM\n>\n> >>On 4/1/16 2:54 AM, jarek wrote:\n> >> I'll be happy to hear form users of big PostgreSQL installations, how\n> >> many users do you have and what kind of problems we may expect.\n> >> Is there any risk, that huge number of roles will slowdown overall\n> >> performance ?\n>\n> >Assuming you're on decent sized hardware though, 3000-4000 open\n> connections shouldn't be much of an >issue *as long as very few are active\n> at once*. If you get into a situation where there's a surge of activity\n> >and you suddenly have 2x more active connections than cores, you won't be\n> happy. I've seen that push >servers into a state where the only way to\n> recover was to disconnect everyone.\n> >--\n> >Jim Nasby\n>\n> Jim - I don't quite understand the math here: on a server with 20 cores,\n> it can only support 40 active users?\n>\n> I come from the SQL Server world where a single 20 core server could\n> support hundreds/thousands of active users and/or many dozens of\n> background/foreground data processes. Is there something fundamentally\n> different between the two platforms relative to active user loads? How\n> would we be able to use Postgres for larger web apps?\n>\n\nPostgreSQL doesn't contain integrated pooler - so any connection to\nPostgres enforces one PostgreSQL proces. A performance benchmarks is\nshowing maximum performance about 10x cores. With high number of\nconnections you have to use low size of work_mem, what enforces can have\nnegative impact on performance too. Too high number of active PostgreSQL\nprocesses increase a risk of performance problems with spin locks, etc.\n\nUsually Web frameworks has own pooling solution - so just use it. 
If you\nneed more logical connection than is optimum against number of cores, then\nyou should to use external pooler like pgpool II or pgbouncer.\n\nhttp://www.pgpool.net/mediawiki/index.php/Main_Page\nhttp://pgbouncer.github.io/\n\nPgbouncer is light with only necessary functions, pgpool is little bit\nheavy with lot of functions.\n\nRegards\n\nPavel\n\n\n>\n> Mike Sofen\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi2016-04-04 15:14 GMT+02:00 Mike Sofen <[email protected]>:From: Jim Nasby Sent: Sunday, April 03, 2016 10:19 AM\n\n>>On 4/1/16 2:54 AM, jarek wrote:\n>> I'll be happy to hear form users of big PostgreSQL installations, how\n>> many users do you have and what kind of problems we may expect.\n>> Is there any risk, that huge number of roles will slowdown overall\n>> performance ?\n\n>Assuming you're on decent sized hardware though, 3000-4000 open connections shouldn't be much of an >issue *as long as very few are active at once*. If you get into a situation where there's a surge of activity >and you suddenly have 2x more active connections than cores, you won't be happy. I've seen that push >servers into a state where the only way to recover was to disconnect everyone.\n>--\n>Jim Nasby\n\nJim - I don't quite understand the math here: on a server with 20 cores, it can only support 40 active users?\n\nI come from the SQL Server world where a single 20 core server could support hundreds/thousands of active users and/or many dozens of background/foreground data processes.  Is there something fundamentally different between the two platforms relative to active user loads?  How would we be able to use Postgres for larger web apps?PostgreSQL doesn't contain integrated pooler - so any connection to Postgres enforces one PostgreSQL proces. A performance benchmarks is showing maximum performance about 10x cores.  With high number of connections you have to use low size of work_mem, what enforces can have negative impact on performance too. Too high number of active PostgreSQL processes increase a risk of performance problems with spin locks, etc. Usually Web frameworks has own pooling solution - so just use it. If you need more logical connection than is optimum against number of cores, then you should to use external pooler like pgpool II or pgbouncer.http://www.pgpool.net/mediawiki/index.php/Main_Pagehttp://pgbouncer.github.io/Pgbouncer is light with only necessary functions, pgpool is little bit heavy with lot of functions.RegardsPavel \n\nMike Sofen\n\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 4 Apr 2016 15:33:32 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "Il 04/04/2016 15:33, Pavel Stehule ha scritto:\n>\n>\n> PostgreSQL doesn't contain integrated pooler - so any connection to \n> Postgres enforces one PostgreSQL proces. A performance benchmarks is \n> showing maximum performance about 10x cores. With high number of \n> connections you have to use low size of work_mem, what enforces can \n> have negative impact on performance too. Too high number of active \n> PostgreSQL processes increase a risk of performance problems with spin \n> locks, etc.\n\n:-O\nI wasn't absolutely aware of this thing... 
is there a way to monitor \nactive connections, or at least to report when they grow too much?\n(say, I have an 8-core system and want to track down if, and when, \nactive connections grow over 80)\n\nThanks\nMoreno.-\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Apr 2016 16:43:07 +0200", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "2016-04-04 17:43 GMT+03:00 Moreno Andreo <[email protected]>:\n\n> s there a way to monitor active connections, or at least to report when\n> they grow too much?\n> (say, I have an 8-core system and want to track down if, and when, active\n> connections grow over 80)\n>\n\nYou can achieve that just running simple query like\nselect count(*) from pg_stat_activity where state = 'active'\n\n2016-04-04 17:43 GMT+03:00 Moreno Andreo <[email protected]>:s there a way to monitor active connections, or at least to report when they grow too much?\n(say, I have an 8-core system and want to track down if, and when, active connections grow over 80)You can achieve that just running simple query like\nselect count(*) from pg_stat_activity where state = 'active'", "msg_date": "Mon, 4 Apr 2016 17:54:46 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "\n\n\n\n\nIl 04/04/2016 16:54, Artem Tomyuk ha\n scritto:\n\n\n\n\n2016-04-04 17:43 GMT+03:00 Moreno\n Andreo <[email protected]>:\n\ns there a\n way to monitor active connections, or at least to report\n when they grow too much?\n (say, I have an 8-core system and want to track down if,\n and when, active connections grow over 80)\n\n\n\n You can achieve that just running simple query like\nselect count(*) from pg_stat_activity where state = 'active' \n\n\n\n\n\n\n\n\n\n\n Thanks, but this way I get the \"sample\" on that actual moment: what\n I'd need is to monitor, or to have something warning me like \"Hey,\n You've got 2000 active connections! Time to grow up!\" :-)\n\n Cheers,\n Moreno.-\n\n\n\n", "msg_date": "Mon, 4 Apr 2016 17:00:45 +0200", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "there are two ways:\n\n 1. to write bash script with condition if number of conn. is > 1000 send\n me email and put that script in crontab\n 2. monitor it with external monitoring system like zabbix, nagios etc....\n\n\n2016-04-04 18:00 GMT+03:00 Moreno Andreo <[email protected]>:\n\n> Il 04/04/2016 16:54, Artem Tomyuk ha scritto:\n>\n>\n> 2016-04-04 17:43 GMT+03:00 Moreno Andreo <[email protected]>:\n>\n>> s there a way to monitor active connections, or at least to report when\n>> they grow too much?\n>> (say, I have an 8-core system and want to track down if, and when, active\n>> connections grow over 80)\n>>\n>\n> You can achieve that just running simple query like\n> select count(*) from pg_stat_activity where state = 'active'\n>\n>\n>\n>\n> Thanks, but this way I get the \"sample\" on that actual moment: what I'd\n> need is to monitor, or to have something warning me like \"Hey, You've got\n> 2000 active connections! Time to grow up!\" :-)\n>\n> Cheers,\n> Moreno.-\n>\n>\n\nthere are two ways: to write bash script with condition if number of conn. 
is > 1000 send me email and put that script in crontabmonitor it with external monitoring system like zabbix, nagios etc....2016-04-04 18:00 GMT+03:00 Moreno Andreo <[email protected]>:\n\nIl 04/04/2016 16:54, Artem Tomyuk ha\n scritto:\n\n\n\n\n2016-04-04 17:43 GMT+03:00 Moreno\n Andreo <[email protected]>:\n\ns there a\n way to monitor active connections, or at least to report\n when they grow too much?\n (say, I have an 8-core system and want to track down if,\n and when, active connections grow over 80)\n\n\n\n You can achieve that just running simple query like\nselect count(*) from pg_stat_activity where state = 'active' \n\n\n\n\n\n\n\n\n\n\n Thanks, but this way I get the \"sample\" on that actual moment: what\n I'd need is to monitor, or to have something warning me like \"Hey,\n You've got 2000 active connections! Time to grow up!\" :-)\n\n Cheers,\n Moreno.-", "msg_date": "Mon, 4 Apr 2016 18:03:57 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" }, { "msg_contents": "2016-04-04 16:43 GMT+02:00 Moreno Andreo <[email protected]>:\n\n> Il 04/04/2016 15:33, Pavel Stehule ha scritto:\n>\n>>\n>>\n>> PostgreSQL doesn't contain integrated pooler - so any connection to\n>> Postgres enforces one PostgreSQL proces. A performance benchmarks is\n>> showing maximum performance about 10x cores. With high number of\n>> connections you have to use low size of work_mem, what enforces can have\n>> negative impact on performance too. Too high number of active PostgreSQL\n>> processes increase a risk of performance problems with spin locks, etc.\n>>\n>\n> :-O\n> I wasn't absolutely aware of this thing... is there a way to monitor\n> active connections, or at least to report when they grow too much?\n> (say, I have an 8-core system and want to track down if, and when, active\n> connections grow over 80)\n>\n\n100 connections are probably ok, 200 is over the optimum - there is some\ntolerance.\n\nWe are speaking about optimum - I had possibility to work with system where\nmax connections was 300, 600 - and it was working. But then the\nmax_connection doesn't work as safeguard against overloading. And the\nsystem under higher load can be pretty slow.\n\nRegards\n\nPavel\n\n\n>\n> Thanks\n> Moreno.-\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2016-04-04 16:43 GMT+02:00 Moreno Andreo <[email protected]>:Il 04/04/2016 15:33, Pavel Stehule ha scritto:\n\n\n\nPostgreSQL doesn't contain integrated pooler - so any connection to Postgres enforces one PostgreSQL proces. A performance benchmarks is showing maximum performance about 10x cores.  With high number of connections you have to use low size of work_mem, what enforces can have negative impact on performance too. Too high number of active PostgreSQL processes increase a risk of performance problems with spin locks, etc.\n\n\n:-O\nI wasn't absolutely aware of this thing... is there a way to monitor active connections, or at least to report when they grow too much?\n(say, I have an 8-core system and want to track down if, and when, active connections grow over 80)100 connections are probably ok, 200 is over the optimum - there is some tolerance.We are speaking about optimum - I had possibility to work with system where max connections was 300, 600 - and it was working. But then the max_connection doesn't work as safeguard against overloading. 
And the system under higher load can be pretty slow. RegardsPavel \n\nThanks\nMoreno.-\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 4 Apr 2016 17:06:30 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of connections" } ]
[ { "msg_contents": "Hi,\n\nIn the query below, the planner choose an extreme slow mergejoin(380\nseconds). 'Vacuum analyze' can't help.\nIf I CLUSTER (or recreate) table ES09T1, the planner choose a faster\nhashjoin (about 10 seconds). But, obviously, I can't do that with the users\nconnected.\nAfter some time after cluster(generally in the same day), the problem\nreturns. Autovacuum is on, but the tables are vacuumed forced after\npg_dump, 3 times in a day (00:00 - 12:00 - 23:00).\n\nPostgresql 9.4.5\n128GB RAM/10xRAID10 SAS 15k\nshared_buffers = 8GB\n\nwork_mem = 256MB\n\nmaintenance_work_mem = 16GB\nrandom_page_cost = 2.0\n\neffective_cache_size = 120GB\n\n\ndb=# explain (buffers,analyze) SELECT T1.es09item, T1.es09status,\nT3.es09usuari, T3.es09datreq, T2.es08desdoc AS es09desdoc, T1.es09numdoc,\nT1.es09tipdoc AS es09tipdoc, T1.es09codemp, COALESCE( T4.es09quatre, 0) AS\nes09quatre FROM (((ES09T1 T1 LEFT JOIN ES08T T2 ON T2.es08tipdoc =\nT1.es09tipdoc) LEFT JOIN ES09T T3 ON T3.es09codemp = T1.es09codemp AND\nT3.es09tipdoc = T1.es09tipdoc AND T3.es09numdoc = T1.es09numdoc) LEFT JOIN\n(SELECT COUNT(*) AS es09quatre, es09codemp, es09tipdoc, es09numdoc FROM\nES09T1 GROUP BY es09codemp, es09tipdoc, es09numdoc ) T4 ON T4.es09codemp =\nT1.es09codemp AND T4.es09tipdoc = T1.es09tipdoc AND T4.es09numdoc =\nT1.es09numdoc) WHERE (T1.es09codemp = 1) and (T3.es09datreq >= '2016-02-02'\nand T3.es09datreq <= '2016-02-02') and (T3.es09usuari like\n'%%%%%%%%%%%%%%%%%%%%') and (T1.es09tipdoc like '%%%%%') ORDER BY\nT1.es09codemp, T1.es09numdoc DESC, T1.es09tipdoc;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=289546.93..289546.94 rows=2 width=78) (actual\ntime=380405.796..380405.929 rows=2408 loops=1)\n Sort Key: t1.es09numdoc, t1.es09tipdoc\n Sort Method: quicksort Memory: 435kB\n Buffers: shared hit=82163\n -> Merge Left Join (cost=47.09..289546.92 rows=2 width=78) (actual\ntime=1133.077..380398.160 rows=2408 loops=1)\n Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)\n Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND\n(es09t1.es09numdoc = t1.es09numdoc))\n Rows Removed by Join Filter: 992875295\n Buffers: shared hit=82163\n -> Merge Left Join (cost=46.53..49.29 rows=2 width=70) (actual\ntime=12.206..18.155 rows=2408 loops=1)\n Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)\n Buffers: shared hit=6821\n -> Sort (cost=9.19..9.19 rows=2 width=44) (actual\ntime=11.611..12.248 rows=2408 loops=1)\n Sort Key: t1.es09tipdoc\n Sort Method: quicksort Memory: 285kB\n Buffers: shared hit=6814\n -> Nested Loop (cost=1.11..9.18 rows=2 width=44)\n(actual time=0.040..10.398 rows=2408 loops=1)\n Buffers: shared hit=6814\n -> Index Scan using ad_es09t_1 on es09t t3\n (cost=0.56..4.58 rows=1 width=42) (actual time=0.020..0.687 rows=1212\nloops=1)\n Index Cond: ((es09codemp = 1) AND\n(es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))\n Filter: (es09usuari ~~\n'%%%%%%%%%%%%%%%%%%%%'::text)\n Buffers: shared hit=108\n -> Index Scan using es09t1_pkey on es09t1 t1\n (cost=0.56..4.59 rows=1 width=19) (actual time=0.006..0.007 rows=2\nloops=1212)\n Index Cond: ((es09codemp = 1) AND\n(es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))\n Filter: (es09tipdoc ~~ '%%%%%'::text)\n Buffers: shared hit=6706\n -> Sort (cost=37.35..38.71 rows=547 width=32) (actual\ntime=0.592..2.206 rows=2919 loops=1)\n Sort Key: t2.es08tipdoc\n Sort Method: quicksort 
Memory: 67kB\n Buffers: shared hit=7\n -> Seq Scan on es08t t2 (cost=0.00..12.47 rows=547\nwidth=32) (actual time=0.003..0.126 rows=547 loops=1)\n Buffers: shared hit=7\n -> Materialize (cost=0.56..287644.85 rows=716126 width=23)\n(actual time=0.027..68577.800 rows=993087854 loops=1)\n Buffers: shared hit=75342\n -> GroupAggregate (cost=0.56..278693.28 rows=716126\nwidth=15) (actual time=0.025..4242.453 rows=3607573 loops=1)\n Group Key: es09t1.es09codemp, es09t1.es09tipdoc,\nes09t1.es09numdoc\n Buffers: shared hit=75342\n -> Index Only Scan using es09t1_pkey on es09t1\n (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031\nrows=7160921 loops=1)\n Index Cond: (es09codemp = 1)\n Heap Fetches: 51499\n Buffers: shared hit=75342\n Planning time: 50.129 ms\n Execution time: 380419.435 ms\n(43 rows)\n\n\ndb=# vacuum ANALYZE es09t1;\nVACUUM\n\n\ndb=# explain SELECT T1.es09item, T1.es09status, T3.es09usuari,\nT3.es09datreq, T2.es08desdoc AS es09desdoc, T1.es09numdoc, T1.es09tipdoc AS\nes09tipdoc, T1.es09codemp, COALESCE( T4.es09quatre, 0) AS es09quatre FROM\n(((ES09T1 T1 LEFT\n JOIN ES08T T2 ON T2.es08tipdoc = T1.es09tipdoc) LEFT JOIN ES09T T3 ON\nT3.es09codemp = T1.es09codemp AND T3.es09tipdoc = T1.es09tipdoc AND\nT3.es09numdoc = T1.es09numdoc) LEFT JOIN (SELECT COUNT(*) AS es09quatre,\nes09codemp, es09tipdoc, e\ns09numdoc FROM ES09T1 GROUP BY es09codemp, es09tipdoc, es09numdoc ) T4 ON\nT4.es09codemp = T1.es09codemp AND T4.es09tipdoc = T1.es09tipdoc AND\nT4.es09numdoc = T1.es09numdoc) WHERE (T1.es09codemp = 1) and (T3.es09datreq\n>= '2016-02-02' and\n T3.es09datreq <= '2016-02-02') and (T3.es09usuari like\n'%%%%%%%%%%%%%%%%%%%%') and (T1.es09tipdoc like '%%%%%') ORDER BY\nT1.es09codemp, T1.es09numdoc DESC, T1.es09tipdoc;\n\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=288400.09..288400.09 rows=2 width=78)\n Sort Key: t1.es09numdoc, t1.es09tipdoc\n -> Merge Left Join (cost=46.22..288400.08 rows=2 width=78)\n Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)\n Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND\n(es09t1.es09numdoc = t1.es09numdoc))\n -> Merge Left Join (cost=45.66..48.43 rows=2 width=70)\n Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)\n -> Sort (cost=9.19..9.19 rows=2 width=44)\n Sort Key: t1.es09tipdoc\n -> Nested Loop (cost=1.11..9.18 rows=2 width=44)\n -> Index Scan using ad_es09t_1 on es09t t3\n (cost=0.56..4.58 rows=1 width=42)\n Index Cond: ((es09codemp = 1) AND\n(es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))\n Filter: (es09usuari ~~\n'%%%%%%%%%%%%%%%%%%%%'::text)\n -> Index Scan using es09t1_pkey on es09t1 t1\n (cost=0.56..4.59 rows=1 width=19)\n Index Cond: ((es09codemp = 1) AND\n(es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))\n Filter: (es09tipdoc ~~ '%%%%%'::text)\n -> Sort (cost=36.47..37.84 rows=549 width=32)\n Sort Key: t2.es08tipdoc\n -> Seq Scan on es08t t2 (cost=0.00..11.49 rows=549\nwidth=32)\n -> Materialize (cost=0.56..286496.26 rows=716037 width=23)\n -> GroupAggregate (cost=0.56..277545.79 rows=716037\nwidth=15)\n Group Key: es09t1.es09codemp, es09t1.es09tipdoc,\nes09t1.es09numdoc\n -> Index Only Scan using es09t1_pkey on es09t1\n (cost=0.56..198781.81 rows=7160361 width=15)\n Index Cond: (es09codemp = 1)\n(24 rows)\n\n\n----------------------------------------------------------------------------\n\ndb=# cluster es09t1;\nCLUSTER\n\ndb=# explain (buffers,analyze) SELECT 
T1.es09item, T1.es09status,\nT3.es09usuari, T3.es09datreq, T2.es08desdoc AS es09desdoc, T1.es09numdoc,\nT1.es09tipdoc AS es09tipdoc, T1.es09codemp, COALESCE( T4.es09quatre, 0) AS\nes09quatre FROM (((ES09T1 T1 LEFT JOIN ES08T T2 ON T2.es08tipdoc =\nT1.es09tipdoc) LEFT JOIN ES09T T3 ON T3.es09codemp = T1.es09codemp AND\nT3.es09tipdoc = T1.es09tipdoc AND T3.es09numdoc = T1.es09numdoc) LEFT JOIN\n(SELECT COUNT(*) AS es09quatre, es09codemp, es09tipdoc, es09numdoc FROM\nES09T1 GROUP BY es09codemp, es09tipdoc, es09numdoc ) T4 ON T4.es09codemp =\nT1.es09codemp AND T4.es09tipdoc = T1.es09tipdoc AND T4.es09numdoc =\nT1.es09numdoc) WHERE (T1.es09codemp = 1) and (T3.es09datreq >= '2016-02-02'\nand T3.es09datreq <= '2016-02-02') and (T3.es09usuari like\n'%%%%%%%%%%%%%%%%%%%%') and (T1.es09tipdoc like '%%%%%') ORDER BY\nT1.es09codemp, T1.es09numdoc DESC, T1.es09tipdoc;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=483816.33..483816.34 rows=2 width=78) (actual\ntime=8794.833..8795.001 rows=2408 loops=1)\n Sort Key: t1.es09numdoc, t1.es09tipdoc\n Sort Method: quicksort Memory: 435kB\n Buffers: shared hit=13649 read=299785\n -> Nested Loop Left Join (cost=461417.89..483816.32 rows=2 width=78)\n(actual time=6563.106..8790.845 rows=2408 loops=1)\n Buffers: shared hit=13649 read=299785\n -> Hash Right Join (cost=461417.61..483815.72 rows=2 width=52)\n(actual time=6563.082..8782.169 rows=2408 loops=1)\n Hash Cond: ((es09t1.es09codemp = t1.es09codemp) AND\n(es09t1.es09tipdoc = t1.es09tipdoc) AND (es09t1.es09numdoc = t1.es09numdoc))\n Buffers: shared hit=6425 read=299785\n -> HashAggregate (cost=461408.40..468575.79 rows=716739\nwidth=15) (actual time=6548.467..7866.944 rows=3607578 loops=1)\n Group Key: es09t1.es09codemp, es09t1.es09tipdoc,\nes09t1.es09numdoc\n Buffers: shared hit=421 read=299566\n -> Seq Scan on es09t1 (cost=0.00..389734.56\nrows=7167384 width=15) (actual time=2.154..1818.148 rows=7160931 loops=1)\n Filter: (es09codemp = 1)\n Rows Removed by Filter: 11849\n Buffers: shared hit=421 read=299566\n -> Hash (cost=9.18..9.18 rows=2 width=44) (actual\ntime=12.486..12.486 rows=2408 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 188kB\n Buffers: shared hit=6004 read=219\n -> Nested Loop (cost=1.11..9.18 rows=2 width=44)\n(actual time=0.076..11.112 rows=2408 loops=1)\n Buffers: shared hit=6004 read=219\n -> Index Scan using ad_es09t_1 on es09t t3\n (cost=0.56..4.58 rows=1 width=42) (actual time=0.035..0.743 rows=1212\nloops=1)\n Index Cond: ((es09codemp = 1) AND\n(es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))\n Filter: (es09usuari ~~\n'%%%%%%%%%%%%%%%%%%%%'::text)\n Buffers: shared hit=98 read=12\n -> Index Scan using es09t1_pkey on es09t1 t1\n (cost=0.56..4.59 rows=1 width=19) (actual time=0.007..0.008 rows=2\nloops=1212)\n Index Cond: ((es09codemp = 1) AND\n(es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))\n Filter: (es09tipdoc ~~ '%%%%%'::text)\n Buffers: shared hit=5906 read=207\n -> Index Scan using es08t_pkey on es08t t2 (cost=0.28..0.29\nrows=1 width=32) (actual time=0.002..0.003 rows=1 loops=2408)\n Index Cond: (es08tipdoc = t1.es09tipdoc)\n Buffers: shared hit=7224\n Planning time: 14.498 ms\n Execution time: 8819.824 ms\n(34 rows)\n\n Best regards,\n\nAlexandre\n\nHi,In the query below, the planner choose an extreme slow mergejoin(380 seconds). 
'Vacuum analyze' can't help.If I CLUSTER (or recreate) table ES09T1, the planner choose a faster hashjoin (about 10 seconds). But, obviously, I can't do that with the users connected.After some time after cluster(generally in the same day), the problem returns. Autovacuum is on, but the tables are vacuumed forced after pg_dump, 3 times in a day (00:00 - 12:00 - 23:00).Postgresql 9.4.5128GB RAM/10xRAID10 SAS 15kshared_buffers = 8GB                                                                        work_mem = 256MB                                                                                                                        maintenance_work_mem = 16GBrandom_page_cost = 2.0                                                                                   effective_cache_size = 120GB       db=# explain (buffers,analyze) SELECT T1.es09item, T1.es09status, T3.es09usuari, T3.es09datreq, T2.es08desdoc AS es09desdoc, T1.es09numdoc, T1.es09tipdoc AS es09tipdoc, T1.es09codemp, COALESCE( T4.es09quatre, 0) AS es09quatre FROM (((ES09T1 T1 LEFT JOIN ES08T T2 ON T2.es08tipdoc = T1.es09tipdoc) LEFT JOIN ES09T T3 ON T3.es09codemp = T1.es09codemp AND T3.es09tipdoc = T1.es09tipdoc AND T3.es09numdoc = T1.es09numdoc) LEFT JOIN (SELECT COUNT(*) AS es09quatre, es09codemp, es09tipdoc, es09numdoc FROM ES09T1 GROUP BY es09codemp, es09tipdoc, es09numdoc ) T4 ON T4.es09codemp = T1.es09codemp AND T4.es09tipdoc = T1.es09tipdoc AND T4.es09numdoc = T1.es09numdoc) WHERE (T1.es09codemp = 1) and (T3.es09datreq >= '2016-02-02' and T3.es09datreq <= '2016-02-02') and (T3.es09usuari like '%%%%%%%%%%%%%%%%%%%%') and (T1.es09tipdoc like '%%%%%') ORDER BY T1.es09codemp, T1.es09numdoc DESC, T1.es09tipdoc;                                                                              QUERY PLAN                                                                              ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=289546.93..289546.94 rows=2 width=78) (actual time=380405.796..380405.929 rows=2408 loops=1)   Sort Key: t1.es09numdoc, t1.es09tipdoc   Sort Method: quicksort  Memory: 435kB   Buffers: shared hit=82163   ->  Merge Left Join  (cost=47.09..289546.92 rows=2 width=78) (actual time=1133.077..380398.160 rows=2408 loops=1)         Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)         Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND (es09t1.es09numdoc = t1.es09numdoc))         Rows Removed by Join Filter: 992875295         Buffers: shared hit=82163         ->  Merge Left Join  (cost=46.53..49.29 rows=2 width=70) (actual time=12.206..18.155 rows=2408 loops=1)               Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)               Buffers: shared hit=6821               ->  Sort  (cost=9.19..9.19 rows=2 width=44) (actual time=11.611..12.248 rows=2408 loops=1)                     Sort Key: t1.es09tipdoc                     Sort Method: quicksort  Memory: 285kB                     Buffers: shared hit=6814                     ->  Nested Loop  (cost=1.11..9.18 rows=2 width=44) (actual time=0.040..10.398 rows=2408 loops=1)                           Buffers: shared hit=6814                           ->  Index Scan using ad_es09t_1 on es09t t3  (cost=0.56..4.58 rows=1 width=42) (actual time=0.020..0.687 rows=1212 loops=1)                                 Index Cond: ((es09codemp = 1) AND (es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))                                 
Filter: (es09usuari ~~ '%%%%%%%%%%%%%%%%%%%%'::text)                                 Buffers: shared hit=108                           ->  Index Scan using es09t1_pkey on es09t1 t1  (cost=0.56..4.59 rows=1 width=19) (actual time=0.006..0.007 rows=2 loops=1212)                                 Index Cond: ((es09codemp = 1) AND (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))                                 Filter: (es09tipdoc ~~ '%%%%%'::text)                                 Buffers: shared hit=6706               ->  Sort  (cost=37.35..38.71 rows=547 width=32) (actual time=0.592..2.206 rows=2919 loops=1)                     Sort Key: t2.es08tipdoc                     Sort Method: quicksort  Memory: 67kB                     Buffers: shared hit=7                     ->  Seq Scan on es08t t2  (cost=0.00..12.47 rows=547 width=32) (actual time=0.003..0.126 rows=547 loops=1)                           Buffers: shared hit=7         ->  Materialize  (cost=0.56..287644.85 rows=716126 width=23) (actual time=0.027..68577.800 rows=993087854 loops=1)               Buffers: shared hit=75342               ->  GroupAggregate  (cost=0.56..278693.28 rows=716126 width=15) (actual time=0.025..4242.453 rows=3607573 loops=1)                     Group Key: es09t1.es09codemp, es09t1.es09tipdoc, es09t1.es09numdoc                     Buffers: shared hit=75342                     ->  Index Only Scan using es09t1_pkey on es09t1  (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031 rows=7160921 loops=1)                           Index Cond: (es09codemp = 1)                           Heap Fetches: 51499                           Buffers: shared hit=75342 Planning time: 50.129 ms Execution time: 380419.435 ms(43 rows)db=# vacuum ANALYZE es09t1;VACUUMdb=# explain SELECT T1.es09item, T1.es09status, T3.es09usuari, T3.es09datreq, T2.es08desdoc AS es09desdoc, T1.es09numdoc, T1.es09tipdoc AS es09tipdoc, T1.es09codemp, COALESCE( T4.es09quatre, 0) AS es09quatre FROM (((ES09T1 T1 LEFT JOIN ES08T T2 ON T2.es08tipdoc = T1.es09tipdoc) LEFT JOIN ES09T T3 ON T3.es09codemp = T1.es09codemp AND T3.es09tipdoc = T1.es09tipdoc AND T3.es09numdoc = T1.es09numdoc) LEFT JOIN (SELECT COUNT(*) AS es09quatre, es09codemp, es09tipdoc, es09numdoc FROM ES09T1 GROUP BY es09codemp, es09tipdoc, es09numdoc ) T4 ON T4.es09codemp = T1.es09codemp AND T4.es09tipdoc = T1.es09tipdoc AND T4.es09numdoc = T1.es09numdoc) WHERE (T1.es09codemp = 1) and (T3.es09datreq >= '2016-02-02' and T3.es09datreq <= '2016-02-02') and (T3.es09usuari like '%%%%%%%%%%%%%%%%%%%%') and (T1.es09tipdoc like '%%%%%') ORDER BY T1.es09codemp, T1.es09numdoc DESC, T1.es09tipdoc;                                                                                                                                    QUERY PLAN                                                                  ---------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=288400.09..288400.09 rows=2 width=78)   Sort Key: t1.es09numdoc, t1.es09tipdoc   ->  Merge Left Join  (cost=46.22..288400.08 rows=2 width=78)         Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)         Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND (es09t1.es09numdoc = t1.es09numdoc))         ->  Merge Left Join  (cost=45.66..48.43 rows=2 width=70)               Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)               ->  Sort  (cost=9.19..9.19 rows=2 width=44)                     Sort Key: t1.es09tipdoc               
      ->  Nested Loop  (cost=1.11..9.18 rows=2 width=44)                           ->  Index Scan using ad_es09t_1 on es09t t3  (cost=0.56..4.58 rows=1 width=42)                                 Index Cond: ((es09codemp = 1) AND (es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))                                 Filter: (es09usuari ~~ '%%%%%%%%%%%%%%%%%%%%'::text)                           ->  Index Scan using es09t1_pkey on es09t1 t1  (cost=0.56..4.59 rows=1 width=19)                                 Index Cond: ((es09codemp = 1) AND (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))                                 Filter: (es09tipdoc ~~ '%%%%%'::text)               ->  Sort  (cost=36.47..37.84 rows=549 width=32)                     Sort Key: t2.es08tipdoc                     ->  Seq Scan on es08t t2  (cost=0.00..11.49 rows=549 width=32)         ->  Materialize  (cost=0.56..286496.26 rows=716037 width=23)               ->  GroupAggregate  (cost=0.56..277545.79 rows=716037 width=15)                     Group Key: es09t1.es09codemp, es09t1.es09tipdoc, es09t1.es09numdoc                     ->  Index Only Scan using es09t1_pkey on es09t1  (cost=0.56..198781.81 rows=7160361 width=15)                           Index Cond: (es09codemp = 1)(24 rows)----------------------------------------------------------------------------db=# cluster es09t1;CLUSTERdb=# explain (buffers,analyze) SELECT T1.es09item, T1.es09status, T3.es09usuari, T3.es09datreq, T2.es08desdoc AS es09desdoc, T1.es09numdoc, T1.es09tipdoc AS es09tipdoc, T1.es09codemp, COALESCE( T4.es09quatre, 0) AS es09quatre FROM (((ES09T1 T1 LEFT JOIN ES08T T2 ON T2.es08tipdoc = T1.es09tipdoc) LEFT JOIN ES09T T3 ON T3.es09codemp = T1.es09codemp AND T3.es09tipdoc = T1.es09tipdoc AND T3.es09numdoc = T1.es09numdoc) LEFT JOIN (SELECT COUNT(*) AS es09quatre, es09codemp, es09tipdoc, es09numdoc FROM ES09T1 GROUP BY es09codemp, es09tipdoc, es09numdoc ) T4 ON T4.es09codemp = T1.es09codemp AND T4.es09tipdoc = T1.es09tipdoc AND T4.es09numdoc = T1.es09numdoc) WHERE (T1.es09codemp = 1) and (T3.es09datreq >= '2016-02-02' and T3.es09datreq <= '2016-02-02') and (T3.es09usuari like '%%%%%%%%%%%%%%%%%%%%') and (T1.es09tipdoc like '%%%%%') ORDER BY T1.es09codemp, T1.es09numdoc DESC, T1.es09tipdoc;                                                                       QUERY PLAN                                                                        --------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=483816.33..483816.34 rows=2 width=78) (actual time=8794.833..8795.001 rows=2408 loops=1)   Sort Key: t1.es09numdoc, t1.es09tipdoc   Sort Method: quicksort  Memory: 435kB   Buffers: shared hit=13649 read=299785   ->  Nested Loop Left Join  (cost=461417.89..483816.32 rows=2 width=78) (actual time=6563.106..8790.845 rows=2408 loops=1)         Buffers: shared hit=13649 read=299785         ->  Hash Right Join  (cost=461417.61..483815.72 rows=2 width=52) (actual time=6563.082..8782.169 rows=2408 loops=1)               Hash Cond: ((es09t1.es09codemp = t1.es09codemp) AND (es09t1.es09tipdoc = t1.es09tipdoc) AND (es09t1.es09numdoc = t1.es09numdoc))               Buffers: shared hit=6425 read=299785               ->  HashAggregate  (cost=461408.40..468575.79 rows=716739 width=15) (actual time=6548.467..7866.944 rows=3607578 loops=1)                     Group Key: es09t1.es09codemp, es09t1.es09tipdoc, es09t1.es09numdoc                     
Buffers: shared hit=421 read=299566                     ->  Seq Scan on es09t1  (cost=0.00..389734.56 rows=7167384 width=15) (actual time=2.154..1818.148 rows=7160931 loops=1)                           Filter: (es09codemp = 1)                           Rows Removed by Filter: 11849                           Buffers: shared hit=421 read=299566               ->  Hash  (cost=9.18..9.18 rows=2 width=44) (actual time=12.486..12.486 rows=2408 loops=1)                     Buckets: 1024  Batches: 1  Memory Usage: 188kB                     Buffers: shared hit=6004 read=219                     ->  Nested Loop  (cost=1.11..9.18 rows=2 width=44) (actual time=0.076..11.112 rows=2408 loops=1)                           Buffers: shared hit=6004 read=219                           ->  Index Scan using ad_es09t_1 on es09t t3  (cost=0.56..4.58 rows=1 width=42) (actual time=0.035..0.743 rows=1212 loops=1)                                 Index Cond: ((es09codemp = 1) AND (es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))                                 Filter: (es09usuari ~~ '%%%%%%%%%%%%%%%%%%%%'::text)                                 Buffers: shared hit=98 read=12                           ->  Index Scan using es09t1_pkey on es09t1 t1  (cost=0.56..4.59 rows=1 width=19) (actual time=0.007..0.008 rows=2 loops=1212)                                 Index Cond: ((es09codemp = 1) AND (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))                                 Filter: (es09tipdoc ~~ '%%%%%'::text)                                 Buffers: shared hit=5906 read=207         ->  Index Scan using es08t_pkey on es08t t2  (cost=0.28..0.29 rows=1 width=32) (actual time=0.002..0.003 rows=1 loops=2408)               Index Cond: (es08tipdoc = t1.es09tipdoc)               Buffers: shared hit=7224 Planning time: 14.498 ms Execution time: 8819.824 ms(34 rows) Best regards,Alexandre", "msg_date": "Thu, 31 Mar 2016 23:44:50 -0300", "msg_from": "Alexandre de Arruda Paes <[email protected]>", "msg_from_op": true, "msg_subject": "Fast HashJoin only after a cluster/recreate table" }, { "msg_contents": "On 1 April 2016 at 15:44, Alexandre de Arruda Paes <[email protected]>\nwrote:\n\n> In the query below, the planner choose an extreme slow mergejoin(380\n> seconds). 'Vacuum analyze' can't help.\n>\n\nYeah, it appears that planner is estimating the WHERE clause on es09t quite\nbadly, expecting just 1 row, but there's actually 1212 rows. This seems to\nthrow the whole plan decision out quite a bit, as, if you notice in the\nmerge left join for t1.es09tipdoc = t2.es08tipdoc, it expect just 2 rows to\nbe present, therefore most likely thinks that it's not worth sorting those\nresults on t1.es09tipdoc, t1.es09numdoc in order for it to match the known\noutput order of Materialize node on the inner side of that join. Instead it\nassumes the Merge join will be good enough on just the es09tipdoc order,\nand just adds the other two columns as join filters....\n\n... and this seems to be what's killing the performance. This Merge Join\nconstantly has to perform a mark/restore on the Materialize node. This why\nyou're getting the insanely high \"Rows Removed by Join Filter: 992875295\",\nin other words the join filter throw away that many row combinations\nbecause they didn't match.\n\nThis mark/restore is basically the rewind process that the merge join\nalgorithm needs to do to match many to many rows. 
In actual fact, this\nrewind is pretty useless in this case as the GROUP BY subquery ensures that\nno duplicate values will make it into the inner side of that merge join.\nThe planner is not currently smart enough to detect this.\n\nThere are some patches currently pending for 9.6 which might help fix this\nproblem in the future;\n\n1. multivariate statistics; this might help the poor estimates on es09t. If\nthis was available you could add statistics on es09codemp, es09datreq,\nwhich may well improve the estimate and cause the plan to change.\nhttps://commitfest.postgresql.org/9/450/\n2. Unique joins. This tackles the problem a different way and allows the\nMerge Join algorithm to skip the restore with some new smarts that are\nadded to the planner to detect when the inner side of the join can only\nproduce, at most, a single row for each outer row.\nhttps://commitfest.postgresql.org/9/129/\n\nIf you feel like compiling 9.6 devel from source and applying each of these\npatches independently and seeing if it helps... Of course that does not\nsolve your 9.4 production dilemma, but it may help evaluation of each of\nthese two patches for 9.6 or beyond.\n\nI wonder what the planner would do if you pulled out the join to ES08T. If\nthat generates a better plan, then providing that es08tipdoc is the primary\nkey of that table, then you could just put a subquery in the SELECT clause\nto lookup the es08desdoc.\n\n\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=289546.93..289546.94 rows=2 width=78) (actual\n> time=380405.796..380405.929 rows=2408 loops=1)\n> Sort Key: t1.es09numdoc, t1.es09tipdoc\n> Sort Method: quicksort Memory: 435kB\n> Buffers: shared hit=82163\n> -> Merge Left Join (cost=47.09..289546.92 rows=2 width=78) (actual\n> time=1133.077..380398.160 rows=2408 loops=1)\n> Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)\n> Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND\n> (es09t1.es09numdoc = t1.es09numdoc))\n> Rows Removed by Join Filter: 992875295\n> Buffers: shared hit=82163\n> -> Merge Left Join (cost=46.53..49.29 rows=2 width=70) (actual\n> time=12.206..18.155 rows=2408 loops=1)\n> Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)\n> Buffers: shared hit=6821\n> -> Sort (cost=9.19..9.19 rows=2 width=44) (actual\n> time=11.611..12.248 rows=2408 loops=1)\n> Sort Key: t1.es09tipdoc\n> Sort Method: quicksort Memory: 285kB\n> Buffers: shared hit=6814\n> -> Nested Loop (cost=1.11..9.18 rows=2 width=44)\n> (actual time=0.040..10.398 rows=2408 loops=1)\n> Buffers: shared hit=6814\n> -> Index Scan using ad_es09t_1 on es09t t3\n> (cost=0.56..4.58 rows=1 width=42) (actual time=0.020..0.687 rows=1212\n> loops=1)\n> Index Cond: ((es09codemp = 1) AND\n> (es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))\n> Filter: (es09usuari ~~\n> '%%%%%%%%%%%%%%%%%%%%'::text)\n> Buffers: shared hit=108\n> -> Index Scan using es09t1_pkey on es09t1 t1\n> (cost=0.56..4.59 rows=1 width=19) (actual time=0.006..0.007 rows=2\n> loops=1212)\n> Index Cond: ((es09codemp = 1) AND\n> (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))\n> Filter: (es09tipdoc ~~ '%%%%%'::text)\n> Buffers: shared hit=6706\n> -> Sort (cost=37.35..38.71 rows=547 width=32) (actual\n> time=0.592..2.206 rows=2919 loops=1)\n> Sort Key: t2.es08tipdoc\n> Sort Method: quicksort Memory: 67kB\n> Buffers: shared hit=7\n> -> Seq Scan on es08t t2 (cost=0.00..12.47 
rows=547\n> width=32) (actual time=0.003..0.126 rows=547 loops=1)\n> Buffers: shared hit=7\n> -> Materialize (cost=0.56..287644.85 rows=716126 width=23)\n> (actual time=0.027..68577.800 rows=993087854 loops=1)\n> Buffers: shared hit=75342\n> -> GroupAggregate (cost=0.56..278693.28 rows=716126\n> width=15) (actual time=0.025..4242.453 rows=3607573 loops=1)\n> Group Key: es09t1.es09codemp, es09t1.es09tipdoc,\n> es09t1.es09numdoc\n> Buffers: shared hit=75342\n> -> Index Only Scan using es09t1_pkey on es09t1\n> (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031\n> rows=7160921 loops=1)\n> Index Cond: (es09codemp = 1)\n> Heap Fetches: 51499\n> Buffers: shared hit=75342\n> Planning time: 50.129 ms\n> Execution time: 380419.435 ms\n>\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\nOn 1 April 2016 at 15:44, Alexandre de Arruda Paes <[email protected]> wrote:In the query below, the planner choose an extreme slow mergejoin(380 seconds). 'Vacuum analyze' can't help.Yeah, it appears that planner is estimating the WHERE clause on es09t quite badly, expecting just 1 row, but there's actually 1212 rows.  This seems to throw the whole plan decision out quite a bit, as, if you notice in the merge left join for t1.es09tipdoc = t2.es08tipdoc, it expect just 2 rows to be present, therefore most likely thinks that it's not worth sorting those results on t1.es09tipdoc, t1.es09numdoc in order for it to match the known output order of Materialize node on the inner side of that join. Instead it assumes the Merge join will be good enough on just the es09tipdoc order, and just adds the other two columns as join filters....... and this seems to be what's killing the performance. This Merge Join constantly has to perform a mark/restore on the Materialize node. This why you're getting the insanely high \"Rows Removed by Join Filter: 992875295\", in other words the join filter throw away that many row combinations because they didn't match.This mark/restore is basically the rewind process that the merge join algorithm needs to do to match many to many rows. In actual fact, this rewind is pretty useless in this case as the GROUP BY subquery ensures that no duplicate values will make it into the inner side of that merge join. The planner is not currently smart enough to detect this.There are some patches currently pending for 9.6 which might help fix this problem in the future;1. multivariate statistics; this might help the poor estimates on es09t. If this was available you could add statistics on es09codemp, es09datreq, which may well improve the estimate and cause the plan to change. https://commitfest.postgresql.org/9/450/2. Unique joins. This tackles the problem a different way and allows the Merge Join algorithm to skip the restore with some new smarts that are added to the planner to detect when the inner side of the join can only produce, at most, a single row for each outer row. https://commitfest.postgresql.org/9/129/If you feel like compiling 9.6 devel from source and applying each of these patches independently and seeing if it helps...  Of course that does not solve your 9.4 production dilemma, but it may help evaluation of each of these two patches for 9.6 or beyond.I wonder what the planner would do if you pulled out the join to ES08T. If that generates a better plan, then providing that es08tipdoc is the primary key of that table, then you could just put a subquery in the SELECT clause to lookup the es08desdoc.         
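In SQL terms that suggestion would take roughly the following shape (a sketch only: the original query is not shown in full here, so the select list is limited to columns visible in the plan and the remaining joins and filters are left out):

SELECT t1.es09codemp,
       t1.es09tipdoc,
       t1.es09numdoc,
       (SELECT t2.es08desdoc            -- scalar subquery replaces the LEFT JOIN to es08t
          FROM es08t t2
         WHERE t2.es08tipdoc = t1.es09tipdoc) AS es08desdoc
  FROM es09t1 t1
 WHERE t1.es09codemp = 1
 ORDER BY t1.es09numdoc, t1.es09tipdoc;

Provided es08tipdoc really is the primary key of es08t, the subquery can return at most one row per outer row, so it behaves like the left join (NULL when there is no match) without giving the planner another join to order.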
                                                                      QUERY PLAN                                                                              ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=289546.93..289546.94 rows=2 width=78) (actual time=380405.796..380405.929 rows=2408 loops=1)   Sort Key: t1.es09numdoc, t1.es09tipdoc   Sort Method: quicksort  Memory: 435kB   Buffers: shared hit=82163   ->  Merge Left Join  (cost=47.09..289546.92 rows=2 width=78) (actual time=1133.077..380398.160 rows=2408 loops=1)         Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)         Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND (es09t1.es09numdoc = t1.es09numdoc))         Rows Removed by Join Filter: 992875295         Buffers: shared hit=82163         ->  Merge Left Join  (cost=46.53..49.29 rows=2 width=70) (actual time=12.206..18.155 rows=2408 loops=1)               Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)               Buffers: shared hit=6821               ->  Sort  (cost=9.19..9.19 rows=2 width=44) (actual time=11.611..12.248 rows=2408 loops=1)                     Sort Key: t1.es09tipdoc                     Sort Method: quicksort  Memory: 285kB                     Buffers: shared hit=6814                     ->  Nested Loop  (cost=1.11..9.18 rows=2 width=44) (actual time=0.040..10.398 rows=2408 loops=1)                           Buffers: shared hit=6814                           ->  Index Scan using ad_es09t_1 on es09t t3  (cost=0.56..4.58 rows=1 width=42) (actual time=0.020..0.687 rows=1212 loops=1)                                 Index Cond: ((es09codemp = 1) AND (es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))                                 Filter: (es09usuari ~~ '%%%%%%%%%%%%%%%%%%%%'::text)                                 Buffers: shared hit=108                           ->  Index Scan using es09t1_pkey on es09t1 t1  (cost=0.56..4.59 rows=1 width=19) (actual time=0.006..0.007 rows=2 loops=1212)                                 Index Cond: ((es09codemp = 1) AND (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))                                 Filter: (es09tipdoc ~~ '%%%%%'::text)                                 Buffers: shared hit=6706               ->  Sort  (cost=37.35..38.71 rows=547 width=32) (actual time=0.592..2.206 rows=2919 loops=1)                     Sort Key: t2.es08tipdoc                     Sort Method: quicksort  Memory: 67kB                     Buffers: shared hit=7                     ->  Seq Scan on es08t t2  (cost=0.00..12.47 rows=547 width=32) (actual time=0.003..0.126 rows=547 loops=1)                           Buffers: shared hit=7         ->  Materialize  (cost=0.56..287644.85 rows=716126 width=23) (actual time=0.027..68577.800 rows=993087854 loops=1)               Buffers: shared hit=75342               ->  GroupAggregate  (cost=0.56..278693.28 rows=716126 width=15) (actual time=0.025..4242.453 rows=3607573 loops=1)                     Group Key: es09t1.es09codemp, es09t1.es09tipdoc, es09t1.es09numdoc                     Buffers: shared hit=75342                     ->  Index Only Scan using es09t1_pkey on es09t1  (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031 rows=7160921 loops=1)                           Index Cond: (es09codemp = 1)                           Heap Fetches: 51499                           Buffers: shared hit=75342 Planning time: 50.129 ms 
Execution time: 380419.435 ms--  David Rowley                   http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Sat, 2 Apr 2016 02:17:12 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fast HashJoin only after a cluster/recreate table" }, { "msg_contents": "Hi David,\n\nThanks for the explanations. But I don't undestand why when I recreate the\ntable, the planner choose a best mode for sometime...\n\n>I wonder what the planner would do if you pulled out the join to ES08T. If\nthat generates a better plan, then providing that es08tipdoc is the primary\nkey of that table, then you could just put a\n> subquery in the SELECT clause to lookup the es08desdoc.\n\nWe have a problem with this approach. Actually, this querys are generated\nby a framework and can't be 'rewrite'.\nCan you think in another solution directly in DB (perhaps a partial index,\ntable partitioning, etc) ???\n\nBest regards,\n\nAlexandre\n\n\n\n2016-04-01 10:17 GMT-03:00 David Rowley <[email protected]>:\n\n> On 1 April 2016 at 15:44, Alexandre de Arruda Paes <[email protected]>\n> wrote:\n>\n>> In the query below, the planner choose an extreme slow mergejoin(380\n>> seconds). 'Vacuum analyze' can't help.\n>>\n>\n> Yeah, it appears that planner is estimating the WHERE clause on es09t\n> quite badly, expecting just 1 row, but there's actually 1212 rows. This\n> seems to throw the whole plan decision out quite a bit, as, if you notice\n> in the merge left join for t1.es09tipdoc = t2.es08tipdoc, it expect just 2\n> rows to be present, therefore most likely thinks that it's not worth\n> sorting those results on t1.es09tipdoc, t1.es09numdoc in order for it to\n> match the known output order of Materialize node on the inner side of that\n> join. Instead it assumes the Merge join will be good enough on just the\n> es09tipdoc order, and just adds the other two columns as join filters....\n>\n> ... and this seems to be what's killing the performance. This Merge Join\n> constantly has to perform a mark/restore on the Materialize node. This why\n> you're getting the insanely high \"Rows Removed by Join Filter: 992875295\",\n> in other words the join filter throw away that many row combinations\n> because they didn't match.\n>\n> This mark/restore is basically the rewind process that the merge join\n> algorithm needs to do to match many to many rows. In actual fact, this\n> rewind is pretty useless in this case as the GROUP BY subquery ensures that\n> no duplicate values will make it into the inner side of that merge join.\n> The planner is not currently smart enough to detect this.\n>\n> There are some patches currently pending for 9.6 which might help fix this\n> problem in the future;\n>\n> 1. multivariate statistics; this might help the poor estimates on es09t.\n> If this was available you could add statistics on es09codemp, es09datreq,\n> which may well improve the estimate and cause the plan to change.\n> https://commitfest.postgresql.org/9/450/\n> 2. Unique joins. This tackles the problem a different way and allows the\n> Merge Join algorithm to skip the restore with some new smarts that are\n> added to the planner to detect when the inner side of the join can only\n> produce, at most, a single row for each outer row.\n> https://commitfest.postgresql.org/9/129/\n>\n> If you feel like compiling 9.6 devel from source and applying each of\n> these patches independently and seeing if it helps... 
Of course that does\n> not solve your 9.4 production dilemma, but it may help evaluation of each\n> of these two patches for 9.6 or beyond.\n>\n> I wonder what the planner would do if you pulled out the join to ES08T. If\n> that generates a better plan, then providing that es08tipdoc is the primary\n> key of that table, then you could just put a subquery in the SELECT clause\n> to lookup the es08desdoc.\n>\n>\n>>\n>> QUERY PLAN\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=289546.93..289546.94 rows=2 width=78) (actual\n>> time=380405.796..380405.929 rows=2408 loops=1)\n>> Sort Key: t1.es09numdoc, t1.es09tipdoc\n>> Sort Method: quicksort Memory: 435kB\n>> Buffers: shared hit=82163\n>> -> Merge Left Join (cost=47.09..289546.92 rows=2 width=78) (actual\n>> time=1133.077..380398.160 rows=2408 loops=1)\n>> Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)\n>> Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND\n>> (es09t1.es09numdoc = t1.es09numdoc))\n>> Rows Removed by Join Filter: 992875295\n>> Buffers: shared hit=82163\n>> -> Merge Left Join (cost=46.53..49.29 rows=2 width=70) (actual\n>> time=12.206..18.155 rows=2408 loops=1)\n>> Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)\n>> Buffers: shared hit=6821\n>> -> Sort (cost=9.19..9.19 rows=2 width=44) (actual\n>> time=11.611..12.248 rows=2408 loops=1)\n>> Sort Key: t1.es09tipdoc\n>> Sort Method: quicksort Memory: 285kB\n>> Buffers: shared hit=6814\n>> -> Nested Loop (cost=1.11..9.18 rows=2 width=44)\n>> (actual time=0.040..10.398 rows=2408 loops=1)\n>> Buffers: shared hit=6814\n>> -> Index Scan using ad_es09t_1 on es09t t3\n>> (cost=0.56..4.58 rows=1 width=42) (actual time=0.020..0.687 rows=1212\n>> loops=1)\n>> Index Cond: ((es09codemp = 1) AND\n>> (es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))\n>> Filter: (es09usuari ~~\n>> '%%%%%%%%%%%%%%%%%%%%'::text)\n>> Buffers: shared hit=108\n>> -> Index Scan using es09t1_pkey on es09t1 t1\n>> (cost=0.56..4.59 rows=1 width=19) (actual time=0.006..0.007 rows=2\n>> loops=1212)\n>> Index Cond: ((es09codemp = 1) AND\n>> (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))\n>> Filter: (es09tipdoc ~~ '%%%%%'::text)\n>> Buffers: shared hit=6706\n>> -> Sort (cost=37.35..38.71 rows=547 width=32) (actual\n>> time=0.592..2.206 rows=2919 loops=1)\n>> Sort Key: t2.es08tipdoc\n>> Sort Method: quicksort Memory: 67kB\n>> Buffers: shared hit=7\n>> -> Seq Scan on es08t t2 (cost=0.00..12.47 rows=547\n>> width=32) (actual time=0.003..0.126 rows=547 loops=1)\n>> Buffers: shared hit=7\n>> -> Materialize (cost=0.56..287644.85 rows=716126 width=23)\n>> (actual time=0.027..68577.800 rows=993087854 loops=1)\n>> Buffers: shared hit=75342\n>> -> GroupAggregate (cost=0.56..278693.28 rows=716126\n>> width=15) (actual time=0.025..4242.453 rows=3607573 loops=1)\n>> Group Key: es09t1.es09codemp, es09t1.es09tipdoc,\n>> es09t1.es09numdoc\n>> Buffers: shared hit=75342\n>> -> Index Only Scan using es09t1_pkey on es09t1\n>> (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031\n>> rows=7160921 loops=1)\n>> Index Cond: (es09codemp = 1)\n>> Heap Fetches: 51499\n>> Buffers: shared hit=75342\n>> Planning time: 50.129 ms\n>> Execution time: 380419.435 ms\n>>\n>\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nHi David,Thanks for the explanations. 
But I don't undestand why when I recreate the table, the planner choose a best mode for sometime...>I wonder what the planner would do if you pulled out the join to ES08T. If that generates a better plan, then providing that es08tipdoc is the primary key of that table, then you could just put a > subquery in the SELECT clause to lookup the es08desdoc.We have a problem with this approach. Actually, this querys are generated by a framework and can't be 'rewrite'. Can you think in another solution directly in DB (perhaps a partial index, table partitioning, etc) ???Best regards,Alexandre2016-04-01 10:17 GMT-03:00 David Rowley <[email protected]>:On 1 April 2016 at 15:44, Alexandre de Arruda Paes <[email protected]> wrote:In the query below, the planner choose an extreme slow mergejoin(380 seconds). 'Vacuum analyze' can't help.Yeah, it appears that planner is estimating the WHERE clause on es09t quite badly, expecting just 1 row, but there's actually 1212 rows.  This seems to throw the whole plan decision out quite a bit, as, if you notice in the merge left join for t1.es09tipdoc = t2.es08tipdoc, it expect just 2 rows to be present, therefore most likely thinks that it's not worth sorting those results on t1.es09tipdoc, t1.es09numdoc in order for it to match the known output order of Materialize node on the inner side of that join. Instead it assumes the Merge join will be good enough on just the es09tipdoc order, and just adds the other two columns as join filters....... and this seems to be what's killing the performance. This Merge Join constantly has to perform a mark/restore on the Materialize node. This why you're getting the insanely high \"Rows Removed by Join Filter: 992875295\", in other words the join filter throw away that many row combinations because they didn't match.This mark/restore is basically the rewind process that the merge join algorithm needs to do to match many to many rows. In actual fact, this rewind is pretty useless in this case as the GROUP BY subquery ensures that no duplicate values will make it into the inner side of that merge join. The planner is not currently smart enough to detect this.There are some patches currently pending for 9.6 which might help fix this problem in the future;1. multivariate statistics; this might help the poor estimates on es09t. If this was available you could add statistics on es09codemp, es09datreq, which may well improve the estimate and cause the plan to change. https://commitfest.postgresql.org/9/450/2. Unique joins. This tackles the problem a different way and allows the Merge Join algorithm to skip the restore with some new smarts that are added to the planner to detect when the inner side of the join can only produce, at most, a single row for each outer row. https://commitfest.postgresql.org/9/129/If you feel like compiling 9.6 devel from source and applying each of these patches independently and seeing if it helps...  Of course that does not solve your 9.4 production dilemma, but it may help evaluation of each of these two patches for 9.6 or beyond.I wonder what the planner would do if you pulled out the join to ES08T. If that generates a better plan, then providing that es08tipdoc is the primary key of that table, then you could just put a subquery in the SELECT clause to lookup the es08desdoc.                                                                               
QUERY PLAN                                                                              ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=289546.93..289546.94 rows=2 width=78) (actual time=380405.796..380405.929 rows=2408 loops=1)   Sort Key: t1.es09numdoc, t1.es09tipdoc   Sort Method: quicksort  Memory: 435kB   Buffers: shared hit=82163   ->  Merge Left Join  (cost=47.09..289546.92 rows=2 width=78) (actual time=1133.077..380398.160 rows=2408 loops=1)         Merge Cond: (t1.es09tipdoc = es09t1.es09tipdoc)         Join Filter: ((es09t1.es09codemp = t1.es09codemp) AND (es09t1.es09numdoc = t1.es09numdoc))         Rows Removed by Join Filter: 992875295         Buffers: shared hit=82163         ->  Merge Left Join  (cost=46.53..49.29 rows=2 width=70) (actual time=12.206..18.155 rows=2408 loops=1)               Merge Cond: (t1.es09tipdoc = t2.es08tipdoc)               Buffers: shared hit=6821               ->  Sort  (cost=9.19..9.19 rows=2 width=44) (actual time=11.611..12.248 rows=2408 loops=1)                     Sort Key: t1.es09tipdoc                     Sort Method: quicksort  Memory: 285kB                     Buffers: shared hit=6814                     ->  Nested Loop  (cost=1.11..9.18 rows=2 width=44) (actual time=0.040..10.398 rows=2408 loops=1)                           Buffers: shared hit=6814                           ->  Index Scan using ad_es09t_1 on es09t t3  (cost=0.56..4.58 rows=1 width=42) (actual time=0.020..0.687 rows=1212 loops=1)                                 Index Cond: ((es09codemp = 1) AND (es09datreq >= '2016-02-02'::date) AND (es09datreq <= '2016-02-02'::date))                                 Filter: (es09usuari ~~ '%%%%%%%%%%%%%%%%%%%%'::text)                                 Buffers: shared hit=108                           ->  Index Scan using es09t1_pkey on es09t1 t1  (cost=0.56..4.59 rows=1 width=19) (actual time=0.006..0.007 rows=2 loops=1212)                                 Index Cond: ((es09codemp = 1) AND (es09tipdoc = t3.es09tipdoc) AND (es09numdoc = t3.es09numdoc))                                 Filter: (es09tipdoc ~~ '%%%%%'::text)                                 Buffers: shared hit=6706               ->  Sort  (cost=37.35..38.71 rows=547 width=32) (actual time=0.592..2.206 rows=2919 loops=1)                     Sort Key: t2.es08tipdoc                     Sort Method: quicksort  Memory: 67kB                     Buffers: shared hit=7                     ->  Seq Scan on es08t t2  (cost=0.00..12.47 rows=547 width=32) (actual time=0.003..0.126 rows=547 loops=1)                           Buffers: shared hit=7         ->  Materialize  (cost=0.56..287644.85 rows=716126 width=23) (actual time=0.027..68577.800 rows=993087854 loops=1)               Buffers: shared hit=75342               ->  GroupAggregate  (cost=0.56..278693.28 rows=716126 width=15) (actual time=0.025..4242.453 rows=3607573 loops=1)                     Group Key: es09t1.es09codemp, es09t1.es09tipdoc, es09t1.es09numdoc                     Buffers: shared hit=75342                     ->  Index Only Scan using es09t1_pkey on es09t1  (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031 rows=7160921 loops=1)                           Index Cond: (es09codemp = 1)                           Heap Fetches: 51499                           Buffers: shared hit=75342 Planning time: 50.129 ms Execution time: 380419.435 ms--  David Rowley                   
http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Fri, 1 Apr 2016 14:34:37 -0300", "msg_from": "Alexandre de Arruda Paes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast HashJoin only after a cluster/recreate table" } ]
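For reference, the multivariate-statistics work mentioned above did not make it into 9.6; it shipped later, in PostgreSQL 10, as CREATE STATISTICS. On that release or newer, the statistics David describes for es09t could be declared along these lines (a sketch; the statistics name is arbitrary, and since the dependencies kind initially only helps equality clauses, it would need testing to see whether it improves the estimate for the es09datreq range condition):

CREATE STATISTICS es09t_codemp_datreq (ndistinct, dependencies)
    ON es09codemp, es09datreq FROM es09t;
ANALYZE es09t;   -- extended statistics are only populated by ANALYZE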
[ { "msg_contents": "Hey all, been running into some performance issues with one of my tables,\nand it seems to be centered around index maintenance.\n\nI have a table to store aggregated prices that are derived from sale data\nover a configurable period, and a function that runs periodically that\ninserts new prices if necessary, or \"inactivates\" the old record and\ninserts new ones. We use that price in calculations, and store the price\nwe used for a specific calculation for audit purposes, so the old record\ncannot just be updated or deleted. We need a new record every time.\n\nSo to do that we have an \"active_range\" column on the table, and that is\n used to set the periods that each specific price was in use for (important\nfor audit).\n\nThe issue seems to be the exclusion constraint we have on the table to\nensure data consistency... It is ridiculously slow to insert / update. To\nupdate 200 records, it takes 45 seconds. To update 1500 rows takes about\n3.5 min. To build the constraint/index fresh on the table (1,392,085 rows)\nit takes about 4 min.\n\nLooking at the schema, is there just a clearly better way to do this? I\nknow uuid is not supported by GiST indexes yet, so casting to text isn't\ngreat, but necessary (as far as I know...) at the moment.\n\nThe table definition looks like so:\nCREATE TABLE price_generated\n(\n price_generated_id uuid NOT NULL DEFAULT gen_random_uuid(),\n product_id uuid NOT NULL,\n company_id uuid NOT NULL,\n date_range daterange NOT NULL,\n average_price numeric NOT NULL,\n average_price_delivered numeric NOT NULL,\n low_price numeric NOT NULL,\n low_price_delivered numeric NOT NULL,\n high_price numeric NOT NULL,\n high_price_delivered numeric NOT NULL,\n uom_type_id uuid NOT NULL,\n active_range tstzrange NOT NULL DEFAULT tstzrange(now(), NULL::timestamp\nwith time zone),\n CONSTRAINT price_generated_pkey PRIMARY KEY (price_generated_id),\n CONSTRAINT price_generated_company_id_fkey FOREIGN KEY (company_id)\n REFERENCES public.company (company_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT price_generated_product_id_fkey FOREIGN KEY (product_id)\n REFERENCES public.product (product_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT price_generated_company_product_date_active_excl EXCLUDE\n USING gist ((company_id::text) WITH =, (product_id::text) WITH =,\ndate_range WITH &&, active_range WITH &&)\n);\n\nCREATE INDEX idx_price_generated_prod_comp_date\n ON price_generated\n USING btree\n (product_id, company_id, date_range);\n\nHey all, been running into some performance issues with one of my tables, and it seems to be centered around index maintenance.I have a table to store aggregated prices that are derived from sale data over a configurable period, and a function that runs periodically that inserts new prices if necessary, or \"inactivates\" the old record and inserts new ones.  We use that price in calculations, and store the price we used for a specific calculation for audit purposes, so the old record cannot just be updated or deleted. We need a new record every time.So to do that we have an \"active_range\" column on the table, and that is  used to set the periods that each specific price was in use for (important for audit).The issue seems to be the exclusion constraint we have on the table to ensure data consistency... It is ridiculously slow to insert / update.  To update 200 records, it takes 45 seconds. To update 1500 rows takes about 3.5 min. 
To build the constraint/index fresh on the table (1,392,085 rows) it takes about 4 min.Looking at the schema, is there just a clearly better way to do this? I know uuid is not supported by GiST indexes yet, so casting to text isn't great, but necessary (as far as I know...) at the moment.The table definition looks like so:CREATE TABLE price_generated(  price_generated_id uuid NOT NULL DEFAULT gen_random_uuid(),  product_id uuid NOT NULL,  company_id uuid NOT NULL,  date_range daterange NOT NULL,  average_price numeric NOT NULL,  average_price_delivered numeric NOT NULL,  low_price numeric NOT NULL,  low_price_delivered numeric NOT NULL,  high_price numeric NOT NULL,  high_price_delivered numeric NOT NULL,  uom_type_id uuid NOT NULL,  active_range tstzrange NOT NULL DEFAULT tstzrange(now(), NULL::timestamp with time zone),  CONSTRAINT price_generated_pkey PRIMARY KEY (price_generated_id),  CONSTRAINT price_generated_company_id_fkey FOREIGN KEY (company_id)      REFERENCES public.company (company_id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE NO ACTION,  CONSTRAINT price_generated_product_id_fkey FOREIGN KEY (product_id)      REFERENCES public.product (product_id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE NO ACTION,  CONSTRAINT price_generated_company_product_date_active_excl EXCLUDE   USING gist ((company_id::text) WITH =, (product_id::text) WITH =, date_range WITH &&, active_range WITH &&));CREATE INDEX idx_price_generated_prod_comp_date  ON price_generated  USING btree  (product_id, company_id, date_range);", "msg_date": "Wed, 13 Apr 2016 13:03:47 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Slow update on column that is part of exclusion constraint" }, { "msg_contents": "Sorry, brain stopped working and I forgot to include the normal info.\n\nPostgres version: 9.5.1\nHardware: 2 core, 4gb Digital Ocean virtual server\nOS: Debian\n\nexplain analyze for an example update:\n'Update on price_generated (cost=32.45..644.83 rows=1 width=157) (actual\ntime=29329.614..29329.614 rows=0 loops=1)'\n' -> Nested Loop (cost=32.45..644.83 rows=1 width=157) (actual\ntime=29329.608..29329.608 rows=0 loops=1)'\n' -> HashAggregate (cost=32.04..34.35 rows=231 width=52) (actual\ntime=1.137..2.090 rows=231 loops=1)'\n' Group Key: pti.product_id, pti.company_id, pti.date_range'\n' -> Seq Scan on _prices_to_insert pti (cost=0.00..30.31\nrows=231 width=52) (actual time=0.060..0.678 rows=231 loops=1)'\n' -> Index Scan using\nprice_generated_company_product_date_active_excl on price_generated\n (cost=0.41..2.63 rows=1 width=151) (actual time=126.949..126.949 rows=0\nloops=231)'\n' Index Cond: (date_range = pti.date_range)'\n' Filter: ((upper(active_range) IS NULL) AND (pti.product_id =\nproduct_id) AND (pti.company_id = company_id))'\n' Rows Removed by Filter: 29460'\n'Planning time: 3.134 ms'\n'Execution time: 29406.717 ms'\n\nSorry, brain stopped working and I forgot to include the normal info.Postgres version: 9.5.1Hardware: 2 core, 4gb Digital Ocean virtual serverOS: Debian explain analyze for an example update:'Update on price_generated  (cost=32.45..644.83 rows=1 width=157) (actual time=29329.614..29329.614 rows=0 loops=1)''  ->  Nested Loop  (cost=32.45..644.83 rows=1 width=157) (actual time=29329.608..29329.608 rows=0 loops=1)''        ->  HashAggregate  (cost=32.04..34.35 rows=231 width=52) (actual time=1.137..2.090 rows=231 loops=1)''              Group Key: pti.product_id, pti.company_id, pti.date_range''              ->  Seq Scan on 
_prices_to_insert pti  (cost=0.00..30.31 rows=231 width=52) (actual time=0.060..0.678 rows=231 loops=1)''        ->  Index Scan using price_generated_company_product_date_active_excl on price_generated  (cost=0.41..2.63 rows=1 width=151) (actual time=126.949..126.949 rows=0 loops=231)''              Index Cond: (date_range = pti.date_range)''              Filter: ((upper(active_range) IS NULL) AND (pti.product_id = product_id) AND (pti.company_id = company_id))''              Rows Removed by Filter: 29460''Planning time: 3.134 ms''Execution time: 29406.717 ms'", "msg_date": "Wed, 13 Apr 2016 13:14:29 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow update on column that is part of exclusion constraint" }, { "msg_contents": "\n> On 13 Apr 2016, at 20:14, Adam Brusselback <[email protected]> wrote:\n> \n> Sorry, brain stopped working and I forgot to include the normal info.\n> \n> Postgres version: 9.5.1\n> Hardware: 2 core, 4gb Digital Ocean virtual server\n> OS: Debian \n> \n> explain analyze for an example update:\n> 'Update on price_generated (cost=32.45..644.83 rows=1 width=157) (actual time=29329.614..29329.614 rows=0 loops=1)'\n> ' -> Nested Loop (cost=32.45..644.83 rows=1 width=157) (actual time=29329.608..29329.608 rows=0 loops=1)'\n> ' -> HashAggregate (cost=32.04..34.35 rows=231 width=52) (actual time=1.137..2.090 rows=231 loops=1)'\n> ' Group Key: pti.product_id, pti.company_id, pti.date_range'\n> ' -> Seq Scan on _prices_to_insert pti (cost=0.00..30.31 rows=231 width=52) (actual time=0.060..0.678 rows=231 loops=1)'\n> ' -> Index Scan using price_generated_company_product_date_active_excl on price_generated (cost=0.41..2.63 rows=1 width=151) (actual time=126.949..126.949 rows=0 loops=231)'\n> ' Index Cond: (date_range = pti.date_range)'\n> ' Filter: ((upper(active_range) IS NULL) AND (pti.product_id = product_id) AND (pti.company_id = company_id))'\n> ' Rows Removed by Filter: 29460'\n> 'Planning time: 3.134 ms'\n> 'Execution time: 29406.717 ms'\n\nWell, you see execution time of 30 seconds because there are 231 index lookups,\neach taking 126 ms.\n\nAnd that lookup is slow because of\nFilter: ((upper(active_range) IS NULL) AND (pti.product_id = product_id) AND (pti.company_id = company_id))'\n\nCan you provide self-containing example of update?\nI don't see there (upper(active_range) IS NULL condition is coming from.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 13 Apr 2016 21:54:45 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update on column that is part of exclusion constraint" }, { "msg_contents": "So fair enough, it does seem to be related to the lookup rather than\nmaintenance on the index. I was misguided in my initial assumption.\n\nSpent quite a bit of time trying to come up with a self contained test, and\nit seems like I can't make it choose the GiST index unless I remove the\nregular btree index in my test case, though the opposite is true for my\ntable in production. Not really sure what that means as far as what I need\nto do though. I've tried a vacuum full, analyze, rebuild index, drop and\nre-add the constraint... 
It still uses that GiST index for this query.\n\nHell, a sequential scan is a ton faster even.\n\nOn Wed, Apr 13, 2016 at 2:54 PM, Evgeniy Shishkin <[email protected]>\nwrote:\n\n>\n> > On 13 Apr 2016, at 20:14, Adam Brusselback <[email protected]>\n> wrote:\n> >\n> > Sorry, brain stopped working and I forgot to include the normal info.\n> >\n> > Postgres version: 9.5.1\n> > Hardware: 2 core, 4gb Digital Ocean virtual server\n> > OS: Debian\n> >\n> > explain analyze for an example update:\n> > 'Update on price_generated (cost=32.45..644.83 rows=1 width=157)\n> (actual time=29329.614..29329.614 rows=0 loops=1)'\n> > ' -> Nested Loop (cost=32.45..644.83 rows=1 width=157) (actual\n> time=29329.608..29329.608 rows=0 loops=1)'\n> > ' -> HashAggregate (cost=32.04..34.35 rows=231 width=52)\n> (actual time=1.137..2.090 rows=231 loops=1)'\n> > ' Group Key: pti.product_id, pti.company_id, pti.date_range'\n> > ' -> Seq Scan on _prices_to_insert pti (cost=0.00..30.31\n> rows=231 width=52) (actual time=0.060..0.678 rows=231 loops=1)'\n> > ' -> Index Scan using\n> price_generated_company_product_date_active_excl on price_generated\n> (cost=0.41..2.63 rows=1 width=151) (actual time=126.949..126.949 rows=0\n> loops=231)'\n> > ' Index Cond: (date_range = pti.date_range)'\n> > ' Filter: ((upper(active_range) IS NULL) AND\n> (pti.product_id = product_id) AND (pti.company_id = company_id))'\n> > ' Rows Removed by Filter: 29460'\n> > 'Planning time: 3.134 ms'\n> > 'Execution time: 29406.717 ms'\n>\n> Well, you see execution time of 30 seconds because there are 231 index\n> lookups,\n> each taking 126 ms.\n>\n> And that lookup is slow because of\n> Filter: ((upper(active_range) IS NULL) AND (pti.product_id = product_id)\n> AND (pti.company_id = company_id))'\n>\n> Can you provide self-containing example of update?\n> I don't see there (upper(active_range) IS NULL condition is coming from.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 14 Apr 2016 00:17:42 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow update on column that is part of exclusion constraint" }, { "msg_contents": "\n> On 14 Apr 2016, at 07:17, Adam Brusselback <[email protected]> wrote:\n> \n> So fair enough, it does seem to be related to the lookup rather than maintenance on the index. I was misguided in my initial assumption. \n> \n> Spent quite a bit of time trying to come up with a self contained test, and it seems like I can't make it choose the GiST index unless I remove the regular btree index in my test case, though the opposite is true for my table in production. Not really sure what that means as far as what I need to do though. I've tried a vacuum full, analyze, rebuild index, drop and re-add the constraint... It still uses that GiST index for this query.\n> \n> Hell, a sequential scan is a ton faster even.\n> \n\nAs i understand it, postgres needs a way to find rows for update.\nIn explain analyze you provided, we see that it chose gist index for that.\nAnd that is a poor chose. I think you need a proper btree index for update \nquery to work properly fast. Like index on (product_id, company_id, date_range) WHERE upper(price_generated_test.active_range) IS NULL. 
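That is, something along these lines (just a sketch: the index name is made up, and price_generated is the production table from the start of the thread rather than the price_generated_test copy from the attached test case):

CREATE INDEX price_generated_active_idx
    ON price_generated (product_id, company_id, date_range)
 WHERE upper(active_range) IS NULL;

Because the UPDATE's plan already filters on upper(active_range) IS NULL, this partial index only has to cover the currently-active rows and matches the query predicate, which should let the planner pick it for the lookup instead of the GiST exclusion index.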
\n\n\n\n> On Wed, Apr 13, 2016 at 2:54 PM, Evgeniy Shishkin <[email protected]> wrote:\n> \n> > On 13 Apr 2016, at 20:14, Adam Brusselback <[email protected]> wrote:\n> >\n> > Sorry, brain stopped working and I forgot to include the normal info.\n> >\n> > Postgres version: 9.5.1\n> > Hardware: 2 core, 4gb Digital Ocean virtual server\n> > OS: Debian\n> >\n> > explain analyze for an example update:\n> > 'Update on price_generated (cost=32.45..644.83 rows=1 width=157) (actual time=29329.614..29329.614 rows=0 loops=1)'\n> > ' -> Nested Loop (cost=32.45..644.83 rows=1 width=157) (actual time=29329.608..29329.608 rows=0 loops=1)'\n> > ' -> HashAggregate (cost=32.04..34.35 rows=231 width=52) (actual time=1.137..2.090 rows=231 loops=1)'\n> > ' Group Key: pti.product_id, pti.company_id, pti.date_range'\n> > ' -> Seq Scan on _prices_to_insert pti (cost=0.00..30.31 rows=231 width=52) (actual time=0.060..0.678 rows=231 loops=1)'\n> > ' -> Index Scan using price_generated_company_product_date_active_excl on price_generated (cost=0.41..2.63 rows=1 width=151) (actual time=126.949..126.949 rows=0 loops=231)'\n> > ' Index Cond: (date_range = pti.date_range)'\n> > ' Filter: ((upper(active_range) IS NULL) AND (pti.product_id = product_id) AND (pti.company_id = company_id))'\n> > ' Rows Removed by Filter: 29460'\n> > 'Planning time: 3.134 ms'\n> > 'Execution time: 29406.717 ms'\n> \n> Well, you see execution time of 30 seconds because there are 231 index lookups,\n> each taking 126 ms.\n> \n> And that lookup is slow because of\n> Filter: ((upper(active_range) IS NULL) AND (pti.product_id = product_id) AND (pti.company_id = company_id))'\n> \n> Can you provide self-containing example of update?\n> I don't see there (upper(active_range) IS NULL condition is coming from.\n> \n> <excl constraint test case.sql>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 15 Apr 2016 12:31:59 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update on column that is part of exclusion constraint" } ]
[ { "msg_contents": "Hello,\n\nI'm assuming this topic has probably been bludgeoned to a pulp, but my\ngoogle-fu can't seem to find a solution.\n\nI have two relatively largish tables that I'm trying to join that result in\na slow query.\n\nHardware:\n\n2014 iMac w/ SSD & i5 processor\n\nTables:\ncontacts: 1.14 million rows\npermissions: 2.49 million rows\n\ncontacts have many permissions\n\nGoal: get first page of contacts (limit 40) a user has access to, sorted by\ncreation date.\n\nFor simplicity's sake, I've taken out some of the complexity of how I\nretrieve the permissions & just got all permissions with id less than\n2,100,000 to simulate access of ~151k contacts.\n\nstage4=# EXPLAIN ANALYZE WITH perms AS (\nstage4(# SELECT DISTINCT(contact_id) from permissions where id < 2100000\nstage4(# ) SELECT\nstage4-# contacts.id,\nstage4-# contacts.first_name\nstage4-# FROM contacts\nstage4-# INNER JOIN perms ON perms.contact_id = contacts.id\nstage4-# ORDER BY contacts.updated_at desc NULLS LAST LIMIT 40 OFFSET 0;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=226777.45..226777.55 rows=40 width=20) (actual\ntime=1556.133..1556.143 rows=40 loops=1)\n CTE perms\n -> HashAggregate (cost=34891.61..35404.25 rows=51264 width=4)\n(actual time=107.107..151.455 rows=151920 loops=1)\n Group Key: permissions.contact_id\n -> Bitmap Heap Scan on permissions (cost=3561.50..34416.43\nrows=190074 width=4) (actual time=14.839..50.445 rows=192372 loops=1)\n Recheck Cond: (id < 2100000)\n Heap Blocks: exact=2435\n -> Bitmap Index Scan on permissions_pkey\n (cost=0.00..3513.98 rows=190074 width=0) (actual time=14.496..14.496\nrows=192372 loops=1)\n Index Cond: (id < 2100000)\n -> Sort (cost=191373.19..191501.35 rows=51264 width=20) (actual\ntime=1556.132..1556.137 rows=40 loops=1)\n Sort Key: contacts.updated_at DESC NULLS LAST\n Sort Method: top-N heapsort Memory: 27kB\n -> Hash Join (cost=180911.60..189752.76 rows=51264 width=20)\n(actual time=1124.969..1532.269 rows=124152 loops=1)\n Hash Cond: (perms.contact_id = contacts.id)\n -> CTE Scan on perms (cost=0.00..1025.28 rows=51264\nwidth=4) (actual time=107.110..203.330 rows=151920 loops=1)\n -> Hash (cost=159891.71..159891.71 rows=1144871 width=20)\n(actual time=1017.354..1017.354 rows=1145174 loops=1)\n Buckets: 65536 Batches: 32 Memory Usage: 2521kB\n -> Seq Scan on contacts (cost=0.00..159891.71\nrows=1144871 width=20) (actual time=0.035..684.361 rows=1145174 loops=1)\n Planning time: 0.222 ms\n Execution time: 1561.693 ms\n(20 rows)\n\nIt is to my understanding that the query requires the entire 150k matched\ncontacts to be joined in order for the Sort to run its operation. I don't\nsee a way around this, however, I can't also see how this can be a unique\ncase where it's acceptable to have a 1.5 second query time? 
There has to be\nlots of other companies out there that manage much more data that needs to\nbe sorted for the presentation layer.\n\nThanks in advance,\n\n-Aldo\n\nHello,I'm assuming this topic has probably been bludgeoned to a pulp, but my google-fu can't seem to find a solution.I have two relatively largish tables that I'm trying to join that result in a slow query.Hardware: 2014 iMac w/ SSD & i5 processorTables:contacts: 1.14 million rowspermissions: 2.49 million rowscontacts have many permissionsGoal: get first page of contacts (limit 40) a user has access to, sorted by creation date.For simplicity's sake, I've taken out some of the complexity of how I retrieve the permissions & just got all permissions with id less than 2,100,000 to simulate access of ~151k contacts.stage4=# EXPLAIN ANALYZE WITH perms AS (stage4(#   SELECT DISTINCT(contact_id) from permissions where id < 2100000stage4(# ) SELECTstage4-#     contacts.id,stage4-#     contacts.first_namestage4-#   FROM contactsstage4-#   INNER JOIN perms ON  perms.contact_id = contacts.idstage4-#   ORDER BY contacts.updated_at desc NULLS LAST LIMIT 40 OFFSET 0;                                                                      QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=226777.45..226777.55 rows=40 width=20) (actual time=1556.133..1556.143 rows=40 loops=1)   CTE perms     ->  HashAggregate  (cost=34891.61..35404.25 rows=51264 width=4) (actual time=107.107..151.455 rows=151920 loops=1)           Group Key: permissions.contact_id           ->  Bitmap Heap Scan on permissions  (cost=3561.50..34416.43 rows=190074 width=4) (actual time=14.839..50.445 rows=192372 loops=1)                 Recheck Cond: (id < 2100000)                 Heap Blocks: exact=2435                 ->  Bitmap Index Scan on permissions_pkey  (cost=0.00..3513.98 rows=190074 width=0) (actual time=14.496..14.496 rows=192372 loops=1)                       Index Cond: (id < 2100000)   ->  Sort  (cost=191373.19..191501.35 rows=51264 width=20) (actual time=1556.132..1556.137 rows=40 loops=1)         Sort Key: contacts.updated_at DESC NULLS LAST         Sort Method: top-N heapsort  Memory: 27kB         ->  Hash Join  (cost=180911.60..189752.76 rows=51264 width=20) (actual time=1124.969..1532.269 rows=124152 loops=1)               Hash Cond: (perms.contact_id = contacts.id)               ->  CTE Scan on perms  (cost=0.00..1025.28 rows=51264 width=4) (actual time=107.110..203.330 rows=151920 loops=1)               ->  Hash  (cost=159891.71..159891.71 rows=1144871 width=20) (actual time=1017.354..1017.354 rows=1145174 loops=1)                     Buckets: 65536  Batches: 32  Memory Usage: 2521kB                     ->  Seq Scan on contacts  (cost=0.00..159891.71 rows=1144871 width=20) (actual time=0.035..684.361 rows=1145174 loops=1) Planning time: 0.222 ms Execution time: 1561.693 ms(20 rows)It is to my understanding that the query requires the entire 150k matched contacts to be joined in order for the Sort to run its operation. I don't see a way around this, however, I can't also see how this can be a unique case where it's acceptable to have a 1.5 second query time? 
There has to be lots of other companies out there that manage much more data that needs to be sorted for the presentation layer.Thanks in advance,-Aldo", "msg_date": "Tue, 19 Apr 2016 01:07:48 -0700", "msg_from": "Aldo Sarmiento <[email protected]>", "msg_from_op": true, "msg_subject": "Hash join seq scan slow" }, { "msg_contents": "On Tue, Apr 19, 2016 at 1:07 AM, Aldo Sarmiento <[email protected]> wrote:\n> Hello,\n>\n> I'm assuming this topic has probably been bludgeoned to a pulp, but my\n> google-fu can't seem to find a solution.\n>\n> I have two relatively largish tables that I'm trying to join that result in\n> a slow query.\n>\n> Hardware:\n>\n> 2014 iMac w/ SSD & i5 processor\n>\n> Tables:\n> contacts: 1.14 million rows\n> permissions: 2.49 million rows\n>\n> contacts have many permissions\n>\n> Goal: get first page of contacts (limit 40) a user has access to, sorted by\n> creation date.\n>\n> For simplicity's sake, I've taken out some of the complexity of how I\n> retrieve the permissions & just got all permissions with id less than\n> 2,100,000 to simulate access of ~151k contacts.\n\nI think you have simplified it rather too much. There is no reason to\nthink anything we suggest would carry over to the real case.\n\n\n>\n> stage4=# EXPLAIN ANALYZE WITH perms AS (\n> stage4(# SELECT DISTINCT(contact_id) from permissions where id < 2100000\n> stage4(# ) SELECT\n> stage4-# contacts.id,\n> stage4-# contacts.first_name\n> stage4-# FROM contacts\n> stage4-# INNER JOIN perms ON perms.contact_id = contacts.id\n> stage4-# ORDER BY contacts.updated_at desc NULLS LAST LIMIT 40 OFFSET 0;\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=226777.45..226777.55 rows=40 width=20) (actual\n> time=1556.133..1556.143 rows=40 loops=1)\n> CTE perms\n> -> HashAggregate (cost=34891.61..35404.25 rows=51264 width=4) (actual\n> time=107.107..151.455 rows=151920 loops=1)\n> Group Key: permissions.contact_id\n> -> Bitmap Heap Scan on permissions (cost=3561.50..34416.43\n> rows=190074 width=4) (actual time=14.839..50.445 rows=192372 loops=1)\n> Recheck Cond: (id < 2100000)\n> Heap Blocks: exact=2435\n> -> Bitmap Index Scan on permissions_pkey\n> (cost=0.00..3513.98 rows=190074 width=0) (actual time=14.496..14.496\n> rows=192372 loops=1)\n> Index Cond: (id < 2100000)\n> -> Sort (cost=191373.19..191501.35 rows=51264 width=20) (actual\n> time=1556.132..1556.137 rows=40 loops=1)\n> Sort Key: contacts.updated_at DESC NULLS LAST\n> Sort Method: top-N heapsort Memory: 27kB\n> -> Hash Join (cost=180911.60..189752.76 rows=51264 width=20)\n> (actual time=1124.969..1532.269 rows=124152 loops=1)\n> Hash Cond: (perms.contact_id = contacts.id)\n> -> CTE Scan on perms (cost=0.00..1025.28 rows=51264\n> width=4) (actual time=107.110..203.330 rows=151920 loops=1)\n> -> Hash (cost=159891.71..159891.71 rows=1144871 width=20)\n> (actual time=1017.354..1017.354 rows=1145174 loops=1)\n> Buckets: 65536 Batches: 32 Memory Usage: 2521kB\n> -> Seq Scan on contacts (cost=0.00..159891.71\n> rows=1144871 width=20) (actual time=0.035..684.361 rows=1145174 loops=1)\n> Planning time: 0.222 ms\n> Execution time: 1561.693 ms\n> (20 rows)\n\nIn this type of plan (visiting lots of tuples, but returning only a\nfew), the timing overhead of EXPLAIN (ANALYZE) can be massive.\n\nI would also run it with \"EXPLAIN (ANALYZE, TIMING OFF)\" and make sure\nthe overall execution time between the two methods is 
comparable. If\nthey are not, then you cannot trust the data from \"EXPLAIN (ANALYZE)\".\n(Run it several times, alternating between them, to make sure you\naren't just seeing cache effect)\n\n\n> It is to my understanding that the query requires the entire 150k matched\n> contacts to be joined in order for the Sort to run its operation. I don't\n> see a way around this, however, I can't also see how this can be a unique\n> case where it's acceptable to have a 1.5 second query time? There has to be\n> lots of other companies out there that manage much more data that needs to\n> be sorted for the presentation layer.\n\nThere are lots of approaches to solving it. One is to realizing that\nnone of your customers are actually interested in scrolling through\n150k records, 40 records at a time. You could also use materialized\nviews, or denormalize the data so that you can build and use a\nmulti-column index (although there are risks in that as well)\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Apr 2016 17:04:40 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash join seq scan slow" } ]
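For the thread above, the cross-check Jeff suggests is simply the same statement with different EXPLAIN options, for example:

EXPLAIN (ANALYZE, TIMING OFF)
WITH perms AS (
  SELECT DISTINCT(contact_id) FROM permissions WHERE id < 2100000
) SELECT
    contacts.id,
    contacts.first_name
  FROM contacts
  INNER JOIN perms ON perms.contact_id = contacts.id
  ORDER BY contacts.updated_at desc NULLS LAST LIMIT 40 OFFSET 0;

If the overall execution time reported with TIMING OFF is much lower than with a plain EXPLAIN (ANALYZE), the per-node timings in the earlier plan are inflated by instrumentation overhead and should not be trusted.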
[ { "msg_contents": "Hey all,\n\nNew to the lists so please let me know if this isn't the right place for\nthis question.\n\nI am trying to understand how to structure a table to allow for optimal\nperformance on retrieval. The data will not change frequently so you can\nbasically think of it as static and only concerned about optimizing reads\nfrom basic SELECT...WHERE queries.\n\nThe data:\n\n - ~20 million records\n - Each record has 1 id and ~100 boolean properties\n - Each boolean property has ~85% of the records as true\n\n\nThe retrieval will always be something like \"SELECT id FROM <table> WHERE\n<conditions>.\n\n<conditions> will be some arbitrary set of the ~100 boolean columns and you\nwant the ids that match all of the conditions (true for each boolean\ncolumn). Example:\nWHERE prop1 AND prop18 AND prop24\n\n\nThe obvious thing seems to make a table with ~100 columns, with 1 column\nfor each boolean property. Though, what type of indexing strategy would one\nuse on that table? Doesn't make sense to do BTREE. Is there a better way to\nstructure it?\n\n\nAny and all advice/tips/questions appreciated!\n\nThanks,\nRob\n\nHey all,New to the lists so please let me know if this isn't the right place for this question.I am trying to understand how to structure a table to allow for optimal performance on retrieval. The data will not change frequently so you can basically think of it as static and only concerned about optimizing reads from basic SELECT...WHERE queries.The data:~20 million recordsEach record has 1 id and ~100 boolean propertiesEach boolean property has ~85% of the records as trueThe retrieval will always be something like \"SELECT id FROM <table> WHERE <conditions>.<conditions> will be some arbitrary set of the ~100 boolean columns and you want the ids that match all of the conditions (true for each boolean column). Example: WHERE prop1 AND prop18 AND prop24The obvious thing seems to make a table with ~100 columns, with 1 column for each boolean property. Though, what type of indexing strategy would one use on that table? Doesn't make sense to do BTREE. Is there a better way to structure it?Any and all advice/tips/questions appreciated!Thanks,Rob", "msg_date": "Wed, 20 Apr 2016 14:41:54 -0400", "msg_from": "Rob Imig <[email protected]>", "msg_from_op": true, "msg_subject": "Performant queries on table with many boolean columns" }, { "msg_contents": ">\n> The obvious thing seems to make a table with ~100 columns, with 1 column\n> for each boolean property. Though, what type of indexing strategy would\n> one use on that table? Doesn't make sense to do BTREE. Is there a better\n> way to structure it?\n>\nlooks like a deal for contrib/bloom index in upcoming 9.6 release\n\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 20 Apr 2016 21:54:54 +0300", "msg_from": "Teodor Sigaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "Would a bit string column work? 
--\nhttp://www.postgresql.org/docs/9.5/static/datatype-bit.html\n\nYou might need to use a lot of bitwise OR statements in the query though if\nyou are looking at very sparse sets of specific values...\n\nSomething like the get_bit() function might allow you to select a specific\nbit, but then you might want a bunch of functional indexes on the column\nfor various get_bit() combinations.\n\nMaybe you can group commonly queried sets of columns into bit strings.\n (rather than having one bit string column for all 100 booleans).\n\n\n\nOn Wed, Apr 20, 2016 at 2:54 PM, Teodor Sigaev <[email protected]> wrote:\n\n>\n>> The obvious thing seems to make a table with ~100 columns, with 1 column\n>> for each boolean property. Though, what type of indexing strategy would\n>> one use on that table? Doesn't make sense to do BTREE. Is there a better\n>> way to structure it?\n>>\n>> looks like a deal for contrib/bloom index in upcoming 9.6 release\n>\n>\n> --\n> Teodor Sigaev E-mail: [email protected]\n> WWW: http://www.sigaev.ru/\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWould a bit string column work? -- http://www.postgresql.org/docs/9.5/static/datatype-bit.html You might need to use a lot of bitwise OR statements in the query though if you are looking at very sparse sets of specific values...Something like the get_bit() function might allow you to select a specific bit, but then you might want a bunch of functional indexes on the column for various get_bit() combinations.Maybe you can group commonly queried sets of columns into bit strings.  (rather than having one bit string column for all 100 booleans).On Wed, Apr 20, 2016 at 2:54 PM, Teodor Sigaev <[email protected]> wrote:\n\nThe obvious thing seems to make a table with ~100 columns, with 1 column\nfor each boolean property. Though, what type of indexing strategy would\none use on that table? Doesn't make sense to do BTREE. Is there a better\nway to structure it?\n\n\nlooks like a deal for contrib/bloom index in upcoming 9.6 release\n\n\n-- \nTeodor Sigaev                      E-mail: [email protected]\n                                      WWW: http://www.sigaev.ru/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 20 Apr 2016 17:07:27 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "On Wed, Apr 20, 2016 at 11:54 AM, Teodor Sigaev <[email protected]> wrote:\n\n>\n>> The obvious thing seems to make a table with ~100 columns, with 1 column\n>> for each boolean property. Though, what type of indexing strategy would\n>> one use on that table? Doesn't make sense to do BTREE. Is there a better\n>> way to structure it?\n>>\n>> looks like a deal for contrib/bloom index in upcoming 9.6 release\n\n\n​Curious, it doesn't look like it will work with booleans out of the box.\n\nhttp://www.postgresql.org/docs/devel/static/bloom.html\n\nDavid J.\n​\n\nOn Wed, Apr 20, 2016 at 11:54 AM, Teodor Sigaev <[email protected]> wrote:\n\nThe obvious thing seems to make a table with ~100 columns, with 1 column\nfor each boolean property. Though, what type of indexing strategy would\none use on that table? Doesn't make sense to do BTREE. 
Is there a better\nway to structure it?\n\n\nlooks like a deal for contrib/bloom index in upcoming 9.6 release​Curious, it doesn't look like it will work with booleans out of the box.http://www.postgresql.org/docs/devel/static/bloom.htmlDavid J.​", "msg_date": "Wed, 20 Apr 2016 14:49:10 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "> looks like a deal for contrib/bloom index in upcoming 9.6 release\n> ​Curious, it doesn't look like it will work with booleans out of the box.\n> http://www.postgresql.org/docs/devel/static/bloom.html\n\nThere is no rocket science here:\n# create table x (v bool);\n# create index i on x using bloom ((v::int4));\n# set enable_seqscan=off; --because of empty table\n# explain select * from x where v::int4 = 1;\n QUERY PLAN\n------------------------------------------------------------------\n Bitmap Heap Scan on x (cost=25.08..35.67 rows=14 width=1)\n Recheck Cond: ((v)::integer = 1)\n -> Bitmap Index Scan on i (cost=0.00..25.07 rows=14 width=0)\n Index Cond: ((v)::integer = 1)\n\nOr cast it to \"char\" type (with quoting!)\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Apr 2016 13:04:13 +0300", "msg_from": "Teodor Sigaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "On Thu, Apr 21, 2016 at 3:04 AM, Teodor Sigaev <[email protected]> wrote:\n\n> looks like a deal for contrib/bloom index in upcoming 9.6 release\n>> ​Curious, it doesn't look like it will work with booleans out of the box.\n>> http://www.postgresql.org/docs/devel/static/bloom.html\n>>\n>\n> There is no rocket science here:\n> # create table x (v bool);\n> # create index i on x using bloom ((v::int4));\n> # set enable_seqscan=off; --because of empty table\n> # explain select * from x where v::int4 = 1;\n> QUERY PLAN\n> ------------------------------------------------------------------\n> Bitmap Heap Scan on x (cost=25.08..35.67 rows=14 width=1)\n> Recheck Cond: ((v)::integer = 1)\n> -> Bitmap Index Scan on i (cost=0.00..25.07 rows=14 width=0)\n> Index Cond: ((v)::integer = 1)\n>\n> Or cast it to \"char\" type (with quoting!)\n>\n>\n​At that point you should just forget bool exists and define the columns as\nint4.\n\nI'll give you points for making it work but its not a solution I'd be proud\nto offer up.\n\nDavid J.\n​\n\nOn Thu, Apr 21, 2016 at 3:04 AM, Teodor Sigaev <[email protected]> wrote:\n    looks like a deal for contrib/bloom index in upcoming 9.6 release\n​Curious, it doesn't look like it will work with booleans out of the box.\nhttp://www.postgresql.org/docs/devel/static/bloom.html\n\n\nThere is no rocket science here:\n# create table x (v bool);\n# create index i on x using bloom ((v::int4));\n# set enable_seqscan=off; --because of empty table\n# explain select * from x where v::int4 = 1;\n                            QUERY PLAN\n------------------------------------------------------------------\n Bitmap Heap Scan on x  (cost=25.08..35.67 rows=14 width=1)\n   Recheck Cond: ((v)::integer = 1)\n   ->  Bitmap Index Scan on i  (cost=0.00..25.07 rows=14 width=0)\n         Index Cond: ((v)::integer = 1)\n\nOr cast it to \"char\" type (with quoting!)​At 
that point you should just forget bool exists and define the columns as int4.I'll give you points for making it work but its not a solution I'd be proud to offer up.David J.​", "msg_date": "Thu, 21 Apr 2016 09:12:49 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "On Wed, Apr 20, 2016 at 11:41 AM, Rob Imig <[email protected]> wrote:\n> Hey all,\n>\n> New to the lists so please let me know if this isn't the right place for\n> this question.\n>\n> I am trying to understand how to structure a table to allow for optimal\n> performance on retrieval. The data will not change frequently so you can\n> basically think of it as static and only concerned about optimizing reads\n> from basic SELECT...WHERE queries.\n>\n> The data:\n>\n> ~20 million records\n> Each record has 1 id and ~100 boolean properties\n> Each boolean property has ~85% of the records as true\n>\n>\n> The retrieval will always be something like \"SELECT id FROM <table> WHERE\n> <conditions>.\n>\n> <conditions> will be some arbitrary set of the ~100 boolean columns and you\n> want the ids that match all of the conditions (true for each boolean\n> column). Example:\n> WHERE prop1 AND prop18 AND prop24\n\n\nIs 3 a typical number of conditions to have?\n\n85%^3 is 61.4%, so you are fetching most of the table. At that point,\nI think I would give up on indexes and just expect to do a full table\nscan each time. Which means a single column\nbit-string data type might be the way to go, although the construction\nof the queries would then be more cumbersome, especially if you will\ndo by hand.\n\nI think the only way to know for sure is to write a few scripts to benchmark it.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Apr 2016 09:36:37 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "On Wed, Apr 20, 2016 at 11:54 AM, Teodor Sigaev <[email protected]> wrote:\n>>\n>> The obvious thing seems to make a table with ~100 columns, with 1 column\n>> for each boolean property. Though, what type of indexing strategy would\n>> one use on that table? Doesn't make sense to do BTREE. Is there a better\n>> way to structure it?\n>>\n> looks like a deal for contrib/bloom index in upcoming 9.6 release\n\nNot without doing a custom compilation with an increased INDEX_MAX_KEYS:\n\nERROR: cannot use more than 32 columns in an index\n\nBut even so, I'm skeptical this would do better than a full scan. It\nwould be interesting to test that.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Apr 2016 09:45:56 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "Hey all,\n\nLots of interesting suggestions! I'm loving it.\n\nJust came back to this a bit earlier today and made a sample table to see\nwhat non-index performance would be. 
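For reference, a sketch of how the contrib/bloom suggestion could look on 9.6 (with the bloom module installed), using Teodor's int4-cast trick; the table and column names are invented, and per Jeff's point one index is limited to 32 columns, so the ~100 flags would have to be grouped or trimmed:

    CREATE EXTENSION bloom;

    -- bloom's operator classes cover equality on int4 and text,
    -- so each boolean is cast inside an index expression
    CREATE INDEX props_bloom_idx ON props
        USING bloom ((prop0::int), (prop1::int), (prop2::int))   -- up to 32 entries
        WITH (length = 80);

    -- queries have to use the same expressions
    SELECT id FROM props WHERE prop0::int = 1 AND prop2::int = 1;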
Constructed data just like above (used\n12M rows and 80% true for all 100 boolean columns)\n\nHere's an analyze for what I'd expect to be the types of queries that I'll\nbe handling from the frontend. I would expect around 40-70 properties per\nquery.\n\nNow I'm going to start experimenting with some ideas above and other\ntuning. This isn't as bad as I thought it would be, though would like to\nget this under 200ms.\n\nrimig=# explain analyze select count(*) from bloomtest where prop0 AND\nprop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8\nAND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15\nAND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND\nprop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28\nAND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND\nprop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41\nAND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND\nprop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54\nAND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND\nprop61 AND prop62 AND prop63 AND prop64;\n\n Aggregate (cost=351563.03..351563.04 rows=1 width=0) (actual\ntime=2636.829..2636.829 rows=1 loops=1)\n\n -> Seq Scan on bloomtest (cost=0.00..351563.02 rows=3 width=0) (actual\ntime=448.200..2636.811 rows=9 loops=1)\n\n Filter: (prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5\nAND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12\nAND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND\nprop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25\nAND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND\nprop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38\nAND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND\nprop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51\nAND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND\nprop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64)\n\n Rows Removed by Filter: 11999991\n\n Total runtime: 2636.874 ms\n\nOn Thu, Apr 21, 2016 at 12:45 PM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Apr 20, 2016 at 11:54 AM, Teodor Sigaev <[email protected]> wrote:\n> >>\n> >> The obvious thing seems to make a table with ~100 columns, with 1 column\n> >> for each boolean property. Though, what type of indexing strategy would\n> >> one use on that table? Doesn't make sense to do BTREE. Is there a better\n> >> way to structure it?\n> >>\n> > looks like a deal for contrib/bloom index in upcoming 9.6 release\n>\n> Not without doing a custom compilation with an increased INDEX_MAX_KEYS:\n>\n> ERROR: cannot use more than 32 columns in an index\n>\n> But even so, I'm skeptical this would do better than a full scan. It\n> would be interesting to test that.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHey all,Lots of interesting suggestions! I'm loving it.Just came back to this a bit earlier today and made a sample table to see what non-index performance would be. Constructed data just like above (used 12M rows and 80% true for all 100 boolean columns)Here's an analyze for what I'd expect to be the types of queries that I'll be handling from the frontend. I would expect around 40-70 properties per query.Now I'm going to start experimenting with some ideas above and other tuning. 
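The thread does not show how the 12M-row test table was built; a sketch of one way to generate comparable data with ~80% true flags (only two of the 100 propN columns are written out here):

    CREATE TABLE bloomtest AS
    SELECT g.i            AS id,
           random() < 0.8 AS prop0,
           random() < 0.8 AS prop1
           -- ... and so on through prop99, e.g. from a generated script
    FROM generate_series(1, 12000000) AS g(i);

    ANALYZE bloomtest;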
This isn't as bad as I thought it would be, though would like to get this under 200ms.\nrimig=# explain analyze select count(*) from bloomtest where prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64;\n Aggregate  (cost=351563.03..351563.04 rows=1 width=0) (actual time=2636.829..2636.829 rows=1 loops=1)\n   ->  Seq Scan on bloomtest  (cost=0.00..351563.02 rows=3 width=0) (actual time=448.200..2636.811 rows=9 loops=1)\n         Filter: (prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64)\n         Rows Removed by Filter: 11999991\n Total runtime: 2636.874 msOn Thu, Apr 21, 2016 at 12:45 PM, Jeff Janes <[email protected]> wrote:On Wed, Apr 20, 2016 at 11:54 AM, Teodor Sigaev <[email protected]> wrote:\n>>\n>> The obvious thing seems to make a table with ~100 columns, with 1 column\n>> for each boolean property. Though, what type of indexing strategy would\n>> one use on that table? Doesn't make sense to do BTREE. Is there a better\n>> way to structure it?\n>>\n> looks like a deal for contrib/bloom index in upcoming 9.6 release\n\nNot without doing a custom compilation with an increased INDEX_MAX_KEYS:\n\nERROR:  cannot use more than 32 columns in an index\n\nBut even so, I'm skeptical this would do better than a full scan.  It\nwould be interesting to test that.\n\nCheers,\n\nJeff", "msg_date": "Thu, 21 Apr 2016 15:34:52 -0400", "msg_from": "Rob Imig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "Just to followup where I'm at, I've constructed a new column which is a 100\nbit bitstring representing all the flags. Created a b-tree index on that\ncolumn and can now do super fast lookups (2) for specific scenarios however\ngetting the behavior I need would require a huge amount of OR conditions\n(as Rick mentioned earlier). Another option is to do bitwiser operators (3)\nbut that seems really slow. Not sure how I can speed that up.\n\nFor my specific use-case I think we are going to be able to shard by a\ncategory so performance will be acceptable, so this is turning into an\neducational exercise.\n\n*1. 
SELECT..WHERE on each boolean property*\n\nrimig=# explain analyze select bitstr from bloomtest_bi where prop0 AND\nprop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8\nAND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15\nAND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND\nprop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28\nAND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND\nprop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41\nAND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND\nprop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54\nAND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND\nprop61 AND prop62 AND prop63 AND prop64;\n\n\n\n\n\n QUERY PLAN\n\n\n\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Seq Scan on bloomtest_bi (cost=0.00..350770.00 rows=6 width=18) (actual\ntime=229.365..2576.391 rows=9 loops=1)\n\n Filter: (prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND\nprop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12 AND\nprop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND prop19\nAND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25 AND\nprop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND prop32\nAND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38 AND\nprop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND prop45\nAND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51 AND\nprop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND prop58\nAND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64)\n\n Rows Removed by Filter: 11999991\n\n Total runtime: 2576.420 ms\n\n(4 rows)\n\n\n*Time: 2577.160 ms*\n\n\n*2. SELECT..WHERE on exact bitstring match* (standard b-tree index on\nbitstr so obviously fast here)\n\nThis would mean I'd have to OR all the conditions which is a bit gnarly.\n\n\nrimig=# explain analyze select bitstr from bloomtest_bi where bitstr =\n'11111111111111111111111111111111111111111111111111111111111111111011011101110011111100110001101000111';\n\n QUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n\n Index Only Scan using i_gist on bloomtest_bi (cost=0.56..8.58 rows=1\nwidth=18) (actual time=0.040..0.040 rows=1 loops=1)\n\n Index Cond: (bitstr =\nB'11111111111111111111111111111111111111111111111111111111111111111011011101110011111100110001101000111'::bit\nvarying)\n\n Heap Fetches: 1\n\n Total runtime: 0.056 ms\n\n(4 rows)\n\n\n*Time: 0.443 ms*\n\n\n*3. 
SELECT..WHERE using bitwise operator*\n\nThis gets all the results I need however it's slow.\n\nrimig=# explain analyze select bitstr from bloomtest_bi where (bitstr &\n'11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000'\n) =\n'11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000';\n\n\n QUERY PLAN\n\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Seq Scan on bloomtest_bi (cost=0.00..410770.00 rows=60000 width=18)\n(actual time=856.595..9359.566 rows=9 loops=1)\n\n Filter: (((bitstr)::\"bit\" &\nB'11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000'::\"bit\")\n=\nB'11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000'::\"bit\")\n\n Rows Removed by Filter: 11999991\n\n Total runtime: 9359.593 ms\n\n(4 rows)\n\n\n*Time: 9360.072 ms*\n\n\nOn Thu, Apr 21, 2016 at 3:34 PM, Rob Imig <[email protected]> wrote:\n\n> Hey all,\n>\n> Lots of interesting suggestions! I'm loving it.\n>\n> Just came back to this a bit earlier today and made a sample table to see\n> what non-index performance would be. Constructed data just like above (used\n> 12M rows and 80% true for all 100 boolean columns)\n>\n> Here's an analyze for what I'd expect to be the types of queries that I'll\n> be handling from the frontend. I would expect around 40-70 properties per\n> query.\n>\n> Now I'm going to start experimenting with some ideas above and other\n> tuning. This isn't as bad as I thought it would be, though would like to\n> get this under 200ms.\n>\n> rimig=# explain analyze select count(*) from bloomtest where prop0 AND\n> prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8\n> AND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15\n> AND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND\n> prop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28\n> AND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND\n> prop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41\n> AND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND\n> prop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54\n> AND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND\n> prop61 AND prop62 AND prop63 AND prop64;\n>\n> Aggregate (cost=351563.03..351563.04 rows=1 width=0) (actual\n> time=2636.829..2636.829 rows=1 loops=1)\n>\n> -> Seq Scan on bloomtest (cost=0.00..351563.02 rows=3 width=0)\n> (actual time=448.200..2636.811 rows=9 loops=1)\n>\n> Filter: (prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5\n> AND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12\n> AND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND\n> prop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25\n> AND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND\n> prop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38\n> AND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND\n> prop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51\n> AND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 
AND\n> prop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64)\n>\n> Rows Removed by Filter: 11999991\n>\n> Total runtime: 2636.874 ms\n>\n> On Thu, Apr 21, 2016 at 12:45 PM, Jeff Janes <[email protected]> wrote:\n>\n>> On Wed, Apr 20, 2016 at 11:54 AM, Teodor Sigaev <[email protected]> wrote:\n>> >>\n>> >> The obvious thing seems to make a table with ~100 columns, with 1\n>> column\n>> >> for each boolean property. Though, what type of indexing strategy would\n>> >> one use on that table? Doesn't make sense to do BTREE. Is there a\n>> better\n>> >> way to structure it?\n>> >>\n>> > looks like a deal for contrib/bloom index in upcoming 9.6 release\n>>\n>> Not without doing a custom compilation with an increased INDEX_MAX_KEYS:\n>>\n>> ERROR: cannot use more than 32 columns in an index\n>>\n>> But even so, I'm skeptical this would do better than a full scan. It\n>> would be interesting to test that.\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>\n>\n\nJust to followup where I'm at, I've constructed a new column which is a 100 bit bitstring representing all the flags. Created a b-tree index on that column and can now do super fast lookups (2) for specific scenarios however getting the behavior I need would require a huge amount of OR conditions (as Rick mentioned earlier). Another option is to do bitwiser operators (3) but that seems really slow. Not sure how I can speed that up.For my specific use-case I think we are going to be able to shard by a category so performance will be acceptable, so this is turning into an educational exercise.1. SELECT..WHERE on each boolean property\nrimig=# explain analyze select bitstr from bloomtest_bi where prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64;\n                                                                                                                                                                                                                                                                                                                                                                QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                
\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on bloomtest_bi  (cost=0.00..350770.00 rows=6 width=18) (actual time=229.365..2576.391 rows=9 loops=1)\n   Filter: (prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64)\n   Rows Removed by Filter: 11999991\n Total runtime: 2576.420 ms\n(4 rows)\n\nTime: 2577.160 ms2. SELECT..WHERE on exact bitstring match (standard b-tree index on bitstr so obviously fast here)This would mean I'd have to OR all the conditions which is a bit gnarly.\nrimig=# explain analyze select bitstr from bloomtest_bi where bitstr = '11111111111111111111111111111111111111111111111111111111111111111011011101110011111100110001101000111';\n                                                                   QUERY PLAN                                                                   \n------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using i_gist on bloomtest_bi  (cost=0.56..8.58 rows=1 width=18) (actual time=0.040..0.040 rows=1 loops=1)\n   Index Cond: (bitstr = B'11111111111111111111111111111111111111111111111111111111111111111011011101110011111100110001101000111'::bit varying)\n   Heap Fetches: 1\n Total runtime: 0.056 ms\n(4 rows)\n\nTime: 0.443 ms\n3. SELECT..WHERE using bitwise operator This gets all the results I need however it's slow. 
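To make the bitwise form (3) less painful to write by hand, a hypothetical helper (not from the thread) can build the mask from a list of bit positions; it reuses the text-to-varbit cast that Merlin Moncure's MakeVarBit example uses later in the thread, and it does not by itself avoid the sequential scan:

    CREATE OR REPLACE FUNCTION make_mask(positions int[]) RETURNS bit varying AS $$
        SELECT string_agg(CASE WHEN g.i = ANY(positions) THEN '1' ELSE '0' END,
                          '' ORDER BY g.i)::varbit
        FROM generate_series(0, 99) AS g(i);   -- length must match the stored bit string
    $$ LANGUAGE sql IMMUTABLE;

    -- "all of these bits set" then reads:
    -- SELECT bitstr FROM bloomtest_bi
    -- WHERE bitstr & make_mask(ARRAY[0, 17, 24]) = make_mask(ARRAY[0, 17, 24]);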
rimig=# explain analyze select bitstr from bloomtest_bi where (bitstr & '11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000' ) = '11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000';                                                                                                                            QUERY PLAN                                                                                                                             ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on bloomtest_bi  (cost=0.00..410770.00 rows=60000 width=18) (actual time=856.595..9359.566 rows=9 loops=1)   Filter: (((bitstr)::\"bit\" & B'11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000'::\"bit\") = B'11111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000'::\"bit\")   Rows Removed by Filter: 11999991 Total runtime: 9359.593 ms(4 rows)\nTime: 9360.072 msOn Thu, Apr 21, 2016 at 3:34 PM, Rob Imig <[email protected]> wrote:Hey all,Lots of interesting suggestions! I'm loving it.Just came back to this a bit earlier today and made a sample table to see what non-index performance would be. Constructed data just like above (used 12M rows and 80% true for all 100 boolean columns)Here's an analyze for what I'd expect to be the types of queries that I'll be handling from the frontend. I would expect around 40-70 properties per query.Now I'm going to start experimenting with some ideas above and other tuning. 
This isn't as bad as I thought it would be, though would like to get this under 200ms.\nrimig=# explain analyze select count(*) from bloomtest where prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64;\n Aggregate  (cost=351563.03..351563.04 rows=1 width=0) (actual time=2636.829..2636.829 rows=1 loops=1)\n   ->  Seq Scan on bloomtest  (cost=0.00..351563.02 rows=3 width=0) (actual time=448.200..2636.811 rows=9 loops=1)\n         Filter: (prop0 AND prop1 AND prop2 AND prop3 AND prop4 AND prop5 AND prop6 AND prop7 AND prop8 AND prop9 AND prop10 AND prop11 AND prop12 AND prop13 AND prop14 AND prop15 AND prop16 AND prop17 AND prop18 AND prop19 AND prop20 AND prop21 AND prop22 AND prop23 AND prop24 AND prop25 AND prop26 AND prop27 AND prop28 AND prop29 AND prop30 AND prop31 AND prop32 AND prop33 AND prop34 AND prop35 AND prop36 AND prop37 AND prop38 AND prop39 AND prop40 AND prop41 AND prop42 AND prop43 AND prop44 AND prop45 AND prop46 AND prop47 AND prop48 AND prop49 AND prop50 AND prop51 AND prop52 AND prop53 AND prop54 AND prop55 AND prop56 AND prop57 AND prop58 AND prop59 AND prop60 AND prop61 AND prop62 AND prop63 AND prop64)\n         Rows Removed by Filter: 11999991\n Total runtime: 2636.874 msOn Thu, Apr 21, 2016 at 12:45 PM, Jeff Janes <[email protected]> wrote:On Wed, Apr 20, 2016 at 11:54 AM, Teodor Sigaev <[email protected]> wrote:\n>>\n>> The obvious thing seems to make a table with ~100 columns, with 1 column\n>> for each boolean property. Though, what type of indexing strategy would\n>> one use on that table? Doesn't make sense to do BTREE. Is there a better\n>> way to structure it?\n>>\n> looks like a deal for contrib/bloom index in upcoming 9.6 release\n\nNot without doing a custom compilation with an increased INDEX_MAX_KEYS:\n\nERROR:  cannot use more than 32 columns in an index\n\nBut even so, I'm skeptical this would do better than a full scan.  It\nwould be interesting to test that.\n\nCheers,\n\nJeff", "msg_date": "Fri, 22 Apr 2016 09:57:39 -0400", "msg_from": "Rob Imig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "On Fri, Apr 22, 2016 at 6:57 AM, Rob Imig <[email protected]> wrote:\n\n> Just to followup where I'm at, I've constructed a new column which is a\n> 100 bit bitstring representing all the flags. Created a b-tree index on\n> that column and can now do super fast lookups (2) for specific scenarios\n> however getting the behavior I need would require a huge amount of OR\n> conditions (as Rick mentioned earlier). Another option is to do bitwiser\n> operators (3) but that seems really slow. Not sure how I can speed that up.\n>\n\nI tried a slightly different tact - how about creating a function-based\nmd5() index over your columns and doing the same for you input values? 
For\nthe test I ran, I used a char datatype with two possible values: '1' (true)\nand '0' (false).\nThe columns were named (for simplicity), c1 to c100.\n\neg.\ncreate index lots_of_columns_md5_idx on lots_of_columns (\nmd5(c1||c2||c3||c4||c5||c6||c7||c8||c9||c10||\nc11||c12||c13||c14||c15||c16||c17||c18||c19||c20||\nc21||c22||c23||c24||c25||c26||c27||c28||c29||c30||\nc31||c32||c33||c34||c35||c36||c37||c38||c39||c40||\nc41||c42||c43||c44||c45||c46||c47||c48||c49||c50||\nc51||c52||c53||c54||c55||c56||c57||c58||c59||c60||\nc61||c62||c63||c64||c65||c66||c67||c68||c69||c70||\nc71||c72||c73||c74||c75||c76||c77||c78||c79||c80||\nc81||c82||c83||c84||c85||c86||c87||c88||c89||c90||\nc91||c92||c93||c94||c95||c96||c97||c98||c99||c100)\n) with (fillfactor=100);\n\nThe query then looked like:\nselect ...\nfrom ...\nwhere md5(all||the||columns) = md5(all||your||values);\n\nThe test data I fabricated wasn't necessarily 85% true as you expect your\ndata to be, but the tests I ran were returning results in single-digit\nmilliseconds for a 1M row table. The queries become a bit more difficult to\ncreate as you need to concatenate all the values together. You could pass\nthe list of columns into a function to abstract that away from the query,\nbut that might mess with the planner.\nNote that the method suggested here relies on column ordering always being\nthe same, otherwise the hash will be different/inaccurate.\n\nOn Fri, Apr 22, 2016 at 6:57 AM, Rob Imig <[email protected]> wrote:Just to followup where I'm at, I've constructed a new column which is a 100 bit bitstring representing all the flags. Created a b-tree index on that column and can now do super fast lookups (2) for specific scenarios however getting the behavior I need would require a huge amount of OR conditions (as Rick mentioned earlier). Another option is to do bitwiser operators (3) but that seems really slow. Not sure how I can speed that up.I tried a slightly different tact - how about creating a function-based md5() index over your columns and doing the same for you input values? For the test I ran, I used a char datatype with two possible values: '1' (true) and '0' (false). The columns were named (for simplicity), c1 to c100.eg.create index lots_of_columns_md5_idx on lots_of_columns (md5(c1||c2||c3||c4||c5||c6||c7||c8||c9||c10||c11||c12||c13||c14||c15||c16||c17||c18||c19||c20||c21||c22||c23||c24||c25||c26||c27||c28||c29||c30||c31||c32||c33||c34||c35||c36||c37||c38||c39||c40||c41||c42||c43||c44||c45||c46||c47||c48||c49||c50||c51||c52||c53||c54||c55||c56||c57||c58||c59||c60||c61||c62||c63||c64||c65||c66||c67||c68||c69||c70||c71||c72||c73||c74||c75||c76||c77||c78||c79||c80||c81||c82||c83||c84||c85||c86||c87||c88||c89||c90||c91||c92||c93||c94||c95||c96||c97||c98||c99||c100)) with (fillfactor=100);The query then looked like:select ...from ...where md5(all||the||columns) = md5(all||your||values);The test data I fabricated wasn't necessarily 85% true as you expect your data to be, but the tests I ran were returning results in single-digit milliseconds for a 1M row table. The queries become a bit more difficult to create as you need to concatenate all the values together. 
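A hedged sketch of the "pass it through a function" idea mentioned above: a hypothetical helper that builds the comparison hash from an ordered array of flags, using the same '1'/'0' encoding and column order as the index expression. The indexed md5(c1||...||c100) expression still has to be spelled out on the left-hand side of the WHERE clause.

    CREATE OR REPLACE FUNCTION flags_md5(flags boolean[]) RETURNS text AS $$
        SELECT md5(string_agg(CASE WHEN u.f THEN '1' ELSE '0' END, '' ORDER BY u.ord))
        FROM unnest(flags) WITH ORDINALITY AS u(f, ord);   -- needs PostgreSQL 9.4+
    $$ LANGUAGE sql IMMUTABLE;

    -- e.g.  WHERE md5(c1||c2|| /* ... */ ||c100) = flags_md5(ARRAY[true, false, /* ... */ true]);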
You could pass the list of columns into a function to abstract that away from the query, but that might mess with the planner.Note that the method suggested here relies on column ordering always being the same, otherwise the hash will be different/inaccurate.", "msg_date": "Sun, 24 Apr 2016 13:10:46 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "Query plan for the md5() index test:\n\n Index Scan using lots_of_columns_md5_idx on lots_of_columns\n(cost=0.93..3.94 rows=1 width=208) (actual time=0.043..0.043 rows=1 loops=1)\n Index Cond: ('1ba23a0668ec17e230d98c270d6664dc'::text =\nmd5(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((c1)::text\n|| (c2)::text) || (c3)::text) || (c4)::text) || (c5)::text) || (c6)::text)\n|| (c7)::text) || (c8)::text) || (c9)::text) || (c10)::text) ||\n(c11)::text) || (c12)::text) || (c13)::text) || (c14)::text) ||\n(c15)::text) || (c16)::text) || (c17)::text) || (c18)::text) ||\n(c19)::text) || (c20)::text) || (c21)::text) || (c22)::text) ||\n(c23)::text) || (c24)::text) || (c25)::text) || (c26)::text) ||\n(c27)::text) || (c28)::text) || (c29)::text) || (c30)::text) ||\n(c31)::text) || (c32)::text) || (c33)::text) || (c34)::text) ||\n(c35)::text) || (c36)::text) || (c37)::text) || (c38)::text) ||\n(c39)::text) || (c40)::text) || (c41)::text) || (c42)::text) ||\n(c43)::text) || (c44)::text) || (c45)::text) || (c46)::text) ||\n(c47)::text) || (c48)::text) || (c49)::text) || (c50)::text) ||\n(c51)::text) || (c52)::text) || (c53)::text) || (c54)::text) ||\n(c55)::text) || (c56)::text) || (c57)::text) || (c58)::text) ||\n(c59)::text) || (c60)::text) || (c61)::text) || (c62)::text) ||\n(c63)::text) || (c64)::text) || (c65)::text) || (c66)::text) ||\n(c67)::text) || (c68)::text) || (c69)::text) || (c70)::text) ||\n(c71)::text) || (c72)::text) || (c73)::text) || (c74)::text) ||\n(c75)::text) || (c76)::text) || (c77)::text) || (c78)::text) ||\n(c79)::text) || (c80)::text) || (c81)::text) || (c82)::text) ||\n(c83)::text) || (c84)::text) || (c85)::text) || (c86)::text) ||\n(c87)::text) || (c88)::text) || (c89)::text) || (c90)::text) ||\n(c91)::text) || (c92)::text) || (c93)::text) || (c94)::text) ||\n(c95)::text) || (c96)::text) || (c97)::text) || (c98)::text) ||\n(c99)::text) || (c100)::text)))\n Buffers: shared hit=4\n Planning time: 0.389 ms\n Execution time: 0.129 ms\n(5 rows)\n\nQuery plan for the md5() index test: Index Scan using lots_of_columns_md5_idx on lots_of_columns  (cost=0.93..3.94 rows=1 width=208) (actual time=0.043..0.043 rows=1 loops=1)   Index Cond: ('1ba23a0668ec17e230d98c270d6664dc'::text = md5(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((c1)::text || (c2)::text) || (c3)::text) || (c4)::text) || (c5)::text) || (c6)::text) || (c7)::text) || (c8)::text) || (c9)::text) || (c10)::text) || (c11)::text) || (c12)::text) || (c13)::text) || (c14)::text) || (c15)::text) || (c16)::text) || (c17)::text) || (c18)::text) || (c19)::text) || (c20)::text) || (c21)::text) || (c22)::text) || (c23)::text) || (c24)::text) || (c25)::text) || (c26)::text) || (c27)::text) || (c28)::text) || (c29)::text) || (c30)::text) || (c31)::text) || (c32)::text) || (c33)::text) || (c34)::text) || (c35)::text) || (c36)::text) || (c37)::text) || (c38)::text) || (c39)::text) || (c40)::text) || (c41)::text) || (c42)::text) || (c43)::text) || (c44)::text) || 
(c45)::text) || (c46)::text) || (c47)::text) || (c48)::text) || (c49)::text) || (c50)::text) || (c51)::text) || (c52)::text) || (c53)::text) || (c54)::text) || (c55)::text) || (c56)::text) || (c57)::text) || (c58)::text) || (c59)::text) || (c60)::text) || (c61)::text) || (c62)::text) || (c63)::text) || (c64)::text) || (c65)::text) || (c66)::text) || (c67)::text) || (c68)::text) || (c69)::text) || (c70)::text) || (c71)::text) || (c72)::text) || (c73)::text) || (c74)::text) || (c75)::text) || (c76)::text) || (c77)::text) || (c78)::text) || (c79)::text) || (c80)::text) || (c81)::text) || (c82)::text) || (c83)::text) || (c84)::text) || (c85)::text) || (c86)::text) || (c87)::text) || (c88)::text) || (c89)::text) || (c90)::text) || (c91)::text) || (c92)::text) || (c93)::text) || (c94)::text) || (c95)::text) || (c96)::text) || (c97)::text) || (c98)::text) || (c99)::text) || (c100)::text)))   Buffers: shared hit=4 Planning time: 0.389 ms Execution time: 0.129 ms(5 rows)", "msg_date": "Sun, 24 Apr 2016 13:14:25 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "On Sun, Apr 24, 2016 at 3:14 PM, bricklen <[email protected]> wrote:\n> Query plan for the md5() index test:\n>\n> Index Scan using lots_of_columns_md5_idx on lots_of_columns\n> (cost=0.93..3.94 rows=1 width=208) (actual time=0.043..0.043 rows=1 loops=1)\n> Index Cond: ('1ba23a0668ec17e230d98c270d6664dc'::text =\n> md5(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((c1)::text\n> || (c2)::text) || (c3)::text) || (c4)::text) || (c5)::text) || (c6)::text)\n> || (c7)::text) || (c8)::text) || (c9)::text) || (c10)::text) || (c11)::text)\n> || (c12)::text) || (c13)::text) || (c14)::text) || (c15)::text) ||\n> (c16)::text) || (c17)::text) || (c18)::text) || (c19)::text) || (c20)::text)\n> || (c21)::text) || (c22)::text) || (c23)::text) || (c24)::text) ||\n> (c25)::text) || (c26)::text) || (c27)::text) || (c28)::text) || (c29)::text)\n> || (c30)::text) || (c31)::text) || (c32)::text) || (c33)::text) ||\n> (c34)::text) || (c35)::text) || (c36)::text) || (c37)::text) || (c38)::text)\n> || (c39)::text) || (c40)::text) || (c41)::text) || (c42)::text) ||\n> (c43)::text) || (c44)::text) || (c45)::text) || (c46)::text) || (c47)::text)\n> || (c48)::text) || (c49)::text) || (c50)::text) || (c51)::text) ||\n> (c52)::text) || (c53)::text) || (c54)::text) || (c55)::text) || (c56)::text)\n> || (c57)::text) || (c58)::text) || (c59)::text) || (c60)::text) ||\n> (c61)::text) || (c62)::text) || (c63)::text) || (c64)::text) || (c65)::text)\n> || (c66)::text) || (c67)::text) || (c68)::text) || (c69)::text) ||\n> (c70)::text) || (c71)::text) || (c72)::text) || (c73)::text) || (c74)::text)\n> || (c75)::text) || (c76)::text) || (c77)::text) || (c78)::text) ||\n> (c79)::text) || (c80)::text) || (c81)::text) || (c82)::text) || (c83)::text)\n> || (c84)::text) || (c85)::text) || (c86)::text) || (c87)::text) ||\n> (c88)::text) || (c89)::text) || (c90)::text) || (c91)::text) || (c92)::text)\n> || (c93)::text) || (c94)::text) || (c95)::text) || (c96)::text) ||\n> (c97)::text) || (c98)::text) || (c99)::text) || (c100)::text)))\n> Buffers: shared hit=4\n> Planning time: 0.389 ms\n> Execution time: 0.129 ms\n> (5 rows)\n\nHm. Maybe use VARBIT? 
(assuming there are no null values or null can\nbe treated as false).\n\nCREATE OR REPLACE FUNCTION MakeVarBit(VARIADIC BOOL[]) RETURNS VARBIT AS\n$$\n SELECT string_agg(CASE WHEN v THEN '1' ELSE '0' END, '')::VARBIT\n FROM\n (\n SELECT UNNEST($1) v\n ) q;\n$$ LANGUAGE SQL IMMUTABLE;\n\npostgres=# select MakeVarBit(true, true, false);\n makevarbit\n────────────\n 110\n\ncreate index on lots_of_columns (MakeVarBit(c1, c2, c3, c4 ...));\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Apr 2016 11:28:42 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" }, { "msg_contents": "At that point would it be better to just use a boolean array?\n\nHere is an example I just wrote up that does pretty damn fast searches.\n\n\nSET work_mem = '256 MB';\n\nCREATE TABLE test_bool AS\nSELECT id, array_agg(random() < 0.85) as boolean_column\nFROM generate_series(1, 100)\nCROSS JOIN generate_series(1, 500000) id\nGROUP BY id;\n\nCREATE INDEX idx_test_bool ON test_bool (boolean_column);\n\nVACUUM ANALYZE test_bool;\n\nSELECT *\nFROM test_bool\nORDER BY random()\nLIMIT 10\n\nSELECT id\nFROM test_bool\nWHERE boolean_column =\n'{t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f,t,t,t,t,f,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f,f,t,t,t,t,t,t,t,t,t,f,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f}'\n\nAt that point would it be better to just use a boolean array?Here is an example I just wrote up that does pretty damn fast searches.SET work_mem = '256 MB';CREATE TABLE test_bool AS SELECT id, array_agg(random() < 0.85) as boolean_columnFROM generate_series(1, 100)CROSS JOIN generate_series(1, 500000) idGROUP BY id;CREATE INDEX idx_test_bool ON test_bool (boolean_column);VACUUM ANALYZE test_bool;SELECT *FROM test_boolORDER BY  random()LIMIT 10SELECT idFROM test_boolWHERE boolean_column = '{t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f,t,t,t,t,f,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f,f,t,t,t,t,t,t,t,t,t,f,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,f}'", "msg_date": "Mon, 25 Apr 2016 13:23:28 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performant queries on table with many boolean columns" } ]
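A short usage sketch for the MakeVarBit() expression-index idea above, assuming boolean columns (bricklen's test table used char '1'/'0' values, which would need a cast first); the argument order in queries must match the index definition exactly, and only whole-value equality benefits from the btree:

    -- index over the first three flags, as in the example above
    CREATE INDEX lots_of_columns_varbit_idx
        ON lots_of_columns (MakeVarBit(c1, c2, c3));

    -- an equality lookup through the same expression
    SELECT *
    FROM lots_of_columns
    WHERE MakeVarBit(c1, c2, c3) = MakeVarBit(true, false, true);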
[ { "msg_contents": "After remodeling a table we have some performance problems.\n\n \n\nThe Original tables have much more fields and we thought it came from these\nmany fields. After some testing I tried these test layout and the\nperformance problem is not solved.\n\n \n\nPostgresql 9.3.12\n\n \n\nFormer DB-Layout was table _masterOld_ with 2 tables inherits from\n_masterOld_: _part1Old_ and _part2Old_. In _masterOld_ were 7 million rows\n(_part1Old_: 5 millions, _part2Old_: 2 millions). \n\n \n\nNow we have only one new table _masterNew_ with 7 million rows.\n\n \n\nDDL: \n\nexport:\n\n \n\n CREATE TABLE public.export (\n\n id_firma BIGINT,\n\n status VARCHAR(32)\n\n ) \n\n WITH (oids = false);\n\n \n\n CREATE INDEX export_idx ON public.export\n\n USING btree (id_firma);\n\n \n\nmasterNew:\n\n \n\n CREATE TABLE public.\"masterNew\" (\n\n id_firma BIGINT,\n\n id_bestand BIGINT NOT NULL,\n\n status VARCHAR(32),\n\n sperre VARCHAR(32),\n\n CONSTRAINT \"masterNew_2016_pkey\" PRIMARY KEY(id_bestand)\n\n ) \n\n WITH (oids = false);\n\n \n\n CREATE INDEX \"masterNew_2016_pi_idx\" ON public.\"masterNew\"\n\n USING btree (id_firma)\n\n WHERE ((status IS NULL) AND (sperre IS NULL));\n\n \n\n CREATE INDEX \"masterNew_sperre_2016\" ON public.\"masterNew\"\n\n USING btree (sperre COLLATE pg_catalog.\"default\");\n\n \n\n CREATE INDEX \"masterNew_status_2016\" ON public.\"masterNew\"\n\n USING btree (status COLLATE pg_catalog.\"default\");\n\n \n\nmasterOld:\n\n \n\n CREATE TABLE public.\"masterOld\" (\n\n id_firma BIGINT,\n\n id_bestand BIGINT NOT NULL,\n\n status VARCHAR(32),\n\n sperre VARCHAR(32),\n\n CONSTRAINT \"masterOld_pkey\" PRIMARY KEY(id_bestand)\n\n ) \n\n WITH (oids = false);\n\n \n\n CREATE INDEX \"masterOld_idx\" ON public.\"masterOld\"\n\n USING btree (id_firma);\n\n \n\n CREATE INDEX \"masterOld_sperre\" ON public.\"masterOld\"\n\n USING btree (sperre COLLATE pg_catalog.\"default\");\n\n \n\n CREATE INDEX \"masterOld_status\" ON public.\"masterOld\"\n\n USING btree (status COLLATE pg_catalog.\"default\");\n\n \n\npart1Old:\n\n \n\n CREATE TABLE public.\"part1Old\" (\n\n CONSTRAINT \"part1Old_idx\" PRIMARY KEY(id_bestand)\n\n ) INHERITS (public.\"masterOld\")\n\n \n\n WITH (oids = false);\n\n \n\n CREATE INDEX \"part1Old_idx1\" ON public.\"part1Old\"\n\n USING btree (id_firma);\n\n \n\n CREATE INDEX \"part1Old_idx2\" ON public.\"part1Old\"\n\n USING btree (status COLLATE pg_catalog.\"default\");\n\n \n\n CREATE INDEX \"part1Old_idx3\" ON public.\"part1Old\"\n\n USING btree (sperre COLLATE pg_catalog.\"default\");\n\n \n\npart2Old:\n\n \n\n CREATE TABLE public.\"part2Old\" (\n\n CONSTRAINT \"part2Old_idx\" PRIMARY KEY(id_bestand)\n\n ) INHERITS (public.\"masterOld\")\n\n \n\n WITH (oids = false);\n\n \n\n CREATE INDEX \"part2Old_idx1\" ON public.\"part2Old\"\n\n USING btree (id_firma);\n\n \n\n CREATE INDEX \"part2Old_idx2\" ON public.\"part2Old\"\n\n USING btree (status COLLATE pg_catalog.\"default\");\n\n \n\n CREATE INDEX \"part2Old_idx3\" ON public.\"part2Old\"\n\n USING btree (sperre COLLATE pg_catalog.\"default\");\n\n \n\nIn the _export_ table are 1.2 million rows.\n\n \n\nOld:\n\n \n\n EXPLAIN\n\n SELECT b.id, b.status\n\n FROM export b, masterOld mb\n\n WHERE mb.sperre IS NULL\n\n AND mb.status IS NULL\n\n AND b.id_firma = mb.id_firma\n\n LIMIT 100; \n\n \n\n<a href=\"http://explain.depesz.com/s/SCBo\">Plan on explain.depesz.com</a> \n\n \n\n - Plan: \n\n Node Type: \"Limit\"\n\n Startup Cost: 0.00\n\n Total Cost: 0.09\n\n Plan Rows: 100\n\n Plan Width: 90\n\n Plans: \n\n - 
Node Type: \"Nested Loop\"\n\n Parent Relationship: \"Outer\"\n\n Join Type: \"Inner\"\n\n Startup Cost: 0.00\n\n Total Cost: 118535034.59\n\n Plan Rows: 126126068850\n\n Plan Width: 90\n\n Plans: \n\n - Node Type: \"Seq Scan\"\n\n Parent Relationship: \"Outer\"\n\n Relation Name: \"export\"\n\n Alias: \"b\"\n\n Startup Cost: 0.00\n\n Total Cost: 79129.80\n\n Plan Rows: 5485680\n\n Plan Width: 90\n\n - Node Type: \"Append\"\n\n Parent Relationship: \"Inner\"\n\n Startup Cost: 0.00\n\n Total Cost: 21.56\n\n Plan Rows: 3\n\n Plan Width: 8\n\n Plans: \n\n - Node Type: \"Seq Scan\"\n\n Parent Relationship: \"Member\"\n\n Relation Name: \"masterOld\"\n\n Alias: \"mb\"\n\n Startup Cost: 0.00\n\n Total Cost: 1.10\n\n Plan Rows: 1\n\n Plan Width: 8\n\n Filter: \"((sperre IS NULL) AND (status IS NULL) AND (b.id =\nid))\"\n\n - Node Type: \"Index Scan\"\n\n Parent Relationship: \"Member\"\n\n Scan Direction: \"Forward\"\n\n Index Name: \"part1Old_idx9\"\n\n Relation Name: \"part1Old\"\n\n Alias: \"mb_1\"\n\n Startup Cost: 0.43\n\n Total Cost: 12.20\n\n Plan Rows: 1\n\n Plan Width: 8\n\n Index Cond: \"(id = b.id)\"\n\n Filter: \"((sperre IS NULL) AND (status IS NULL))\"\n\n - Node Type: \"Index Scan\"\n\n Parent Relationship: \"Member\"\n\n Scan Direction: \"Forward\"\n\n Index Name: \"part2Old_idx\"\n\n Relation Name: \"part2Old\"\n\n Alias: \"mb_2\"\n\n Startup Cost: 0.43\n\n Total Cost: 8.26\n\n Plan Rows: 1\n\n Plan Width: 8\n\n Index Cond: \"(id = b.id)\"\n\n Filter: \"((sperre IS NULL) AND (status IS NULL))\"\n\n \n\nThere were no speed problems.\n\n \n\nNew:\n\n \n\n EXPLAIN\n\n SELECT b.id, b.status\n\n FROM export b, masterNew mb\n\n WHERE mb.sperre IS NULL\n\n AND mb.status IS NULL\n\n AND b.id = mb.id\n\n LIMIT 100; \n\n \n\n<a href=\"http://explain.depesz.com/s/eAqG\">Plan on explain.depesz.com</a>\n\n\n \n\n - Plan: \n\n Node Type: \"Limit\"\n\n Startup Cost: 5.38\n\n Total Cost: 306.99\n\n Plan Rows: 100\n\n Plan Width: 90\n\n Plans: \n\n - Node Type: \"Nested Loop\"\n\n Parent Relationship: \"Outer\"\n\n Join Type: \"Inner\"\n\n Startup Cost: 5.38\n\n Total Cost: 14973468.06\n\n Plan Rows: 4964540\n\n Plan Width: 90\n\n Join Filter: \"(b.id = mb.id)\"\n\n Plans: \n\n - Node Type: \"Seq Scan\"\n\n Parent Relationship: \"Outer\"\n\n Relation Name: \"export\"\n\n Alias: \"b\"\n\n Startup Cost: 0.00\n\n Total Cost: 79129.80\n\n Plan Rows: 5485680\n\n Plan Width: 90\n\n - Node Type: \"Materialize\"\n\n Parent Relationship: \"Inner\"\n\n Startup Cost: 5.38\n\n Total Cost: 717.51\n\n Plan Rows: 181\n\n Plan Width: 8\n\n Plans: \n\n - Node Type: \"Bitmap Heap Scan\"\n\n Parent Relationship: \"Outer\"\n\n Relation Name: \"masterNew\"\n\n Alias: \"mb\"\n\n Startup Cost: 5.38\n\n Total Cost: 716.61\n\n Plan Rows: 181\n\n Plan Width: 8\n\n Recheck Cond: \"((status IS NULL) AND (sperre IS NULL))\"\n\n Plans: \n\n - Node Type: \"Bitmap Index Scan\"\n\n Parent Relationship: \"Outer\"\n\n Index Name: \"masterNew_2016_pi_idx\"\n\n Startup Cost: 0.00\n\n Total Cost: 5.34\n\n Plan Rows: 181\n\n Plan Width: 0\n\n \n\nThere we have our problem.\n\nWe have tried to fix it using a partial Index on _id_ with `status is null\nand sperre is null` .\n\nIf we don't use `sperre is null` in this query it is quick. 
I think we have\nthese problems because _sperre_ and _status_ have much null values.\n_status_: 67% null and _sperre_: 97% null .\n\nOn each table there are btree indexes on _id_, _sperre_ and _status_.\n\nOn _masterNew_ there is a partial Index on _id_ with `sperre is null and\nstatus is null`.\n\n \n\nCan somebody help me with these performance Problem.\n\nWhat can I try to solve this? \n\n \n\nBest regards,\n\nSven Kerkling\n\n\n", "msg_date": "Thu, 21 Apr 2016 11:49:54 +0200", "msg_from": "\"Sven Kerkling\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with postgres and null Values?" }, { "msg_contents": "On Thu, Apr 21, 2016 at 4:49 AM, Sven Kerkling <[email protected]> wrote:\n> Can somebody help me with these performance Problem.\n>\n> What can I try to solve this?\n\ncan you explain what the problem actually is? Which query is running\nslow and how fast do you think it should run?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Apr 2016 17:10:33 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with postgres and null Values?"
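The partial-index estimate of 181 rows is far too low for a table where most rows have NULL in both columns, which points at stale planner statistics on the freshly rebuilt table rather than at the index design (ANALYZE is also what is suggested further down). A hedged sketch of the checks, with column names taken from the query as posted:

    ANALYZE "masterNew";

    -- if estimates for the mostly-NULL columns remain poor, raise their
    -- statistics targets and analyze again
    ALTER TABLE "masterNew" ALTER COLUMN status SET STATISTICS 1000;
    ALTER TABLE "masterNew" ALTER COLUMN sperre SET STATISTICS 1000;
    ANALYZE "masterNew";

    -- then re-check the actual row counts against the estimates
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT b.id, b.status
    FROM export b, "masterNew" mb
    WHERE mb.sperre IS NULL
      AND mb.status IS NULL
      AND b.id = mb.id
    LIMIT 100;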
}, { "msg_contents": "This one is quick, running in 20ms:\n\n SELECT b.id, b.status\n FROM export b, masterOld mb\n WHERE mb.sperre IS NULL\n AND mb.status IS NULL\n AND b.id_firma = mb.id_firma\n LIMIT 100; \n\nhttp://explain.depesz.com/s/SCBo\n \nThis one ist the burden, running at least 100 seconds:\n\n SELECT b.id, b.status\n FROM export b, masterNew mb\n WHERE mb.sperre IS NULL\n AND mb.status IS NULL\n AND b.id = mb.id\n LIMIT 100; \n\nhttp://explain.depesz.com/s/eAqG \n\nThere should be only slight differences between them.\n\nSven\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] [mailto:[email protected]] Im Auftrag von Merlin Moncure\nGesendet: Samstag, 23. April 2016 00:11\nAn: Sven Kerkling\nCc: postgres performance list\nBetreff: Re: [PERFORM] Performance problems with postgres and null Values?\n\nOn Thu, Apr 21, 2016 at 4:49 AM, Sven Kerkling <[email protected]> wrote:\n> Can somebody help me with these performance Problem.\n>\n> What can I try to solve this?\n\ncan you explain what the problem actually is? Which query is running slow and how fast do you think it should run?\n\nmerlin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Apr 2016 09:22:54 +0200", "msg_from": "\"Sven Kerkling\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with postgres and null Values?" }, { "msg_contents": "Sven Kerkling wrote:\r\n> This one ist the burden, running at least 100 seconds:\r\n> \r\n> SELECT b.id, b.status\r\n> FROM export b, masterNew mb\r\n> WHERE mb.sperre IS NULL\r\n> AND mb.status IS NULL\r\n> AND b.id = mb.id\r\n> LIMIT 100;\r\n> \r\n> http://explain.depesz.com/s/eAqG\r\n\r\nI think the problem is here:\r\n\r\nBitmap Index Scan on masterNew_2016_pi_idx (cost=0.00..5.34 rows=181 width=0) (actual time=805.225..805.225 rows=4,764,537 loops=1)\r\n\r\nPerhaps you should ANALYZE \"masterNew\" to get better statistics.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Apr 2016 08:08:22 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with postgres and null Values?" }, { "msg_contents": "Thx.\n\nAll queries are now running as usual. \n\nThx for helping me.\n\nBest regards\nSven\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] [mailto:[email protected]] Im Auftrag von Albe Laurenz\nGesendet: Montag, 25. 
April 2016 10:08\nAn: 'Sven Kerkling *EXTERN*'; 'Merlin Moncure'\nCc: 'postgres performance list'\nBetreff: Re: [PERFORM] Performance problems with postgres and null Values?\n\nSven Kerkling wrote:\n> This one ist the burden, running at least 100 seconds:\n> \n> SELECT b.id, b.status\n> FROM export b, masterNew mb\n> WHERE mb.sperre IS NULL\n> AND mb.status IS NULL\n> AND b.id = mb.id\n> LIMIT 100;\n> \n> http://explain.depesz.com/s/eAqG\n\nI think the problem is here:\n\nBitmap Index Scan on masterNew_2016_pi_idx (cost=0.00..5.34 rows=181 width=0) (actual time=805.225..805.225 rows=4,764,537 loops=1)\n\nPerhaps you should ANALYZE \"masterNew\" to get better statistics.\n\nYours,\nLaurenz Albe\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Apr 2016 12:13:03 +0200", "msg_from": "\"Sven Kerkling\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with postgres and null Values?" } ]
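The suggestion that closed this thread was a plain ANALYZE of masterNew; a minimal SQL sketch of that fix follows, reusing only the names quoted above (masterNew, its columns sperre and status, the id column, and the partial index masterNew_2016_pi_idx). The per-column statistics-target step is an extra idea and an assumption, not something the thread reports doing.

    -- Refresh planner statistics: the plan above estimated 181 rows for
    -- (status IS NULL AND sperre IS NULL) while the index scan actually
    -- returned about 4.76 million, so the estimate was off by roughly 26,000x.
    ANALYZE masterNew;

    -- Assumption, not from the thread: raise the statistics target on the
    -- mostly-NULL columns so null_frac is sampled more finely, then re-analyze.
    ALTER TABLE masterNew ALTER COLUMN sperre SET STATISTICS 1000;
    ALTER TABLE masterNew ALTER COLUMN status SET STATISTICS 1000;
    ANALYZE masterNew;

    -- The partial index described in the thread, shown here only for reference.
    CREATE INDEX masterNew_2016_pi_idx ON masterNew (id)
        WHERE sperre IS NULL AND status IS NULL;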
[ { "msg_contents": "Hi All.\n\nI've noticed that there is a huge (more than ~3x slower) performance\ndifference between KVM guest and host machine.\nHost machine:\ndell r720xd\nRAID10 with 12 SAS 15 k drives and RAID0 with 2*128 GB INTEL SSD drives in\nDell CacheCade mode.\n\n*On the KVM guest:*\n\n /usr/pgsql-9.4/bin/pg_test_fsync -f test.sync\n\n5 seconds per test\n\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\n\nCompare file sync methods using one 8kB write:\n\n(in wal_sync_method preference order, except fdatasync\n\nis Linux's default)\n\n open_datasync 5190.279 ops/sec 193 usecs/op\n\n fdatasync 4022.553 ops/sec 249 usecs/op\n\n fsync 3069.069 ops/sec 326 usecs/op\n\n fsync_writethrough n/a\n\n open_sync 4892.348 ops/sec 204 usecs/op\n\n\nCompare file sync methods using two 8kB writes:\n\n(in wal_sync_method preference order, except fdatasync\n\nis Linux's default)\n\n open_datasync 2406.577 ops/sec 416 usecs/op\n\n fdatasync 4309.413 ops/sec 232 usecs/op\n\n fsync 3518.844 ops/sec 284 usecs/op\n\n fsync_writethrough n/a\n\n open_sync 1159.604 ops/sec 862 usecs/op\n\n\nCompare open_sync with different write sizes:\n\n(This is designed to compare the cost of writing 16kB\n\nin different write open_sync sizes.)\n\n 1 * 16kB open_sync write 3700.689 ops/sec 270 usecs/op\n\n 2 * 8kB open_sync writes 2581.405 ops/sec 387 usecs/op\n\n 4 * 4kB open_sync writes 1318.871 ops/sec 758 usecs/op\n\n 8 * 2kB open_sync writes 698.640 ops/sec 1431 usecs/op\n\n 16 * 1kB open_sync writes 262.506 ops/sec 3809 usecs/op\n\n\nTest if fsync on non-write file descriptor is honored:\n\n(If the times are similar, fsync() can sync data written\n\non a different descriptor.)\n\n write, fsync, close 3071.141 ops/sec 326 usecs/op\n\n write, close, fsync 3303.946 ops/sec 303 usecs/op\n\n\nNon-Sync'ed 8kB writes:\n\n write 251321.188 ops/sec 4 usecs/op\n\n\n*On the host machine:*\n\n/usr/pgsql-9.4/bin/pg_test_fsync -f test.sync\n\n5 seconds per test\n\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\n\nCompare file sync methods using one 8kB write:\n\n(in wal_sync_method preference order, except fdatasync\n\nis Linux's default)\n\n open_datasync 11364.136 ops/sec 88 usecs/op\n\n fdatasync 12352.160 ops/sec 81 usecs/op\n\n fsync 9833.745 ops/sec 102 usecs/op\n\n fsync_writethrough n/a\n\n open_sync 14938.531 ops/sec 67 usecs/op\n\n\nCompare file sync methods using two 8kB writes:\n\n(in wal_sync_method preference order, except fdatasync\n\nis Linux's default)\n\n open_datasync 7703.471 ops/sec 130 usecs/op\n\n fdatasync 11494.492 ops/sec 87 usecs/op\n\n fsync 9029.837 ops/sec 111 usecs/op\n\n fsync_writethrough n/a\n\n open_sync 6504.138 ops/sec 154 usecs/op\n\n\nCompare open_sync with different write sizes:\n\n(This is designed to compare the cost of writing 16kB\n\nin different write open_sync sizes.)\n\n 1 * 16kB open_sync write 14113.912 ops/sec 71 usecs/op\n\n 2 * 8kB open_sync writes 7843.234 ops/sec 127 usecs/op\n\n 4 * 4kB open_sync writes 3995.702 ops/sec 250 usecs/op\n\n 8 * 2kB open_sync writes 1788.979 ops/sec 559 usecs/op\n\n 16 * 1kB open_sync writes 937.177 ops/sec 1067 usecs/op\n\n\nTest if fsync on non-write file descriptor is honored:\n\n(If the times are similar, fsync() can sync data written\n\non a different descriptor.)\n\n write, fsync, close 10144.280 ops/sec 99 usecs/op\n\n write, close, fsync 8378.558 ops/sec 119 usecs/op\n\n\nNon-Sync'ed 8kB writes:\n\n write 159176.122 ops/sec 6 usecs/op\n\n\nThe file system \"inside\" and \"outside\" the 
same - ext4 on LVM. Disk\nscheduler \"inside\" and \"outside\" set to \"noop\". Fstab options same to,\nsetted to rw,noatime,nodiratime,barrier=0. OS on host and guest the same\nCentOS release 6.5 (Final).\n\nGuest volume options:\n\nDisk bus: Virtio\n\nCache mode: none\n\nIO mode: native\n\n\nAny ideas?\n", "msg_date": "Tue, 26 Apr 2016 17:03:08 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": true, "msg_subject": "Poor disk (virtio) Performance Inside KVM virt-machine vs host\n machine" }, { "msg_contents": ">I've noticed that there is a huge (more than ~3x slower) performance\ndifference between KVM guest and host machine.\n\nI don't know that this is relevant or not , but there is an IBM research\npaper (Published in 2014)\n\"IBM Research Report - An Updated Performance Comparison of Virtual\nMachines and Linux Containers\"\nhttp://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/\n-> \" As we would expect, Docker introduces no overhead compared to Linux,\nbut KVM delivers only half as many IOPS because each I/O operation must go\nthrough QEMU. While the VM’s absolute performance is still quite high, it\nuses more CPU cycles per I/O operation, leaving less CPU available for\napplication work. 
Figure 7 shows that KVM increases read latency by 2-3x, a\ncrucial metric for some real workloads.\"\n\nImre\n\n\n\n\n2016-04-26 16:03 GMT+02:00 Artem Tomyuk <[email protected]>:\n\n> Hi All.\n>\n> I've noticed that there is a huge (more than ~3x slower) performance\n> difference between KVM guest and host machine.\n> Host machine:\n> dell r720xd\n> RAID10 with 12 SAS 15 k drives and RAID0 with 2*128 GB INTEL SSD drives\n> in Dell CacheCade mode.\n>\n> *On the KVM guest:*\n>\n> /usr/pgsql-9.4/bin/pg_test_fsync -f test.sync\n>\n> 5 seconds per test\n>\n> O_DIRECT supported on this platform for open_datasync and open_sync.\n>\n>\n> Compare file sync methods using one 8kB write:\n>\n> (in wal_sync_method preference order, except fdatasync\n>\n> is Linux's default)\n>\n> open_datasync 5190.279 ops/sec 193\n> usecs/op\n>\n> fdatasync 4022.553 ops/sec 249\n> usecs/op\n>\n> fsync 3069.069 ops/sec 326\n> usecs/op\n>\n> fsync_writethrough n/a\n>\n> open_sync 4892.348 ops/sec 204\n> usecs/op\n>\n>\n> Compare file sync methods using two 8kB writes:\n>\n> (in wal_sync_method preference order, except fdatasync\n>\n> is Linux's default)\n>\n> open_datasync 2406.577 ops/sec 416\n> usecs/op\n>\n> fdatasync 4309.413 ops/sec 232\n> usecs/op\n>\n> fsync 3518.844 ops/sec 284\n> usecs/op\n>\n> fsync_writethrough n/a\n>\n> open_sync 1159.604 ops/sec 862\n> usecs/op\n>\n>\n> Compare open_sync with different write sizes:\n>\n> (This is designed to compare the cost of writing 16kB\n>\n> in different write open_sync sizes.)\n>\n> 1 * 16kB open_sync write 3700.689 ops/sec 270\n> usecs/op\n>\n> 2 * 8kB open_sync writes 2581.405 ops/sec 387\n> usecs/op\n>\n> 4 * 4kB open_sync writes 1318.871 ops/sec 758\n> usecs/op\n>\n> 8 * 2kB open_sync writes 698.640 ops/sec 1431\n> usecs/op\n>\n> 16 * 1kB open_sync writes 262.506 ops/sec 3809\n> usecs/op\n>\n>\n> Test if fsync on non-write file descriptor is honored:\n>\n> (If the times are similar, fsync() can sync data written\n>\n> on a different descriptor.)\n>\n> write, fsync, close 3071.141 ops/sec 326\n> usecs/op\n>\n> write, close, fsync 3303.946 ops/sec 303\n> usecs/op\n>\n>\n> Non-Sync'ed 8kB writes:\n>\n> write 251321.188 ops/sec 4\n> usecs/op\n>\n>\n> *On the host machine:*\n>\n> /usr/pgsql-9.4/bin/pg_test_fsync -f test.sync\n>\n> 5 seconds per test\n>\n> O_DIRECT supported on this platform for open_datasync and open_sync.\n>\n>\n> Compare file sync methods using one 8kB write:\n>\n> (in wal_sync_method preference order, except fdatasync\n>\n> is Linux's default)\n>\n> open_datasync 11364.136 ops/sec 88\n> usecs/op\n>\n> fdatasync 12352.160 ops/sec 81\n> usecs/op\n>\n> fsync 9833.745 ops/sec 102\n> usecs/op\n>\n> fsync_writethrough n/a\n>\n> open_sync 14938.531 ops/sec 67\n> usecs/op\n>\n>\n> Compare file sync methods using two 8kB writes:\n>\n> (in wal_sync_method preference order, except fdatasync\n>\n> is Linux's default)\n>\n> open_datasync 7703.471 ops/sec 130\n> usecs/op\n>\n> fdatasync 11494.492 ops/sec 87\n> usecs/op\n>\n> fsync 9029.837 ops/sec 111\n> usecs/op\n>\n> fsync_writethrough n/a\n>\n> open_sync 6504.138 ops/sec 154\n> usecs/op\n>\n>\n> Compare open_sync with different write sizes:\n>\n> (This is designed to compare the cost of writing 16kB\n>\n> in different write open_sync sizes.)\n>\n> 1 * 16kB open_sync write 14113.912 ops/sec 71\n> usecs/op\n>\n> 2 * 8kB open_sync writes 7843.234 ops/sec 127\n> usecs/op\n>\n> 4 * 4kB open_sync writes 3995.702 ops/sec 250\n> usecs/op\n>\n> 8 * 2kB open_sync writes 1788.979 ops/sec 559\n> usecs/op\n>\n> 16 * 1kB 
open_sync writes          937.177 ops/sec    1067\n> usecs/op\n>\n>\n> Test if fsync on non-write file descriptor is honored:\n>\n> (If the times are similar, fsync() can sync data written\n>\n> on a different descriptor.)\n>\n> write, fsync, close               10144.280 ops/sec      99\n> usecs/op\n>\n> write, close, fsync                8378.558 ops/sec     119\n> usecs/op\n>\n>\n> Non-Sync'ed 8kB writes:\n>\n> write                            159176.122 ops/sec       6\n> usecs/op\n>\n>\n> The file system \"inside\" and \"outside\" the same - ext4 on LVM. Disk\n> scheduler \"inside\" and \"outside\" set to \"noop\". Fstab options same to,\n> setted to rw,noatime,nodiratime,barrier=0. OS on host and guest the same\n> CentOS release 6.5 (Final).\n>\n> Guest volume options:\n>\n> Disk bus: Virtio\n>\n> Cache mode: none\n>\n> IO mode: native\n>\n>\n> Any ideas?\n>\n", "msg_date": "Tue, 26 Apr 2016 16:32:07 +0200", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor disk (virtio) Performance Inside KVM virt-machine\n vs host machine" }, { "msg_contents": "On Tue, Apr 26, 2016 at 10:03 AM, Artem Tomyuk <[email protected]> wrote:\n\n> Hi All.\n>\n> I've noticed that there is a huge (more than ~3x slower) performance\n> difference between KVM guest and host machine.\n>\n>\nIs this unique to KVM, or do similar things happen with other virtualizers?\n--\nMike Nolan\n", "msg_date": "Tue, 26 Apr 2016 11:21:15 -0400", "msg_from": "Michael Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor disk (virtio) Performance Inside KVM virt-machine\n vs host machine" }, { "msg_contents": "I didn't compare impact of virtualization on other hypervisors yet.\n\n2016-04-26 18:21 GMT+03:00 Michael Nolan <[email protected]>:\n\n>\n>\n> On Tue, Apr 26, 2016 at 10:03 AM, Artem Tomyuk <[email protected]>\n> wrote:\n>\n>> Hi All.\n>>\n>> I've noticed that there is a huge (more than ~3x slower) performance\n>> difference between KVM guest and host machine.\n>>\n>>\n> Is this unique to KVM, or do similar things happen with other virtualizers?\n> --\n> Mike Nolan\n>\n", "msg_date": "Tue, 26 Apr 2016 18:27:04 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor disk (virtio) Performance Inside KVM virt-machine\n vs host machine" }, { "msg_contents": "On Tuesday, April 26, 2016 11:21:15 AM Michael Nolan wrote:\n> On Tue, Apr 26, 2016 at 10:03 AM, Artem Tomyuk <[email protected]> wrote:\n> > Hi All.\n> > \n> > I've noticed that there is a huge (more than ~3x slower) performance\n> > difference between KVM guest and host machine.\n> \n> Is this unique to KVM, or do similar things happen with other virtualizers?\n> --\n> Mike Nolan\n\nCentOS 6.5 is pretty old. KVM/qemu is definitely faster in newer versions. \n\nIt will always be slower than native though. Any virtualization will be slower \nthan native. \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Apr 2016 13:47:06 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor disk (virtio) Performance Inside KVM virt-machine vs host\n machine" }, { "msg_contents": "On Tue, Apr 26, 2016 at 10:27 AM, Artem Tomyuk <[email protected]> wrote:\n> I didn't compare impact of virtualization on other hypervisors yet.\n\nMy rule of thumb is 50% hit for 1:1 host:guest.  Virtualization is\nnot free.  If that's a pain try using 100% native solutions (docker,\netc)\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Apr 2016 15:52:38 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor disk (virtio) Performance Inside KVM virt-machine\n vs host machine" } ]
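The pg_test_fsync runs above exist to compare the wal_sync_method options listed in their output, so a natural follow-up inside the guest is to pick whichever method those numbers favor for the actual WAL write pattern. This is a sketch only, not something the thread itself did; the value below is a placeholder (fdatasync, the Linux default named in the output), and ALTER SYSTEM is available on the 9.4 binaries used above.

    -- Placeholder value: substitute whichever method the guest-side
    -- pg_test_fsync figures favor (open_datasync won the single 8kB write
    -- test here, fdatasync the two-write test).
    ALTER SYSTEM SET wal_sync_method = 'fdatasync';

    -- wal_sync_method only needs a configuration reload, not a restart.
    SELECT pg_reload_conf();

    -- Confirm the active setting.
    SHOW wal_sync_method;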
[ { "msg_contents": "\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 27 Apr 2016 22:41:57 -0400", "msg_from": "George Neuner <[email protected]>", "msg_from_op": true, "msg_subject": "testing - please ignore" } ]
[ { "msg_contents": "Hello.\n\nIt seems my quite complex query runs 10 times faster on \"some_column \nLIKE '%test_1' \" vs \"some_column LIKE 'test_1' \"\nSo I just add \"%\" to the pattern...\n\nBoth query plans use same indexes.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 6 May 2016 12:48:01 +0300", "msg_from": "=?UTF-8?B?0JLQu9Cw0LTQuNC80LjRgA==?= <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE pattern" }, { "msg_contents": "Владимир-3 wrote\n> It seems my quite complex query runs 10 times faster on \"some_column \n> LIKE '%test_1' \" vs \"some_column LIKE 'test_1' \"\n> So I just add \"%\" to the pattern...\n\nKeep in mind then LIKE '%test_1' and LIKE 'test_1' are not equivalent, using\nthe % as a prefix to the argument means that the scan only has to confirm\nthat the value ends in 'test_1' where forgoing the % entirely means that you\nare essentially saying some_column='test_1'.\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/LIKE-pattern-tp5902225p5902701.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 9 May 2016 14:41:06 -0700 (MST)", "msg_from": "SoDupuDupu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE pattern" }, { "msg_contents": "On Mon, May 9, 2016 at 11:41 PM, SoDupuDupu wrote:\n> Владимир-3 wrote\n>> It seems my quite complex query runs 10 times faster on \"some_column\n>> LIKE '%test_1' \" vs \"some_column LIKE 'test_1' \"\n>> So I just add \"%\" to the pattern...\n>\n> Keep in mind then LIKE '%test_1' and LIKE 'test_1' are not equivalent, using\n> the % as a prefix to the argument means that the scan only has to confirm\n> that the value ends in 'test_1' where forgoing the % entirely means that you\n> are essentially saying some_column='test_1'.\n\nYes, but wouldn't the latter test be more efficient usually since it\ntests against a prefix - at least with a regular index?\n\nKind regards\n\nrobert\n\n-- \n[guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can\n- without end}\nhttp://blog.rubybestpractices.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 May 2016 17:13:18 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE pattern" }, { "msg_contents": "On Thu, May 12, 2016 at 8:13 AM, Robert Klemme\n<[email protected]> wrote:\n> On Mon, May 9, 2016 at 11:41 PM, SoDupuDupu wrote:\n>> Владимир-3 wrote\n>>> It seems my quite complex query runs 10 times faster on \"some_column\n>>> LIKE '%test_1' \" vs \"some_column LIKE 'test_1' \"\n>>> So I just add \"%\" to the pattern...\n>>\n>> Keep in mind then LIKE '%test_1' and LIKE 'test_1' are not equivalent, using\n>> the % as a prefix to the argument means that the scan only has to confirm\n>> that the value ends in 'test_1' where forgoing the % entirely means that you\n>> are essentially saying some_column='test_1'.\n>\n> Yes, but wouldn't the latter test be more efficient usually since it\n> tests against a prefix - at least with a regular index?\n\nIn theory. 
But the planner is imperfect, and they will have different\nestimated selectivities which could easily tip the planner into making\na poor choice for the more selective case. Without seeing the plans,\nit is hard to say much more.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 May 2016 10:02:04 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE pattern" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Thu, May 12, 2016 at 8:13 AM, Robert Klemme\n> <[email protected]> wrote:\n>> On Mon, May 9, 2016 at 11:41 PM, SoDupuDupu wrote:\n>>> Keep in mind then LIKE '%test_1' and LIKE 'test_1' are not equivalent, using\n>>> the % as a prefix to the argument means that the scan only has to confirm\n>>> that the value ends in 'test_1' where forgoing the % entirely means that you\n>>> are essentially saying some_column='test_1'.\n\n>> Yes, but wouldn't the latter test be more efficient usually since it\n>> tests against a prefix - at least with a regular index?\n\n> In theory. But the planner is imperfect, and they will have different\n> estimated selectivities which could easily tip the planner into making\n> a poor choice for the more selective case. Without seeing the plans,\n> it is hard to say much more.\n\nAlso keep in mind that not every failure of this sort is the planner's\nfault ;-). Particularly with GIN/GiST indexes, quite a lot of the\nintelligence (or lack of it) is buried in the index opclass support\nfunctions, where the planner has little visibility and even less say.\n\nIn this particular case, a whole lot depends on which set of trigrams\nthe pg_trgm opclass support functions will choose to search for. The set\nthat's potentially extractable from the LIKE pattern is well defined, but\nnot all of them are necessarily equally useful for searching the index.\n\nWith a reasonably late-model PG (9.4+), you might well have better luck\nwith a regular-expression pattern than a LIKE pattern, because more work\nhas been put into pg_trgm's heuristics for choosing which trigrams to use\nfor regexes.\n\n(Not sure why it didn't occur to us to make that code apply to LIKE too,\nbut it didn't.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 May 2016 13:25:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE pattern" } ]
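Since the thread never shows the table, index definition, or plans, the following is only a hedged sketch of the pg_trgm setup it is discussing, with hypothetical names (t, some_column); the regex variant at the end is the option Tom Lane mentions for 9.4 and later, and plain equality is the cheaper route whenever the pattern truly has no wildcard.

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    -- Hypothetical table and column; the original poster never named theirs.
    CREATE INDEX t_some_column_trgm_idx
        ON t USING gin (some_column gin_trgm_ops);

    -- Suffix match, the form the original poster found fast.
    SELECT * FROM t WHERE some_column LIKE '%test_1';

    -- LIKE 'test_1' contains no wildcard, so it is effectively an equality
    -- test; a btree index plus = is the cheapest way to express that.
    SELECT * FROM t WHERE some_column = 'test_1';

    -- Regex form of the same exact match; per the discussion above, pg_trgm's
    -- trigram selection heuristics for regexes may do better than the LIKE
    -- path on 9.4 and later.
    SELECT * FROM t WHERE some_column ~ '^test_1$';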
[ { "msg_contents": "Transactions to table, ChangeHistory, have recently become intermittently slow and is increasing becoming slower.\n\n* No database configuration changes have been made recently\n* We have run vacuum analyze\n* We have tried backing up and reloading the table (data, indexes, etc)\n\nSome transactions respond quickly (200 ms) and others take over 55 seconds (we cancel the query after 55 seconds - our timeout SLA). The problem has recently become very bad. It is the same query being issued but with different parameters.\n\nIf the transaction takes over 55 seconds and I run the query manually (with or without EXPLAIN ANALYZE) it returns quickly (a few hundred ms). In case I am looking at cache, I have a list of other queries (just different parameters) that have timed out and when I run them (without the limit even) the response is very timely.\n\nAny help or insight would be great.\n\nNOTE: our application is connecting to the database via JDBC and we are using PreparedStatements. I have provided full details so all information is available, but please let me know if any other information is needed - thx in advance.\n\np306=> EXPLAIN ANALYZE SELECT * FROM ChangeHistory WHERE Category BETWEEN 'Employee' AND 'Employeezz' AND PrimaryKeyOfChange BETWEEN '312313' AND '312313!zz' ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\nLimit (cost=33.66..33.67 rows=1 width=136) (actual time=0.297..0.297 rows=11 loops=1)\n -> Sort (cost=33.66..33.67 rows=1 width=136) (actual time=0.297..0.297 rows=11 loops=1)\n Sort Key: chgts, chguser, category, primarykeyofchange\n Sort Method: top-N heapsort Memory: 27kB\n -> Index Scan using changehistory_idx4 on changehistory (cost=0.00..33.65 rows=1 width=136) (actual time=0.046..\n0.239 rows=85 loops=1)\n Index Cond: (((primarykeyofchange)::text >= '312313'::text) AND ((primarykeyofchange)::text <= '312313!zz'::\ntext))\n Filter: (((category)::text >= 'Employee'::text) AND ((category)::text <= 'Employeezz'::text))\n Total runtime: 0.328 ms\n(8 rows)\n\n>>>\nHistory this week of counts with good response times vs timeouts.\n\n| Date | Success # | Time Out # | Avg. 
Success Secs |\n|------------+-----------+------------+-------------------|\n| 2016-05-09 | 18 | 31 | 7.9 |\n| 2016-05-10 | 17 | 25 | 10.5 |\n| 2016-05-11 | 27 | 33 | 10.1 |\n| 2016-05-12 | 68 | 24 | 9.9 |\n\n\n>>> Sample transaction response times\n\n| Timestamp | Tran ID | Resp MS | Resp CD\n--------------------+----------------+---------+--------\n2016-05-10 06:20:19 | ListChangeHist | 55,023 | TIMEOUT\n2016-05-10 07:47:34 | ListChangeHist | 55,017 | TIMEOUT\n2016-05-10 07:48:00 | ListChangeHist | 9,866 | OK\n2016-05-10 07:48:10 | ListChangeHist | 2,327 | OK\n2016-05-10 07:59:23 | ListChangeHist | 55,020 | TIMEOUT\n2016-05-10 08:11:20 | ListChangeHist | 55,030 | TIMEOUT\n2016-05-10 08:31:45 | ListChangeHist | 4,216 | OK\n2016-05-10 08:35:09 | ListChangeHist | 7,898 | OK\n2016-05-10 08:36:18 | ListChangeHist | 9,810 | OK\n2016-05-10 08:36:56 | ListChangeHist | 55,027 | TIMEOUT\n2016-05-10 08:37:33 | ListChangeHist | 46,433 | OK\n2016-05-10 08:38:09 | ListChangeHist | 55,019 | TIMEOUT\n2016-05-10 08:53:43 | ListChangeHist | 55,019 | TIMEOUT\n2016-05-10 09:45:09 | ListChangeHist | 55,022 | TIMEOUT\n2016-05-10 09:46:13 | ListChangeHist | 55,017 | TIMEOUT\n2016-05-10 09:49:27 | ListChangeHist | 55,011 | TIMEOUT\n2016-05-10 09:52:12 | ListChangeHist | 55,018 | TIMEOUT\n2016-05-10 09:57:42 | ListChangeHist | 9,462 | OK\n2016-05-10 10:05:21 | ListChangeHist | 55,016 | TIMEOUT\n2016-05-10 10:05:29 | ListChangeHist | 136 | OK\n2016-05-10 10:05:38 | ListChangeHist | 1,517 | OK\n\nArtifacts\n======================\n\n$ >uname -a\nSunOS ***** 5.10 Generic_150400-30 sun4v sparc sun4v\n\nMemory : 254G phys mem, 207G free mem.\nProcessors: 32 - CPU is mostly 80% free\n\n>>>\np306=> select version();\n version\n---------------------------------------------------------------------------------------------------\nPostgreSQL 9.1.14 on sparc-sun-solaris2.10, compiled by gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath), 64-bit\n\n>>>\np306=> \\dt+ changehistory\n List of relations\n Schema | Name | Type | Owner | Size | Description\n--------+---------------+-------+-------+-------+-------------\n public | changehistory | table | p306 | 17 GB |\n\n>>>\np306=> \\di+ changehistory*\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description\n--------+-----------------------+-------+-------+---------------+---------+-------------\n public | changehistory_idx1 | index | p306 | changehistory | 9597 MB |\n public | changehistory_idx3 | index | p306 | changehistory | 11 GB |\n public | changehistory_idx4 | index | p306 | changehistory | 4973 MB |\n public | changehistory_pkey | index | p306 | changehistory | 2791 MB |\n public | changehistory_search2 | index | p306 | changehistory | 9888 MB |\n public | changehistory_search3 | index | p306 | changehistory | 10 GB |\n public | changehistory_search4 | index | p306 | changehistory | 9240 MB |\n public | changehistory_search5 | index | p306 | changehistory | 8373 MB |\n(8 rows)\n\n\n>>>\np306=> select count(*) from changehistory ;\n count\n------------\n 129,185,024\n\n>>>\nShow all (filtered)\n======================================================\n\n name | setting\n---------------------------------+--------------------\n autovacuum | on\n autovacuum_analyze_scale_factor | 0.001\n autovacuum_analyze_threshold | 500\n autovacuum_freeze_max_age | 200000000\n autovacuum_max_workers | 5\n autovacuum_naptime | 1min\n autovacuum_vacuum_cost_delay | 0\n autovacuum_vacuum_cost_limit | -1\n autovacuum_vacuum_scale_factor | 0.001\n 
 autovacuum_vacuum_threshold | 500\n bgwriter_delay | 200ms\n block_size | 8192\n check_function_bodies | on\n checkpoint_completion_target | 0.9\n checkpoint_segments | 256\n checkpoint_timeout | 1h\n checkpoint_warning | 30s\n client_encoding | UTF8\n commit_delay | 0\n commit_siblings | 5\n cpu_index_tuple_cost | 0.005\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n cursor_tuple_fraction | 0.1\n deadlock_timeout | 1s\n default_statistics_target | 100\n default_transaction_deferrable | off\n default_transaction_isolation | read committed\n default_transaction_read_only | off\n default_with_oids | off\n effective_cache_size | 8GB\n from_collapse_limit | 8\n fsync | on\n full_page_writes | on\n ignore_system_indexes | off\n join_collapse_limit | 8\n krb_caseins_users | off\n lo_compat_privileges | off\n maintenance_work_mem | 1GB\n max_connections | 350\n max_files_per_process | 1000\n max_function_args | 100\n max_identifier_length | 63\n max_index_keys | 32\n max_locks_per_transaction | 64\n max_pred_locks_per_transaction | 64\n max_prepared_transactions | 0\n max_stack_depth | 2MB\n max_wal_senders | 5\n random_page_cost | 4\n segment_size | 1GB\n seq_page_cost | 1\n server_encoding | UTF8\n server_version | 9.1.14\n shared_buffers | 2GB\n sql_inheritance | on\n statement_timeout | 0\n synchronize_seqscans | on\n synchronous_commit | on\n synchronous_standby_names |\n tcp_keepalives_count | 0\n tcp_keepalives_idle | -1\n tcp_keepalives_interval | 0\n track_activities | on\n track_activity_query_size | 1024\n track_counts | on\n track_functions | none\n transaction_deferrable | off\n transaction_isolation | read committed\n transaction_read_only | off\n transform_null_equals | off\n update_process_title | on\n vacuum_cost_delay | 0\n vacuum_cost_limit | 200\n vacuum_cost_page_dirty | 20\n vacuum_cost_page_hit | 1\n vacuum_cost_page_miss | 10\n vacuum_defer_cleanup_age | 0\n vacuum_freeze_min_age | 50000000\n vacuum_freeze_table_age | 150000000\n\nJohn Gorman | Manager of Production Support, Architecture, Release Engineering | Eldorado | a Division of MPHASIS | www.eldoinc.com/ |\n5353 North 16th Street, Suite 400, Phoenix, Arizona 85016-3228 | Tel 602.604.3100 | Fax: 602.604.3115\n", "msg_date": "Fri, 13 May 2016 19:59:51 +0000", "msg_from": "John Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Database transaction with intermittent slow responses" }, { "msg_contents": "After quick reading, im thinking about a couples of chances:\n\n1) You are hitting a connection_limit\n2) You are hitting a lock contention (perhaps some other backend is locking the table and not releasing it)\n\nWho throws the timeout? It is Postgres or your JDBC connector?\n\nMy initial blind guess is that your \"timed out queries\" never gets postgres at all, and are blocked prior to that for some other issue. If im wrong, well, you should at least have the timeout recorded in your logs.\n\nYou should also track #of_connectinos and #of_locks over that tables.\n\nSee http://www.postgresql.org/docs/9.1/static/view-pg-locks.html for pg_lock information\n\nThat should be my starting point for viewing whats going on.\n\nHTH\nGerardo\n\n----- Mensaje original -----\n> De: \"John Gorman\" <[email protected]>\n> Para: [email protected]\n> CC: \"John Gorman\" <[email protected]>\n> Enviados: Viernes, 13 de Mayo 2016 16:59:51\n> Asunto: [PERFORM] Database transaction with intermittent slow responses\n> \n> \n> Transactions to table, ChangeHistory, have recently become\n> intermittently slow and is increasing becoming slower.\n> \n> * No database configuration changes have been made recently\n> * We have run vacuum analyze\n> * We have tried backing up and reloading the table (data, indexes,\n> etc)\n> \n> Some transactions respond quickly (200 ms) and others take over 55\n> seconds (we cancel the query after 55 seconds – our timeout SLA).\n> The problem has recently become very bad. It is the same query being\n> issued but with different parameters.\n> \n> If the transaction takes over 55 seconds and I run the query manually\n> (with or without EXPLAIN ANALYZE) it returns quickly (a few hundred\n> ms). 
In case I am looking at cache, I have a list of other queries\n> (just different parameters) that have timed out and when I run them\n> (without the limit even) the response is very timely.\n> \n> Any help or insight would be great.\n> \n> NOTE: our application is connecting to the database via JDBC and we\n> are using PreparedStatements. I have provided full details so all\n> information is available, but please let me know if any other\n> information is needed – thx in advance.\n> \n> p306=> EXPLAIN ANALYZE SELECT * FROM ChangeHistory WHERE Category\n> BETWEEN 'Employee' AND 'Employeezz' AND PrimaryKeyOfChange BETWEEN\n> '312313' AND '312313!zz' ORDER BY ChgTS DESC, ChgUser DESC, Category\n> DESC, PrimaryKeyOfChange DESC LIMIT 11;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------\n> Limit (cost=33.66..33.67 rows=1 width=136) (actual time=0.297..0.297\n> rows=11 loops=1)\n> -> Sort (cost=33.66..33.67 rows=1 width=136) (actual\n> time=0.297..0.297 rows=11 loops=1)\n> Sort Key: chgts, chguser, category, primarykeyofchange\n> Sort Method: top-N heapsort Memory: 27kB\n> -> Index Scan using changehistory_idx4 on changehistory\n> (cost=0.00..33.65 rows=1 width=136) (actual time=0.046..\n> 0.239 rows=85 loops=1)\n> Index Cond: (((primarykeyofchange)::text >= '312313'::text) AND\n> ((primarykeyofchange)::text <= '312313!zz'::\n> text))\n> Filter: (((category)::text >= 'Employee'::text) AND ((category)::text\n> <= 'Employeezz'::text))\n> Total runtime: 0.328 ms\n> (8 rows)\n> \n> >>> \n> History this week of counts with good response times vs timeouts.\n> \n> | Date | Success # | Time Out # | Avg. Success Secs |\n> |------------+-----------+------------+-------------------|\n> | 2016-05-09 | 18 | 31 | 7.9 |\n> | 2016-05-10 | 17 | 25 | 10.5 |\n> | 2016-05-11 | 27 | 33 | 10.1 |\n> | 2016-05-12 | 68 | 24 | 9.9 |\n> \n> \n> >>> Sample transaction response times\n> \n> | Timestamp | Tran ID | Resp MS | Resp CD\n> --------------------+----------------+---------+--------\n> 2016-05-10 06:20:19 | ListChangeHist | 55,023 | TIMEOUT\n> 2016-05-10 07:47:34 | ListChangeHist | 55,017 | TIMEOUT\n> 2016-05-10 07:48:00 | ListChangeHist | 9,866 | OK\n> 2016-05-10 07:48:10 | ListChangeHist | 2,327 | OK\n> 2016-05-10 07:59:23 | ListChangeHist | 55,020 | TIMEOUT\n> 2016-05-10 08:11:20 | ListChangeHist | 55,030 | TIMEOUT\n> 2016-05-10 08:31:45 | ListChangeHist | 4,216 | OK\n> 2016-05-10 08:35:09 | ListChangeHist | 7,898 | OK\n> 2016-05-10 08:36:18 | ListChangeHist | 9,810 | OK\n> 2016-05-10 08:36:56 | ListChangeHist | 55,027 | TIMEOUT\n> 2016-05-10 08:37:33 | ListChangeHist | 46,433 | OK\n> 2016-05-10 08:38:09 | ListChangeHist | 55,019 | TIMEOUT\n> 2016-05-10 08:53:43 | ListChangeHist | 55,019 | TIMEOUT\n> 2016-05-10 09:45:09 | ListChangeHist | 55,022 | TIMEOUT\n> 2016-05-10 09:46:13 | ListChangeHist | 55,017 | TIMEOUT\n> 2016-05-10 09:49:27 | ListChangeHist | 55,011 | TIMEOUT\n> 2016-05-10 09:52:12 | ListChangeHist | 55,018 | TIMEOUT\n> 2016-05-10 09:57:42 | ListChangeHist | 9,462 | OK\n> 2016-05-10 10:05:21 | ListChangeHist | 55,016 | TIMEOUT\n> 2016-05-10 10:05:29 | ListChangeHist | 136 | OK\n> 2016-05-10 10:05:38 | ListChangeHist | 1,517 | OK\n> \n> Artifacts\n> ======================\n> \n> $ >uname -a\n> SunOS ***** 5.10 Generic_150400-30 sun4v sparc sun4v\n> \n> Memory : 254G phys mem, 207G free mem.\n> Processors: 32 - CPU is mostly 80% free\n> \n> >>> \n> p306=> select version();\n> version\n> 
---------------------------------------------------------------------------------------------------\n> PostgreSQL 9.1.14 on sparc-sun-solaris2.10, compiled by gcc (GCC)\n> 3.4.3 (csl-sol210-3_4-branch+sol_rpath), 64-bit\n> \n> >>> \n> p306=> \\dt+ changehistory\n> List of relations\n> Schema | Name | Type | Owner | Size | Description\n> --------+---------------+-------+-------+-------+-------------\n> public | changehistory | table | p306 | 17 GB |\n> \n> >>> \n> p306=> \\di+ changehistory*\n> List of relations\n> Schema | Name | Type | Owner | Table | Size | Description\n> --------+-----------------------+-------+-------+---------------+---------+-------------\n> public | changehistory_idx1 | index | p306 | changehistory | 9597 MB\n> |\n> public | changehistory_idx3 | index | p306 | changehistory | 11 GB |\n> public | changehistory_idx4 | index | p306 | changehistory | 4973 MB\n> |\n> public | changehistory_pkey | index | p306 | changehistory | 2791 MB\n> |\n> public | changehistory_search2 | index | p306 | changehistory | 9888\n> MB |\n> public | changehistory_search3 | index | p306 | changehistory | 10 GB\n> |\n> public | changehistory_search4 | index | p306 | changehistory | 9240\n> MB |\n> public | changehistory_search5 | index | p306 | changehistory | 8373\n> MB |\n> (8 rows)\n> \n> \n> >>> \n> p306=> select count(*) from changehistory ;\n> count\n> ------------\n> 129,185,024\n> \n> >>> \n> Show all (filtered)\n> ======================================================\n> \n> name | setting\n> ---------------------------------+--------------------\n> autovacuum | on\n> autovacuum_analyze_scale_factor | 0.001\n> autovacuum_analyze_threshold | 500\n> autovacuum_freeze_max_age | 200000000\n> autovacuum_max_workers | 5\n> autovacuum_naptime | 1min\n> autovacuum_vacuum_cost_delay | 0\n> autovacuum_vacuum_cost_limit | -1\n> autovacuum_vacuum_scale_factor | 0.001\n> autovacuum_vacuum_threshold | 500\n> bgwriter_delay | 200ms\n> block_size | 8192\n> check_function_bodies | on\n> checkpoint_completion_target | 0.9\n> checkpoint_segments | 256\n> checkpoint_timeout | 1h\n> checkpoint_warning | 30s\n> client_encoding | UTF8\n> commit_delay | 0\n> commit_siblings | 5\n> cpu_index_tuple_cost | 0.005\n> cpu_operator_cost | 0.0025\n> cpu_tuple_cost | 0.01\n> cursor_tuple_fraction | 0.1\n> deadlock_timeout | 1s\n> default_statistics_target | 100\n> default_transaction_deferrable | off\n> default_transaction_isolation | read committed\n> default_transaction_read_only | off\n> default_with_oids | off\n> effective_cache_size | 8GB\n> from_collapse_limit | 8\n> fsync | on\n> full_page_writes | on\n> ignore_system_indexes | off\n> join_collapse_limit | 8\n> krb_caseins_users | off\n> lo_compat_privileges | off\n> maintenance_work_mem | 1GB\n> max_connections | 350\n> max_files_per_process | 1000\n> max_function_args | 100\n> max_identifier_length | 63\n> max_index_keys | 32\n> max_locks_per_transaction | 64\n> max_pred_locks_per_transaction | 64\n> max_prepared_transactions | 0\n> max_stack_depth | 2MB\n> max_wal_senders | 5\n> random_page_cost | 4\n> segment_size | 1GB\n> seq_page_cost | 1\n> server_encoding | UTF8\n> server_version | 9.1.14\n> shared_buffers | 2GB\n> sql_inheritance | on\n> statement_timeout | 0\n> synchronize_seqscans | on\n> synchronous_commit | on\n> synchronous_standby_names |\n> tcp_keepalives_count | 0\n> tcp_keepalives_idle | -1\n> tcp_keepalives_interval | 0\n> track_activities | on\n> track_activity_query_size | 1024\n> track_counts | on\n> track_functions | none\n> 
transaction_deferrable | off\n> transaction_isolation | read committed\n> transaction_read_only | off\n> transform_null_equals | off\n> update_process_title | on\n> vacuum_cost_delay | 0\n> vacuum_cost_limit | 200\n> vacuum_cost_page_dirty | 20\n> vacuum_cost_page_hit | 1\n> vacuum_cost_page_miss | 10\n> vacuum_defer_cleanup_age | 0\n> vacuum_freeze_min_age | 50000000\n> vacuum_freeze_table_age | 150000000\n> \n> John Gorman | Manager of Production Support, Architecture, Release\n> Engineering | Eldorado | a Division of M PHASI S | www.eldoinc.com/\n> |\n> 5353 North 16 th Street, Suite 400, Phoenix, Arizona 85016-3228 | Tel\n> 602.604.3100 | Fax: 602.604.3115\n> \n> \n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 May 2016 18:05:01 -0300 (ART)", "msg_from": "Gerardo Herzig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database transaction with intermittent slow responses" }, { "msg_contents": "Hi Gerado,\r\n\r\nThanks for the quick response. We do not appear to have a connection limit since our application is the only thing talking to the database, the connections are somewhat limited. We are using about 126 of a max allowed 350 connections. We keep these metrics in a different database, and we also generate alerts if we get close to the catalog/cluster limit.\r\n\r\nAlso I have been monitoring heavily and watching for locks while the transaction runs for a long time. While I see occasional locks, they are on other tables and are brief, so I do not believe there is a database lock issue/contention.\r\n\r\nThe application is timing the transaction out. When we detect that the timeout limit has occurred, we cancel the database connection (conn.cancel();) - we have been doing this for several years with no issue.\r\n\r\nI setup a adhoc monitor which runs every 2 seconds and displays \"select * from pg_stat_activity where datname = 'p306' and current_query not like '<IDLE%'; and then write the output to a log. I can see the transaction being executed in the database for over 50 seconds, so I do believe the database actually is working on it.\r\n\r\nWe have a few monitoring programs that track and record quite a few thinks including database locks (number and type), connections (number and where). 
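For reference, a minimal sketch of the kind of 2-second ad hoc pg_stat_activity poller described above - assuming psql access to the p306 database and the 9.1 column names (procpid, waiting, current_query); the script and log path are only illustrative:

    #!/bin/sh
    # Poll non-idle backends in p306 every 2 seconds and append to a log.
    # Assumes credentials come from the environment or ~/.pgpass.
    while true; do
        date
        psql -d p306 -c "SELECT procpid, waiting, xact_start, query_start, current_query
                         FROM pg_stat_activity
                         WHERE datname = 'p306'
                           AND current_query NOT LIKE '<IDLE%';"
        sleep 2
    done >> /tmp/p306_activity.log 2>&1

The waiting column is the useful extra signal here: if it stays false for the backend running the ChangeHistory statement during a 55-second episode, the time is being spent executing the query rather than blocked on a lock, which would be consistent with the lock monitoring described above.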
I have reviewed the history and do not see any trends.\r\n\r\nIf it helps here is a monitor snippet of the transaction taking over 50 seconds (SELECT * FROM ChangeHistory)\r\n\r\n\r\n>> Wed May 11 07:50:09 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n\r\n>> Wed May 11 07:50:11 MST 2016\r\n 3709009 | p306 | 15014 | 16387 | p306 | | 172.20.0.82 | coreb | 35859 | 2016-05-11 07:31:31.968087-07 | 2016-05-11 07:50:11.575881-07 | 2016-05-11 07:50:11.766942-07 | f | SELECT * FROM Employee WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName, FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber, SocialSecurityNumber, MedicareID, BirthDate, AlternateID1, AlternateID2 LIMIT 11\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:11.712848-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:11.712887-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:13 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:13.733643-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 16771 | 16387 | p306 | | 172.20.0.82 | coreb | 37470 | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:13.770366-07 | 2016-05-11 07:50:13.811502-07 | f | SELECT * FROM Dependent WHERE DependentID = $1\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:13.733968-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:15 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | 
p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:15.734777-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:15.73486-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:17 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n\r\n>> Wed May 11 07:50:19 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n\r\n>> Wed May 11 07:50:21 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 21656 | 16387 | p306 | | 172.20.0.82 | coreb | 41810 | 2016-05-11 07:48:40.95846-07 | 2016-05-11 07:50:21.871077-07 | 2016-05-11 07:50:21.871579-07 | f | DELETE FROM ClaimPrevQueue WHERE Claimnumber = $1\r\n 3709009 | p306 | 8042 | 16387 | p306 | | 172.20.0.82 | coreb | 63023 | 2016-05-11 07:14:34.208098-07 | 2016-05-11 07:50:21.813662-07 | 2016-05-11 07:50:21.814575-07 | f | SELECT * FROM Employee WHERE CertificateNumber BETWEEN $1 AND $2 ORDER BY LastName, FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber, SocialSecurityNumber, MedicareID, BirthDate, AlternateID1, AlternateID2 LIMIT 11\r\n\r\n>> Wed May 11 07:50:23 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:23.85706-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 7925 | 16387 | p306 | | 172.20.0.82 | coreb | 62888 | 2016-05-11 07:14:05.586327-07 | 2016-05-11 07:50:23.517469-07 | 2016-05-11 07:50:23.684134-07 | f | DELETE FROM ToothChartMaintenance WHERE Claimnumber = $1\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:23.857092-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 
22235 | 10 | postgres | | | | | 2016-05-11 07:50:22.129887-07 | 2016-05-11 07:50:22.162326-07 | 2016-05-11 07:50:22.162326-07 | f | autovacuum: VACUUM public.adjrespendrsncode\r\n\r\n>> Wed May 11 07:50:25 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 7925 | 16387 | p306 | | 172.20.0.82 | coreb | 62888 | 2016-05-11 07:14:05.586327-07 | 2016-05-11 07:50:23.517469-07 | 2016-05-11 07:50:25.931788-07 | f | SELECT * FROM CategoryPlaceService WHERE CategoryID = $1 ORDER BY CategoryID, RangeFrom, RangeTo LIMIT 1000\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:25.920308-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:27 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:27.935677-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:27.938329-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:29 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17652 | 16387 | p306 | | 172.20.0.82 | coreb | 38402 | 2016-05-11 07:39:00.298771-07 | 2016-05-11 07:50:29.615446-07 | 2016-05-11 07:50:29.954405-07 | f | SELECT * FROM Dependent WHERE DepSocialSecurityNumber BETWEEN $1 AND $2 ORDER BY DepCertificateNumber, DepSocialSecurityNumber, DepLastName, DepFirstName, DepMiddleName, DepMedicareID, BirthDate, AlternateID1, AlternateID2 LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:29.966428-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:29.966481-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:31 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | 
coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:32.000148-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:34 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:33.953492-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:33.953803-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:36 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:35.996862-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:35.996892-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:38 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:38.039441-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:38.036922-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:40 MST 2016\r\n 3709009 | p306 | 17321 | 16387 | p306 | | 172.20.0.82 | coreb | 
38226 | 2016-05-11 07:38:10.838611-07 | 2016-05-11 07:50:40.059438-07 | 2016-05-11 07:50:40.060951-07 | f | DELETE FROM ClaimPrevQueue WHERE Claimnumber = $1\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 2860 | 16387 | p306 | | 172.20.0.82 | coreb | 56427 | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:40.060863-07 | 2016-05-11 07:50:40.062051-07 | f | SELECT * FROM Employee WHERE CertificateNumber BETWEEN $1 AND $2 ORDER BY LastName, FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber, SocialSecurityNumber, MedicareID, BirthDate, AlternateID1, AlternateID2 LIMIT 11\r\n 3709009 | p306 | 17652 | 16387 | p306 | | 172.20.0.82 | coreb | 38402 | 2016-05-11 07:39:00.298771-07 | 2016-05-11 07:50:40.059956-07 | 2016-05-11 07:50:40.083659-07 | f | DELETE FROM ToothChartMaintenance WHERE Claimnumber = $1\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:40.077061-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 16771 | 16387 | p306 | | 172.20.0.82 | coreb | 37470 | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:40.060076-07 | 2016-05-11 07:50:40.072735-07 | f | INSERT INTO CrmCallLinks VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27)\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:40.080967-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 18895 | 16387 | p306 | | 172.20.0.82 | coreb | 39389 | 2016-05-11 07:42:04.682022-07 | 2016-05-11 07:50:40.062356-07 | 2016-05-11 07:50:40.062667-07 | f | SELECT * FROM RealtimeTransInfo WHERE SenderID = $1 AND PayLoadID = $2 ORDER BY SenderID DESC, PayLoadID DESC LIMIT 2\r\n 3709009 | p306 | 8864 | 16387 | p306 | | 172.20.0.82 | coreb | 64407 | 2016-05-11 07:18:34.848657-07 | 2016-05-11 07:50:39.601078-07 | 2016-05-11 07:50:40.077433-07 | f | SELECT * FROM Facility WHERE FacilityID = $1\r\n\r\n>> Wed May 11 07:50:42 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 2860 | 16387 | p306 | | 172.20.0.82 | coreb | 56427 | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:42.094681-07 | 2016-05-11 07:50:42.095179-07 | f | DELETE FROM ClaimPrevQueue WHERE Claimnumber = $1\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:42.023507-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 
07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:42.023043-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:44 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:44.054554-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:44.054674-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:46 MST 2016\r\n 3709009 | p306 | 17321 | 16387 | p306 | | 172.20.0.82 | coreb | 38226 | 2016-05-11 07:38:10.838611-07 | 2016-05-11 07:50:45.908151-07 | 2016-05-11 07:50:46.08959-07 | f | SELECT * FROM Employee WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName, FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber, SocialSecurityNumber, MedicareID, BirthDate, AlternateID1, AlternateID2 LIMIT 11\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:46.077355-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 16771 | 16387 | p306 | | 172.20.0.82 | coreb | 37470 | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:46.139211-07 | 2016-05-11 07:50:46.141386-07 | f | SELECT * FROM AdjudicationResult WHERE Claimnumber BETWEEN $1 AND $2 ORDER BY Claimnumber DESC LIMIT 11\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:46.067222-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:48 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:48.082695-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 
07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:48.082883-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:50 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 2860 | 16387 | p306 | | 172.20.0.82 | coreb | 56427 | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:50.056892-07 | 2016-05-11 07:50:50.174587-07 | f | SELECT * FROM Employee WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName, FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber, SocialSecurityNumber, MedicareID, BirthDate, AlternateID1, AlternateID2 LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:50.107102-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 5620 | 16387 | p306 | TaskRunner | 172.20.0.86 | batchb.eldocomp.com | 37594 | 2016-05-11 07:04:08.129626-07 | 2016-05-11 07:50:49.812093-07 | 2016-05-11 07:50:49.81238-07 | f | SELECT * FROM EDIFtpFileDetails WHERE MsgLogStatus = $1 AND ProcessId > $2 ORDER BY MsgLogStatus, ProcessId LIMIT 10\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:50.107172-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:52 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:52.127455-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:52.127776-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:54 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:54.165381-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | 
batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:54.165596-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:56 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:56.189515-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:56.176308-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:50:58 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 21656 | 16387 | p306 | | 172.20.0.82 | coreb | 41810 | 2016-05-11 07:48:40.95846-07 | 2016-05-11 07:50:56.785032-07 | 2016-05-11 07:50:56.789767-07 | f | SELECT * FROM FacilityPhysicianAffl WHERE Status IN ('A', 'H', 'D') AND (FacilityID IN (SELECT FacilityID FROM Facility WHERE UPPER(TaxID)= '811190101' AND Status IN ( 'A' , 'H' , 'D') ) AND ( PhysicianID IN (SELECT PhysicianID FROM Physician WHERE Status IN ( 'A' , 'H' , 'D') )) AND (( ISDUMMY='0' AND FacilityID IN ( SELECT FacilityID FROM Facility WHERE FacilityRecordType = 'S' AND ( FacilityName IS NOT NULL AND FacilityName != '' ) ) OR ( FacilityID IN ( SELECT FacilityID FROM Facility WHERE FacilityRecordType = 'F' AND ( FacilityName IS NOT NULL AND FacilityName != '' ) ))))) ORDER BY FacilityID ASC , PhysicianID ASC\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:50:58.212226-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:50:58.269389-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:51:00 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:00.228846-07 | f | SELECT * FROM Employee WHERE 
EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:00.229019-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:51:02 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 4715 | 16387 | p306 | TaskRunner | 172.20.0.86 | batchb.eldocomp.com | 33881 | 2016-05-11 07:01:41.247388-07 | 2016-05-11 07:51:02.125906-07 | 2016-05-11 07:51:02.311956-07 | f | SELECT * FROM EmpEligibilityCoverage WHERE EmployeeID = $1 AND EffectiveDate <= $2 ORDER BY EmployeeID DESC, EffectiveDate DESC LIMIT 101\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:02.23586-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 16771 | 16387 | p306 | | 172.20.0.82 | coreb | 37470 | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:51:02.188886-07 | 2016-05-11 07:51:02.295888-07 | f | DELETE FROM ToothChartMaintenance WHERE Claimnumber = $1\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:02.235869-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:51:04 MST 2016\r\n 3709009 | p306 | 5644 | 16387 | p306 | | 172.20.0.82 | coreb | 59871 | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:04.277287-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:04.277543-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:51:06 MST 2016\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:06.313649-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:06.313855-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\n>> Wed May 11 07:51:08 MST 2016\r\n 3709009 | p306 | 22530 | 16387 | p306 | | 172.20.0.82 | coreb | 42494 | 2016-05-11 
07:51:04.419169-07 | 2016-05-11 07:51:08.351721-07 | 2016-05-11 07:51:08.373929-07 | f | SELECT * FROM ChangeHistory WHERE Category = $1 AND PrimaryKeyOfChange = $2 ORDER BY Category, PrimaryKeyOfChange, ChgTS, ExcludedKeyFields LIMIT 2001\r\n 3709009 | p306 | 17312 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 54263 | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:08.335854-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n 3709009 | p306 | 8671 | 16387 | p306 | | 172.20.0.86 | batchb.eldocomp.com | 55055 | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:08.359281-07 | f | SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n\r\nRegards\r\nJohn\r\n\r\n-----Original Message-----\r\nFrom: Gerardo Herzig [mailto:[email protected]] \r\nSent: Friday, May 13, 2016 2:05 PM\r\nTo: John Gorman\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Database transaction with intermittent slow responses\r\n\r\nAfter quick reading, im thinking about a couples of chances:\r\n\r\n1) You are hitting a connection_limit\r\n2) You are hitting a lock contention (perhaps some other backend is locking the table and not releasing it)\r\n\r\nWho throws the timeout? It is Postgres or your JDBC connector?\r\n\r\nMy initial blind guess is that your \"timed out queries\" never gets postgres at all, and are blocked prior to that for some other issue. If im wrong, well, you should at least have the timeout recorded in your logs.\r\n\r\nYou should also track #of_connectinos and #of_locks over that tables.\r\n\r\nSee http://www.postgresql.org/docs/9.1/static/view-pg-locks.html for pg_lock information\r\n\r\nThat should be my starting point for viewing whats going on.\r\n\r\nHTH\r\nGerardo\r\n\r\n----- Mensaje original -----\r\n> De: \"John Gorman\" <[email protected]>\r\n> Para: [email protected]\r\n> CC: \"John Gorman\" <[email protected]>\r\n> Enviados: Viernes, 13 de Mayo 2016 16:59:51\r\n> Asunto: [PERFORM] Database transaction with intermittent slow responses\r\n> \r\n> \r\n> Transactions to table, ChangeHistory, have recently become\r\n> intermittently slow and is increasing becoming slower.\r\n> \r\n> * No database configuration changes have been made recently\r\n> * We have run vacuum analyze\r\n> * We have tried backing up and reloading the table (data, indexes,\r\n> etc)\r\n> \r\n> Some transactions respond quickly (200 ms) and others take over 55\r\n> seconds (we cancel the query after 55 seconds – our timeout SLA).\r\n> The problem has recently become very bad. It is the same query being\r\n> issued but with different parameters.\r\n> \r\n> If the transaction takes over 55 seconds and I run the query manually\r\n> (with or without EXPLAIN ANALYZE) it returns quickly (a few hundred\r\n> ms). In case I am looking at cache, I have a list of other queries\r\n> (just different parameters) that have timed out and when I run them\r\n> (without the limit even) the response is very timely.\r\n> \r\n> Any help or insight would be great.\r\n> \r\n> NOTE: our application is connecting to the database via JDBC and we\r\n> are using PreparedStatements. 
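One thing that may matter for the JDBC/PreparedStatements note above: the manual EXPLAIN ANALYZE below is planned with literal values, whereas on 9.1 a named server-side prepared statement is planned once, generically, without seeing the parameter values (and pgjdbc typically switches a PreparedStatement to a named server-side statement after it has been executed more than prepareThreshold times, 5 by default). A rough way to see the plan the bound form gets, from psql - a sketch, with the statement name and parameter types chosen only for illustration:

    PREPARE chg_hist (text, text, text, text) AS
        SELECT * FROM ChangeHistory
         WHERE Category BETWEEN $1 AND $2
           AND PrimaryKeyOfChange BETWEEN $3 AND $4
         ORDER BY ChgTS DESC, ChgUser DESC, Category DESC, PrimaryKeyOfChange DESC
         LIMIT 11;

    EXPLAIN ANALYZE EXECUTE chg_hist ('Employee', 'Employeezz', '312313', '312313!zz');

If that plan is not the fast index scan shown in the EXPLAIN ANALYZE below, a generic plan rather than caching could explain why the same statement is quick when run by hand and intermittently slow from the application.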
I have provided full details so all\r\n> information is available, but please let me know if any other\r\n> information is needed – thx in advance.\r\n> \r\n> p306=> EXPLAIN ANALYZE SELECT * FROM ChangeHistory WHERE Category\r\n> BETWEEN 'Employee' AND 'Employeezz' AND PrimaryKeyOfChange BETWEEN\r\n> '312313' AND '312313!zz' ORDER BY ChgTS DESC, ChgUser DESC, Category\r\n> DESC, PrimaryKeyOfChange DESC LIMIT 11;\r\n> QUERY PLAN\r\n> ------------------------------------------------------------------------------------------------------\r\n> Limit (cost=33.66..33.67 rows=1 width=136) (actual time=0.297..0.297\r\n> rows=11 loops=1)\r\n> -> Sort (cost=33.66..33.67 rows=1 width=136) (actual\r\n> time=0.297..0.297 rows=11 loops=1)\r\n> Sort Key: chgts, chguser, category, primarykeyofchange\r\n> Sort Method: top-N heapsort Memory: 27kB\r\n> -> Index Scan using changehistory_idx4 on changehistory\r\n> (cost=0.00..33.65 rows=1 width=136) (actual time=0.046..\r\n> 0.239 rows=85 loops=1)\r\n> Index Cond: (((primarykeyofchange)::text >= '312313'::text) AND\r\n> ((primarykeyofchange)::text <= '312313!zz'::\r\n> text))\r\n> Filter: (((category)::text >= 'Employee'::text) AND ((category)::text\r\n> <= 'Employeezz'::text))\r\n> Total runtime: 0.328 ms\r\n> (8 rows)\r\n> \r\n> >>> \r\n> History this week of counts with good response times vs timeouts.\r\n> \r\n> | Date | Success # | Time Out # | Avg. Success Secs |\r\n> |------------+-----------+------------+-------------------|\r\n> | 2016-05-09 | 18 | 31 | 7.9 |\r\n> | 2016-05-10 | 17 | 25 | 10.5 |\r\n> | 2016-05-11 | 27 | 33 | 10.1 |\r\n> | 2016-05-12 | 68 | 24 | 9.9 |\r\n> \r\n> \r\n> >>> Sample transaction response times\r\n> \r\n> | Timestamp | Tran ID | Resp MS | Resp CD\r\n> --------------------+----------------+---------+--------\r\n> 2016-05-10 06:20:19 | ListChangeHist | 55,023 | TIMEOUT\r\n> 2016-05-10 07:47:34 | ListChangeHist | 55,017 | TIMEOUT\r\n> 2016-05-10 07:48:00 | ListChangeHist | 9,866 | OK\r\n> 2016-05-10 07:48:10 | ListChangeHist | 2,327 | OK\r\n> 2016-05-10 07:59:23 | ListChangeHist | 55,020 | TIMEOUT\r\n> 2016-05-10 08:11:20 | ListChangeHist | 55,030 | TIMEOUT\r\n> 2016-05-10 08:31:45 | ListChangeHist | 4,216 | OK\r\n> 2016-05-10 08:35:09 | ListChangeHist | 7,898 | OK\r\n> 2016-05-10 08:36:18 | ListChangeHist | 9,810 | OK\r\n> 2016-05-10 08:36:56 | ListChangeHist | 55,027 | TIMEOUT\r\n> 2016-05-10 08:37:33 | ListChangeHist | 46,433 | OK\r\n> 2016-05-10 08:38:09 | ListChangeHist | 55,019 | TIMEOUT\r\n> 2016-05-10 08:53:43 | ListChangeHist | 55,019 | TIMEOUT\r\n> 2016-05-10 09:45:09 | ListChangeHist | 55,022 | TIMEOUT\r\n> 2016-05-10 09:46:13 | ListChangeHist | 55,017 | TIMEOUT\r\n> 2016-05-10 09:49:27 | ListChangeHist | 55,011 | TIMEOUT\r\n> 2016-05-10 09:52:12 | ListChangeHist | 55,018 | TIMEOUT\r\n> 2016-05-10 09:57:42 | ListChangeHist | 9,462 | OK\r\n> 2016-05-10 10:05:21 | ListChangeHist | 55,016 | TIMEOUT\r\n> 2016-05-10 10:05:29 | ListChangeHist | 136 | OK\r\n> 2016-05-10 10:05:38 | ListChangeHist | 1,517 | OK\r\n> \r\n> Artifacts\r\n> ======================\r\n> \r\n> $ >uname -a\r\n> SunOS ***** 5.10 Generic_150400-30 sun4v sparc sun4v\r\n> \r\n> Memory : 254G phys mem, 207G free mem.\r\n> Processors: 32 - CPU is mostly 80% free\r\n> \r\n> >>> \r\n> p306=> select version();\r\n> version\r\n> ---------------------------------------------------------------------------------------------------\r\n> PostgreSQL 9.1.14 on sparc-sun-solaris2.10, compiled by gcc (GCC)\r\n> 3.4.3 (csl-sol210-3_4-branch+sol_rpath), 64-bit\r\n> \r\n> 
>>> \r\n> p306=> \\dt+ changehistory\r\n> List of relations\r\n> Schema | Name | Type | Owner | Size | Description\r\n> --------+---------------+-------+-------+-------+-------------\r\n> public | changehistory | table | p306 | 17 GB |\r\n> \r\n> >>> \r\n> p306=> \\di+ changehistory*\r\n> List of relations\r\n> Schema | Name | Type | Owner | Table | Size | Description\r\n> --------+-----------------------+-------+-------+---------------+---------+-------------\r\n> public | changehistory_idx1 | index | p306 | changehistory | 9597 MB\r\n> |\r\n> public | changehistory_idx3 | index | p306 | changehistory | 11 GB |\r\n> public | changehistory_idx4 | index | p306 | changehistory | 4973 MB\r\n> |\r\n> public | changehistory_pkey | index | p306 | changehistory | 2791 MB\r\n> |\r\n> public | changehistory_search2 | index | p306 | changehistory | 9888\r\n> MB |\r\n> public | changehistory_search3 | index | p306 | changehistory | 10 GB\r\n> |\r\n> public | changehistory_search4 | index | p306 | changehistory | 9240\r\n> MB |\r\n> public | changehistory_search5 | index | p306 | changehistory | 8373\r\n> MB |\r\n> (8 rows)\r\n> \r\n> \r\n> >>> \r\n> p306=> select count(*) from changehistory ;\r\n> count\r\n> ------------\r\n> 129,185,024\r\n> \r\n> >>> \r\n> Show all (filtered)\r\n> ======================================================\r\n> \r\n> name | setting\r\n> ---------------------------------+--------------------\r\n> autovacuum | on\r\n> autovacuum_analyze_scale_factor | 0.001\r\n> autovacuum_analyze_threshold | 500\r\n> autovacuum_freeze_max_age | 200000000\r\n> autovacuum_max_workers | 5\r\n> autovacuum_naptime | 1min\r\n> autovacuum_vacuum_cost_delay | 0\r\n> autovacuum_vacuum_cost_limit | -1\r\n> autovacuum_vacuum_scale_factor | 0.001\r\n> autovacuum_vacuum_threshold | 500\r\n> bgwriter_delay | 200ms\r\n> block_size | 8192\r\n> check_function_bodies | on\r\n> checkpoint_completion_target | 0.9\r\n> checkpoint_segments | 256\r\n> checkpoint_timeout | 1h\r\n> checkpoint_warning | 30s\r\n> client_encoding | UTF8\r\n> commit_delay | 0\r\n> commit_siblings | 5\r\n> cpu_index_tuple_cost | 0.005\r\n> cpu_operator_cost | 0.0025\r\n> cpu_tuple_cost | 0.01\r\n> cursor_tuple_fraction | 0.1\r\n> deadlock_timeout | 1s\r\n> default_statistics_target | 100\r\n> default_transaction_deferrable | off\r\n> default_transaction_isolation | read committed\r\n> default_transaction_read_only | off\r\n> default_with_oids | off\r\n> effective_cache_size | 8GB\r\n> from_collapse_limit | 8\r\n> fsync | on\r\n> full_page_writes | on\r\n> ignore_system_indexes | off\r\n> join_collapse_limit | 8\r\n> krb_caseins_users | off\r\n> lo_compat_privileges | off\r\n> maintenance_work_mem | 1GB\r\n> max_connections | 350\r\n> max_files_per_process | 1000\r\n> max_function_args | 100\r\n> max_identifier_length | 63\r\n> max_index_keys | 32\r\n> max_locks_per_transaction | 64\r\n> max_pred_locks_per_transaction | 64\r\n> max_prepared_transactions | 0\r\n> max_stack_depth | 2MB\r\n> max_wal_senders | 5\r\n> random_page_cost | 4\r\n> segment_size | 1GB\r\n> seq_page_cost | 1\r\n> server_encoding | UTF8\r\n> server_version | 9.1.14\r\n> shared_buffers | 2GB\r\n> sql_inheritance | on\r\n> statement_timeout | 0\r\n> synchronize_seqscans | on\r\n> synchronous_commit | on\r\n> synchronous_standby_names |\r\n> tcp_keepalives_count | 0\r\n> tcp_keepalives_idle | -1\r\n> tcp_keepalives_interval | 0\r\n> track_activities | on\r\n> track_activity_query_size | 1024\r\n> track_counts | on\r\n> track_functions | none\r\n> 
transaction_deferrable | off\r\n> transaction_isolation | read committed\r\n> transaction_read_only | off\r\n> transform_null_equals | off\r\n> update_process_title | on\r\n> vacuum_cost_delay | 0\r\n> vacuum_cost_limit | 200\r\n> vacuum_cost_page_dirty | 20\r\n> vacuum_cost_page_hit | 1\r\n> vacuum_cost_page_miss | 10\r\n> vacuum_defer_cleanup_age | 0\r\n> vacuum_freeze_min_age | 50000000\r\n> vacuum_freeze_table_age | 150000000\r\n> \r\n \r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 May 2016 21:25:37 +0000", "msg_from": "John Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database transaction with intermittent slow responses" }, { "msg_contents": "Oh, so *all* the transactions are being slowed down at that point...What about CPU IO Wait% at that moment? Could be some other processes stressing the system out?\n\nNow im thinking about hard disk issues...maybe some \"smart\" messages?\n\nHave some other hardware to give it a try?\n\nGerardo\n\n----- Mensaje original -----\n> De: \"John Gorman\" <[email protected]>\n> Para: \"Gerardo Herzig\" <[email protected]>\n> CC: [email protected], \"John Gorman\" <[email protected]>\n> Enviados: Viernes, 13 de Mayo 2016 18:25:37\n> Asunto: RE: [PERFORM] Database transaction with intermittent slow responses\n> \n> Hi Gerado,\n> \n> Thanks for the quick response. We do not appear to have a connection\n> limit since our application is the only thing talking to the\n> database, the connections are somewhat limited. We are using about\n> 126 of a max allowed 350 connections. We keep these metrics in a\n> different database, and we also generate alerts if we get close to\n> the catalog/cluster limit.\n> \n> Also I have been monitoring heavily and watching for locks while the\n> transaction runs for a long time. While I see occasional locks, they\n> are on other tables and are brief, so I do not believe there is a\n> database lock issue/contention.\n> \n> The application is timing the transaction out. When we detect that\n> the timeout limit has occurred, we cancel the database connection\n> (conn.cancel();) - we have been doing this for several years with no\n> issue.\n> \n> I setup a adhoc monitor which runs every 2 seconds and displays\n> \"select * from pg_stat_activity where datname = 'p306' and\n> current_query not like '<IDLE%'; and then write the output to a log.\n> I can see the transaction being executed in the database for over 50\n> seconds, so I do believe the database actually is working on it.\n> \n> We have a few monitoring programs that track and record quite a few\n> thinks including database locks (number and type), connections\n> (number and where). 
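On the CPU / IO-wait question raised above: since this is Solaris 10, a couple of stock commands could be left running (or logged by the same kind of 2-second monitor) while one of the 55-second episodes is in progress - a sketch, with intervals and output interpretation kept generic:

    # Per-device latency: watch asvc_t (avg service time, ms) and %b (busy)
    iostat -xn 2

    # Per-device soft/hard/transport error counters (relates to the "smart" messages point)
    iostat -En

    # Per-thread microstate accounting for the busiest processes
    prstat -mL 2

If asvc_t and %b stay low on the database disks while a ChangeHistory query is stuck, the storage is probably not the bottleneck for these episodes.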
I have reviewed the history and do not see any\n> trends.\n> \n> If it helps here is a monitor snippet of the transaction taking over\n> 50 seconds (SELECT * FROM ChangeHistory)\n> \n> \n> >> Wed May 11 07:50:09 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871 |\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> \n> >> Wed May 11 07:50:11 MST 2016\n> 3709009 | p306 | 15014 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 35859\n> | 2016-05-11 07:31:31.968087-07 | 2016-05-11 07:50:11.575881-07 |\n> 2016-05-11 07:50:11.766942-07 | f | SELECT * FROM Employee\n> WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName,\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\n> AlternateID2 LIMIT 11\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:11.712848-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:11.712887-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:13 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:13.733643-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 16771 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 37470\n> | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:13.770366-07 |\n> 2016-05-11 07:50:13.811502-07 | f | SELECT * FROM Dependent\n> WHERE DependentID = $1\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:13.733968-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:15 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> 
PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:15.734777-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:15.73486-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:17 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871 |\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> \n> >> Wed May 11 07:50:19 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871 |\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> \n> >> Wed May 11 07:50:21 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871 |\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 21656 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 41810 |\n> 2016-05-11 07:48:40.95846-07 | 2016-05-11 07:50:21.871077-07 |\n> 2016-05-11 07:50:21.871579-07 | f | DELETE FROM\n> ClaimPrevQueue WHERE Claimnumber = $1\n> 3709009 | p306 | 8042 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 63023 |\n> 2016-05-11 07:14:34.208098-07 | 2016-05-11 07:50:21.813662-07 |\n> 2016-05-11 07:50:21.814575-07 | f | SELECT * FROM Employee\n> WHERE CertificateNumber BETWEEN $1 AND $2 ORDER BY LastName,\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\n> AlternateID2 LIMIT 11\n> \n> >> Wed May 11 07:50:23 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:23.85706-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 7925 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 62888\n> | 2016-05-11 07:14:05.586327-07 | 2016-05-11 07:50:23.517469-07 |\n> 2016-05-11 07:50:23.684134-07 | f | DELETE FROM\n> ToothChartMaintenance WHERE Claimnumber = 
$1\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:23.857092-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 22235 | 10 | postgres |\n> | | |\n> | 2016-05-11 07:50:22.129887-07 | 2016-05-11\n> 07:50:22.162326-07 | 2016-05-11 07:50:22.162326-07 | f |\n> autovacuum: VACUUM public.adjrespendrsncode\n> \n> >> Wed May 11 07:50:25 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 7925 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 62888\n> | 2016-05-11 07:14:05.586327-07 | 2016-05-11 07:50:23.517469-07 |\n> 2016-05-11 07:50:25.931788-07 | f | SELECT * FROM\n> CategoryPlaceService WHERE CategoryID = $1 ORDER BY CategoryID,\n> RangeFrom, RangeTo LIMIT 1000\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:25.920308-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:27 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:27.935677-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:27.938329-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:29 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17652 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 38402\n> | 2016-05-11 07:39:00.298771-07 | 2016-05-11 07:50:29.615446-07 |\n> 2016-05-11 07:50:29.954405-07 | f | SELECT * FROM Dependent\n> WHERE DepSocialSecurityNumber BETWEEN $1 AND $2 ORDER BY\n> DepCertificateNumber, DepSocialSecurityNumber, DepLastName,\n> DepFirstName, DepMiddleName, DepMedicareID, BirthDate,\n> AlternateID1, AlternateID2 LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:29.966428-07 | f | SELECT * FROM 
Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:29.966481-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:31 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:32.000148-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:34 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:33.953492-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:33.953803-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:36 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:35.996862-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:35.996892-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:38 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | 
batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:38.039441-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:38.036922-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:40 MST 2016\n> 3709009 | p306 | 17321 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 38226\n> | 2016-05-11 07:38:10.838611-07 | 2016-05-11 07:50:40.059438-07 |\n> 2016-05-11 07:50:40.060951-07 | f | DELETE FROM\n> ClaimPrevQueue WHERE Claimnumber = $1\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 2860 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 56427\n> | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:40.060863-07 |\n> 2016-05-11 07:50:40.062051-07 | f | SELECT * FROM Employee\n> WHERE CertificateNumber BETWEEN $1 AND $2 ORDER BY LastName,\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\n> AlternateID2 LIMIT 11\n> 3709009 | p306 | 17652 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 38402\n> | 2016-05-11 07:39:00.298771-07 | 2016-05-11 07:50:40.059956-07 |\n> 2016-05-11 07:50:40.083659-07 | f | DELETE FROM\n> ToothChartMaintenance WHERE Claimnumber = $1\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:40.077061-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 16771 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 37470\n> | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:40.060076-07 |\n> 2016-05-11 07:50:40.072735-07 | f | INSERT INTO CrmCallLinks\n> VALUES\n> ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27)\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:40.080967-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 18895 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 39389\n> | 2016-05-11 07:42:04.682022-07 | 2016-05-11 07:50:40.062356-07 |\n> 2016-05-11 07:50:40.062667-07 | f | SELECT * FROM\n> RealtimeTransInfo WHERE SenderID = $1 AND PayLoadID = $2 ORDER BY\n> SenderID DESC, PayLoadID DESC LIMIT 2\n> 3709009 | p306 | 8864 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 64407\n> | 2016-05-11 07:18:34.848657-07 | 2016-05-11 07:50:39.601078-07 |\n> 2016-05-11 07:50:40.077433-07 | f | SELECT * FROM Facility\n> WHERE FacilityID = $1\n> \n> >> Wed May 11 07:50:42 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN 
$1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 2860 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 56427\n> | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:42.094681-07 |\n> 2016-05-11 07:50:42.095179-07 | f | DELETE FROM\n> ClaimPrevQueue WHERE Claimnumber = $1\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:42.023507-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:42.023043-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:44 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:44.054554-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:44.054674-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:46 MST 2016\n> 3709009 | p306 | 17321 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 38226\n> | 2016-05-11 07:38:10.838611-07 | 2016-05-11 07:50:45.908151-07 |\n> 2016-05-11 07:50:46.08959-07 | f | SELECT * FROM Employee\n> WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName,\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\n> AlternateID2 LIMIT 11\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:46.077355-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 16771 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 37470\n> | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:46.139211-07 |\n> 2016-05-11 07:50:46.141386-07 | f | SELECT * FROM\n> AdjudicationResult WHERE Claimnumber BETWEEN $1 AND $2 ORDER BY\n> Claimnumber DESC LIMIT 11\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 
07:50:46.067222-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:48 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:48.082695-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:48.082883-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:50 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 2860 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 56427\n> | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:50.056892-07 |\n> 2016-05-11 07:50:50.174587-07 | f | SELECT * FROM Employee\n> WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName,\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\n> AlternateID2 LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:50.107102-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 5620 | 16387 | p306 | TaskRunner\n> | 172.20.0.86 | batchb.eldocomp.com | 37594 |\n> 2016-05-11 07:04:08.129626-07 | 2016-05-11 07:50:49.812093-07 |\n> 2016-05-11 07:50:49.81238-07 | f | SELECT * FROM\n> EDIFtpFileDetails WHERE MsgLogStatus = $1 AND ProcessId > $2 ORDER\n> BY MsgLogStatus, ProcessId LIMIT 10\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:50.107172-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:52 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:52.127455-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY 
EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:52.127776-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:54 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:54.165381-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:54.165596-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:56 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:56.189515-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:56.176308-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:50:58 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 21656 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 41810\n> | 2016-05-11 07:48:40.95846-07 | 2016-05-11 07:50:56.785032-07 |\n> 2016-05-11 07:50:56.789767-07 | f | SELECT * FROM\n> FacilityPhysicianAffl WHERE Status IN ('A', 'H', 'D') AND\n> (FacilityID IN (SELECT FacilityID FROM Facility WHERE UPPER(TaxID)=\n> '811190101' AND Status IN ( 'A' , 'H' , 'D') ) AND (\n> PhysicianID IN (SELECT PhysicianID FROM Physician WHERE Status IN (\n> 'A' , 'H' , 'D') )) AND (( ISDUMMY='0' AND FacilityID IN (\n> SELECT FacilityID FROM Facility WHERE FacilityRecordType = 'S' AND\n> ( FacilityName IS NOT NULL AND FacilityName != '' ) ) OR (\n> FacilityID IN ( SELECT FacilityID FROM Facility WHERE\n> FacilityRecordType = 'F' AND ( FacilityName IS NOT NULL AND\n> FacilityName != '' ) ))))) ORDER BY FacilityID ASC , 
PhysicianID\n> ASC\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:50:58.212226-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:50:58.269389-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:51:00 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:51:00.228846-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:51:00.229019-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:51:02 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 4715 | 16387 | p306 | TaskRunner\n> | 172.20.0.86 | batchb.eldocomp.com | 33881 |\n> 2016-05-11 07:01:41.247388-07 | 2016-05-11 07:51:02.125906-07 |\n> 2016-05-11 07:51:02.311956-07 | f | SELECT * FROM\n> EmpEligibilityCoverage WHERE EmployeeID = $1 AND EffectiveDate <=\n> $2 ORDER BY EmployeeID DESC, EffectiveDate DESC LIMIT 101\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:51:02.23586-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 16771 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 37470\n> | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:51:02.188886-07 |\n> 2016-05-11 07:51:02.295888-07 | f | DELETE FROM\n> ToothChartMaintenance WHERE Claimnumber = $1\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:51:02.235869-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:51:04 MST 2016\n> 3709009 | p306 | 5644 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 59871\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\n> DESC, 
Category DESC, PrimaryKeyOfChange DESC LIMIT 11\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:51:04.277287-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:51:04.277543-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:51:06 MST 2016\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:51:06.313649-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:51:06.313855-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> >> Wed May 11 07:51:08 MST 2016\n> 3709009 | p306 | 22530 | 16387 | p306 |\n> | 172.20.0.82 | coreb | 42494\n> | 2016-05-11 07:51:04.419169-07 | 2016-05-11 07:51:08.351721-07 |\n> 2016-05-11 07:51:08.373929-07 | f | SELECT * FROM\n> ChangeHistory WHERE Category = $1 AND PrimaryKeyOfChange = $2 ORDER\n> BY Category, PrimaryKeyOfChange, ChgTS, ExcludedKeyFields LIMIT\n> 2001\n> 3709009 | p306 | 17312 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\n> 2016-05-11 07:51:08.335854-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> 3709009 | p306 | 8671 | 16387 | p306 |\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\n> 2016-05-11 07:51:08.359281-07 | f | SELECT * FROM Employee\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\n> \n> Regards\n> John\n> \n> -----Original Message-----\n> From: Gerardo Herzig [mailto:[email protected]]\n> Sent: Friday, May 13, 2016 2:05 PM\n> To: John Gorman\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Database transaction with intermittent slow\n> responses\n> \n> After quick reading, im thinking about a couples of chances:\n> \n> 1) You are hitting a connection_limit\n> 2) You are hitting a lock contention (perhaps some other backend is\n> locking the table and not releasing it)\n> \n> Who throws the timeout? 
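A minimal sketch of those two checks, assuming the 9.1-era catalog columns (pg_stat_activity still exposes procpid and current_query, and there is no pg_blocking_pids() yet); it is only a starting point, not a full lock analysis:

-- 1) Connection head-room versus max_connections
SELECT count(*) AS connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;

-- 2) Lock requests on the table that have not been granted yet
SELECT l.pid, l.mode, l.granted, a.waiting, a.query_start, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'changehistory'::regclass
  AND NOT l.granted;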
It is Postgres or your JDBC connector?\n> \n> My initial blind guess is that your \"timed out queries\" never gets\n> postgres at all, and are blocked prior to that for some other issue.\n> If im wrong, well, you should at least have the timeout recorded in\n> your logs.\n> \n> You should also track #of_connectinos and #of_locks over that tables.\n> \n> See http://www.postgresql.org/docs/9.1/static/view-pg-locks.html for\n> pg_lock information\n> \n> That should be my starting point for viewing whats going on.\n> \n> HTH\n> Gerardo\n> \n> ----- Mensaje original -----\n> > De: \"John Gorman\" <[email protected]>\n> > Para: [email protected]\n> > CC: \"John Gorman\" <[email protected]>\n> > Enviados: Viernes, 13 de Mayo 2016 16:59:51\n> > Asunto: [PERFORM] Database transaction with intermittent slow\n> > responses\n> > \n> > \n> > Transactions to table, ChangeHistory, have recently become\n> > intermittently slow and is increasing becoming slower.\n> > \n> > * No database configuration changes have been made recently\n> > * We have run vacuum analyze\n> > * We have tried backing up and reloading the table (data, indexes,\n> > etc)\n> > \n> > Some transactions respond quickly (200 ms) and others take over 55\n> > seconds (we cancel the query after 55 seconds – our timeout SLA).\n> > The problem has recently become very bad. It is the same query\n> > being\n> > issued but with different parameters.\n> > \n> > If the transaction takes over 55 seconds and I run the query\n> > manually\n> > (with or without EXPLAIN ANALYZE) it returns quickly (a few hundred\n> > ms). In case I am looking at cache, I have a list of other queries\n> > (just different parameters) that have timed out and when I run them\n> > (without the limit even) the response is very timely.\n> > \n> > Any help or insight would be great.\n> > \n> > NOTE: our application is connecting to the database via JDBC and we\n> > are using PreparedStatements. I have provided full details so all\n> > information is available, but please let me know if any other\n> > information is needed – thx in advance.\n> > \n> > p306=> EXPLAIN ANALYZE SELECT * FROM ChangeHistory WHERE Category\n> > BETWEEN 'Employee' AND 'Employeezz' AND PrimaryKeyOfChange BETWEEN\n> > '312313' AND '312313!zz' ORDER BY ChgTS DESC, ChgUser DESC,\n> > Category\n> > DESC, PrimaryKeyOfChange DESC LIMIT 11;\n> > QUERY PLAN\n> > ------------------------------------------------------------------------------------------------------\n> > Limit (cost=33.66..33.67 rows=1 width=136) (actual\n> > time=0.297..0.297\n> > rows=11 loops=1)\n> > -> Sort (cost=33.66..33.67 rows=1 width=136) (actual\n> > time=0.297..0.297 rows=11 loops=1)\n> > Sort Key: chgts, chguser, category, primarykeyofchange\n> > Sort Method: top-N heapsort Memory: 27kB\n> > -> Index Scan using changehistory_idx4 on changehistory\n> > (cost=0.00..33.65 rows=1 width=136) (actual time=0.046..\n> > 0.239 rows=85 loops=1)\n> > Index Cond: (((primarykeyofchange)::text >= '312313'::text) AND\n> > ((primarykeyofchange)::text <= '312313!zz'::\n> > text))\n> > Filter: (((category)::text >= 'Employee'::text) AND\n> > ((category)::text\n> > <= 'Employeezz'::text))\n> > Total runtime: 0.328 ms\n> > (8 rows)\n> > \n> > >>> \n> > History this week of counts with good response times vs timeouts.\n> > \n> > | Date | Success # | Time Out # | Avg. 
Success Secs |\n> > |------------+-----------+------------+-------------------|\n> > | 2016-05-09 | 18 | 31 | 7.9 |\n> > | 2016-05-10 | 17 | 25 | 10.5 |\n> > | 2016-05-11 | 27 | 33 | 10.1 |\n> > | 2016-05-12 | 68 | 24 | 9.9 |\n> > \n> > \n> > >>> Sample transaction response times\n> > \n> > | Timestamp | Tran ID | Resp MS | Resp CD\n> > --------------------+----------------+---------+--------\n> > 2016-05-10 06:20:19 | ListChangeHist | 55,023 | TIMEOUT\n> > 2016-05-10 07:47:34 | ListChangeHist | 55,017 | TIMEOUT\n> > 2016-05-10 07:48:00 | ListChangeHist | 9,866 | OK\n> > 2016-05-10 07:48:10 | ListChangeHist | 2,327 | OK\n> > 2016-05-10 07:59:23 | ListChangeHist | 55,020 | TIMEOUT\n> > 2016-05-10 08:11:20 | ListChangeHist | 55,030 | TIMEOUT\n> > 2016-05-10 08:31:45 | ListChangeHist | 4,216 | OK\n> > 2016-05-10 08:35:09 | ListChangeHist | 7,898 | OK\n> > 2016-05-10 08:36:18 | ListChangeHist | 9,810 | OK\n> > 2016-05-10 08:36:56 | ListChangeHist | 55,027 | TIMEOUT\n> > 2016-05-10 08:37:33 | ListChangeHist | 46,433 | OK\n> > 2016-05-10 08:38:09 | ListChangeHist | 55,019 | TIMEOUT\n> > 2016-05-10 08:53:43 | ListChangeHist | 55,019 | TIMEOUT\n> > 2016-05-10 09:45:09 | ListChangeHist | 55,022 | TIMEOUT\n> > 2016-05-10 09:46:13 | ListChangeHist | 55,017 | TIMEOUT\n> > 2016-05-10 09:49:27 | ListChangeHist | 55,011 | TIMEOUT\n> > 2016-05-10 09:52:12 | ListChangeHist | 55,018 | TIMEOUT\n> > 2016-05-10 09:57:42 | ListChangeHist | 9,462 | OK\n> > 2016-05-10 10:05:21 | ListChangeHist | 55,016 | TIMEOUT\n> > 2016-05-10 10:05:29 | ListChangeHist | 136 | OK\n> > 2016-05-10 10:05:38 | ListChangeHist | 1,517 | OK\n> > \n> > Artifacts\n> > ======================\n> > \n> > $ >uname -a\n> > SunOS ***** 5.10 Generic_150400-30 sun4v sparc sun4v\n> > \n> > Memory : 254G phys mem, 207G free mem.\n> > Processors: 32 - CPU is mostly 80% free\n> > \n> > >>> \n> > p306=> select version();\n> > version\n> > ---------------------------------------------------------------------------------------------------\n> > PostgreSQL 9.1.14 on sparc-sun-solaris2.10, compiled by gcc (GCC)\n> > 3.4.3 (csl-sol210-3_4-branch+sol_rpath), 64-bit\n> > \n> > >>> \n> > p306=> \\dt+ changehistory\n> > List of relations\n> > Schema | Name | Type | Owner | Size | Description\n> > --------+---------------+-------+-------+-------+-------------\n> > public | changehistory | table | p306 | 17 GB |\n> > \n> > >>> \n> > p306=> \\di+ changehistory*\n> > List of relations\n> > Schema | Name | Type | Owner | Table | Size | Description\n> > --------+-----------------------+-------+-------+---------------+---------+-------------\n> > public | changehistory_idx1 | index | p306 | changehistory | 9597\n> > MB\n> > |\n> > public | changehistory_idx3 | index | p306 | changehistory | 11 GB\n> > |\n> > public | changehistory_idx4 | index | p306 | changehistory | 4973\n> > MB\n> > |\n> > public | changehistory_pkey | index | p306 | changehistory | 2791\n> > MB\n> > |\n> > public | changehistory_search2 | index | p306 | changehistory |\n> > 9888\n> > MB |\n> > public | changehistory_search3 | index | p306 | changehistory | 10\n> > GB\n> > |\n> > public | changehistory_search4 | index | p306 | changehistory |\n> > 9240\n> > MB |\n> > public | changehistory_search5 | index | p306 | changehistory |\n> > 8373\n> > MB |\n> > (8 rows)\n> > \n> > \n> > >>> \n> > p306=> select count(*) from changehistory ;\n> > count\n> > ------------\n> > 129,185,024\n> > \n> > >>> \n> > Show all (filtered)\n> > ======================================================\n> > \n> > name | 
setting\n> > ---------------------------------+--------------------\n> > autovacuum | on\n> > autovacuum_analyze_scale_factor | 0.001\n> > autovacuum_analyze_threshold | 500\n> > autovacuum_freeze_max_age | 200000000\n> > autovacuum_max_workers | 5\n> > autovacuum_naptime | 1min\n> > autovacuum_vacuum_cost_delay | 0\n> > autovacuum_vacuum_cost_limit | -1\n> > autovacuum_vacuum_scale_factor | 0.001\n> > autovacuum_vacuum_threshold | 500\n> > bgwriter_delay | 200ms\n> > block_size | 8192\n> > check_function_bodies | on\n> > checkpoint_completion_target | 0.9\n> > checkpoint_segments | 256\n> > checkpoint_timeout | 1h\n> > checkpoint_warning | 30s\n> > client_encoding | UTF8\n> > commit_delay | 0\n> > commit_siblings | 5\n> > cpu_index_tuple_cost | 0.005\n> > cpu_operator_cost | 0.0025\n> > cpu_tuple_cost | 0.01\n> > cursor_tuple_fraction | 0.1\n> > deadlock_timeout | 1s\n> > default_statistics_target | 100\n> > default_transaction_deferrable | off\n> > default_transaction_isolation | read committed\n> > default_transaction_read_only | off\n> > default_with_oids | off\n> > effective_cache_size | 8GB\n> > from_collapse_limit | 8\n> > fsync | on\n> > full_page_writes | on\n> > ignore_system_indexes | off\n> > join_collapse_limit | 8\n> > krb_caseins_users | off\n> > lo_compat_privileges | off\n> > maintenance_work_mem | 1GB\n> > max_connections | 350\n> > max_files_per_process | 1000\n> > max_function_args | 100\n> > max_identifier_length | 63\n> > max_index_keys | 32\n> > max_locks_per_transaction | 64\n> > max_pred_locks_per_transaction | 64\n> > max_prepared_transactions | 0\n> > max_stack_depth | 2MB\n> > max_wal_senders | 5\n> > random_page_cost | 4\n> > segment_size | 1GB\n> > seq_page_cost | 1\n> > server_encoding | UTF8\n> > server_version | 9.1.14\n> > shared_buffers | 2GB\n> > sql_inheritance | on\n> > statement_timeout | 0\n> > synchronize_seqscans | on\n> > synchronous_commit | on\n> > synchronous_standby_names |\n> > tcp_keepalives_count | 0\n> > tcp_keepalives_idle | -1\n> > tcp_keepalives_interval | 0\n> > track_activities | on\n> > track_activity_query_size | 1024\n> > track_counts | on\n> > track_functions | none\n> > transaction_deferrable | off\n> > transaction_isolation | read committed\n> > transaction_read_only | off\n> > transform_null_equals | off\n> > update_process_title | on\n> > vacuum_cost_delay | 0\n> > vacuum_cost_limit | 200\n> > vacuum_cost_page_dirty | 20\n> > vacuum_cost_page_hit | 1\n> > vacuum_cost_page_miss | 10\n> > vacuum_defer_cleanup_age | 0\n> > vacuum_freeze_min_age | 50000000\n> > vacuum_freeze_table_age | 150000000\n> > \n> \n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 May 2016 20:11:07 -0300 (ART)", "msg_from": "Gerardo Herzig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database transaction with intermittent slow responses" }, { "msg_contents": "On Sat, May 14, 2016 at 1:11 AM, Gerardo Herzig wrote:\n> Oh, so *all* the transactions are being slowed down at that point...What about CPU IO Wait% at that moment? Could be some other processes stressing the system out?\n\nOr the database has just grown pass the size where disk caching is\nefficient. Usually these are nonlinear processes, i.e. 
it works well\nuntil a certain point and then the cache hit ratio decreases dramatically\nbecause all of a sudden content starts heavily competing for space\nin the cache.\n\n> Now im thinking about hard disk issues...maybe some \"smart\" messages?\n>\n> Have some other hardware to give it a try?\n\nCan we please see the full DDL of the table and indexes?\n\nThe only additional idea I can throw up at the moment is looking at a\ncombined index on (primarykeyofchange, category) - maybe replacing\nchangehistory_idx4 - but that of course depends on the other SQL used\nagainst that table.\n\nKind regards\n\nrobert\n\n-- \n[guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can\n- without end}\nhttp://blog.rubybestpractices.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 14 May 2016 11:47:30 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database transaction with intermittent slow responses" }, { "msg_contents": "Hi Gerardo and Robert,\r\n\r\nWe turned DEBUG on in the application logs, and this weekend I was going to write a multi-threaded test program to reproduce the issue in a stand-alone program. But when I got the debug logs and stripped out everything except the database queries for the ChangeHistory table, I found some other unexpected transactions which were hitting the ChangeHistory table fairly hard and in rapid succession.\r\n\r\nAt this point development is looking into it, and we believe this application transaction is the source of the issue, which is why I have not responded to your emails.\r\n\r\nI should know more after tonight.\r\n\r\nThanks again for your feedback and responses\r\n\r\nRegards\r\nJohn\r\n\r\n-----Original Message-----\r\nFrom: Gerardo Herzig [mailto:[email protected]] \r\nSent: Friday, May 13, 2016 4:11 PM\r\nTo: John Gorman\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Database transaction with intermittent slow responses\r\n\r\nOh, so *all* the transactions are being slowed down at that point...What about CPU IO Wait% at that moment? Could be some other processes stressing the system out?\r\n\r\nNow im thinking about hard disk issues...maybe some \"smart\" messages?\r\n\r\nHave some other hardware to give it a try?\r\n\r\nGerardo\r\n\r\n----- Mensaje original -----\r\n> De: \"John Gorman\" <[email protected]>\r\n> Para: \"Gerardo Herzig\" <[email protected]>\r\n> CC: [email protected], \"John Gorman\" <[email protected]>\r\n> Enviados: Viernes, 13 de Mayo 2016 18:25:37\r\n> Asunto: RE: [PERFORM] Database transaction with intermittent slow responses\r\n> \r\n> Hi Gerado,\r\n> \r\n> Thanks for the quick response. We do not appear to have a connection\r\n> limit since our application is the only thing talking to the\r\n> database, the connections are somewhat limited. We are using about\r\n> 126 of a max allowed 350 connections. We keep these metrics in a\r\n> different database, and we also generate alerts if we get close to\r\n> the catalog/cluster limit.\r\n> \r\n> Also I have been monitoring heavily and watching for locks while the\r\n> transaction runs for a long time. While I see occasional locks, they\r\n> are on other tables and are brief, so I do not believe there is a\r\n> database lock issue/contention.\r\n> \r\n> The application is timing the transaction out. 
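For that kind of lock watching, a rough sketch that pairs each waiting lock request with the sessions holding a granted lock on the same relation could look like the query below (again assuming the 9.1 names procpid and current_query; transactionid and tuple locks are ignored, so treat it as an approximation rather than a complete blocker analysis):

-- Waiting lock requests and the sessions that currently hold
-- a granted lock on the same relation (simplified blocker view)
SELECT w.pid                AS waiting_pid,
       wa.current_query     AS waiting_query,
       h.pid                AS holding_pid,
       ha.current_query     AS holding_query,
       w.mode               AS requested_mode,
       h.mode               AS held_mode,
       w.relation::regclass AS relation
FROM pg_locks w
JOIN pg_locks h
  ON  h.granted
  AND NOT w.granted
  AND h.pid <> w.pid
  AND h.relation = w.relation
  AND h.database = w.database
JOIN pg_stat_activity wa ON wa.procpid = w.pid
JOIN pg_stat_activity ha ON ha.procpid = h.pid
ORDER BY w.pid;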
When we detect that\r\n> the timeout limit has occurred, we cancel the database connection\r\n> (conn.cancel();) - we have been doing this for several years with no\r\n> issue.\r\n> \r\n> I setup a adhoc monitor which runs every 2 seconds and displays\r\n> \"select * from pg_stat_activity where datname = 'p306' and\r\n> current_query not like '<IDLE%'; and then write the output to a log.\r\n> I can see the transaction being executed in the database for over 50\r\n> seconds, so I do believe the database actually is working on it.\r\n> \r\n> We have a few monitoring programs that track and record quite a few\r\n> thinks including database locks (number and type), connections\r\n> (number and where). I have reviewed the history and do not see any\r\n> trends.\r\n> \r\n> If it helps here is a monitor snippet of the transaction taking over\r\n> 50 seconds (SELECT * FROM ChangeHistory)\r\n> \r\n> \r\n> >> Wed May 11 07:50:09 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871 |\r\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> \r\n> >> Wed May 11 07:50:11 MST 2016\r\n> 3709009 | p306 | 15014 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 35859\r\n> | 2016-05-11 07:31:31.968087-07 | 2016-05-11 07:50:11.575881-07 |\r\n> 2016-05-11 07:50:11.766942-07 | f | SELECT * FROM Employee\r\n> WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName,\r\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\r\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\r\n> AlternateID2 LIMIT 11\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:11.712848-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:11.712887-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:13 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:13.733643-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 
3709009 | p306 | 16771 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 37470\r\n> | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:13.770366-07 |\r\n> 2016-05-11 07:50:13.811502-07 | f | SELECT * FROM Dependent\r\n> WHERE DependentID = $1\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:13.733968-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:15 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:15.734777-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:15.73486-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:17 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871 |\r\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> \r\n> >> Wed May 11 07:50:19 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871 |\r\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> \r\n> >> Wed May 11 07:50:21 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871 |\r\n> 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 21656 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 41810 |\r\n> 2016-05-11 07:48:40.95846-07 | 2016-05-11 07:50:21.871077-07 |\r\n> 2016-05-11 07:50:21.871579-07 | f | DELETE FROM\r\n> ClaimPrevQueue WHERE Claimnumber = $1\r\n> 3709009 | p306 | 8042 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 63023 |\r\n> 2016-05-11 07:14:34.208098-07 | 2016-05-11 07:50:21.813662-07 |\r\n> 2016-05-11 07:50:21.814575-07 | f | SELECT * FROM Employee\r\n> WHERE CertificateNumber BETWEEN $1 AND $2 ORDER BY LastName,\r\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\r\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\r\n> AlternateID2 LIMIT 
11\r\n> \r\n> >> Wed May 11 07:50:23 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:23.85706-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 7925 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 62888\r\n> | 2016-05-11 07:14:05.586327-07 | 2016-05-11 07:50:23.517469-07 |\r\n> 2016-05-11 07:50:23.684134-07 | f | DELETE FROM\r\n> ToothChartMaintenance WHERE Claimnumber = $1\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:23.857092-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 22235 | 10 | postgres |\r\n> | | |\r\n> | 2016-05-11 07:50:22.129887-07 | 2016-05-11\r\n> 07:50:22.162326-07 | 2016-05-11 07:50:22.162326-07 | f |\r\n> autovacuum: VACUUM public.adjrespendrsncode\r\n> \r\n> >> Wed May 11 07:50:25 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 7925 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 62888\r\n> | 2016-05-11 07:14:05.586327-07 | 2016-05-11 07:50:23.517469-07 |\r\n> 2016-05-11 07:50:25.931788-07 | f | SELECT * FROM\r\n> CategoryPlaceService WHERE CategoryID = $1 ORDER BY CategoryID,\r\n> RangeFrom, RangeTo LIMIT 1000\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:25.920308-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:27 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:27.935677-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:27.938329-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 
ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:29 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17652 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 38402\r\n> | 2016-05-11 07:39:00.298771-07 | 2016-05-11 07:50:29.615446-07 |\r\n> 2016-05-11 07:50:29.954405-07 | f | SELECT * FROM Dependent\r\n> WHERE DepSocialSecurityNumber BETWEEN $1 AND $2 ORDER BY\r\n> DepCertificateNumber, DepSocialSecurityNumber, DepLastName,\r\n> DepFirstName, DepMiddleName, DepMedicareID, BirthDate,\r\n> AlternateID1, AlternateID2 LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:29.966428-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:29.966481-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:31 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:32.000148-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:34 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:33.953492-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:33.953803-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:36 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE 
Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:35.996862-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:35.996892-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:38 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:38.039441-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:38.036922-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:40 MST 2016\r\n> 3709009 | p306 | 17321 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 38226\r\n> | 2016-05-11 07:38:10.838611-07 | 2016-05-11 07:50:40.059438-07 |\r\n> 2016-05-11 07:50:40.060951-07 | f | DELETE FROM\r\n> ClaimPrevQueue WHERE Claimnumber = $1\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 2860 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 56427\r\n> | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:40.060863-07 |\r\n> 2016-05-11 07:50:40.062051-07 | f | SELECT * FROM Employee\r\n> WHERE CertificateNumber BETWEEN $1 AND $2 ORDER BY LastName,\r\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\r\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\r\n> AlternateID2 LIMIT 11\r\n> 3709009 | p306 | 17652 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 38402\r\n> | 2016-05-11 07:39:00.298771-07 | 2016-05-11 07:50:40.059956-07 |\r\n> 2016-05-11 07:50:40.083659-07 | f | DELETE FROM\r\n> ToothChartMaintenance WHERE Claimnumber = $1\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:40.077061-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 16771 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 37470\r\n> | 
2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:40.060076-07 |\r\n> 2016-05-11 07:50:40.072735-07 | f | INSERT INTO CrmCallLinks\r\n> VALUES\r\n> ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27)\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:40.080967-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 18895 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 39389\r\n> | 2016-05-11 07:42:04.682022-07 | 2016-05-11 07:50:40.062356-07 |\r\n> 2016-05-11 07:50:40.062667-07 | f | SELECT * FROM\r\n> RealtimeTransInfo WHERE SenderID = $1 AND PayLoadID = $2 ORDER BY\r\n> SenderID DESC, PayLoadID DESC LIMIT 2\r\n> 3709009 | p306 | 8864 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 64407\r\n> | 2016-05-11 07:18:34.848657-07 | 2016-05-11 07:50:39.601078-07 |\r\n> 2016-05-11 07:50:40.077433-07 | f | SELECT * FROM Facility\r\n> WHERE FacilityID = $1\r\n> \r\n> >> Wed May 11 07:50:42 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 2860 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 56427\r\n> | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:42.094681-07 |\r\n> 2016-05-11 07:50:42.095179-07 | f | DELETE FROM\r\n> ClaimPrevQueue WHERE Claimnumber = $1\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:42.023507-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:42.023043-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:44 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:44.054554-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:44.054674-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:46 MST 2016\r\n> 3709009 | p306 | 17321 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 38226\r\n> | 
2016-05-11 07:38:10.838611-07 | 2016-05-11 07:50:45.908151-07 |\r\n> 2016-05-11 07:50:46.08959-07 | f | SELECT * FROM Employee\r\n> WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName,\r\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\r\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\r\n> AlternateID2 LIMIT 11\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:46.077355-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 16771 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 37470\r\n> | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:50:46.139211-07 |\r\n> 2016-05-11 07:50:46.141386-07 | f | SELECT * FROM\r\n> AdjudicationResult WHERE Claimnumber BETWEEN $1 AND $2 ORDER BY\r\n> Claimnumber DESC LIMIT 11\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:46.067222-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:48 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:48.082695-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:48.082883-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:50 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 2860 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 56427\r\n> | 2016-05-11 06:53:47.633683-07 | 2016-05-11 07:50:50.056892-07 |\r\n> 2016-05-11 07:50:50.174587-07 | f | SELECT * FROM Employee\r\n> WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName,\r\n> FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,\r\n> SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,\r\n> AlternateID2 
LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:50.107102-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 5620 | 16387 | p306 | TaskRunner\r\n> | 172.20.0.86 | batchb.eldocomp.com | 37594 |\r\n> 2016-05-11 07:04:08.129626-07 | 2016-05-11 07:50:49.812093-07 |\r\n> 2016-05-11 07:50:49.81238-07 | f | SELECT * FROM\r\n> EDIFtpFileDetails WHERE MsgLogStatus = $1 AND ProcessId > $2 ORDER\r\n> BY MsgLogStatus, ProcessId LIMIT 10\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:50.107172-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:52 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:52.127455-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:52.127776-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:54 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:54.165381-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:54.165596-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:56 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 
2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:56.189515-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:56.176308-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:50:58 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 21656 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 41810\r\n> | 2016-05-11 07:48:40.95846-07 | 2016-05-11 07:50:56.785032-07 |\r\n> 2016-05-11 07:50:56.789767-07 | f | SELECT * FROM\r\n> FacilityPhysicianAffl WHERE Status IN ('A', 'H', 'D') AND\r\n> (FacilityID IN (SELECT FacilityID FROM Facility WHERE UPPER(TaxID)=\r\n> '811190101' AND Status IN ( 'A' , 'H' , 'D') ) AND (\r\n> PhysicianID IN (SELECT PhysicianID FROM Physician WHERE Status IN (\r\n> 'A' , 'H' , 'D') )) AND (( ISDUMMY='0' AND FacilityID IN (\r\n> SELECT FacilityID FROM Facility WHERE FacilityRecordType = 'S' AND\r\n> ( FacilityName IS NOT NULL AND FacilityName != '' ) ) OR (\r\n> FacilityID IN ( SELECT FacilityID FROM Facility WHERE\r\n> FacilityRecordType = 'F' AND ( FacilityName IS NOT NULL AND\r\n> FacilityName != '' ) ))))) ORDER BY FacilityID ASC , PhysicianID\r\n> ASC\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:50:58.212226-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:50:58.269389-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:51:00 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:51:00.228846-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:51:00.229019-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:51:02 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 
59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 4715 | 16387 | p306 | TaskRunner\r\n> | 172.20.0.86 | batchb.eldocomp.com | 33881 |\r\n> 2016-05-11 07:01:41.247388-07 | 2016-05-11 07:51:02.125906-07 |\r\n> 2016-05-11 07:51:02.311956-07 | f | SELECT * FROM\r\n> EmpEligibilityCoverage WHERE EmployeeID = $1 AND EffectiveDate <=\r\n> $2 ORDER BY EmployeeID DESC, EffectiveDate DESC LIMIT 101\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:51:02.23586-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 16771 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 37470\r\n> | 2016-05-11 07:36:18.535139-07 | 2016-05-11 07:51:02.188886-07 |\r\n> 2016-05-11 07:51:02.295888-07 | f | DELETE FROM\r\n> ToothChartMaintenance WHERE Claimnumber = $1\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:51:02.235869-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:51:04 MST 2016\r\n> 3709009 | p306 | 5644 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 59871\r\n> | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |\r\n> 2016-05-11 07:50:09.396161-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category BETWEEN $1 AND $2 AND\r\n> PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser\r\n> DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:51:04.277287-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:51:04.277543-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:51:06 MST 2016\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:51:06.313649-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:51:06.313855-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> >> Wed May 11 07:51:08 MST 2016\r\n> 3709009 | p306 | 22530 | 16387 | p306 |\r\n> | 172.20.0.82 | coreb | 42494\r\n> | 2016-05-11 07:51:04.419169-07 | 2016-05-11 07:51:08.351721-07 |\r\n> 2016-05-11 07:51:08.373929-07 | f | SELECT * FROM\r\n> ChangeHistory WHERE Category = $1 AND PrimaryKeyOfChange = $2 ORDER\r\n> BY Category, 
PrimaryKeyOfChange, ChgTS, ExcludedKeyFields LIMIT\r\n> 2001\r\n> 3709009 | p306 | 17312 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 54263\r\n> | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |\r\n> 2016-05-11 07:51:08.335854-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> 3709009 | p306 | 8671 | 16387 | p306 |\r\n> | 172.20.0.86 | batchb.eldocomp.com | 55055\r\n> | 2016-05-11 07:17:52.292909-07 | 2016-05-11 07:40:50.525528-07 |\r\n> 2016-05-11 07:51:08.359281-07 | f | SELECT * FROM Employee\r\n> WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID LIMIT 1001\r\n> \r\n> Regards\r\n> John\r\n> \r\n> -----Original Message-----\r\n> From: Gerardo Herzig [mailto:[email protected]]\r\n> Sent: Friday, May 13, 2016 2:05 PM\r\n> To: John Gorman\r\n> Cc: [email protected]\r\n> Subject: Re: [PERFORM] Database transaction with intermittent slow\r\n> responses\r\n> \r\n> After a quick reading, I'm thinking about a couple of possibilities:\r\n> \r\n> 1) You are hitting a connection limit\r\n> 2) You are hitting lock contention (perhaps some other backend is\r\n> locking the table and not releasing it)\r\n> \r\n> Who throws the timeout? Is it Postgres or your JDBC connector?\r\n> \r\n> My initial blind guess is that your \"timed out queries\" never reach\r\n> Postgres at all, and are blocked before that by some other issue.\r\n> If I'm wrong, well, you should at least have the timeout recorded in\r\n> your logs.\r\n> \r\n> You should also track the number of connections and the number of\r\n> locks on those tables.\r\n> \r\n> See http://www.postgresql.org/docs/9.1/static/view-pg-locks.html for\r\n> pg_locks information.\r\n> \r\n> That would be my starting point for seeing what is going on.\r\n> \r\n> HTH\r\n> Gerardo\r\n> \r\n> ----- Original Message -----\r\n> > From: \"John Gorman\" <[email protected]>\r\n> > To: [email protected]\r\n> > CC: \"John Gorman\" <[email protected]>\r\n> > Sent: Friday, May 13, 2016 16:59:51\r\n> > Subject: [PERFORM] Database transaction with intermittent slow\r\n> > responses\r\n> > \r\n> > \r\n> > Transactions to the table ChangeHistory have recently become\r\n> > intermittently slow and are increasingly becoming slower.\r\n> > \r\n> > * No database configuration changes have been made recently\r\n> > * We have run vacuum analyze\r\n> > * We have tried backing up and reloading the table (data, indexes,\r\n> > etc)\r\n> > \r\n> > Some transactions respond quickly (200 ms) and others take over 55\r\n> > seconds (we cancel the query after 55 seconds – our timeout SLA).\r\n> > The problem has recently become very bad. It is the same query\r\n> > being\r\n> > issued but with different parameters.\r\n> > \r\n> > If the transaction takes over 55 seconds and I run the query\r\n> > manually\r\n> > (with or without EXPLAIN ANALYZE) it returns quickly (a few hundred\r\n> > ms). In case I am just seeing cached results, I have a list of\r\n> > other queries\r\n> > (just different parameters) that have timed out, and when I run them\r\n> > (without the limit even) the response is very timely.\r\n> > \r\n> > Any help or insight would be great.\r\n> > \r\n> > NOTE: our application is connecting to the database via JDBC and we\r\n> > are using PreparedStatements. 
I have provided full details so all\r\n> > information is available, but please let me know if any other\r\n> > information is needed – thx in advance.\r\n> > \r\n> > p306=> EXPLAIN ANALYZE SELECT * FROM ChangeHistory WHERE Category\r\n> > BETWEEN 'Employee' AND 'Employeezz' AND PrimaryKeyOfChange BETWEEN\r\n> > '312313' AND '312313!zz' ORDER BY ChgTS DESC, ChgUser DESC,\r\n> > Category\r\n> > DESC, PrimaryKeyOfChange DESC LIMIT 11;\r\n> > QUERY PLAN\r\n> > ------------------------------------------------------------------------------------------------------\r\n> > Limit (cost=33.66..33.67 rows=1 width=136) (actual\r\n> > time=0.297..0.297\r\n> > rows=11 loops=1)\r\n> > -> Sort (cost=33.66..33.67 rows=1 width=136) (actual\r\n> > time=0.297..0.297 rows=11 loops=1)\r\n> > Sort Key: chgts, chguser, category, primarykeyofchange\r\n> > Sort Method: top-N heapsort Memory: 27kB\r\n> > -> Index Scan using changehistory_idx4 on changehistory\r\n> > (cost=0.00..33.65 rows=1 width=136) (actual time=0.046..\r\n> > 0.239 rows=85 loops=1)\r\n> > Index Cond: (((primarykeyofchange)::text >= '312313'::text) AND\r\n> > ((primarykeyofchange)::text <= '312313!zz'::\r\n> > text))\r\n> > Filter: (((category)::text >= 'Employee'::text) AND\r\n> > ((category)::text\r\n> > <= 'Employeezz'::text))\r\n> > Total runtime: 0.328 ms\r\n> > (8 rows)\r\n> > \r\n> > >>> \r\n> > History this week of counts with good response times vs timeouts.\r\n> > \r\n> > | Date | Success # | Time Out # | Avg. Success Secs |\r\n> > |------------+-----------+------------+-------------------|\r\n> > | 2016-05-09 | 18 | 31 | 7.9 |\r\n> > | 2016-05-10 | 17 | 25 | 10.5 |\r\n> > | 2016-05-11 | 27 | 33 | 10.1 |\r\n> > | 2016-05-12 | 68 | 24 | 9.9 |\r\n> > \r\n> > \r\n> > >>> Sample transaction response times\r\n> > \r\n> > | Timestamp | Tran ID | Resp MS | Resp CD\r\n> > --------------------+----------------+---------+--------\r\n> > 2016-05-10 06:20:19 | ListChangeHist | 55,023 | TIMEOUT\r\n> > 2016-05-10 07:47:34 | ListChangeHist | 55,017 | TIMEOUT\r\n> > 2016-05-10 07:48:00 | ListChangeHist | 9,866 | OK\r\n> > 2016-05-10 07:48:10 | ListChangeHist | 2,327 | OK\r\n> > 2016-05-10 07:59:23 | ListChangeHist | 55,020 | TIMEOUT\r\n> > 2016-05-10 08:11:20 | ListChangeHist | 55,030 | TIMEOUT\r\n> > 2016-05-10 08:31:45 | ListChangeHist | 4,216 | OK\r\n> > 2016-05-10 08:35:09 | ListChangeHist | 7,898 | OK\r\n> > 2016-05-10 08:36:18 | ListChangeHist | 9,810 | OK\r\n> > 2016-05-10 08:36:56 | ListChangeHist | 55,027 | TIMEOUT\r\n> > 2016-05-10 08:37:33 | ListChangeHist | 46,433 | OK\r\n> > 2016-05-10 08:38:09 | ListChangeHist | 55,019 | TIMEOUT\r\n> > 2016-05-10 08:53:43 | ListChangeHist | 55,019 | TIMEOUT\r\n> > 2016-05-10 09:45:09 | ListChangeHist | 55,022 | TIMEOUT\r\n> > 2016-05-10 09:46:13 | ListChangeHist | 55,017 | TIMEOUT\r\n> > 2016-05-10 09:49:27 | ListChangeHist | 55,011 | TIMEOUT\r\n> > 2016-05-10 09:52:12 | ListChangeHist | 55,018 | TIMEOUT\r\n> > 2016-05-10 09:57:42 | ListChangeHist | 9,462 | OK\r\n> > 2016-05-10 10:05:21 | ListChangeHist | 55,016 | TIMEOUT\r\n> > 2016-05-10 10:05:29 | ListChangeHist | 136 | OK\r\n> > 2016-05-10 10:05:38 | ListChangeHist | 1,517 | OK\r\n> > \r\n> > Artifacts\r\n> > ======================\r\n> > \r\n> > $ >uname -a\r\n> > SunOS ***** 5.10 Generic_150400-30 sun4v sparc sun4v\r\n> > \r\n> > Memory : 254G phys mem, 207G free mem.\r\n> > Processors: 32 - CPU is mostly 80% free\r\n> > \r\n> > >>> \r\n> > p306=> select version();\r\n> > version\r\n> > 
---------------------------------------------------------------------------------------------------\r\n> > PostgreSQL 9.1.14 on sparc-sun-solaris2.10, compiled by gcc (GCC)\r\n> > 3.4.3 (csl-sol210-3_4-branch+sol_rpath), 64-bit\r\n> > \r\n> > >>> \r\n> > p306=> \\dt+ changehistory\r\n> > List of relations\r\n> > Schema | Name | Type | Owner | Size | Description\r\n> > --------+---------------+-------+-------+-------+-------------\r\n> > public | changehistory | table | p306 | 17 GB |\r\n> > \r\n> > >>> \r\n> > p306=> \\di+ changehistory*\r\n> > List of relations\r\n> > Schema | Name | Type | Owner | Table | Size | Description\r\n> > --------+-----------------------+-------+-------+---------------+---------+-------------\r\n> > public | changehistory_idx1 | index | p306 | changehistory | 9597\r\n> > MB\r\n> > |\r\n> > public | changehistory_idx3 | index | p306 | changehistory | 11 GB\r\n> > |\r\n> > public | changehistory_idx4 | index | p306 | changehistory | 4973\r\n> > MB\r\n> > |\r\n> > public | changehistory_pkey | index | p306 | changehistory | 2791\r\n> > MB\r\n> > |\r\n> > public | changehistory_search2 | index | p306 | changehistory |\r\n> > 9888\r\n> > MB |\r\n> > public | changehistory_search3 | index | p306 | changehistory | 10\r\n> > GB\r\n> > |\r\n> > public | changehistory_search4 | index | p306 | changehistory |\r\n> > 9240\r\n> > MB |\r\n> > public | changehistory_search5 | index | p306 | changehistory |\r\n> > 8373\r\n> > MB |\r\n> > (8 rows)\r\n> > \r\n> > \r\n> > >>> \r\n> > p306=> select count(*) from changehistory ;\r\n> > count\r\n> > ------------\r\n> > 129,185,024\r\n> > \r\n> > >>> \r\n> > Show all (filtered)\r\n> > ======================================================\r\n> > \r\n> > name | setting\r\n> > ---------------------------------+--------------------\r\n> > autovacuum | on\r\n> > autovacuum_analyze_scale_factor | 0.001\r\n> > autovacuum_analyze_threshold | 500\r\n> > autovacuum_freeze_max_age | 200000000\r\n> > autovacuum_max_workers | 5\r\n> > autovacuum_naptime | 1min\r\n> > autovacuum_vacuum_cost_delay | 0\r\n> > autovacuum_vacuum_cost_limit | -1\r\n> > autovacuum_vacuum_scale_factor | 0.001\r\n> > autovacuum_vacuum_threshold | 500\r\n> > bgwriter_delay | 200ms\r\n> > block_size | 8192\r\n> > check_function_bodies | on\r\n> > checkpoint_completion_target | 0.9\r\n> > checkpoint_segments | 256\r\n> > checkpoint_timeout | 1h\r\n> > checkpoint_warning | 30s\r\n> > client_encoding | UTF8\r\n> > commit_delay | 0\r\n> > commit_siblings | 5\r\n> > cpu_index_tuple_cost | 0.005\r\n> > cpu_operator_cost | 0.0025\r\n> > cpu_tuple_cost | 0.01\r\n> > cursor_tuple_fraction | 0.1\r\n> > deadlock_timeout | 1s\r\n> > default_statistics_target | 100\r\n> > default_transaction_deferrable | off\r\n> > default_transaction_isolation | read committed\r\n> > default_transaction_read_only | off\r\n> > default_with_oids | off\r\n> > effective_cache_size | 8GB\r\n> > from_collapse_limit | 8\r\n> > fsync | on\r\n> > full_page_writes | on\r\n> > ignore_system_indexes | off\r\n> > join_collapse_limit | 8\r\n> > krb_caseins_users | off\r\n> > lo_compat_privileges | off\r\n> > maintenance_work_mem | 1GB\r\n> > max_connections | 350\r\n> > max_files_per_process | 1000\r\n> > max_function_args | 100\r\n> > max_identifier_length | 63\r\n> > max_index_keys | 32\r\n> > max_locks_per_transaction | 64\r\n> > max_pred_locks_per_transaction | 64\r\n> > max_prepared_transactions | 0\r\n> > max_stack_depth | 2MB\r\n> > max_wal_senders | 5\r\n> > random_page_cost | 4\r\n> > segment_size | 
1GB\r\n> > seq_page_cost | 1\r\n> > server_encoding | UTF8\r\n> > server_version | 9.1.14\r\n> > shared_buffers | 2GB\r\n> > sql_inheritance | on\r\n> > statement_timeout | 0\r\n> > synchronize_seqscans | on\r\n> > synchronous_commit | on\r\n> > synchronous_standby_names |\r\n> > tcp_keepalives_count | 0\r\n> > tcp_keepalives_idle | -1\r\n> > tcp_keepalives_interval | 0\r\n> > track_activities | on\r\n> > track_activity_query_size | 1024\r\n> > track_counts | on\r\n> > track_functions | none\r\n> > transaction_deferrable | off\r\n> > transaction_isolation | read committed\r\n> > transaction_read_only | off\r\n> > transform_null_equals | off\r\n> > update_process_title | on\r\n> > vacuum_cost_delay | 0\r\n> > vacuum_cost_limit | 200\r\n> > vacuum_cost_page_dirty | 20\r\n> > vacuum_cost_page_hit | 1\r\n> > vacuum_cost_page_miss | 10\r\n> > vacuum_defer_cleanup_age | 0\r\n> > vacuum_freeze_min_age | 50000000\r\n> > vacuum_freeze_table_age | 150000000\r\n> > \r\n> \r\n> \r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 May 2016 18:24:18 +0000", "msg_from": "John Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database transaction with intermittent slow responses" } ]
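The reply above points at pg_locks and connection counts as the first things to watch while the slow ListChangeHist transactions are in flight. The queries below are only a rough sketch of that check, assuming the 9.1 catalog layout visible in this thread (procpid, waiting and current_query in pg_stat_activity) and the changehistory table named above; swap in another relation name to watch a different table.

    -- Backends holding or waiting for locks on changehistory, with what
    -- each backend is currently running (PostgreSQL 9.1 column names).
    SELECT l.pid,
           l.mode,
           l.granted,
           a.waiting,
           a.query_start,
           a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE l.relation = 'changehistory'::regclass
     ORDER BY l.granted, a.query_start;

    -- Rough connection count, to compare against max_connections
    -- (350 in the settings listed above).
    SELECT count(*) AS backends FROM pg_stat_activity;

If the timed-out ListChangeHist statements never show up in current_query at all, that would support the idea above that they are being held up before they ever reach the server.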
[
  {
    "msg_contents": "Hi,\nI have defined a function in the public schema which allows executing a set\nof SQL statements on every schema:\n\nCREATE OR REPLACE FUNCTION \"public\".\"multiddl\"(\"sql\" text)\n  RETURNS \"pg_catalog\".\"bool\" AS $BODY$DECLARE\n    r record;\nBEGIN\n  FOR r IN\n        SELECT schema_name\n        FROM information_schema.schemata\n        WHERE schema_name NOT LIKE 'pg_%' AND\n              schema_name NOT IN('information_schema')\n    LOOP\n      EXECUTE 'SET search_path TO ' || r.schema_name || ', public';\n      RAISE NOTICE 'Executing for %', r.schema_name;\n      EXECUTE sql;\n  END LOOP;\n    RETURN 't';\nEND\n$BODY$\n  LANGUAGE 'plpgsql' VOLATILE COST 100\n;\n\nThen I have executed this statement:\n\nSELECT * FROM public.multiddl($$\n\nCREATE TYPE enum_report_type AS ENUM ('A', 'B');\nCREATE TABLE \"report_layout\" (\n    \"id\" serial,\n    \"report_type\" enum_report_type NOT NULL,\n    \"layout_name\" varchar(255) NOT NULL,\n    \"report_config\" jsonb,\n    \"created_by\" integer,\n    \"id_cliente\" integer,\n    \"builder\" varchar(255),\n    \"can_modify\" bool,\n    \"can_delete\" bool,\n    \"is_default\" bool,\n    \"created_on\" timestamp NULL,\n    \"modified_on\" timestamp NULL,\n    \"modified_by\" integer,\n    CONSTRAINT \"fk_clienti_report_layout\" FOREIGN KEY (\"id_cliente\")\nREFERENCES \"public\".\"customer\" (\"id\"),\n    CONSTRAINT \"fk_utenti_report_layout_create\" FOREIGN KEY (\"created_by\")\nREFERENCES \"user\" (\"id\"),\n    CONSTRAINT \"fk_utenti_report_layout_modify\" FOREIGN KEY (\"modified_by\")\nREFERENCES \"user\" (\"id\")\n)\nWITH (OIDS=FALSE);\nALTER TABLE report ADD COLUMN id_layout integer;\n$$);\n\nAll locks derived from this statement seem to be related to the public views,\nwhich are commodity views that tie all the schemata together. Example of such a view:\n\nCREATE OR REPLACE VIEW \"public\".\"v_contacts\" AS\n SELECT 'public'::text AS schema,\n    [FIELDS]\nUNION\n SELECT 'customer2'::text AS schema,\n   [FIELDS]\n   FROM ((((customer c\n     JOIN customer2.table1 g ON ...\n     JOIN customer2.table2 s ON ...\n     JOIN customer2.reparti r ON ...\n     JOIN customer2.contatto co ON ...\n\nI cannot understand why every query that uses a union view like the one\nmentioned before gets stuck.\nThanks for any advice.\n\n-- \n\n*Christian Castelli*\nskype: christrack\n",
    "msg_date": "Wed, 18 May 2016 13:10:16 +0200",
    "msg_from": "Christian Castelli <[email protected]>",
    "msg_from_op": true,
    "msg_subject": "Locks when launching function across schemata"
  }
]