[
{
"msg_contents": "Hi\n\nThis will be a rather lengthy post, just to give the full (I hope) picture. We're using Zabbix for monitoring and I'm having problems\nunderstanding why the deletion of rows in the events table is so slow.\n\nZabbix: 4.2 (never mind the name of the db - it is 4.2)\nnew values per second: ~400\nhosts: ~600\nitems: ~45000\n\nOS: CentOS Linux release 7.6.1810 (Core)\nPostgresql was installed from the yum repo on postgresql.org\n\nzabbix_34=> select version();\n version\n---------------------------------------------------------------------------------------------------------\nPostgreSQL 10.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit\n(1 row)\n\nThe database is analyzed + vacuumed nightly. The server runs Zabbix and the database, has 16 GB memory, 4 vCPUs (modern hardware).\nSome parameters:\n\nshared_buffers = 3GB\nwork_mem = 10MB (I also tested with work_mem = 128MB - no difference)\neffective_cache_size = 6 GB\neffective_io_concurrency = 40\ncheckpoint_timeout = 5 min (default)\nmax_wal_size = 1 GB (default)\ncheckpoint_completion_target = 0.8\n\npg_wal is already on a separate device.\n\nevents table: ~25 million rows / 2.9 GB\nevent_recovery table: ~12 million rows / 550 MB\nalerts table: ~600000 rows / 530 MB\n\nGenerally the database is quite snappy and shows no indication of problems. But now I've seen that housekeeping of events is\nvery slow - a single (normally hourly) run can take more than one day to finish, so events keep stacking up in the table. A typical slow\ndelete statement, from the postgres log:\n\npostgresql-10-20190717-031404.log:2019-07-17 03:37:43 CEST [80965]: [4-1] user=zabbix,db=zabbix_34,app=[unknown],client=[local]: LOG: duration: 27298798.930 ms statement: delete from events where (eventid between 5580621 and 5580681 or eventid between 5580689 and 5580762 or eventid between 5580769 and 5580844 or eventid between 5580851 and 5580867 or eventid between 5580869 and 5580926 or eventid between 5580933 and 5580949 or eventid between 5580963 and 5581024\n--- 8< --- a lot of similar eventids snipped away -----\nor eventid between 5586799 and 5586839 or eventid in (5581385,5581389,5581561,5581563,5581564,5581580,5 581582,5581584,5581585,5581635))\n\nI've analyzed the deletion of a single row in events. 
First, some table information:\n\n\nzabbix_34=> \\d events\n Table \"zabbix.events\"\n Column | Type | Collation | Nullable | Default\n--------------+-------------------------+-----------+----------+-----------------------\neventid | numeric | | not null |\nsource | bigint | | not null | '0'::bigint\nobject | bigint | | not null | '0'::bigint\nobjectid | numeric | | not null | '0'::numeric\nclock | bigint | | not null | '0'::bigint\nvalue | bigint | | not null | '0'::bigint\nacknowledged | bigint | | not null | '0'::bigint\nns | bigint | | not null | '0'::bigint\nname | character varying(2048) | | not null | ''::character varying\nseverity | integer | | not null | 0\nIndexes:\n \"idx_29337_primary\" PRIMARY KEY, btree (eventid)\n \"events_1\" btree (source, object, objectid, clock)\n \"events_2\" btree (source, object, clock)\n \"events_clk_3\" btree (clock)\nReferenced by:\n TABLE \"acknowledges\" CONSTRAINT \"c_acknowledges_2\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON UPDATE RESTRICT ON DELETE CASCADE\n TABLE \"alerts\" CONSTRAINT \"c_alerts_2\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON UPDATE RESTRICT ON DELETE CASCADE\n TABLE \"alerts\" CONSTRAINT \"c_alerts_5\" FOREIGN KEY (p_eventid) REFERENCES events(eventid) ON DELETE CASCADE\n TABLE \"event_recovery\" CONSTRAINT \"c_event_recovery_1\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON DELETE CASCADE\n TABLE \"event_recovery\" CONSTRAINT \"c_event_recovery_2\" FOREIGN KEY (r_eventid) REFERENCES events(eventid) ON DELETE CASCADE\n TABLE \"event_recovery\" CONSTRAINT \"c_event_recovery_3\" FOREIGN KEY (c_eventid) REFERENCES events(eventid) ON DELETE CASCADE\n TABLE \"event_suppress\" CONSTRAINT \"c_event_suppress_1\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON DELETE CASCADE\n TABLE \"event_tag\" CONSTRAINT \"c_event_tag_1\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON DELETE CASCADE\n TABLE \"problem\" CONSTRAINT \"c_problem_1\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON DELETE CASCADE\n TABLE \"problem\" CONSTRAINT \"c_problem_2\" FOREIGN KEY (r_eventid) REFERENCES events(eventid) ON DELETE CASCADE\n\n\n\nzabbix_34=> \\d event_recovery\n Table \"zabbix.event_recovery\"\n Column | Type | Collation | Nullable | Default\n---------------+--------+-----------+----------+---------\neventid | bigint | | not null |\nr_eventid | bigint | | not null |\nc_eventid | bigint | | |\ncorrelationid | bigint | | |\nuserid | bigint | | |\nIndexes:\n \"event_recovery_pkey\" PRIMARY KEY, btree (eventid)\n \"event_recovery_1\" btree (r_eventid)\n \"event_recovery_2\" btree (c_eventid)\nForeign-key constraints:\n \"c_event_recovery_1\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON DELETE CASCADE\n \"c_event_recovery_2\" FOREIGN KEY (r_eventid) REFERENCES events(eventid) ON DELETE CASCADE\n \"c_event_recovery_3\" FOREIGN KEY (c_eventid) REFERENCES events(eventid) ON DELETE CASCADE\n\n\n\nzabbix_34=> \\d alerts\n Table \"zabbix.alerts\"\n Column | Type | Collation | Nullable | Default\n---------------+-------------------------+-----------+----------+-----------------------\nalertid | numeric | | not null |\nactionid | numeric | | not null |\neventid | numeric | | not null |\nuserid | numeric | | |\nclock | bigint | | not null | '0'::bigint\nmediatypeid | numeric | | |\nsendto | character varying(1024) | | not null | ''::character varying\nsubject | character varying(255) | | not null | ''::character varying\nmessage | text | | not null | ''::text\nstatus | bigint | | not null | '0'::bigint\nretries | 
bigint | | not null | '0'::bigint\nerror | character varying(2048) | | not null | ''::character varying\nesc_step | bigint | | not null | '0'::bigint\nalerttype | bigint | | not null | '0'::bigint\np_eventid | bigint | | |\nacknowledgeid | bigint | | |\nIndexes:\n \"idx_29120_primary\" PRIMARY KEY, btree (alertid)\n \"alerts_1\" btree (actionid)\n \"alerts_2\" btree (clock)\n \"alerts_3\" btree (eventid)\n \"alerts_4\" btree (status)\n \"alerts_5\" btree (mediatypeid)\n \"alerts_6\" btree (userid)\n \"alerts_7\" btree (p_eventid)\nForeign-key constraints:\n \"c_alerts_1\" FOREIGN KEY (actionid) REFERENCES actions(actionid) ON UPDATE RESTRICT ON DELETE CASCADE\n \"c_alerts_2\" FOREIGN KEY (eventid) REFERENCES events(eventid) ON UPDATE RESTRICT ON DELETE CASCADE\n \"c_alerts_3\" FOREIGN KEY (userid) REFERENCES users(userid) ON UPDATE RESTRICT ON DELETE CASCADE\n \"c_alerts_4\" FOREIGN KEY (mediatypeid) REFERENCES media_type(mediatypeid) ON UPDATE RESTRICT ON DELETE CASCADE\n \"c_alerts_5\" FOREIGN KEY (p_eventid) REFERENCES events(eventid) ON DELETE CASCADE\n \"c_alerts_6\" FOREIGN KEY (acknowledgeid) REFERENCES acknowledges(acknowledgeid) ON DELETE CASCADE\n\n\nLet's look at what's in the tables for event 7123123:\n\nzabbix_34=> select * from events where eventid=7123123;\neventid | source | object | objectid | clock | value | acknowledged | ns | name | severity\n---------+--------+--------+----------+------------+-------+--------------+---------+--------------------------------------+----------\n7123123 | 3 | 0 | 27562 | 1525264196 | 1 | 0 | 1980875 | Cannot calculate trigger expression. | 0\n(1 row)\n\nzabbix_34=> select * from event_recovery where eventid=7123123;\neventid | r_eventid | c_eventid | correlationid | userid\n---------+-----------+-----------+---------------+--------\n7123123 | 7124371 | | |\n(1 row)\n\nzabbix_34=> select * from alerts where eventid=7123123;\nalertid | actionid | eventid | userid | clock | mediatypeid | sendto | subject | message | status | retries | error | esc_step | aler\nttype | p_eventid | acknowledgeid\n---------+----------+---------+--------+-------+-------------+--------+---------+---------+--------+---------+-------+----------+-----\n------+-----------+---------------\n(0 rows)\n\n\nAll these queries execute well below 1 ms, using indexes.\n\nLet's delete one row. See explain results here: https://explain.depesz.com/s/aycf . 5 seconds to delete a single row, wow!\nThis shows that it is the foreign key constraints on event_recovery and alerts that take a lot of time.\nBut why? I far as I can see, the delete is fully CPU bound during execution.\n\nDeleting the corresponding row directly from event_recovery or alerts executes in less than 0.1 ms.\n\nAny ideas?\n\nI've observed that alerts and event_recovery tables both have more than one foreign key that references events, if that matters.\n\nRegards\nKristian Ejvind\n\n\n\n\n\n[cid:[email protected]]\nKristian Ejvind\nLinux System Administrator\nIT Operations | Technical Operations\n\n\n\n\nResurs Bank\nEkslingan 8\nBox 222 09, SE-25467 Helsingborg\n\n\n\n\nMobil: +46 728571483\nVäxel: +46 42 38 20 00\nE-post: [email protected]<mailto:[email protected]>\nWebb: www.resursbank.se<http://www.resursbank.se>",
"msg_date": "Tue, 23 Jul 2019 08:07:55 +0000",
"msg_from": "Kristian Ejvind <[email protected]>",
"msg_from_op": true,
"msg_subject": "zabbix on postgresql - very slow delete of events"
},
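A hedged way to reproduce the timing for one row without losing data (eventid 7123123 is the row Kristian examines above): EXPLAIN ANALYZE on a DELETE actually executes it and reports the time spent in each foreign-key constraint trigger separately, and the ROLLBACK keeps the row.

    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
        DELETE FROM zabbix.events WHERE eventid = 7123123;
    -- the plan output ends with per-constraint lines such as
    -- "Trigger for constraint c_event_recovery_1: time=... calls=1"
    ROLLBACK;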
{
"msg_contents": "On Tue, Jul 23, 2019 at 08:07:55AM +0000, Kristian Ejvind wrote:\n> Hi\n> \n> This will be a rather lengthy post, just to give the full (I hope) picture. We're using Zabbix for monitoring and I'm having problems\n> understanding why the deletion of rows in the events table is so slow.\n> \n> Zabbix: 4.2 (never mind the name of the db - it is 4.2)\n> new values per second: ~400\n> hosts: ~600\n> items: ~45000\n> \n\nHi Kristian,\n\nTime series databases like Zabbix work poorly with the Housekeeper\nservice. We had many similar sorts of problems as our Zabbix usage\ngrew. Once we partitioned the big tables, turned off the Housekeeper,\nand cleaned up by dropping partitions instead everything worked much,\nmuch, much better. When we started using partitioning, we used the\nold inheiritance style. Now you can use the native partitioning.\n\nRegards,\nKen\n\n\n",
"msg_date": "Tue, 23 Jul 2019 07:58:03 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
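A hedged sketch of the approach Ken describes, using PostgreSQL 10 native range partitioning on the clock column. The DDL is illustrative rather than the exact Zabbix schema, and in PostgreSQL 10 the partitioned parent cannot carry a primary key or be the target of foreign keys, which is why events is harder to migrate than history, as Kristian notes below.

    CREATE TABLE zabbix.history_p (
        itemid bigint        NOT NULL,
        clock  integer       NOT NULL DEFAULT 0,
        value  numeric(16,4) NOT NULL DEFAULT 0,
        ns     integer       NOT NULL DEFAULT 0
    ) PARTITION BY RANGE (clock);

    CREATE TABLE zabbix.history_p_2019_07 PARTITION OF zabbix.history_p
        FOR VALUES FROM (1561939200) TO (1564617600);  -- 2019-07-01 .. 2019-08-01 UTC

    -- retention then becomes a metadata-only operation instead of millions of DELETEs:
    DROP TABLE zabbix.history_p_2019_07;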
{
"msg_contents": "Thanks Kenneth. In fact we've already partitioned the largest history* and trends* tables\nand that has been running fine for a year. Performance was vastly improved. But since you\ncan't have a unique index on a partitioned table in postgres 10, we haven't worked on that.\n\nRegards\nKristian\n\n\n?On 2019-07-23, 14:58, \"Kenneth Marshall\" <[email protected]> wrote:\n\n Hi Kristian,\n\n Time series databases like Zabbix work poorly with the Housekeeper\n service. We had many similar sorts of problems as our Zabbix usage\n grew. Once we partitioned the big tables, turned off the Housekeeper,\n and cleaned up by dropping partitions instead everything worked much,\n much, much better. When we started using partitioning, we used the\n old inheiritance style. Now you can use the native partitioning.\n\n Regards,\n Ken\n\n\n\n\n\n\nResurs Bank AB\nKristian Ejvind\nLinux System Administrator\nIT Operations | Technical Operations\n\nEkslingan 8\nBox 222 09, SE-25467 Helsingborg\n\nDirekt Tfn:\nMobil: +46 728571483\nVxl: +46 42 382000\nFax:\nE-post: [email protected]\nWebb: http://www.resursbank.se\n\n\n\n",
"msg_date": "Tue, 23 Jul 2019 13:41:53 +0000",
"msg_from": "Kristian Ejvind <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 01:41:53PM +0000, Kristian Ejvind wrote:\n> Thanks Kenneth. In fact we've already partitioned the largest history* and trends* tables\n> and that has been running fine for a year. Performance was vastly improved. But since you\n> can't have a unique index on a partitioned table in postgres 10, we haven't worked on that.\n> \n> Regards\n> Kristian\n\nHi Kristian,\n\nWhy are you not partitioning the events and alerts tables as well? That\nwould eliminate this problem and you already have the infrastructure in\nplace to support the management since you are using it for the history\nand trends tables.\n\nRegards,\nKen\n\n\n",
"msg_date": "Tue, 23 Jul 2019 09:33:00 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
{
"msg_contents": "Hi.\n\nWell, the events table has both a primary key and foreign keys referencing it, which is not possible\non a partitioned table in postgresql 10. How did you work around this issue?\n\nOn the other hand, if we can get the deletion of rows from the events table run at normal speed, I\ncan't imagine we would have a problem with it in a long time. After all, although our Zabbix installation\ndefinitely is larger than \"small\", it's still far from \"large\".\n\nI think I would need assistance with debugging why postgresql behaves like it does.\nIs there a defect with deleting data from a table that has multiple foreign keys referencing it from a certain table?\nIs there a problem with the query optimizer that chooses the wrong plan when working on the foreign key constraints?\nHow do I inspect how the db works on the deletion of rows from the referencing tables?\n\nRegards\nKristian\n\n\n\n?On 2019-07-23, 16:33, \"Kenneth Marshall\" <[email protected]> wrote:\n\n On Tue, Jul 23, 2019 at 01:41:53PM +0000, Kristian Ejvind wrote:\n > Thanks Kenneth. In fact we've already partitioned the largest history* and trends* tables\n > and that has been running fine for a year. Performance was vastly improved. But since you\n > can't have a unique index on a partitioned table in postgres 10, we haven't worked on that.\n >\n > Regards\n > Kristian\n\n Hi Kristian,\n\n Why are you not partitioning the events and alerts tables as well? That\n would eliminate this problem and you already have the infrastructure in\n place to support the management since you are using it for the history\n and trends tables.\n\n Regards,\n Ken\n\n\n\n\n\n\nResurs Bank AB\nKristian Ejvind\nLinux System Administrator\nIT Operations | Technical Operations\n\nEkslingan 8\nBox 222 09, SE-25467 Helsingborg\n\nDirekt Tfn:\nMobil: +46 728571483\nVxl: +46 42 382000\nFax:\nE-post: [email protected]\nWebb: http://www.resursbank.se\n\n\n\n",
"msg_date": "Wed, 24 Jul 2019 08:11:59 +0000",
"msg_from": "Kristian Ejvind <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
{
"msg_contents": "Hi Kristian,\n\nIf you look for explain analyze results for delete,\nyou will see that 99% of time query spent on the foreign key triggers\nchecks.\nIn the same time the database have indexes on foreign key side in place.\n\n\nI recommend try this:\n\n\\timing on\nBEGIN;\ndelete from zabbix.events where eventid = [some testing id];\nselect * from pg_stat_xact_user_tables where seq_scan>0 or idx_scan>0 order\nby seq_scan+idx_scan desc;\nABORT;\n\nAnd provide result of the last query and how long delete runs.\nIt might help us understand whats going on.\n\nCurrently I have 3 ideas:\n1)very very slow and overloaded IO subsystem\n2)a lot of stuff being delete by ON DELETE CASCADE\n3)some locking prevent foreign key checks run fast\n\n\n\nOn Wed, Jul 24, 2019 at 11:12 AM Kristian Ejvind <[email protected]>\nwrote:\n\n> Hi.\n>\n> Well, the events table has both a primary key and foreign keys referencing\n> it, which is not possible\n> on a partitioned table in postgresql 10. How did you work around this\n> issue?\n>\n> On the other hand, if we can get the deletion of rows from the events\n> table run at normal speed, I\n> can't imagine we would have a problem with it in a long time. After all,\n> although our Zabbix installation\n> definitely is larger than \"small\", it's still far from \"large\".\n>\n> I think I would need assistance with debugging why postgresql behaves like\n> it does.\n> Is there a defect with deleting data from a table that has multiple\n> foreign keys referencing it from a certain table?\n> Is there a problem with the query optimizer that chooses the wrong plan\n> when working on the foreign key constraints?\n> How do I inspect how the db works on the deletion of rows from the\n> referencing tables?\n>\n> Regards\n> Kristian\n>\n>\n>\n> ?On 2019-07-23, 16:33, \"Kenneth Marshall\" <[email protected]> wrote:\n>\n> On Tue, Jul 23, 2019 at 01:41:53PM +0000, Kristian Ejvind wrote:\n> > Thanks Kenneth. In fact we've already partitioned the largest\n> history* and trends* tables\n> > and that has been running fine for a year. Performance was vastly\n> improved. But since you\n> > can't have a unique index on a partitioned table in postgres 10, we\n> haven't worked on that.\n> >\n> > Regards\n> > Kristian\n>\n> Hi Kristian,\n>\n> Why are you not partitioning the events and alerts tables as well? 
That\n> would eliminate this problem and you already have the infrastructure in\n> place to support the management since you are using it for the history\n> and trends tables.\n>\n> Regards,\n> Ken\n>\n>\n>\n>\n>\n>\n> Resurs Bank AB\n> Kristian Ejvind\n> Linux System Administrator\n> IT Operations | Technical Operations\n>\n> Ekslingan 8\n> Box 222 09, SE-25467 Helsingborg\n>\n> Direkt Tfn:\n> Mobil: +46 728571483\n> Vxl: +46 42 382000\n> Fax:\n> E-post: [email protected]\n> Webb: http://www.resursbank.se\n>\n>\n>\n>\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone RU: +7 985 433 0000\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\n\n\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно\nкогда я так делаю ещё раз?\"\n\nHi Kristian,If you look for explain analyze results for delete,you will see that 99% of time query spent on the foreign key triggers checks.In the same time the database have indexes on foreign key side in place.I recommend try this:\\timing onBEGIN;delete from zabbix.events where eventid = [some testing id];select * from pg_stat_xact_user_tables where seq_scan>0 or idx_scan>0 order by seq_scan+idx_scan desc;ABORT;And provide result of the last query and how long delete runs.It might help us understand whats going on.Currently I have 3 ideas:1)very very slow and overloaded IO subsystem2)a lot of stuff being delete by ON DELETE CASCADE 3)some locking prevent foreign key checks run fastOn Wed, Jul 24, 2019 at 11:12 AM Kristian Ejvind <[email protected]> wrote:Hi.\n\nWell, the events table has both a primary key and foreign keys referencing it, which is not possible\non a partitioned table in postgresql 10. How did you work around this issue?\n\nOn the other hand, if we can get the deletion of rows from the events table run at normal speed, I\ncan't imagine we would have a problem with it in a long time. After all, although our Zabbix installation\ndefinitely is larger than \"small\", it's still far from \"large\".\n\nI think I would need assistance with debugging why postgresql behaves like it does.\nIs there a defect with deleting data from a table that has multiple foreign keys referencing it from a certain table?\nIs there a problem with the query optimizer that chooses the wrong plan when working on the foreign key constraints?\nHow do I inspect how the db works on the deletion of rows from the referencing tables?\n\nRegards\nKristian\n\n\n\n?On 2019-07-23, 16:33, \"Kenneth Marshall\" <[email protected]> wrote:\n\n On Tue, Jul 23, 2019 at 01:41:53PM +0000, Kristian Ejvind wrote:\n > Thanks Kenneth. In fact we've already partitioned the largest history* and trends* tables\n > and that has been running fine for a year. Performance was vastly improved. But since you\n > can't have a unique index on a partitioned table in postgres 10, we haven't worked on that.\n >\n > Regards\n > Kristian\n\n Hi Kristian,\n\n Why are you not partitioning the events and alerts tables as well? 
That\n would eliminate this problem and you already have the infrastructure in\n place to support the management since you are using it for the history\n and trends tables.\n\n Regards,\n Ken\n\n\n\n\n\n\nResurs Bank AB\nKristian Ejvind\nLinux System Administrator\nIT Operations | Technical Operations\n\nEkslingan 8\nBox 222 09, SE-25467 Helsingborg\n\nDirekt Tfn:\nMobil: +46 728571483\nVxl: +46 42 382000\nFax:\nE-post: [email protected]\nWebb: http://www.resursbank.se\n\n\n\n-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone RU: +7 985 433 0000Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.boguk\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно когда я так делаю ещё раз?\"",
"msg_date": "Wed, 24 Jul 2019 16:54:26 +0300",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
{
"msg_contents": ">\n>\n> All these queries execute well below 1 ms, using indexes.\n>\n>\n>\n> Let's delete one row. See explain results here:\n> https://explain.depesz.com/s/aycf . 5 seconds to delete a single row,\n> wow!\n>\n> This shows that it is the foreign key constraints on event_recovery and\n> alerts that take a lot of time.\n>\n> But why? I far as I can see, the delete is fully CPU bound during\n> execution.\n>\n>\n>\n> Deleting the corresponding row directly from event_recovery or alerts\n> executes in less than 0.1 ms.\n>\n>\n>\n> Any ideas?\n>\n>\n>\n> I've observed that alerts and event_recovery tables both have more than\n> one foreign key that references events, if that matters.\n>\n>\n>\nHi Kristian,\n\nAfter comparing structure of zabbix tables with same in my zabbix\ninstallation I found one very weird difference.\nWhy type of events.eventid had been changed from default bigint to numeric?\n\nI suspect that the difference between events.eventid (numeric) type\nand event_recovery.*_eventid (bigint) types might lead to inability of use\nindex during foreign key checks.\nAnyway it will be clearly visible on the pg_stat_xact_user_tables results\n(I now expect to see 3 sequential scan on event_recovery and may be on some\nother tables as well).\n\nKind Regards,\nMaxim\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone RU: +7 985 433 0000\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\n\n\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно\nкогда я так делаю ещё раз?\"",
"msg_date": "Wed, 24 Jul 2019 17:05:34 +0300",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
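A hedged catalog query to spot the kind of mismatch Maxim suspects: it lists every foreign-key column whose type differs from the type of the column it references (only the system catalogs are used, nothing Zabbix-specific is assumed).

    SELECT con.conname,
           con.conrelid::regclass  AS fk_table,
           fa.attname              AS fk_column,
           format_type(fa.atttypid, fa.atttypmod)  AS fk_type,
           con.confrelid::regclass AS referenced_table,
           pa.attname              AS referenced_column,
           format_type(pa.atttypid, pa.atttypmod)  AS referenced_type
    FROM pg_constraint con
    CROSS JOIN LATERAL unnest(con.conkey, con.confkey) AS k(fk_attnum, ref_attnum)
    JOIN pg_attribute fa ON fa.attrelid = con.conrelid  AND fa.attnum = k.fk_attnum
    JOIN pg_attribute pa ON pa.attrelid = con.confrelid AND pa.attnum = k.ref_attnum
    WHERE con.contype = 'f'
      AND fa.atttypid <> pa.atttypid;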
{
"msg_contents": "Hi Maxim\n\nThanks for your advice, and let me start with your second email, which I'll copy here:\n\n=====\nHi Kristian,\n\nAfter comparing structure of zabbix tables with same in my zabbix installation I found one very weird difference.\nWhy type of events.eventid had been changed from default bigint to numeric?\n\nI suspect that the difference between events.eventid (numeric) type and event_recovery.*_eventid (bigint) types might lead to inability of use index during foreign key checks.\nAnyway it will be clearly visible on the pg_stat_xact_user_tables results (I now expect to see 3 sequential scan on event_recovery and may be on some other tables as well).\n\nKind Regards,\nMaxim\n=====\n\nWell spotted! On closer examination it seems that data types are wrong in several places. I suspect that this comes\nfrom the time when our Zabbix ran on a MySQL database, which was converted over to PostgreSQL a few years\nago. I agree this discrepancy is suspicious and I will continue to examine it.\n\nRegarding your ideas in the email below, I can say that 1) is not valid, disk latency is in the range of a few ms.\nThis is the output from your recommended query, which seems to verify your suspicions.\n\nzabbix_34=# begin; delete from zabbix.events where eventid = 7123123; select * from pg_stat_xact_user_tables where seq_scan>0 or idx_scan>0 order by seq_scan+idx_scan desc; rollback;\nTime: 0.113 ms\nTime: 4798.189 ms (00:04.798)\nrelid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd\n--------+------------+----------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------\n 41940 | zabbix | event_recovery | 3 | 35495224 | 0 | 0 | 0 | 0 | 1 | 0\n 41675 | zabbix | alerts | 1 | 544966 | 1 | 0 | 0 | 0 | 0 | 0\n 42573 | zabbix | problem | 2 | 13896 | 0 | 0 | 0 | 0 | 0 | 0\n 41943 | zabbix | event_tag | 1 | 22004 | 0 | 0 | 0 | 0 | 0 | 0\n 41649 | zabbix | acknowledges | 1 | 47 | 0 | 0 | 0 | 0 | 0 | 0\n 41951 | zabbix | events | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0\n260215 | zabbix | event_suppress | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n(7 rows)\n\nTime: 2.857 ms\nTime: 0.162 ms\n\nRegards\nKristian\n\n\n\n\n\n\n\n[cid:[email protected]]\nKristian Ejvind\nLinux System Administrator\nIT Operations | Technical Operations\n\n\n\n\nResurs Bank\nEkslingan 8\nBox 222 09, SE-25467 Helsingborg\n\n\n\n\nMobil: +46 728571483\nVäxel: +46 42 38 20 00\nE-post: [email protected]<mailto:[email protected]>\nWebb: www.resursbank.se<http://www.resursbank.se>\n\n\n\nFrom: Maxim Boguk <[email protected]>\nDate: Wednesday, 24 July 2019 at 15:55\nTo: Kristian Ejvind <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSubject: Re: zabbix on postgresql - very slow delete of events\n\nHi Kristian,\n\nIf you look for explain analyze results for delete,\nyou will see that 99% of time query spent on the foreign key triggers checks.\nIn the same time the database have indexes on foreign key side in place.\n\n\nI recommend try this:\n\n\\timing on\nBEGIN;\ndelete from zabbix.events where eventid = [some testing id];\nselect * from pg_stat_xact_user_tables where seq_scan>0 or idx_scan>0 order by seq_scan+idx_scan desc;\nABORT;\n\nAnd provide result of the last query and how long delete runs.\nIt might help us understand whats going on.\n\nCurrently I have 3 ideas:\n1)very very slow and overloaded IO subsystem\n2)a lot of stuff being delete by ON DELETE CASCADE\n3)some locking prevent 
foreign key checks run fast\n\n\n\nOn Wed, Jul 24, 2019 at 11:12 AM Kristian Ejvind <[email protected]<mailto:[email protected]>> wrote:\nHi.\n\nWell, the events table has both a primary key and foreign keys referencing it, which is not possible\non a partitioned table in postgresql 10. How did you work around this issue?\n\nOn the other hand, if we can get the deletion of rows from the events table run at normal speed, I\ncan't imagine we would have a problem with it in a long time. After all, although our Zabbix installation\ndefinitely is larger than \"small\", it's still far from \"large\".\n\nI think I would need assistance with debugging why postgresql behaves like it does.\nIs there a defect with deleting data from a table that has multiple foreign keys referencing it from a certain table?\nIs there a problem with the query optimizer that chooses the wrong plan when working on the foreign key constraints?\nHow do I inspect how the db works on the deletion of rows from the referencing tables?\n\nRegards\nKristian\n\n\n\n?On 2019-07-23, 16:33, \"Kenneth Marshall\" <[email protected]<mailto:[email protected]>> wrote:\n\n On Tue, Jul 23, 2019 at 01:41:53PM +0000, Kristian Ejvind wrote:\n > Thanks Kenneth. In fact we've already partitioned the largest history* and trends* tables\n > and that has been running fine for a year. Performance was vastly improved. But since you\n > can't have a unique index on a partitioned table in postgres 10, we haven't worked on that.\n >\n > Regards\n > Kristian\n\n Hi Kristian,\n\n Why are you not partitioning the events and alerts tables as well? That\n would eliminate this problem and you already have the infrastructure in\n place to support the management since you are using it for the history\n and trends tables.\n\n Regards,\n Ken\n\n\n\n\n\n\nResurs Bank AB\nKristian Ejvind\nLinux System Administrator\nIT Operations | Technical Operations\n\nEkslingan 8\nBox 222 09, SE-25467 Helsingborg\n\nDirekt Tfn:\nMobil: +46 728571483\nVxl: +46 42 382000\nFax:\nE-post: [email protected]<mailto:[email protected]>\nWebb: http://www.resursbank.se\n\n\n\n\n--\nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/<https://smex12-5-en-ctp.trendmicro.com:443/wis/clicktime/v1/query?url=https%3a%2f%2fdataegret.com&umid=16afbce9-6327-4b7f-83a0-8fb86924fc63&auth=daed959355609b907128d19d56c675829c94a38e-295bb5eec00c0cdf017530b536ec3b62b3f86768>\n\nPhone RU: +7 985 433 0000\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\n\n\"??????, ?? ??? ?????????? ??? ?? ??????, ?? ?????? ??? ??-???????? ?????? ????? ? ??? ????? ??? ????\"",
"msg_date": "Wed, 24 Jul 2019 15:12:18 +0000",
"msg_from": "Kristian Ejvind <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
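The seq_scan counters above match a reproducible planner gotcha: comparing a bigint column against a numeric value forces the column side to be cast to numeric, so the plain bigint btree index cannot be used, which is effectively what the FK checks do while events.eventid is numeric. A hedged two-line demonstration:

    EXPLAIN SELECT * FROM zabbix.event_recovery WHERE eventid = 7123123::numeric;  -- sequential scan expected
    EXPLAIN SELECT * FROM zabbix.event_recovery WHERE eventid = 7123123::bigint;   -- index scan expected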
{
"msg_contents": "On Wed, Jul 24, 2019 at 6:12 PM Kristian Ejvind <[email protected]>\nwrote:\n\n> Hi Maxim\n>\n>\n>\n> Thanks for your advice, and let me start with your second email, which\n> I'll copy here:\n>\n>\n>\n> =====\n>\n> Hi Kristian,\n>\n>\n>\n> After comparing structure of zabbix tables with same in my zabbix\n> installation I found one very weird difference.\n>\n> Why type of events.eventid had been changed from default bigint to numeric?\n>\n>\n>\n> I suspect that the difference between events.eventid (numeric) type\n> and event_recovery.*_eventid (bigint) types might lead to inability of use\n> index during foreign key checks.\n>\n> Anyway it will be clearly visible on the pg_stat_xact_user_tables results\n> (I now expect to see 3 sequential scan on event_recovery and may be on some\n> other tables as well).\n>\n>\n>\n> Kind Regards,\n>\n> Maxim\n>\n> =====\n>\n>\n>\n> Well spotted! On closer examination it seems that data types are wrong in\n> several places. I suspect that this comes\n>\n> from the time when our Zabbix ran on a MySQL database, which was converted\n> over to PostgreSQL a few years\n>\n> ago. I agree this discrepancy is suspicious and I will continue to examine\n> it.\n>\n>\n>\n> Regarding your ideas in the email below, I can say that 1) is not valid,\n> disk latency is in the range of a few ms.\n>\n> This is the output from your recommended query, which seems to verify your\n> suspicions.\n>\n>\n>\n> zabbix_34=# begin; delete from zabbix.events where eventid = 7123123;\n> select * from pg_stat_xact_user_tables where seq_scan>0 or idx_scan>0 order\n> by seq_scan+idx_scan desc; rollback;\n>\n> Time: 0.113 ms\n>\n> Time: 4798.189 ms (00:04.798)\n>\n> relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan\n> | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd\n>\n>\n> --------+------------+----------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------\n>\n> 41940 | zabbix | event_recovery | 3 | 35495224 | 0\n> | 0 | 0 | 0 | 1 | 0\n>\n> 41675 | zabbix | alerts | 1 | 544966 | 1\n> | 0 | 0 | 0 | 0 | 0\n>\n> 42573 | zabbix | problem | 2 | 13896 | 0\n> | 0 | 0 | 0 | 0 | 0\n>\n> 41943 | zabbix | event_tag | 1 | 22004 | 0\n> | 0 | 0 | 0 | 0 | 0\n>\n> 41649 | zabbix | acknowledges | 1 | 47 | 0\n> | 0 | 0 | 0 | 0 | 0\n>\n> 41951 | zabbix | events | 0 | 0 | 1\n> | 1 | 0 | 0 | 1 | 0\n>\n> 260215 | zabbix | event_suppress | 1 | 0 | 0\n> | 0 | 0 | 0 | 0 | 0\n>\n\nHi Kristian,\n\nThis result definitely proves that indexes not used during foreign key\nchecks (see that non-zero seq_scan counters for linked tables).\nOnly possible reason (IMHO) that wrong usage numeric in place of bigint.\nI recommend change types of events.eventid (and any other similar fields)\nto bigint.\nIt should resolve your performance issues with deletes on events table (as\nadditional bonus - bigint a lot faster and compact type than numeric).\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone RU: +7 985 433 0000\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\n\n\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно\nкогда я так делаю ещё раз?\"\n\nOn Wed, Jul 24, 2019 at 6:12 PM Kristian Ejvind <[email protected]> wrote:\n\n\n\nHi Maxim\n \nThanks for your advice, and let me start with your second email, which I'll copy here:\n\n \n=====\nHi Kristian,\n \nAfter comparing 
structure of zabbix tables with same in my zabbix installation I found one very weird difference.\nWhy type of events.eventid had been changed from default bigint to numeric?\n \nI suspect that the difference between events.eventid (numeric) type and event_recovery.*_eventid (bigint) types might lead to inability of use index\n during foreign key checks.\nAnyway it will be clearly visible on the pg_stat_xact_user_tables results (I now expect to see 3 sequential scan on event_recovery and may be on\n some other tables as well).\n \nKind Regards,\nMaxim\n=====\n \nWell spotted! On closer examination it seems that data types are wrong in several places. I suspect that this comes\nfrom the time when our Zabbix ran on a MySQL database, which was converted over to PostgreSQL a few years\nago. I agree this discrepancy is suspicious and I will continue to examine it.\n \nRegarding your ideas in the email below, I can say that 1) is not valid, disk latency is in the range of a few ms.\n\nThis is the output from your recommended query, which seems to verify your suspicions. \n \nzabbix_34=# begin; delete from zabbix.events where eventid = 7123123; select * from pg_stat_xact_user_tables where seq_scan>0 or idx_scan>0 order by seq_scan+idx_scan\n desc; rollback;\nTime: 0.113 ms\nTime: 4798.189 ms (00:04.798)\nrelid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd\n--------+------------+----------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------\n 41940 | zabbix | event_recovery | 3 | 35495224 | 0 | 0 | 0 | 0 | 1 | 0\n 41675 | zabbix | alerts | 1 | 544966 | 1 | 0 | 0 | 0 | 0 | 0\n 42573 | zabbix | problem | 2 | 13896 | 0 | 0 | 0 | 0 | 0 | 0\n 41943 | zabbix | event_tag | 1 | 22004 | 0 | 0 | 0 | 0 | 0 | 0\n 41649 | zabbix | acknowledges | 1 | 47 | 0 | 0 | 0 | 0 | 0 | 0\n 41951 | zabbix | events | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0\n260215 | zabbix | event_suppress | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0Hi Kristian,This result definitely proves that indexes not used during foreign key checks (see that non-zero seq_scan counters for linked tables).Only possible reason (IMHO) that wrong usage numeric in place of bigint.I recommend change types of events.eventid (and any other similar fields) to bigint.It should resolve your performance issues with deletes on events table (as additional bonus - bigint a lot faster and compact type than numeric).-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone RU: +7 985 433 0000Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.boguk\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно когда я так делаю ещё раз?\"",
"msg_date": "Wed, 24 Jul 2019 20:16:24 +0300",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
},
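A hedged sketch of the change Maxim recommends. ALTER COLUMN ... TYPE rewrites the table, re-validates the foreign keys and holds an ACCESS EXCLUSIVE lock, so it belongs in a maintenance window; the columns shown are examples only, and every numeric id column that participates in these foreign keys needs the same treatment.

    BEGIN;
    ALTER TABLE zabbix.events ALTER COLUMN eventid TYPE bigint;
    ALTER TABLE zabbix.alerts ALTER COLUMN eventid TYPE bigint;
    ALTER TABLE zabbix.alerts ALTER COLUMN alertid TYPE bigint;
    -- ...repeat for the remaining numeric id columns on both sides of each FK...
    COMMIT;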
{
"msg_contents": "Hi.\n\nJust a short message, confirming that after we've altered the tables to have matching\ntypes, deletes now take 1 ms, instead of 5 sec. Indexes are being used now.\n\nThanks for assistance.\n\nRegards\nKristian\n\nps. would be nice with some warnings or indications in analyze output when this happens.\n\n\n\n\n\n[cid:[email protected]]\nKristian Ejvind\nLinux System Administrator\nIT Operations | Technical Operations\n\n\n\n\nResurs Bank\nEkslingan 8\nBox 222 09, SE-25467 Helsingborg\n\n\n\n\nMobil: +46 728571483\nVäxel: +46 42 38 20 00\nE-post: [email protected]<mailto:[email protected]>\nWebb: www.resursbank.se<http://www.resursbank.se>\n\n\n\nFrom: Maxim Boguk <[email protected]>\nDate: Wednesday, 24 July 2019 at 19:17\nTo: Kristian Ejvind <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSubject: Re: zabbix on postgresql - very slow delete of events\n\n\n\nOn Wed, Jul 24, 2019 at 6:12 PM Kristian Ejvind <[email protected]<mailto:[email protected]>> wrote:\nHi Maxim\n\nThanks for your advice, and let me start with your second email, which I'll copy here:\n\n=====\nHi Kristian,\n\nAfter comparing structure of zabbix tables with same in my zabbix installation I found one very weird difference.\nWhy type of events.eventid had been changed from default bigint to numeric?\n\nI suspect that the difference between events.eventid (numeric) type and event_recovery.*_eventid (bigint) types might lead to inability of use index during foreign key checks.\nAnyway it will be clearly visible on the pg_stat_xact_user_tables results (I now expect to see 3 sequential scan on event_recovery and may be on some other tables as well).\n\nKind Regards,\nMaxim\n=====\n\nWell spotted! On closer examination it seems that data types are wrong in several places. I suspect that this comes\nfrom the time when our Zabbix ran on a MySQL database, which was converted over to PostgreSQL a few years\nago. 
I agree this discrepancy is suspicious and I will continue to examine it.\n\nRegarding your ideas in the email below, I can say that 1) is not valid, disk latency is in the range of a few ms.\nThis is the output from your recommended query, which seems to verify your suspicions.\n\nzabbix_34=# begin; delete from zabbix.events where eventid = 7123123; select * from pg_stat_xact_user_tables where seq_scan>0 or idx_scan>0 order by seq_scan+idx_scan desc; rollback;\nTime: 0.113 ms\nTime: 4798.189 ms (00:04.798)\nrelid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd\n--------+------------+----------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------\n 41940 | zabbix | event_recovery | 3 | 35495224 | 0 | 0 | 0 | 0 | 1 | 0\n 41675 | zabbix | alerts | 1 | 544966 | 1 | 0 | 0 | 0 | 0 | 0\n 42573 | zabbix | problem | 2 | 13896 | 0 | 0 | 0 | 0 | 0 | 0\n 41943 | zabbix | event_tag | 1 | 22004 | 0 | 0 | 0 | 0 | 0 | 0\n 41649 | zabbix | acknowledges | 1 | 47 | 0 | 0 | 0 | 0 | 0 | 0\n 41951 | zabbix | events | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0\n260215 | zabbix | event_suppress | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n\nHi Kristian,\n\nThis result definitely proves that indexes not used during foreign key checks (see that non-zero seq_scan counters for linked tables).\nOnly possible reason (IMHO) that wrong usage numeric in place of bigint.\nI recommend change types of events.eventid (and any other similar fields) to bigint.\nIt should resolve your performance issues with deletes on events table (as additional bonus - bigint a lot faster and compact type than numeric).\n\n--\nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/<https://smex12-5-en-ctp.trendmicro.com:443/wis/clicktime/v1/query?url=https%3a%2f%2fdataegret.com&umid=90a98c9f-46cd-4941-b939-8da90b514311&auth=daed959355609b907128d19d56c675829c94a38e-92a73de4d891916aa17d6a4577153d5be0a70dd8>\n\nPhone RU: +7 985 433 0000\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\n\n\"??????, ?? ??? ?????????? ??? ?? ??????, ?? ?????? ??? ??-???????? ?????? ????? ? ??? ????? ??? ????\"",
"msg_date": "Tue, 13 Aug 2019 11:08:21 +0000",
"msg_from": "Kristian Ejvind <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: zabbix on postgresql - very slow delete of events"
}
]
[
{
"msg_contents": "Hi,\n\nI have finally found some time to implement a custom data type optimized\nfor version 1 UUID's (timestamp, clock sequence, node):\nhttps://github.com/ancoron/pg-uuid-v1\n\nSome tests (using a few millions of rows) have shown the following\nresults (when used as a primary key):\n\nCOPY ... FROM: ~7.8x faster (from file - SSD)\nCOPY ... TO : ~1.5x faster (no where clause, sequential output)\n\nThe best thing is that for INSERT's there is a very high chance of\nhitting the B-Tree \"fastpath\" because of the timestamp being the most\nsignificant part of the data type, which tends to be increasing.\n\nThis also results in much lower \"bloat\", where the standard \"uuid\" type\neasily goes beyond 30%, the \"uuid_v1\" should be between 10 and 20%.\n\nAdditionally, it also reveals the massive performance degrade I saw in\nmy tests for standard UUID's:\n\nInitial 200 million rows: ~ 80k rows / second\nAdditional 17 million rows: ~26k rows / second\n\n...and the new data type:\nInitial 200 million rows: ~ 623k rows / second\nAdditional 17 million rows: ~618k rows / second\n\n\nThe data type also has functions to extract the three parts and has an\nadditional set of operators to compare it to timestamps for time-series\nqueries.\n\nOther test results which are of interest:\n\nANALYZE: essentially equal with uuid_v1 to be just slightly faster\nVACUUM: ~4-5x faster\nREINDEX: only slightly faster (~10-20%)\n\n\nI think there's also something from it for a faster standard UUID\nimplementation as a char array is not very compute friendly and\nconversion from or to strings (in/out) can be optimized.\n\n\nCheers,\n\n\tAncoron\n\nRef:\nLatest test results:\nhttps://gist.github.com/ancoron/d5114b0907e8974b6808077e02f8d109\n\n\n",
"msg_date": "Thu, 25 Jul 2019 11:26:23 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Standard uuid vs. custom data type uuid_v1"
},
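For readers without the extension, a hedged helper (not part of pg-uuid-v1, just an illustration using the common hex-to-bigint cast trick) that extracts the 60-bit timestamp from a standard version-1 uuid. It makes the layout point concrete: the timestamp is split across time_low/time_mid/time_hi with the low bits first, so ordering by the raw uuid bytes is not time order, which is what the custom type changes by making the timestamp the most significant part.

    CREATE OR REPLACE FUNCTION uuid_v1_time(u uuid) RETURNS timestamptz
    LANGUAGE sql IMMUTABLE AS $$
      SELECT to_timestamp(
               ('x' || lpad(substr(h, 14, 3) || substr(h, 9, 4) || substr(h, 1, 8), 16, '0'))
                   ::bit(64)::bigint / 1e7   -- 100 ns ticks -> seconds
               - 12219292800)                -- offset from 1582-10-15 to the Unix epoch
      FROM (SELECT replace(u::text, '-', '')) AS s(h);
    $$;

    -- SELECT uuid_v1_time('fb2893ae-9265-11e8-90d8-e03f496e733b');  -- ~2018-07-28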
{
"msg_contents": "On Thu, Jul 25, 2019 at 11:26:23AM +0200, Ancoron Luciferis wrote:\n>Hi,\n>\n>I have finally found some time to implement a custom data type optimized\n>for version 1 UUID's (timestamp, clock sequence, node):\n>https://github.com/ancoron/pg-uuid-v1\n>\n>Some tests (using a few millions of rows) have shown the following\n>results (when used as a primary key):\n>\n>COPY ... FROM: ~7.8x faster (from file - SSD)\n>COPY ... TO : ~1.5x faster (no where clause, sequential output)\n>\n>The best thing is that for INSERT's there is a very high chance of\n>hitting the B-Tree \"fastpath\" because of the timestamp being the most\n>significant part of the data type, which tends to be increasing.\n>\n>This also results in much lower \"bloat\", where the standard \"uuid\" type\n>easily goes beyond 30%, the \"uuid_v1\" should be between 10 and 20%.\n>\n>Additionally, it also reveals the massive performance degrade I saw in\n>my tests for standard UUID's:\n>\n>Initial 200 million rows: ~ 80k rows / second\n>Additional 17 million rows: ~26k rows / second\n>\n>...and the new data type:\n>Initial 200 million rows: ~ 623k rows / second\n>Additional 17 million rows: ~618k rows / second\n>\n\nPresumably, the new data type is sorted in a way that eliminates/reduces\nrandom I/O against the index. But maybe that's not the case - hard to\nsay, because the linked results don't say how the data files were\ngenerated ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 27 Jul 2019 15:47:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Standard uuid vs. custom data type uuid_v1"
},
{
"msg_contents": "On 27/07/2019 15:47, Tomas Vondra wrote:\n> On Thu, Jul 25, 2019 at 11:26:23AM +0200, Ancoron Luciferis wrote:\n>> Hi,\n>>\n>> I have finally found some time to implement a custom data type optimized\n>> for version 1 UUID's (timestamp, clock sequence, node):\n>> https://github.com/ancoron/pg-uuid-v1\n>>\n>> Some tests (using a few millions of rows) have shown the following\n>> results (when used as a primary key):\n>>\n>> COPY ... FROM: ~7.8x faster (from file - SSD)\n>> COPY ... TO : ~1.5x faster (no where clause, sequential output)\n>>\n>> The best thing is that for INSERT's there is a very high chance of\n>> hitting the B-Tree \"fastpath\" because of the timestamp being the most\n>> significant part of the data type, which tends to be increasing.\n>>\n>> This also results in much lower \"bloat\", where the standard \"uuid\" type\n>> easily goes beyond 30%, the \"uuid_v1\" should be between 10 and 20%.\n>>\n>> Additionally, it also reveals the massive performance degrade I saw in\n>> my tests for standard UUID's:\n>>\n>> Initial 200 million rows: ~ 80k rows / second\n>> Additional 17 million rows: ~26k rows / second\n>>\n>> ...and the new data type:\n>> Initial 200 million rows: ~ 623k rows / second\n>> Additional 17 million rows: ~618k rows / second\n>>\n> \n> Presumably, the new data type is sorted in a way that eliminates/reduces\n> random I/O against the index. But maybe that's not the case - hard to\n> say, because the linked results don't say how the data files were\n> generated ...\n> \n\nYes, by definition, version 1 UUID's contain the timestamp and when used\nin applications, that is usually the \"current\" timestamp when an entry\ngets prepared for storage. As a result, for testing I've simulated 9\nnodes with increasing timestamps, which then results in very similar\nbehavior to sequentially increasing values.\n\nThe calculation for the above numbers has been based on \"\\timing\" in\nlogged psql test sessions (so the little client overhead is included in\ncalculation). 
Internal behavior has also been tracked with Linux \"perf\nrecord\" and flamegraph'd.\n\nTest data has been generated by the following Java (Maven) project:\nhttps://github.com/ancoron/java-uuid-serial\n\n...with arguments:\n\nmvn clean test \\\n\t-Duuids=217000000 \\ # number of UUID's\n\t-Duuid.skip.v1=false \\ # generate V1 UUID's and serial\n\t-Dnodes=9 \\ # how many nodes to simulate\n\t-Duuid.historic=true \\ # enable historic mode (not \"now\")\n\t-Dinterval_days=365 # range of days to generate\n\n...which will generate V1 UUID's into a file\n\"target/uuids.v1.historic.txt\" as follows:\n\nfb2893ae-9265-11e8-90d8-e03f496e733b\nfb2c3d2e-9265-11e8-a131-e03f49777cbb\nfda90b33-9265-11e8-af1b-e03f4957fa73\nfdaba343-9265-11e8-b648-e03f49e7fd77\nfdad50f3-9265-11e8-b9be-e03f49de7ab7\nfdaf73d3-9265-11e8-bdce-e03f49dff937\nfdb25a03-9265-11e8-a0d8-e03f49c67ff3\nfdb28113-9265-11e8-9c15-e03f4976fd73\nfdb2a823-9265-11e8-8273-e03f49d6f3f7\nfdb8c2a3-9265-11e8-90d8-e03f496e733b\nfdbc6c23-9265-11e8-a131-e03f49777cbb\n00393a28-9266-11e8-af1b-e03f4957fa73\n003bd238-9266-11e8-b648-e03f49e7fd77\n003d7fe8-9266-11e8-b9be-e03f49de7ab7\n003fa2c8-9266-11e8-bdce-e03f49dff937\n004288f8-9266-11e8-a0d8-e03f49c67ff3\n0042b008-9266-11e8-9c15-e03f4976fd73\n0042d718-9266-11e8-8273-e03f49d6f3f7\n0048f198-9266-11e8-90d8-e03f496e733b\n\n...so yes, they are ever-increasing from a timestamp-perspective, which\nis what I was intending to optimize for in the first iteration.\n\nI'll do some more testing with slight time offsets between the nodes to\nsimulate behavior when some nodes generate time a few seconds behind others.\n\nI might re-implement the UUID test data generator in Python as that\nwould be far more usable for others.\n\n\nCheers,\n\n\tAncoron\n\n\n",
"msg_date": "Mon, 29 Jul 2019 10:13:23 +0200",
"msg_from": "Ancoron Luciferis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Standard uuid vs. custom data type uuid_v1"
}
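A much-simplified, hedged stand-in for the linked Java generator, enough to reproduce the ordered-versus-random ingest gap on a stock install (gen_random_uuid() needs the pgcrypto extension before PostgreSQL 13; the table names are made up):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    CREATE TABLE t_random  (id uuid PRIMARY KEY);
    CREATE TABLE t_ordered (id uuid PRIMARY KEY);

    -- random keys: every insert lands on an arbitrary leaf page
    INSERT INTO t_random
    SELECT gen_random_uuid() FROM generate_series(1, 1000000);

    -- pseudo time-ordered keys: the 12 leading hex digits are a counter, mimicking
    -- a type whose most significant bytes grow monotonically
    INSERT INTO t_ordered
    SELECT (lpad(to_hex(g), 12, '0') ||
            substr(replace(gen_random_uuid()::text, '-', ''), 13))::uuid
    FROM generate_series(1, 1000000) AS g;

    -- compare \timing results and index sizes afterwards:
    SELECT relname, pg_size_pretty(pg_relation_size(oid))
    FROM pg_class
    WHERE relname IN ('t_random_pkey', 't_ordered_pkey');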
]
[
{
"msg_contents": "Hello there.\n\nI am not an PG expert, as currently I work as a Enterprise Architect (who\nbelieves in OSS and in particular PostgreSQL 😍). So please forgive me if\nthis question is too simple. 🙏\n\nHere it goes:\n\nWe have a new Inventory system running on its own database (PG 10 AWS\nRDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than\n10GB at the moment. We provided 1TB to get more IOPS from EBS.\n\nAs we don't have a lot of different products in our catalogue it's quite\ncommon (especially when a particular product is on sale) to have a high\nrate of concurrent updates against the same row. There is also a frequent\n(every 30 minutes) update to all items which changed their current\nstock/Inventory coming from the warehouses (SAP), the latter is a batch\nprocess. We have just installed this system for a new tenant (one of the\nsmallest one) and although it's running great so far, we believe this\nsolution would not scale as we roll out this system to new (and bigger)\ntenants. Currently there is up to 1.500 transactions per second (mostly\nSELECTS and 1 particular UPDATE which I believe is the one being\naborted/deadlocked some tImes) in this inventory database.\n\nI am not a DBA, but as the DBAs (most of them old school Oracle DBAs who\nare not happy with the move to POSTGRES) are considering ditching\nPostgresql without any previous tunning I would like to understand the\npossibilities.\n\nConsidering this is a highly concurrent (same row) system I thought to\nsuggest:\n\n1) Set up Shared_buffer to 25% of the RAM on the RDS instance;\n\n2) Install a pair (HA) of PGBouncers (session) in front of PG and setup it\nin a way that it would keep only 32 connections (4 per core) open to the\ndatabase at the same time, but all connections going to PGBouncer (might be\nthousands) would be queued as soon as there is more than 32 active\nconnections to the Database. We have reached more than 500 concurrent\nconnections so far. But these numbers will grow.\n\n3) set work_mem to 3 times the size of largest temp file;\n\n4) set maintenance_work_mem to 2GB;\n\n5) set effective_cache_size to 50% of total memory.\n\nThe most used update is already a HOT UPDATE, as it (or any trigger)\ndoesn't change indexed columns.\n\nIt seems to me the kind of problem we have is similar to those systems\nwhich sell limited number of tickets to large concerts/events, like\ngoogleIO used to be... Where everyone tried to buy the ticket as soon as\npossible, and the system had to keep a consistent number of available\ntickets. I believe that's a hard problem to solve. So that's way I am\nasking for suggestions/ideas from the experts.\n\nThanks so much!\n\nHello there.I am not an PG expert, as currently I work as a Enterprise Architect (who believes in OSS and in particular PostgreSQL 😍). So please forgive me if this question is too simple. 🙏Here it goes:We have a new Inventory system running on its own database (PG 10 AWS RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than 10GB at the moment. We provided 1TB to get more IOPS from EBS.As we don't have a lot of different products in our catalogue it's quite common (especially when a particular product is on sale) to have a high rate of concurrent updates against the same row. There is also a frequent (every 30 minutes) update to all items which changed their current stock/Inventory coming from the warehouses (SAP), the latter is a batch process. 
We have just installed this system for a new tenant (one of the smallest one) and although it's running great so far, we believe this solution would not scale as we roll out this system to new (and bigger) tenants. Currently there is up to 1.500 transactions per second (mostly SELECTS and 1 particular UPDATE which I believe is the one being aborted/deadlocked some tImes) in this inventory database.I am not a DBA, but as the DBAs (most of them old school Oracle DBAs who are not happy with the move to POSTGRES) are considering ditching Postgresql without any previous tunning I would like to understand the possibilities.Considering this is a highly concurrent (same row) system I thought to suggest:1) Set up Shared_buffer to 25% of the RAM on the RDS instance;2) Install a pair (HA) of PGBouncers (session) in front of PG and setup it in a way that it would keep only 32 connections (4 per core) open to the database at the same time, but all connections going to PGBouncer (might be thousands) would be queued as soon as there is more than 32 active connections to the Database. We have reached more than 500 concurrent connections so far. But these numbers will grow.3) set work_mem to 3 times the size of largest temp file;4) set maintenance_work_mem to 2GB;5) set effective_cache_size to 50% of total memory.The most used update is already a HOT UPDATE, as it (or any trigger) doesn't change indexed columns.It seems to me the kind of problem we have is similar to those systems which sell limited number of tickets to large concerts/events, like googleIO used to be... Where everyone tried to buy the ticket as soon as possible, and the system had to keep a consistent number of available tickets. I believe that's a hard problem to solve. So that's way I am asking for suggestions/ideas from the experts.Thanks so much!",
"msg_date": "Mon, 29 Jul 2019 03:17:17 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "High concurrency same row (inventory)"
},
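One thing that helps with the hot-row contention itself, independent of the pooling and memory settings above: keep the stock check and the decrement in a single statement, so concurrent buyers simply queue on the row lock instead of interleaving a read and a write that can deadlock or oversell. The table and column names below are hypothetical.

    -- race-free decrement; zero rows returned means "out of stock"
    UPDATE inventory
       SET qty = qty - 1
     WHERE sku = 'ABC-123'
       AND qty > 0
    RETURNING qty;

For the half-hourly SAP batch, touching rows in a consistent order (for example, processing the update list sorted by sku) also keeps it from deadlocking against the online traffic.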
{
"msg_contents": "On Mon, Jul 29, 2019 at 2:16 AM Jean Baro <[email protected]> wrote:\n\n>\n> We have a new Inventory system running on its own database (PG 10 AWS\n> RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than\n> 10GB at the moment. We provided 1TB to get more IOPS from EBS.\n>\n> As we don't have a lot of different products in our catalogue it's quite\n> common (especially when a particular product is on sale) to have a high\n> rate of concurrent updates against the same row. There is also a frequent\n> (every 30 minutes) update to all items which changed their current\n> stock/Inventory coming from the warehouses (SAP), the latter is a batch\n> process. We have just installed this system for a new tenant (one of the\n> smallest one) and although it's running great so far, we believe this\n> solution would not scale as we roll out this system to new (and bigger)\n> tenants. Currently there is up to 1.500 transactions per second (mostly\n> SELECTS and 1 particular UPDATE which I believe is the one being\n> aborted/deadlocked some tImes) in this inventory database.\n>\n> I am not a DBA, but as the DBAs (most of them old school Oracle DBAs who\n> are not happy with the move to POSTGRES) are considering ditching\n> Postgresql without any previous tunning I would like to understand the\n> possibilities.\n>\n> Considering this is a highly concurrent (same row) system I thought to\n> suggest:\n>\n>\n>\nAnother thing which you might want to investigate is your checkpoint\ntunables. My hunch is with that many writes, the defaults are probably not\ngoing to be ideal.\nConsider the WAL tunables documentation:\nhttps://www.postgresql.org/docs/10/wal-configuration.html\n\nOn Mon, Jul 29, 2019 at 2:16 AM Jean Baro <[email protected]> wrote:We have a new Inventory system running on its own database (PG 10 AWS RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than 10GB at the moment. We provided 1TB to get more IOPS from EBS.As we don't have a lot of different products in our catalogue it's quite common (especially when a particular product is on sale) to have a high rate of concurrent updates against the same row. There is also a frequent (every 30 minutes) update to all items which changed their current stock/Inventory coming from the warehouses (SAP), the latter is a batch process. We have just installed this system for a new tenant (one of the smallest one) and although it's running great so far, we believe this solution would not scale as we roll out this system to new (and bigger) tenants. Currently there is up to 1.500 transactions per second (mostly SELECTS and 1 particular UPDATE which I believe is the one being aborted/deadlocked some tImes) in this inventory database.I am not a DBA, but as the DBAs (most of them old school Oracle DBAs who are not happy with the move to POSTGRES) are considering ditching Postgresql without any previous tunning I would like to understand the possibilities.Considering this is a highly concurrent (same row) system I thought to suggest:Another thing which you might want to investigate is your checkpoint tunables. My hunch is with that many writes, the defaults are probably not going to be ideal.Consider the WAL tunables documentation: https://www.postgresql.org/docs/10/wal-configuration.html",
"msg_date": "Mon, 29 Jul 2019 08:35:20 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency same row (inventory)"
},
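A small follow-up sketch for the checkpoint suggestion, using only standard PostgreSQL 10 statistics views and no assumptions beyond what is stated above:

-- Current checkpoint/WAL settings
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('checkpoint_timeout', 'max_wal_size',
               'checkpoint_completion_target', 'wal_buffers');

-- Are checkpoints mostly timed (relaxed) or requested because max_wal_size
-- keeps being reached under the write load?
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, checkpoint_write_time, checkpoint_sync_time
FROM pg_stat_bgwriter;

A checkpoints_req count that grows much faster than checkpoints_timed is the usual sign that max_wal_size is too small for the write rate.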
{
"msg_contents": "On Mon, Jul 29, 2019 at 11:46 AM Jean Baro <[email protected]> wrote:\n\n> Hello there.\n>\n> I am not an PG expert, as currently I work as a Enterprise Architect (who\n> believes in OSS and in particular PostgreSQL 😍). So please forgive me if\n> this question is too simple. 🙏\n>\n> Here it goes:\n>\n> We have a new Inventory system running on its own database (PG 10 AWS\n> RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than\n> 10GB at the moment. We provided 1TB to get more IOPS from EBS.\n>\n> As we don't have a lot of different products in our catalogue it's quite\n> common (especially when a particular product is on sale) to have a high\n> rate of concurrent updates against the same row. There is also a frequent\n> (every 30 minutes) update to all items which changed their current\n> stock/Inventory coming from the warehouses (SAP), the latter is a batch\n> process. We have just installed this system for a new tenant (one of the\n> smallest one) and although it's running great so far, we believe this\n> solution would not scale as we roll out this system to new (and bigger)\n> tenants. Currently there is up to 1.500 transactions per second (mostly\n> SELECTS and 1 particular UPDATE which I believe is the one being\n> aborted/deadlocked some tImes) in this inventory database.\n>\nMonitoring the locks and activities, as described here, may help -\nhttps://wiki.postgresql.org/wiki/Lock_Monitoring\n\nRegards,\nJayadevan\n\nOn Mon, Jul 29, 2019 at 11:46 AM Jean Baro <[email protected]> wrote:Hello there.I am not an PG expert, as currently I work as a Enterprise Architect (who believes in OSS and in particular PostgreSQL 😍). So please forgive me if this question is too simple. 🙏Here it goes:We have a new Inventory system running on its own database (PG 10 AWS RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than 10GB at the moment. We provided 1TB to get more IOPS from EBS.As we don't have a lot of different products in our catalogue it's quite common (especially when a particular product is on sale) to have a high rate of concurrent updates against the same row. There is also a frequent (every 30 minutes) update to all items which changed their current stock/Inventory coming from the warehouses (SAP), the latter is a batch process. We have just installed this system for a new tenant (one of the smallest one) and although it's running great so far, we believe this solution would not scale as we roll out this system to new (and bigger) tenants. Currently there is up to 1.500 transactions per second (mostly SELECTS and 1 particular UPDATE which I believe is the one being aborted/deadlocked some tImes) in this inventory database.Monitoring the locks and activities, as described here, may help - https://wiki.postgresql.org/wiki/Lock_MonitoringRegards,Jayadevan",
"msg_date": "Mon, 29 Jul 2019 18:23:55 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency same row (inventory)"
},
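The wiki page above has several more elaborate variants; a compact query based on pg_blocking_pids(), available since 9.6 and therefore usable on the PG 10 instance described here, looks roughly like this:

-- Who is blocked, and by whom?
SELECT blocked.pid                 AS blocked_pid,
       blocked.query               AS blocked_query,
       now() - blocked.query_start AS blocked_for,
       blocking.pid                AS blocking_pid,
       blocking.query              AS blocking_query
FROM pg_stat_activity AS blocked
JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(pid) ON true
JOIN pg_stat_activity AS blocking ON blocking.pid = b.pid;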
{
"msg_contents": "Does pg_stat_user_tables validate that the major updates are indeed \"hot \nupdates\"? Otherwise, you may be experiencing bloat problems if \nautovacuum is not set aggressively. Did you change default parameters \nfor autovacuum? You should. They are set very conservatively right \nouta the box. Also, I wouldn't increase work_mem too much unless you \nare experiencing query spill over to disk. Turn on \"log_temp_files\" \n(=0) and monitor if you have this spillover. If not, don't mess with \nwork_mem. Also, why isn't effective_cache_size set closer to 80-90% of \nmemory instead of 50%? Are there other servers on the same host as \npostgres? As the other person mentioned, tune checkpoints so that they \ndo not happen too often. Turn on \"log_checkpoints\" to get more info.\n\nRegards,\nMichael Vitale\n\nRick Otten wrote on 7/29/2019 8:35 AM:\n>\n> On Mon, Jul 29, 2019 at 2:16 AM Jean Baro <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n> We have a new Inventory system running on its own database (PG 10\n> AWS RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size\n> is less than 10GB at the moment. We provided 1TB to get more IOPS\n> from EBS.\n>\n> As we don't have a lot of different products in our catalogue it's\n> quite common (especially when a particular product is on sale) to\n> have a high rate of concurrent updates against the same row. There\n> is also a frequent (every 30 minutes) update to all items which\n> changed their current stock/Inventory coming from the warehouses\n> (SAP), the latter is a batch process. We have just installed this\n> system for a new tenant (one of the smallest one) and although\n> it's running great so far, we believe this solution would not\n> scale as we roll out this system to new (and bigger) tenants.\n> Currently there is up to 1.500 transactions per second (mostly\n> SELECTS and 1 particular UPDATE which I believe is the one being\n> aborted/deadlocked some tImes) in this inventory database.\n>\n> I am not a DBA, but as the DBAs (most of them old school Oracle\n> DBAs who are not happy with the move to POSTGRES) are considering\n> ditching Postgresql without any previous tunning I would like to\n> understand the possibilities.\n>\n> Considering this is a highly concurrent (same row) system I\n> thought to suggest:\n>\n>\n>\n> Another thing which you might want to investigate is your checkpoint \n> tunables. My hunch is with that many writes, the defaults are probably \n> not going to be ideal.\n> Consider the WAL tunables documentation: \n> https://www.postgresql.org/docs/10/wal-configuration.html\n\n\n\n\nDoes pg_stat_user_tables \nvalidate that the major updates are indeed \"hot updates\"? Otherwise, \nyou may be experiencing bloat problems if autovacuum is not set \naggressively. Did you change default parameters for autovacuum? You \nshould. They are set very conservatively right outa the box. Also, I \nwouldn't increase work_mem too much unless you are experiencing query \nspill over to disk. Turn on \"log_temp_files\" (=0) and monitor if you \nhave this spillover. If not, don't mess with work_mem. Also, why isn't\n effective_cache_size set closer to 80-90% of memory instead of 50%? \nAre there other servers on the same host as postgres? As the other \nperson mentioned, tune checkpoints so that they do not happen too \noften. 
Turn on \"log_checkpoints\" to get more info.\n \nRegards,\nMichael Vitale\n\nRick Otten wrote on 7/29/2019 8:35 AM:\n\n\nOn Mon, Jul 29, 2019 at 2:16 AM Jean Baro \n<[email protected]>\n wrote:We have a new \nInventory system running on its own database (PG 10 AWS RDS.m5.2xlarge \n1TB SSD EBS - Multizone). The DB effective size is less than 10GB at the\n moment. We provided 1TB to get more IOPS from EBS.As we don't have a lot of different products in our \ncatalogue it's quite common (especially when a particular product is on \nsale) to have a high rate of concurrent updates against the same row. \nThere is also a frequent (every 30 minutes) update to all items which \nchanged their current stock/Inventory coming from the warehouses (SAP), \nthe latter is a batch process. We have just installed this system for a \nnew tenant (one of the smallest one) and although it's running great so \nfar, we believe this solution would not scale as we roll out this system\n to new (and bigger) tenants. Currently there is up to 1.500 \ntransactions per second (mostly SELECTS and 1 particular UPDATE which I \nbelieve is the one being aborted/deadlocked some tImes) in this \ninventory database.I am \nnot a DBA, but as the DBAs (most of them old school Oracle DBAs who are \nnot happy with the move to POSTGRES) are considering ditching Postgresql\n without any previous tunning I would like to understand the \npossibilities.Considering\n this is a highly concurrent (same row) system I thought to suggest:Another\n thing which you might want to investigate is your checkpoint tunables. \n My hunch is with that many writes, the defaults are probably not going \nto be ideal.Consider the WAL tunables documentation: https://www.postgresql.org/docs/10/wal-configuration.html",
"msg_date": "Mon, 29 Jul 2019 08:55:23 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency same row (inventory)"
},
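A hedged sketch of the checks suggested above, against the standard statistics views (the parameter list is only a starting point, not a complete audit):

-- Is the heavy UPDATE really HOT? A low hot_pct points at indexed-column
-- updates or pages that are too full to hold the new row version.
SELECT relname, n_tup_upd, n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct,
       n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC
LIMIT 10;

-- Settings referred to above
SELECT name, setting
FROM pg_settings
WHERE name IN ('log_temp_files', 'log_checkpoints',
               'autovacuum_vacuum_scale_factor', 'autovacuum_naptime',
               'effective_cache_size', 'work_mem');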
{
"msg_contents": "Thanks guys! This is really good and useful information! :)\n\nDuring the day we can see some exceptions coming from Postgres (alway when\nthe load is the highest), only in the MAIN UPDATE:\n\n- How to overcome the error \"current transaction is aborted, commands\nignored until end of transaction block\"\n- Deadlock detected\n\nThanks\n\nOn Mon, Jul 29, 2019 at 9:55 AM MichaelDBA <[email protected]> wrote:\n\n> Does pg_stat_user_tables validate that the major updates are indeed \"hot\n> updates\"? Otherwise, you may be experiencing bloat problems if autovacuum\n> is not set aggressively. Did you change default parameters for\n> autovacuum? You should. They are set very conservatively right outa the\n> box. Also, I wouldn't increase work_mem too much unless you are\n> experiencing query spill over to disk. Turn on \"log_temp_files\" (=0) and\n> monitor if you have this spillover. If not, don't mess with work_mem.\n> Also, why isn't effective_cache_size set closer to 80-90% of memory instead\n> of 50%? Are there other servers on the same host as postgres? As the\n> other person mentioned, tune checkpoints so that they do not happen too\n> often. Turn on \"log_checkpoints\" to get more info.\n>\n> Regards,\n> Michael Vitale\n>\n> Rick Otten wrote on 7/29/2019 8:35 AM:\n>\n>\n> On Mon, Jul 29, 2019 at 2:16 AM Jean Baro <[email protected]> wrote:\n>\n>>\n>> We have a new Inventory system running on its own database (PG 10 AWS\n>> RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than\n>> 10GB at the moment. We provided 1TB to get more IOPS from EBS.\n>>\n>> As we don't have a lot of different products in our catalogue it's quite\n>> common (especially when a particular product is on sale) to have a high\n>> rate of concurrent updates against the same row. There is also a frequent\n>> (every 30 minutes) update to all items which changed their current\n>> stock/Inventory coming from the warehouses (SAP), the latter is a batch\n>> process. We have just installed this system for a new tenant (one of the\n>> smallest one) and although it's running great so far, we believe this\n>> solution would not scale as we roll out this system to new (and bigger)\n>> tenants. Currently there is up to 1.500 transactions per second (mostly\n>> SELECTS and 1 particular UPDATE which I believe is the one being\n>> aborted/deadlocked some tImes) in this inventory database.\n>>\n>> I am not a DBA, but as the DBAs (most of them old school Oracle DBAs who\n>> are not happy with the move to POSTGRES) are considering ditching\n>> Postgresql without any previous tunning I would like to understand the\n>> possibilities.\n>>\n>> Considering this is a highly concurrent (same row) system I thought to\n>> suggest:\n>>\n>>\n>>\n> Another thing which you might want to investigate is your checkpoint\n> tunables. My hunch is with that many writes, the defaults are probably not\n> going to be ideal.\n> Consider the WAL tunables documentation:\n> https://www.postgresql.org/docs/10/wal-configuration.html\n>\n>\n>\n>\n\nThanks guys! This is really good and useful information! :)During the day we can see some exceptions coming from Postgres (alway when the load is the highest), only in the MAIN UPDATE:- How to overcome the error \"current transaction is aborted, commands ignored until end of transaction block\"- Deadlock detectedThanksOn Mon, Jul 29, 2019 at 9:55 AM MichaelDBA <[email protected]> wrote:\nDoes pg_stat_user_tables \nvalidate that the major updates are indeed \"hot updates\"? 
Otherwise, \nyou may be experiencing bloat problems if autovacuum is not set \naggressively. Did you change default parameters for autovacuum? You \nshould. They are set very conservatively right outa the box. Also, I \nwouldn't increase work_mem too much unless you are experiencing query \nspill over to disk. Turn on \"log_temp_files\" (=0) and monitor if you \nhave this spillover. If not, don't mess with work_mem. Also, why isn't\n effective_cache_size set closer to 80-90% of memory instead of 50%? \nAre there other servers on the same host as postgres? As the other \nperson mentioned, tune checkpoints so that they do not happen too \noften. Turn on \"log_checkpoints\" to get more info.\n \nRegards,\nMichael Vitale\n\nRick Otten wrote on 7/29/2019 8:35 AM:\n\nOn Mon, Jul 29, 2019 at 2:16 AM Jean Baro \n<[email protected]>\n wrote:We have a new \nInventory system running on its own database (PG 10 AWS RDS.m5.2xlarge \n1TB SSD EBS - Multizone). The DB effective size is less than 10GB at the\n moment. We provided 1TB to get more IOPS from EBS.As we don't have a lot of different products in our \ncatalogue it's quite common (especially when a particular product is on \nsale) to have a high rate of concurrent updates against the same row. \nThere is also a frequent (every 30 minutes) update to all items which \nchanged their current stock/Inventory coming from the warehouses (SAP), \nthe latter is a batch process. We have just installed this system for a \nnew tenant (one of the smallest one) and although it's running great so \nfar, we believe this solution would not scale as we roll out this system\n to new (and bigger) tenants. Currently there is up to 1.500 \ntransactions per second (mostly SELECTS and 1 particular UPDATE which I \nbelieve is the one being aborted/deadlocked some tImes) in this \ninventory database.I am \nnot a DBA, but as the DBAs (most of them old school Oracle DBAs who are \nnot happy with the move to POSTGRES) are considering ditching Postgresql\n without any previous tunning I would like to understand the \npossibilities.Considering\n this is a highly concurrent (same row) system I thought to suggest:Another\n thing which you might want to investigate is your checkpoint tunables. \n My hunch is with that many writes, the defaults are probably not going \nto be ideal.Consider the WAL tunables documentation: https://www.postgresql.org/docs/10/wal-configuration.html",
"msg_date": "Mon, 29 Jul 2019 15:09:08 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High concurrency same row (inventory)"
},
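For context on the two errors quoted above: a deadlock typically involves two transactions that each lock more than one row and acquire those locks in a different order, and once one of them has been aborted, every further statement the application sends on that connection gets the "current transaction is aborted" message until it rolls back. An illustrative two-session sequence (the bucket_uid values are made up; the real statement appears later in this thread):

-- Session A
BEGIN;
UPDATE bucket SET qty_available = qty_available + 1 WHERE bucket_uid = 1;

-- Session B
BEGIN;
UPDATE bucket SET qty_available = qty_available - 1 WHERE bucket_uid = 2;

-- Session A: blocks, waiting for B's row lock on bucket_uid = 2
UPDATE bucket SET qty_available = qty_available + 1 WHERE bucket_uid = 2;

-- Session B: closes the cycle; the deadlock detector aborts one transaction
UPDATE bucket SET qty_available = qty_available - 1 WHERE bucket_uid = 1;
-- ERROR:  deadlock detected

-- Anything sent afterwards on the aborted session fails with "current
-- transaction is aborted, commands ignored until end of transaction block"
-- until the client issues ROLLBACK.

Touching the rows in a consistent order (for example, sorted by bucket_uid) inside every transaction is the usual way to avoid the cycle.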
{
"msg_contents": "Can you share the schema of the table(s) involved and an example or two of\nthe updates being executed?\n\nCan you share the schema of the table(s) involved and an example or two of the updates being executed?",
"msg_date": "Mon, 29 Jul 2019 12:12:23 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency same row (inventory)"
},
{
"msg_contents": "All the failures come from the Bucket Table (see image below).\n\nI don't have access to the DB, neither the code, but last time I was\npresented to the UPDATE it was changing (incrementing or decrementing)\n*qty_available*, but tomorrow morning I can be sure, once the developers\nand DBAs are back to the office. I know it's quite a simple UPDATE.\n\nTable is called Bucket:\n{autovacuum_vacuum_scale_factor=0.01}\n\n[image: Bucket.png]\n\n\nOn Mon, Jul 29, 2019 at 3:12 PM Michael Lewis <[email protected]> wrote:\n\n> Can you share the schema of the table(s) involved and an example or two of\n> the updates being executed?\n>",
"msg_date": "Mon, 29 Jul 2019 21:04:55 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High concurrency same row (inventory)"
},
{
"msg_contents": "The UPDATE was something like:\n\nUPDATE bucket SET qty_available = qty_available + 1 WHERE bucket_uid =\n0940850938059380590\n\nThanks for all your help guys!\n\nOn Mon, Jul 29, 2019 at 9:04 PM Jean Baro <[email protected]> wrote:\n\n> All the failures come from the Bucket Table (see image below).\n>\n> I don't have access to the DB, neither the code, but last time I was\n> presented to the UPDATE it was changing (incrementing or decrementing)\n> *qty_available*, but tomorrow morning I can be sure, once the developers\n> and DBAs are back to the office. I know it's quite a simple UPDATE.\n>\n> Table is called Bucket:\n> {autovacuum_vacuum_scale_factor=0.01}\n>\n> [image: Bucket.png]\n>\n>\n> On Mon, Jul 29, 2019 at 3:12 PM Michael Lewis <[email protected]> wrote:\n>\n>> Can you share the schema of the table(s) involved and an example or two\n>> of the updates being executed?\n>>\n>",
"msg_date": "Mon, 29 Jul 2019 21:06:57 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High concurrency same row (inventory)"
},
{
"msg_contents": "[image: image.png]\n\nThe dead tuples goes up at a high ratio, but then it gets cleaned.\n\nif you guys need any further information, please let me know!\n\n\n\nOn Mon, Jul 29, 2019 at 9:06 PM Jean Baro <[email protected]> wrote:\n\n> The UPDATE was something like:\n>\n> UPDATE bucket SET qty_available = qty_available + 1 WHERE bucket_uid =\n> 0940850938059380590\n>\n> Thanks for all your help guys!\n>\n> On Mon, Jul 29, 2019 at 9:04 PM Jean Baro <[email protected]> wrote:\n>\n>> All the failures come from the Bucket Table (see image below).\n>>\n>> I don't have access to the DB, neither the code, but last time I was\n>> presented to the UPDATE it was changing (incrementing or decrementing)\n>> *qty_available*, but tomorrow morning I can be sure, once the developers\n>> and DBAs are back to the office. I know it's quite a simple UPDATE.\n>>\n>> Table is called Bucket:\n>> {autovacuum_vacuum_scale_factor=0.01}\n>>\n>> [image: Bucket.png]\n>>\n>>\n>> On Mon, Jul 29, 2019 at 3:12 PM Michael Lewis <[email protected]> wrote:\n>>\n>>> Can you share the schema of the table(s) involved and an example or two\n>>> of the updates being executed?\n>>>\n>>",
"msg_date": "Mon, 29 Jul 2019 21:26:11 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High concurrency same row (inventory)"
},
{
"msg_contents": "Looks like regular updates not HOT UPDATES\n\nJean Baro wrote on 7/29/2019 8:26 PM:\n> image.png\n>\n> The dead tuples goes up at a high ratio, but then it gets cleaned.\n>\n> if you guys need any further information, please let me know!\n>\n>\n>\n> On Mon, Jul 29, 2019 at 9:06 PM Jean Baro <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> The UPDATE was something like:\n>\n> UPDATE bucket SET qty_available = qty_available + 1 WHERE\n> bucket_uid = 0940850938059380590\n>\n> Thanks for all your help guys!\n>\n> On Mon, Jul 29, 2019 at 9:04 PM Jean Baro <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> All the failures come from the Bucket Table (see image below).\n>\n> I don't have access to the DB, neither the code, but last time\n> I was presented to the UPDATE it was changing (incrementing or\n> decrementing) *qty_available*, but tomorrow morning I can be\n> sure, once the developers and DBAs are back to the office. I\n> know it's quite a simple UPDATE.\n>\n> Table is called Bucket:\n> {autovacuum_vacuum_scale_factor=0.01}\n>\n> Bucket.png\n>\n>\n> On Mon, Jul 29, 2019 at 3:12 PM Michael Lewis\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Can you share the schema of the table(s) involved and an\n> example or two of the updates being executed?\n>",
"msg_date": "Mon, 29 Jul 2019 21:13:00 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency same row (inventory)"
},
{
"msg_contents": "Michael Vitale --> No, there is only postgreSQL running in this server...\nit is in fact an RDS server.\n\nSELECT n_tup_ins as \"inserts\",n_tup_upd as \"updates\",n_tup_del as\n\"deletes\", n_tup_hot_upd as \"hot updates\", n_live_tup as \"live_tuples\",\nn_dead_tup as \"dead_tuples\"\nFROM pg_stat_user_tables\nWHERE schemaname = 'schemaFOO' and relname = 'bucket';\n\n[image: image.png]\n\n\n\nOn Mon, Jul 29, 2019 at 9:26 PM Jean Baro <[email protected]> wrote:\n\n> [image: image.png]\n>\n> The dead tuples goes up at a high ratio, but then it gets cleaned.\n>\n> if you guys need any further information, please let me know!\n>\n>\n>\n> On Mon, Jul 29, 2019 at 9:06 PM Jean Baro <[email protected]> wrote:\n>\n>> The UPDATE was something like:\n>>\n>> UPDATE bucket SET qty_available = qty_available + 1 WHERE bucket_uid =\n>> 0940850938059380590\n>>\n>> Thanks for all your help guys!\n>>\n>> On Mon, Jul 29, 2019 at 9:04 PM Jean Baro <[email protected]> wrote:\n>>\n>>> All the failures come from the Bucket Table (see image below).\n>>>\n>>> I don't have access to the DB, neither the code, but last time I was\n>>> presented to the UPDATE it was changing (incrementing or decrementing)\n>>> *qty_available*, but tomorrow morning I can be sure, once the\n>>> developers and DBAs are back to the office. I know it's quite a simple\n>>> UPDATE.\n>>>\n>>> Table is called Bucket:\n>>> {autovacuum_vacuum_scale_factor=0.01}\n>>>\n>>> [image: Bucket.png]\n>>>\n>>>\n>>> On Mon, Jul 29, 2019 at 3:12 PM Michael Lewis <[email protected]>\n>>> wrote:\n>>>\n>>>> Can you share the schema of the table(s) involved and an example or two\n>>>> of the updates being executed?\n>>>>\n>>>",
"msg_date": "Mon, 29 Jul 2019 22:23:36 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High concurrency same row (inventory)"
}
] |
[
{
"msg_contents": "Hello,\n\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\n\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\n\n\nThe third query seems to have a different issue. That one is close to my original performance problem. It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\n\nIf so would it be sane to keep a copy of the original quals to make the partial join possible?\n\n\nRegards\n\nArne",
"msg_date": "Mon, 29 Jul 2019 16:43:05 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partial join"
},
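The attachment does not survive in this archive, so as a rough reconstruction (the table names sc and sg, the column sl and the constant 5 are taken from the replies further down; everything else, including the partition bounds, is guessed) the setup under discussion looks something like the following, on a version that has enable_partitionwise_join (v11 or later):

-- Two tables range-partitioned on a multi-column key
CREATE TABLE sc (sl int, id int, payload text) PARTITION BY RANGE (sl, id);
CREATE TABLE sc_1 PARTITION OF sc FOR VALUES FROM (0, 0) TO (10, 0);
CREATE TABLE sc_2 PARTITION OF sc FOR VALUES FROM (10, 0) TO (20, 0);

CREATE TABLE sg (sl int, id int, payload text) PARTITION BY RANGE (sl, id);
CREATE TABLE sg_1 PARTITION OF sg FOR VALUES FROM (0, 0) TO (10, 0);
CREATE TABLE sg_2 PARTITION OF sg FOR VALUES FROM (10, 0) TO (20, 0);

SET enable_partitionwise_join = on;

-- Shape of the problematic third query: an equi-join on the full partition
-- key plus a constant restriction on one key column
EXPLAIN
SELECT *
FROM sc
JOIN sg ON sc.sl = sg.sl AND sc.id = sg.id
WHERE sc.sl = 5;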
{
"msg_contents": "Hello,\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\nThe third query seems to have a different issue. That one is close to my original performance problem. It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\nIf so would it be sane to keep a copy of the original quals to make the partial join possible? Do you have better ideas?\n\n\nRegards\nArne",
"msg_date": "Thu, 1 Aug 2019 08:07:25 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partial join"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 5:38 PM Arne Roland <[email protected]> wrote:\n\n> Hello,\n>\n> I attached one example of a partitioned table with multi column partition\n> key. I also attached the output.\n> Disabling the hash_join is not really necessary, it just shows the more\n> drastic result in the case of low work_mem.\n>\n> Comparing the first and the second query I was surprised to see that SET\n> enable_partitionwise_join could cause the costs to go up. Shouldn't the\n> paths of the first query be generated as well?\n>\n> The third query seems to have a different issue. That one is close to my\n> original performance problem. It looks to me like the push down of the sl\n> condition stops the optimizer considering a partial join.\n> If so would it be sane to keep a copy of the original quals to make the\n> partial join possible? Do you have better ideas?\n>\n\nFor the third query, a rough investigation shows that, the qual 'sl =\n5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\nimplied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\ndown to the base rels. One consequence of the deduction is when\nconstructing restrict lists for the joinrel, we lose the original\nrestrict 'sc.sl = sg.sl', and this would fail the check\nhave_partkey_equi_join(), which checks if there exists an equi-join\ncondition for each pair of partition keys. As a result, this joinrel\nwould not be considered as an input to further partitionwise joins.\n\nWe need to fix this.\n\nThanks\nRichard\n\nOn Thu, Aug 1, 2019 at 5:38 PM Arne Roland <[email protected]> wrote:\n\n\nHello,\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\nThe third query seems to have a different issue. That one is close to my original performance problem. It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\nIf so would it be sane to keep a copy of the original quals to make the partial join possible? Do you have better ideas?For the third query, a rough investigation shows that, the qual 'sl =5' and 'sc.sl = sg.sl' will form an equivalence class and generate twoimplied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pusheddown to the base rels. One consequence of the deduction is whenconstructing restrict lists for the joinrel, we lose the originalrestrict 'sc.sl = sg.sl', and this would fail the checkhave_partkey_equi_join(), which checks if there exists an equi-joincondition for each pair of partition keys. As a result, this joinrelwould not be considered as an input to further partitionwise joins.We need to fix this.ThanksRichard",
"msg_date": "Thu, 1 Aug 2019 19:14:44 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
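A worked restatement of the deduction described above, spelled out in the terms of the example query (nothing here is new relative to the explanation, it only lays the steps out):

-- Quals the planner starts from:
--     sc.sl = sg.sl        (join qual)
--     sc.sl = 5            (the "sl = 5" restriction)
--
-- Equivalence class built from them:  { sc.sl, sg.sl, 5 }
--
-- Because the class contains a constant, only the implied equalities are
-- generated and pushed down to the scans:
--     sc.sl = 5
--     sg.sl = 5
--
-- The original sc.sl = sg.sl never reappears in the joinrel's restrict list,
-- so have_partkey_equi_join() finds no equi-join clause for the sl partition
-- key column and the pair is not considered for further partitionwise joins.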
{
"msg_contents": "Hello Richard,\n\n\nthanks for your quick reply.\n\n\n> We need to fix this.\n\n\nDo you have a better idea than just keeping the old quals - possibly just the ones that get eliminated - in a separate data structure? Is the push down of quals the only case of elimination of quals, only counting the ones which happen before the restrict lists are generated?\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Richard Guo <[email protected]>\nSent: Thursday, August 1, 2019 1:14:44 PM\nTo: Arne Roland\nCc: [email protected]\nSubject: Re: Partial join\n\n\nOn Thu, Aug 1, 2019 at 5:38 PM Arne Roland <[email protected]<mailto:[email protected]>> wrote:\nHello,\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\nThe third query seems to have a different issue. That one is close to my original performance problem. It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\nIf so would it be sane to keep a copy of the original quals to make the partial join possible? Do you have better ideas?\n\nFor the third query, a rough investigation shows that, the qual 'sl =\n5' and 'sc.sl<http://sc.sl> = sg.sl<http://sg.sl>' will form an equivalence class and generate two\nimplied equalities: 'sc.sl<http://sc.sl> = 5' and 'sg.sl<http://sg.sl> = 5', which can be pushed\ndown to the base rels. One consequence of the deduction is when\nconstructing restrict lists for the joinrel, we lose the original\nrestrict 'sc.sl<http://sc.sl> = sg.sl<http://sg.sl>', and this would fail the check\nhave_partkey_equi_join(), which checks if there exists an equi-join\ncondition for each pair of partition keys. As a result, this joinrel\nwould not be considered as an input to further partitionwise joins.\n\nWe need to fix this.\n\nThanks\nRichard\n\n\n\n\n\n\n\n\nHello Richard,\n\n\nthanks for your quick reply.\n\n\n\n> We need to fix this.\n\n\nDo you have a better idea than just keeping the old quals - possibly just the ones that get eliminated - in a separate data structure? Is the push down of quals the only case of elimination of quals, only counting the ones which happen before the restrict\n lists are generated?\n\n\nRegards\nArne\n\n\n\nFrom: Richard Guo <[email protected]>\nSent: Thursday, August 1, 2019 1:14:44 PM\nTo: Arne Roland\nCc: [email protected]\nSubject: Re: Partial join\n \n\n\n\n\n\n\nOn Thu, Aug 1, 2019 at 5:38 PM Arne Roland <[email protected]> wrote:\n\n\n\n\nHello,\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\nThe third query seems to have a different issue. That one is close to my original performance problem. 
It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\nIf so would it be sane to keep a copy of the original quals to make the partial join possible? Do you have better ideas?\n\n\n\n\n\n\nFor the third query, a rough investigation shows that, the qual 'sl =\n5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\nimplied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\ndown to the base rels. One consequence of the deduction is when\nconstructing restrict lists for the joinrel, we lose the original\nrestrict 'sc.sl = sg.sl', and this would fail the check\nhave_partkey_equi_join(), which checks if there exists an equi-join\ncondition for each pair of partition keys. As a result, this joinrel\nwould not be considered as an input to further partitionwise joins.\n\n\n\nWe need to fix this.\n\n\nThanks\nRichard",
"msg_date": "Thu, 1 Aug 2019 11:46:08 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> For the third query, a rough investigation shows that, the qual 'sl =\n> 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> down to the base rels. One consequence of the deduction is when\n> constructing restrict lists for the joinrel, we lose the original\n> restrict 'sc.sl = sg.sl', and this would fail the check\n> have_partkey_equi_join(), which checks if there exists an equi-join\n> condition for each pair of partition keys. As a result, this joinrel\n> would not be considered as an input to further partitionwise joins.\n\n> We need to fix this.\n\nUh ... why? The pushed-down restrictions should result in pruning\naway any prunable partitions at the scan level, leaving nothing for\nthe partitionwise join code to do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 10:14:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "\"Tom Lane\" <[email protected]> wrote:\n> Uh ... why? The pushed-down restrictions should result in pruning\n> away any prunable partitions at the scan level, leaving nothing for\n> the partitionwise join code to do.\n\nIt seems reasonable to me that the join condition can no longer be verified, since 'sc.sl = sg.sl' is now replaced by 'sg.sl = 5' so the join condition can no longer be validated.\n\nIt's true that the pruning would prune everything but one partition, in case we'd just have a single column partition key. But we don't. I don't see how pruning partitions should help in this case, since we are left with multiple partitions for both relations.\n\nRegards\nArne\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Thursday, August 1, 2019 4:14:54 PM\nTo: Richard Guo\nCc: Arne Roland; [email protected]\nSubject: Re: Partial join\n\nRichard Guo <[email protected]> writes:\n> For the third query, a rough investigation shows that, the qual 'sl =\n> 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> down to the base rels. One consequence of the deduction is when\n> constructing restrict lists for the joinrel, we lose the original\n> restrict 'sc.sl = sg.sl', and this would fail the check\n> have_partkey_equi_join(), which checks if there exists an equi-join\n> condition for each pair of partition keys. As a result, this joinrel\n> would not be considered as an input to further partitionwise joins.\n\n> We need to fix this.\n\nUh ... why? The pushed-down restrictions should result in pruning\naway any prunable partitions at the scan level, leaving nothing for\nthe partitionwise join code to do.\n\n regards, tom lane\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\"Tom Lane\" <[email protected]> wrote:\n\n> Uh ... why? The pushed-down restrictions should result in pruning\n> away any prunable partitions at the scan level, leaving nothing for\n> the partitionwise join code to do.\n\n\n\nIt seems reasonable to me that the join condition can no longer be verified, since 'sc.sl = sg.sl' is now replaced by 'sg.sl = 5' so the join condition can no longer be validated.\n\nIt's true that the pruning would prune everything but one partition, in case we'd just have a single column partition key. But we don't. I don't see how pruning partitions should help in this case, since we are left with multiple partitions for both relations.\n\n\n\nRegards\nArne\n\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Thursday, August 1, 2019 4:14:54 PM\nTo: Richard Guo\nCc: Arne Roland; [email protected]\nSubject: Re: Partial join\n \n\n\n\nRichard Guo <[email protected]> writes:\n> For the third query, a rough investigation shows that, the qual 'sl =\n> 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> down to the base rels. One consequence of the deduction is when\n> constructing restrict lists for the joinrel, we lose the original\n> restrict 'sc.sl = sg.sl', and this would fail the check\n> have_partkey_equi_join(), which checks if there exists an equi-join\n> condition for each pair of partition keys. As a result, this joinrel\n> would not be considered as an input to further partitionwise joins.\n\n> We need to fix this.\n\nUh ... why? 
The pushed-down restrictions should result in pruning\naway any prunable partitions at the scan level, leaving nothing for\nthe partitionwise join code to do.\n\n regards, tom lane",
"msg_date": "Thu, 1 Aug 2019 16:29:21 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 10:15 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > For the third query, a rough investigation shows that, the qual 'sl =\n> > 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> > implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> > down to the base rels. One consequence of the deduction is when\n> > constructing restrict lists for the joinrel, we lose the original\n> > restrict 'sc.sl = sg.sl', and this would fail the check\n> > have_partkey_equi_join(), which checks if there exists an equi-join\n> > condition for each pair of partition keys. As a result, this joinrel\n> > would not be considered as an input to further partitionwise joins.\n>\n> > We need to fix this.\n>\n> Uh ... why? The pushed-down restrictions should result in pruning\n> away any prunable partitions at the scan level, leaving nothing for\n> the partitionwise join code to do.\n>\n\nHmm..In the case of multiple partition keys, for range partitioning, if\nwe have no clauses for a given key, any later keys would not be\nconsidered for partition pruning.\n\nThat is to day, for table 'p partition by range (k1, k2)', quals like\n'k2 = Const' would not prune partitions.\n\nFor query:\n\nselect * from p as t1 join p as t2 on t1.k1 = t2.k1 and t1.k2 = t2.k2\nand t1.k2 = 2;\n\nSince we don't consider ECs containing consts when generating join\nclauses, we don't have restriction 't1.k2 = t2.k2' when building the\njoinrel. As a result, partitionwise join is not considered as it\nrequires there existing an equi-join condition for each pair of\npartition keys.\n\nIs this a problem? What's your opinion?\n\nThanks\nRichard\n\nOn Thu, Aug 1, 2019 at 10:15 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> For the third query, a rough investigation shows that, the qual 'sl =\n> 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> down to the base rels. One consequence of the deduction is when\n> constructing restrict lists for the joinrel, we lose the original\n> restrict 'sc.sl = sg.sl', and this would fail the check\n> have_partkey_equi_join(), which checks if there exists an equi-join\n> condition for each pair of partition keys. As a result, this joinrel\n> would not be considered as an input to further partitionwise joins.\n\n> We need to fix this.\n\nUh ... why? The pushed-down restrictions should result in pruning\naway any prunable partitions at the scan level, leaving nothing for\nthe partitionwise join code to do.Hmm..In the case of multiple partition keys, for range partitioning, ifwe have no clauses for a given key, any later keys would not beconsidered for partition pruning.That is to day, for table 'p partition by range (k1, k2)', quals like'k2 = Const' would not prune partitions.For query:select * from p as t1 join p as t2 on t1.k1 = t2.k1 and t1.k2 = t2.k2and t1.k2 = 2;Since we don't consider ECs containing consts when generating joinclauses, we don't have restriction 't1.k2 = t2.k2' when building thejoinrel. As a result, partitionwise join is not considered as itrequires there existing an equi-join condition for each pair ofpartition keys.Is this a problem? What's your opinion?ThanksRichard",
"msg_date": "Fri, 2 Aug 2019 17:33:39 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
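A minimal sketch of the situation described in this message (the column types and partition bounds are invented; the query is the one quoted above):

CREATE TABLE p (k1 int, k2 int, v text) PARTITION BY RANGE (k1, k2);
CREATE TABLE p_1 PARTITION OF p FOR VALUES FROM (0, 0)  TO (10, 0);
CREATE TABLE p_2 PARTITION OF p FOR VALUES FROM (10, 0) TO (20, 0);

-- 'k2 = 2' alone cannot prune anything: the bounds are compared on (k1, k2)
-- lexicographically, so rows with k2 = 2 can live in any partition.
EXPLAIN (COSTS OFF) SELECT * FROM p WHERE k2 = 2;

-- The query from the message: {t1.k2, t2.k2, 2} forms an EC containing a
-- constant, so t1.k2 = t2.k2 is not generated as a join clause and the
-- joinrel fails have_partkey_equi_join() even though both sides still keep
-- multiple partitions.
SET enable_partitionwise_join = on;
EXPLAIN (COSTS OFF)
SELECT *
FROM p AS t1
JOIN p AS t2 ON t1.k1 = t2.k1 AND t1.k2 = t2.k2 AND t1.k2 = 2;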
{
"msg_contents": "On Thu, Aug 1, 2019 at 7:46 PM Arne Roland <[email protected]> wrote:\n\n> Hello Richard,\n>\n> thanks for your quick reply.\n>\n>\n> > We need to fix this.\n>\n>\n> Do you have a better idea than just keeping the old quals - possibly just\n> the ones that get eliminated - in a separate data structure? Is the push\n> down of quals the only case of elimination of quals, only counting the ones\n> which happen before the restrict lists are generated?\n>\nIn you case, the restriction 'sl = sl' is just not generated for the\njoin, because it forms an EC with const, which is not considered when\ngenerating join clauses.\n\nPlease refer to the code snippet below:\n\n@@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root,\n List *sublist = NIL;\n\n /* ECs containing consts do not need any further\nenforcement */\n if (ec->ec_has_const)\n continue;\n\nThanks\nRichard\n\nOn Thu, Aug 1, 2019 at 7:46 PM Arne Roland <[email protected]> wrote:\n\n\nHello Richard,\n\n\nthanks for your quick reply.\n\n\n\n> We need to fix this.\n\n\nDo you have a better idea than just keeping the old quals - possibly just the ones that get eliminated - in a separate data structure? Is the push down of quals the only case of elimination of quals, only counting the ones which happen before the restrict\n lists are generated?In you case, the restriction 'sl = sl' is just not generated for thejoin, because it forms an EC with const, which is not considered whengenerating join clauses.Please refer to the code snippet below:@@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root, List *sublist = NIL; /* ECs containing consts do not need any further enforcement */ if (ec->ec_has_const) continue;ThanksRichard",
"msg_date": "Fri, 2 Aug 2019 18:00:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "Richard Guo <[email protected]> wrote:\n> Please refer to the code snippet below:\n>\n> @@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root,\n> List *sublist = NIL;\n>\n> /* ECs containing consts do not need any further enforcement */\n> if (ec->ec_has_const)\n> continue;\n\nSorry, I'm quite busy currently. And thanks! That was a good read.\n\nI might be wrong, but I think have_partkey_equi_join in joinrels.c should be aware of the const case. My naive approach would be keeping pointers to the first few constant clauses, which are referencing to a yet unmatched partition key, to keep the memory footprint feasible in manner similar to pk_has_clause. The question would be what to do, if there are a lot of const expressions on the part keys. One could palloc additional memory in that case, hoping that it will be quite rare. Or is there a different, better way to go about that?\nThank you for your feedback!\n\nRegards\nArne\n\n\n\n\n\n\n\n\n\nRichard Guo <[email protected]> wrote:\n\n\n> Please refer to the code snippet below:\n> \n\n> @@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root,\n> List *sublist = NIL;\n> \n> /* ECs containing consts do not need any further enforcement */\n> if (ec->ec_has_const)\n> continue;\n\n\n\nSorry, I'm quite busy currently. And thanks! That was a good read.\n\n\n\nI might be wrong, but I think have_partkey_equi_join in\njoinrels.c should be aware of the const case. My naive approach would be keeping pointers to the first few constant clauses, which are referencing to a yet unmatched partition key, to keep the memory footprint feasible in manner similar to pk_has_clause.\n The question would be what to do, if there are a lot of const expressions on the part keys. One could palloc additional memory in that case, hoping that it will be quite rare. Or is there a different, better way to go about that?\nThank you for your feedback!\n\n\n\nRegards\nArne",
"msg_date": "Mon, 19 Aug 2019 15:17:21 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partial join"
}
] |
[
{
"msg_contents": "Hey all,\nI have a questions regarding streaming replication that I would like to\nask in order to understand the feature better :\nI have 2 nodes configured with replication (primary + secondary).\nIn my primary I configured streaming replcation + archiving. My archive\ncommand :\ngzip < %p > /var/lib/pgsql/archive/%f ; echo \"archiving wal %f\"\n\nCorrect me if I'm wrong but my archive_command will be run when the\narchive_timeout is reached or when the wal is full (16MB). The wal file is\ncreated with default size of 16MB and it will be archived only after 16MB\nof wal records will be created.\n\nIn my secondary I have the following settings :\nrestore_command = 'rsync -avzhe ssh postgres@my_primary\n:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ; gunzip <\n/var/lib/pgsql/archive/%f > %p; echo \"restore command was launched\"'\narchive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n/var/lib/pgsql/archive %r; \"archive_cleanupup for %r was launched\"'\n\nWhich means, that the restore command on the secondary connects to the\nprimary, copies the wal file from the archive dir , unzip it and saves it\nin the pg_xlog dir of the database.\n\nmy question :\nWhen exactly the restore_command will be used ? I use streaming replication\ntherefore wal records are passed through wal_sender and wal receiver. I\ndont see in my logs in the secondary that the restore_command is used but I\nsee that the same wal files as in the primary are exists on the pg_xlog in\nthe secondary. Does the streaming replication generates wals on the\nsecondary from the wals records that it receives from the primary ?\n\nHey all,I have a questions regarding streaming replication that I would like to ask in order to understand the feature better : I have 2 nodes configured with replication (primary + secondary).In my primary I configured streaming replcation + archiving. My archive command : gzip < %p > /var/lib/pgsql/archive/%f ; echo \"archiving wal %f\"Correct me if I'm wrong but my archive_command will be run when the archive_timeout is reached or when the wal is full (16MB). The wal file is created with default size of 16MB and it will be archived only after 16MB of wal records will be created.In my secondary I have the following settings : restore_command = 'rsync -avzhe ssh postgres@my_primary :/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p; echo \"restore command was launched\"'archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup /var/lib/pgsql/archive %r; \"archive_cleanupup for %r was launched\"'Which means, that the restore command on the secondary connects to the primary, copies the wal file from the archive dir , unzip it and saves it in the pg_xlog dir of the database.my question : When exactly the restore_command will be used ? I use streaming replication therefore wal records are passed through wal_sender and wal receiver. I dont see in my logs in the secondary that the restore_command is used but I see that the same wal files as in the primary are exists on the pg_xlog in the secondary. Does the streaming replication generates wals on the secondary from the wals records that it receives from the primary ?",
"msg_date": "Wed, 31 Jul 2019 12:28:44 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "A question regarding streaming replication"
},
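One way to see which transport the secondary is using at a given moment is to query the replication views; the column names below are the 9.6 ones matching this setup (from version 10 on the *_location columns are called *_lsn):

-- On the secondary: is a WAL receiver streaming right now, and from where?
SELECT status, received_lsn, last_msg_receipt_time, conninfo
FROM pg_stat_wal_receiver;

-- On the primary: standbys currently connected via streaming
SELECT application_name, state, sent_location, replay_location
FROM pg_stat_replication;

An empty pg_stat_wal_receiver on the secondary, together with restore_command lines appearing in its log, would mean it has fallen back to archive recovery, which is exactly the case the reply below describes.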
{
"msg_contents": "Hi Mariel,\n\nplease, as already you have been told already in other occasions, do not cross-post. \n\nPosting to 2 mailing lists will disperse the answers, duplicate the efforts and probably waste time that might go into helping somebody else. \n\nSame reason why not to call 2 ambulances when somebody does not feel good.\n\nregards,\n\nfabio pardi\n\n\nOn 31/07/2019 11:28, Mariel Cherkassky wrote:\n> Hey all,\n> I have a questions regarding streaming replication that I would like to ask in order to understand the feature better : \n> I have 2 nodes configured with replication (primary + secondary).\n> In my primary I configured streaming replcation + archiving. My archive command : \n> gzip < %p > /var/lib/pgsql/archive/%f ; echo \"archiving wal %f\"\n> \n> Correct me if I'm wrong but my archive_command will be run when the archive_timeout is reached or when the wal is full (16MB). The wal file is created with default size of 16MB and it will be archived only after 16MB of wal records will be created.\n> \n> In my secondary I have the following settings : \n> restore_command = 'rsync -avzhe ssh postgres@my_primary :/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p; echo \"restore command was launched\"'\n> archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup /var/lib/pgsql/archive %r; \"archive_cleanupup for %r was launched\"'\n> \n> Which means, that the restore command on the secondary connects to the primary, copies the wal file from the archive dir , unzip it and saves it in the pg_xlog dir of the database.\n> \n> my question : \n> When exactly the restore_command will be used ? I use streaming replication therefore wal records are passed through wal_sender and wal receiver. I dont see in my logs in the secondary that the restore_command is used but I see that the same wal files as in the primary are exists on the pg_xlog in the secondary. Does the streaming replication generates wals on the secondary from the wals records that it receives from the primary ?\n> \n> \n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:41:55 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A question regarding streaming replication"
},
{
"msg_contents": "Beware, the echo command always returns the same exit status, whether the archiving had succeeded or not. You might lose WAL files!\n\nRegarding the restore command: It will be executed only if the slave is not capable of receiving all WALs via streaming replication, e. g. in case the slave was of for a while.\n\nRegards,\nHolger\n\nAm 31. Juli 2019 11:28:44 MESZ schrieb Mariel Cherkassky <[email protected]>:\n>Hey all,\n>I have a questions regarding streaming replication that I would like\n>to\n>ask in order to understand the feature better :\n>I have 2 nodes configured with replication (primary + secondary).\n>In my primary I configured streaming replcation + archiving. My archive\n>command :\n>gzip < %p > /var/lib/pgsql/archive/%f ; echo \"archiving wal %f\"\n>\n>Correct me if I'm wrong but my archive_command will be run when the\n>archive_timeout is reached or when the wal is full (16MB). The wal file\n>is\n>created with default size of 16MB and it will be archived only after\n>16MB\n>of wal records will be created.\n>\n>In my secondary I have the following settings :\n>restore_command = 'rsync -avzhe ssh postgres@my_primary\n>:/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ; gunzip <\n>/var/lib/pgsql/archive/%f > %p; echo \"restore command was launched\"'\n>archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup\n>/var/lib/pgsql/archive %r; \"archive_cleanupup for %r was launched\"'\n>\n>Which means, that the restore command on the secondary connects to the\n>primary, copies the wal file from the archive dir , unzip it and saves\n>it\n>in the pg_xlog dir of the database.\n>\n>my question :\n>When exactly the restore_command will be used ? I use streaming\n>replication\n>therefore wal records are passed through wal_sender and wal receiver. I\n>dont see in my logs in the secondary that the restore_command is used\n>but I\n>see that the same wal files as in the primary are exists on the pg_xlog\n>in\n>the secondary. Does the streaming replication generates wals on the\n>secondary from the wals records that it receives from the primary ?\n\n-- \nHolger Jakobs, Bergisch Gladbach\n+49 178 9759012\n- sent from mobile, therefore short -\n\nBeware, the echo command always returns the same exit status, whether the archiving had succeeded or not. You might lose WAL files!Regarding the restore command: It will be executed only if the slave is not capable of receiving all WALs via streaming replication, e. g. in case the slave was of for a while.Regards,HolgerAm 31. Juli 2019 11:28:44 MESZ schrieb Mariel Cherkassky <[email protected]>:\nHey all,I have a questions regarding streaming replication that I would like to ask in order to understand the feature better : I have 2 nodes configured with replication (primary + secondary).In my primary I configured streaming replcation + archiving. My archive command : gzip < %p > /var/lib/pgsql/archive/%f ; echo \"archiving wal %f\"Correct me if I'm wrong but my archive_command will be run when the archive_timeout is reached or when the wal is full (16MB). 
The wal file is created with default size of 16MB and it will be archived only after 16MB of wal records will be created.In my secondary I have the following settings : restore_command = 'rsync -avzhe ssh postgres@my_primary :/var/lib/pgsql/archive/%f /var/lib/pgsql/archive/%f ; gunzip < /var/lib/pgsql/archive/%f > %p; echo \"restore command was launched\"'archive_cleanup_command = '/usr/pgsql-9.6/bin/pg_archivecleanup /var/lib/pgsql/archive %r; \"archive_cleanupup for %r was launched\"'Which means, that the restore command on the secondary connects to the primary, copies the wal file from the archive dir , unzip it and saves it in the pg_xlog dir of the database.my question : When exactly the restore_command will be used ? I use streaming replication therefore wal records are passed through wal_sender and wal receiver. I dont see in my logs in the secondary that the restore_command is used but I see that the same wal files as in the primary are exists on the pg_xlog in the secondary. Does the streaming replication generates wals on the secondary from the wals records that it receives from the primary ?\n-- Holger Jakobs, Bergisch Gladbach+49 178 9759012- sent from mobile, therefore short -",
"msg_date": "Wed, 31 Jul 2019 11:45:13 +0200",
"msg_from": "Holger Jakobs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A question regarding streaming replication"
}
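Two hedged follow-ups to the warning about the exit status, sketches rather than drop-in fixes. The archiver view shows genuine archive_command failures, but the trailing echo in the command quoted earlier makes a failed gzip look successful to PostgreSQL, so that particular failure mode never shows up there; chaining with && lets the real status through.

-- Archiver health (failures are only counted when archive_command itself
-- returns non-zero, which the trailing "; echo" currently prevents)
SELECT archived_count, last_archived_time, failed_count, last_failed_wal
FROM pg_stat_archiver;

-- Illustrative rewrite of the archive_command so that a gzip failure
-- propagates to PostgreSQL and the WAL segment is retried later
ALTER SYSTEM SET archive_command =
    'gzip < %p > /var/lib/pgsql/archive/%f && echo "archiving wal %f"';
SELECT pg_reload_conf();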
] |
[
{
"msg_contents": "Hello team,\n\nWe have to migrate a schema from oracle to postgres but there is one table that is having following large lob segments. This table is taking time to export. What parameters we have to set in ora2pg.conf to speed up the data export by ora2pg.\n\nTable: CLIENT_DB_AUDIT_LOG\n\nLOBSEGMENT SYS_LOB0000095961C00008$$ 80.26\nLOBSEGMENT SYS_LOB0000095961C00007$$ 79.96\nLOBSEGMENT SYS_LOB0000094338C00008$$ 8.84\nLOBSEGMENT SYS_LOB0000084338C00007$$ 8.71\nLOBSEGMENT SYS_LOB0000085961C00009$$ 5.32\n\nVM Details are:\n\nRAM 8GB\nVCPUs 2 VCPU\nDisk 40GB\n\nThanks,\n\n\n\n\n\n\n\n\n\n\nHello team,\n \nWe have to migrate a schema from oracle to postgres but there is one table that is having following large lob segments. This table is taking time to export. What parameters we have to set in ora2pg.conf to speed up the data export by ora2pg.\n \nTable: CLIENT_DB_AUDIT_LOG\n \nLOBSEGMENT SYS_LOB0000095961C00008$$ 80.26\nLOBSEGMENT SYS_LOB0000095961C00007$$ 79.96\nLOBSEGMENT SYS_LOB0000094338C00008$$ 8.84\nLOBSEGMENT SYS_LOB0000084338C00007$$ 8.71\nLOBSEGMENT SYS_LOB0000085961C00009$$ 5.32\n \nVM Details are:\n \nRAM 8GB\nVCPUs 2 VCPU\nDisk 40GB\n \nThanks,",
"msg_date": "Wed, 31 Jul 2019 11:31:45 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oracle to postgres migration via ora2pg (blob data)"
},
{
"msg_contents": "I would look at the source table in Oracle first. It looks a lot like audit data. Perhaps all content is not needed in Postgres. If it is, then the table and lobs may benefit from being reorganised in oracle.\n\nAlter table CLIENT_DB_AUDIT_LOG move;\nAlter table CLIENT_DB_AUDIT_LOG move lob (SYS_LOB0000095961C00008$$);\n-- Three more of these.\n\nThe syntax is from my the back of my head. You may need to look the details up.\n\nNiels\n\n\nFra: Daulat Ram <[email protected]>\nSendt: 31. juli 2019 13:32\nTil: [email protected]; [email protected]\nEmne: Oracle to postgres migration via ora2pg (blob data)\n\nHello team,\n\nWe have to migrate a schema from oracle to postgres but there is one table that is having following large lob segments. This table is taking time to export. What parameters we have to set in ora2pg.conf to speed up the data export by ora2pg.\n\nTable: CLIENT_DB_AUDIT_LOG\n\nLOBSEGMENT SYS_LOB0000095961C00008$$ 80.26\nLOBSEGMENT SYS_LOB0000095961C00007$$ 79.96\nLOBSEGMENT SYS_LOB0000094338C00008$$ 8.84\nLOBSEGMENT SYS_LOB0000084338C00007$$ 8.71\nLOBSEGMENT SYS_LOB0000085961C00009$$ 5.32\n\nVM Details are:\n\nRAM 8GB\nVCPUs 2 VCPU\nDisk 40GB\n\nThanks,\n\n\n\n\n\n\n\n\n\n\nI would look at the source table in Oracle first. It looks a lot like audit data. Perhaps all content is not needed in Postgres. If it is, then the table and lobs may benefit\n from being reorganised in oracle. \n \nAlter table CLIENT_DB_AUDIT_LOG move;\nAlter table CLIENT_DB_AUDIT_LOG move lob (SYS_LOB0000095961C00008$$);\n\n-- Three more of these.\n\n \nThe syntax is from my the back of my head. You may need to look the details up.\n\n \nNiels\n \n \n\n\nFra: Daulat Ram <[email protected]> \nSendt: 31. juli 2019 13:32\nTil: [email protected]; [email protected]\nEmne: Oracle to postgres migration via ora2pg (blob data)\n\n\n \nHello team,\n \nWe have to migrate a schema from oracle to postgres but there is one table that is having following large lob segments. This table is taking time to export. What parameters we have to set in ora2pg.conf to speed up the\n data export by ora2pg.\n \nTable: CLIENT_DB_AUDIT_LOG\n \nLOBSEGMENT SYS_LOB0000095961C00008$$ 80.26\nLOBSEGMENT SYS_LOB0000095961C00007$$ 79.96\nLOBSEGMENT SYS_LOB0000094338C00008$$ 8.84\nLOBSEGMENT SYS_LOB0000084338C00007$$ 8.71\nLOBSEGMENT SYS_LOB0000085961C00009$$ 5.32\n \nVM Details are:\n \nRAM 8GB\nVCPUs 2 VCPU\nDisk 40GB\n \nThanks,",
"msg_date": "Wed, 31 Jul 2019 11:45:58 +0000",
"msg_from": "Niels Jespersen <[email protected]>",
"msg_from_op": false,
"msg_subject": "SV: Oracle to postgres migration via ora2pg (blob data)"
},
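A hedged sketch of the reorganisation Niels suggests above, for reference. The SYS_LOB...$$ identifiers are segment names, while ALTER TABLE ... MOVE LOB takes the LOB column name, so the segments first have to be mapped to columns via DBA_LOBS. The column, index and tablespace names below (everything except CLIENT_DB_AUDIT_LOG and the segment names) are placeholders, not taken from the thread, and MOVE leaves the table's indexes UNUSABLE, so they need a rebuild afterwards:

    -- Map the large LOB segments to their columns (needs DBA privileges)
    SELECT table_name, column_name, segment_name
    FROM   dba_lobs
    WHERE  segment_name IN ('SYS_LOB0000095961C00008$$', 'SYS_LOB0000095961C00007$$');

    -- Reorganise the table, then each LOB column (one MOVE LOB per column)
    ALTER TABLE client_db_audit_log MOVE;
    ALTER TABLE client_db_audit_log MOVE LOB (audit_payload) STORE AS (TABLESPACE users);

    -- MOVE changes rowids, so rebuild any index it marked UNUSABLE
    ALTER INDEX client_db_audit_log_pk REBUILD;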
{
"msg_contents": "FullConvert does this job much faster than ora2pg\n\n\nWith Warm Regards,\nAmol Tarte,\nProject Lead,\nRajdeep InfoTechno Pvt. Ltd.\nVisit us at http://it.rajdeepgroup.com\n\nOn Wed 31 Jul, 2019, 5:16 PM Niels Jespersen, <[email protected]> wrote:\n\n> I would look at the source table in Oracle first. It looks a lot like\n> audit data. Perhaps all content is not needed in Postgres. If it is, then\n> the table and lobs may benefit from being reorganised in oracle.\n>\n>\n>\n> Alter table CLIENT_DB_AUDIT_LOG move;\n>\n> Alter table CLIENT_DB_AUDIT_LOG move lob (SYS_LOB0000095961C00008$$);\n>\n> -- Three more of these.\n>\n>\n>\n> The syntax is from my the back of my head. You may need to look the\n> details up.\n>\n>\n>\n> Niels\n>\n>\n>\n>\n>\n> *Fra:* Daulat Ram <[email protected]>\n> *Sendt:* 31. juli 2019 13:32\n> *Til:* [email protected];\n> [email protected]\n> *Emne:* Oracle to postgres migration via ora2pg (blob data)\n>\n>\n>\n> Hello team,\n>\n>\n>\n> We have to migrate a schema from oracle to postgres but there is one table\n> that is having following large lob segments. This table is taking time to\n> export. What parameters we have to set in ora2pg.conf to speed up the data\n> export by ora2pg.\n>\n>\n>\n> Table: CLIENT_DB_AUDIT_LOG\n>\n>\n>\n> LOBSEGMENT SYS_LOB0000095961C00008$$ 80.26\n>\n> LOBSEGMENT SYS_LOB0000095961C00007$$ 79.96\n>\n> LOBSEGMENT SYS_LOB0000094338C00008$$ 8.84\n>\n> LOBSEGMENT SYS_LOB0000084338C00007$$ 8.71\n>\n> LOBSEGMENT SYS_LOB0000085961C00009$$ 5.32\n>\n>\n>\n> VM Details are:\n>\n>\n>\n> RAM 8GB\n>\n> VCPUs 2 VCPU\n>\n> Disk 40GB\n>\n>\n>\n> Thanks,\n>\n>\n>\n\nFullConvert does this job much faster than ora2pgWith Warm Regards,Amol Tarte,Project Lead,Rajdeep InfoTechno Pvt. Ltd.Visit us at http://it.rajdeepgroup.comOn Wed 31 Jul, 2019, 5:16 PM Niels Jespersen, <[email protected]> wrote:\n\n\nI would look at the source table in Oracle first. It looks a lot like audit data. Perhaps all content is not needed in Postgres. If it is, then the table and lobs may benefit\n from being reorganised in oracle. \n \nAlter table CLIENT_DB_AUDIT_LOG move;\nAlter table CLIENT_DB_AUDIT_LOG move lob (SYS_LOB0000095961C00008$$);\n\n-- Three more of these.\n\n \nThe syntax is from my the back of my head. You may need to look the details up.\n\n \nNiels\n \n \n\n\nFra: Daulat Ram <[email protected]> \nSendt: 31. juli 2019 13:32\nTil: [email protected]; [email protected]\nEmne: Oracle to postgres migration via ora2pg (blob data)\n\n\n \nHello team,\n \nWe have to migrate a schema from oracle to postgres but there is one table that is having following large lob segments. This table is taking time to export. What parameters we have to set in ora2pg.conf to speed up the\n data export by ora2pg.\n \nTable: CLIENT_DB_AUDIT_LOG\n \nLOBSEGMENT SYS_LOB0000095961C00008$$ 80.26\nLOBSEGMENT SYS_LOB0000095961C00007$$ 79.96\nLOBSEGMENT SYS_LOB0000094338C00008$$ 8.84\nLOBSEGMENT SYS_LOB0000084338C00007$$ 8.71\nLOBSEGMENT SYS_LOB0000085961C00009$$ 5.32\n \nVM Details are:\n \nRAM 8GB\nVCPUs 2 VCPU\nDisk 40GB\n \nThanks,",
"msg_date": "Wed, 31 Jul 2019 21:32:10 +0530",
"msg_from": "Amol Tarte <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration via ora2pg (blob data)"
},
{
"msg_contents": "Le 31/07/2019 à 18:02, Amol Tarte a écrit :\n> FullConvert does this job much faster than ora2pg\n>\n>\n> With Warm Regards,\n> Amol Tarte,\n> Project Lead,\n> Rajdeep InfoTechno Pvt. Ltd.\n> Visit us at http://it.rajdeepgroup.com\n>\n> On Wed 31 Jul, 2019, 5:16 PM Niels Jespersen, <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> I would look at the source table in Oracle first. It looks a lot\n> like audit data. Perhaps all content is not needed in Postgres. If\n> it is, then the table and lobs may benefit from being reorganised\n> in oracle.\n>\n> \n>\n> Alter table CLIENT_DB_AUDIT_LOG move;\n>\n> Alter table CLIENT_DB_AUDIT_LOG move lob\n> (SYS_LOB0000095961C00008$$);\n>\n> -- Three more of these.\n>\n> \n>\n> The syntax is from my the back of my head. You may need to look\n> the details up.\n>\n> \n>\n> Niels\n>\n> \n>\n> \n>\n> *Fra:* Daulat Ram <[email protected]\n> <mailto:[email protected]>>\n> *Sendt:* 31. juli 2019 13:32\n> *Til:* [email protected]\n> <mailto:[email protected]>;\n> [email protected]\n> <mailto:[email protected]>\n> *Emne:* Oracle to postgres migration via ora2pg (blob data)\n>\n> \n>\n> Hello team,\n>\n> \n>\n> We have to migrate a schema from oracle to postgres but there is\n> one table that is having following large lob segments. This table\n> is taking time to export. What parameters we have to set in\n> ora2pg.conf to speed up the data export by ora2pg.\n>\n> \n>\n> Table: CLIENT_DB_AUDIT_LOG\n>\n> \n>\n> LOBSEGMENT SYS_LOB0000095961C00008$$ 80.26\n>\n> LOBSEGMENT SYS_LOB0000095961C00007$$ 79.96\n>\n> LOBSEGMENT SYS_LOB0000094338C00008$$ 8.84\n>\n> LOBSEGMENT SYS_LOB0000084338C00007$$ 8.71\n>\n> LOBSEGMENT SYS_LOB0000085961C00009$$ 5.32\n>\n> \n>\n> VM Details are:\n>\n> \n>\n> RAM 8GB\n>\n> VCPUs 2 VCPU\n>\n> Disk 40GB\n>\n> \n>\n> Thanks,\n>\n\nHi,\n\nBefore using impressive commercial products you can try some additional\nconfiguration with Ora2pg.\n\nThe only solution to improve data migration performances is to use\nparallelism. The problem with BLOB is that they have to be converted\ninto hex to be inserted into a bytea. The internal function in Ora2Pg\nthat responsible of this job is _escape_lob(). The other problem is that\nOracle is very slow to send the BLOB to Ora2Pg, I don't know if\ncommercial products are better at this point but I have challenged\nOra2Pg with Kettle some years ago without do much differences. So what\nyou should do first is to set the following in your ora2pg.conf:\n\n NO_LOB_LOCATOR 1\n LONGREADLEN 100000000\n\n\nThis will force Oracle to send the full content of the BLOB in a single\npass otherwise it will use small chunks. The only drawback is that you\nhave to set LONGREADLEN to the highest BLOB size in your table to not\nthrow and LONGTRUNC error.\n\nThat also mean that for each ROW returned DBD::Oracle (Perl driver for\nOracle) will allocate at least LONGREADLEN in memory and we want to\nextract as much rows as possible. You have understood that your 8GB of\nmemory will limit the quantity of rows that can be exported at the same\ntime.\n\nThe other point is that Oracle is slow so you have to parallelize data\nexport. Use -J 4 at command line to create 4 simultaneous process to\ndata export. Parallelization on Oracle side is only possible if you have\na numeric column that can be used to split the data using modulo 4 in\nthis case. 
This is a basic implementation but it is enough in most cases.\n\nConverting BLOB to Bytea consume lot of cpu cycle too so it is a good\npractice to parallelize this work too. Use -j 2 or -j 3 for this work.\nThe number of parallelisation process should be tested because there is\na limit where you will not win anything.\n\nIf you have, let's say 32GB of memory and 12 cpu you could try a command\nlike :\n\n ora2pg -c ora2pg.conf -J 4 -j 3 -t CLIENT_DB_AUDIT_LOG -L 500\n\nIf you have less resources don't forget that -J and -j must be\nmultiplied to have the number of process that Ora2Pg will parallelize.\nThe -L option (DATA_LIMIT) is used to reduce the number of row extracted\nat a time. Here with a value of 500 it will process 50 (DATA_LIMIT/10)\nrows with BLOB at a time. If the table do not have any BLOB it will use\n500 row at a time. For most tables this parameter should be set to 10000\nup to 250000. If you have lot of memory the value can be higher. If you\nthink it is too low you can set BLOB_LIMIT in ora2pg.conf to set it at a\nhigher value.\n\nHowever Ora2Pg will show you the data migration speed so you can adjust\nall these parameters to see if you have some performances gains. If you\nwant to know exactly at which speed Oracle is able to send the data add\n--oracle_speed to the ora2pg command. Ora2Pg will only extract data from\nOracle, there will be no bytea transformation or data writing, just the\nfull Oracle speed. You can do some test with the value of the -J option\nto see what is the best value. On the other side you can use\n--ora2pg_speed option to see at which speed Ora2Pg is able to convert\nthe data, nothing will be written too. Use it to know if you have some\nwin with the value of the -j option. Don't forget to do some additional\ntest with the BLOB_LIMIT value to see if there some more improvement. If\nsomeone can prove me that they have better performances at Oracle data\nextraction side I will be pleased to look at this code.\n\nI hope this will help.\n \n\nRegards,\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n\n\n\n\n Le 31/07/2019 à 18:02, Amol Tarte a écrit :\n\n\nFullConvert does this job much faster than ora2pg\n\n\n With Warm Regards,\n Amol Tarte,\n Project Lead,\n Rajdeep InfoTechno Pvt. Ltd.\n Visit us at http://it.rajdeepgroup.com\n\n\n\nOn Wed 31 Jul, 2019, 5:16 PM\n Niels Jespersen, <[email protected]> wrote:\n\n\n\n\nI would look at the source table in\n Oracle first. It looks a lot like audit data. Perhaps\n all content is not needed in Postgres. If it is, then\n the table and lobs may benefit from being reorganised\n in oracle. \n \nAlter table CLIENT_DB_AUDIT_LOG move;\nAlter table CLIENT_DB_AUDIT_LOG move lob\n (SYS_LOB0000095961C00008$$);\n \n-- Three more of these.\n \n \nThe syntax is from my the back of my\n head. You may need to look the details up.\n \n \nNiels\n \n \n\n\nFra: Daulat Ram <[email protected]>\n \nSendt: 31. juli 2019 13:32\nTil: [email protected];\n [email protected]\nEmne: Oracle to postgres migration via ora2pg\n (blob data)\n\n\n \nHello team,\n \nWe have to migrate\n a schema from oracle to postgres but there is one\n table that is having following large lob segments. \n This table is taking time to export. 
What parameters\n we have to set in ora2pg.conf to speed up the data\n export by ora2pg.\n \nTable: \n CLIENT_DB_AUDIT_LOG\n \nLOBSEGMENT \n SYS_LOB0000095961C00008$$ 80.26\nLOBSEGMENT \n SYS_LOB0000095961C00007$$ 79.96\nLOBSEGMENT \n SYS_LOB0000094338C00008$$ 8.84\nLOBSEGMENT \n SYS_LOB0000084338C00007$$ 8.71\nLOBSEGMENT \n SYS_LOB0000085961C00009$$ 5.32\n \nVM Details are:\n \nRAM 8GB\nVCPUs 2 VCPU\nDisk 40GB\n \nThanks,\n\n\n\n\n\n\n\nHi,\n\n\nBefore using impressive commercial\n products you can try some additional configuration with Ora2pg.\n\n\nThe only solution to improve data\n migration performances is to use parallelism. The problem with\n BLOB is that they have to be converted into hex to be inserted\n into a bytea. The internal function in Ora2Pg that responsible of\n this job is _escape_lob(). The other problem is that Oracle is\n very slow to send the BLOB to Ora2Pg, I don't know if commercial\n products are better at this point but I have challenged Ora2Pg\n with Kettle some years ago without do much differences. So what\n you should do first is to set the following in your ora2pg.conf:\n\n\n\nNO_LOB_LOCATOR 1\nLONGREADLEN 100000000\n\n\n\nThis will force Oracle to send the full\n content of the BLOB in a single pass otherwise it will use small\n chunks. The only drawback is that you have to set LONGREADLEN to\n the highest BLOB size in your table to not throw and LONGTRUNC\n error.\n\n\nThat also mean that for each ROW\n returned DBD::Oracle (Perl driver for Oracle) will allocate at\n least LONGREADLEN in memory and we want to extract as much rows as\n possible. You have understood that your 8GB of memory will limit\n the quantity of rows that can be exported at the same time.\n\n\nThe other point is that Oracle is slow\n so you have to parallelize data export. Use -J 4 at command line\n to create 4 simultaneous process to data export. Parallelization\n on Oracle side is only possible if you have a numeric column that\n can be used to split the data using modulo 4 in this case. This is\n a basic implementation but it is enough in most cases.\n\n\nConverting BLOB to Bytea consume lot of\n cpu cycle too so it is a good practice to parallelize this work\n too. Use -j 2 or -j 3 for this work. The number of parallelisation\n process should be tested because there is a limit where you will\n not win anything.\n\n\nIf you have, let's say 32GB of memory\n and 12 cpu you could try a command like :\n\n\n ora2pg -c ora2pg.conf -J 4 -j 3 -t\n CLIENT_DB_AUDIT_LOG -L 500\n\n\nIf you have less resources don't forget\n that -J and -j must be multiplied to have the number of process\n that Ora2Pg will parallelize. The -L option (DATA_LIMIT) is used\n to reduce the number of row extracted at a time. Here with a value\n of 500 it will process 50 (DATA_LIMIT/10) rows with BLOB at a\n time. If the table do not have any BLOB it will use 500 row at a\n time. For most tables this parameter should be set to 10000 up to\n 250000. If you have lot of memory the value can be higher. If you\n think it is too low you can set BLOB_LIMIT in ora2pg.conf to set\n it at a higher value. \n\n\n\nHowever Ora2Pg will show you the data\n migration speed so you can adjust all these parameters to see if\n you have some performances gains. If you want to know exactly at\n which speed Oracle is able to send the data add --oracle_speed to\n the ora2pg command. Ora2Pg will only extract data from Oracle,\n there will be no bytea transformation or data writing, just the\n full Oracle speed. 
You can do some test with the value of the -J\n option to see what is the best value. On the other side you can\n use --ora2pg_speed option to see at which speed Ora2Pg is able to\n convert the data, nothing will be written too. Use it to know if\n you have some win with the value of the -j option. Don't forget to\n do some additional test with the BLOB_LIMIT value to see if there\n some more improvement. If someone can prove me that they have\n better performances at Oracle data extraction side I will be\n pleased to look at this code. \n\n\n\nI hope this will help.\n\n \n\n\n\nRegards,\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Wed, 31 Jul 2019 23:05:28 +0200",
"msg_from": "Gilles Darold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgres migration via ora2pg (blob data)"
}
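One practical detail when applying Gilles's LONGREADLEN advice: the value has to cover the largest LOB in the table, which can be measured in Oracle beforehand. A small sketch, assuming hypothetical BLOB column names for CLIENT_DB_AUDIT_LOG:

    -- Largest LOB per column, in bytes; the column names are assumptions
    SELECT MAX(DBMS_LOB.GETLENGTH(audit_before)) AS max_before,
           MAX(DBMS_LOB.GETLENGTH(audit_after))  AS max_after
    FROM   client_db_audit_log;

Round the largest result up and use it for LONGREADLEN; with only 8 GB of RAM, keep DATA_LIMIT/BLOB_LIMIT small enough that rows-per-batch times LONGREADLEN still fits in memory, as Gilles notes.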
] |
[
{
"msg_contents": "Hello,\n\nWe are working on development of an application with postgresql 9.6 as\nbackend. Application as a whole is expected to give an throughput of 100k\ntransactions per sec. The transactions are received by DB from component\nfiring DMLs in ad-hoc fashion i.e. the commits are fired after random\nnumbers of transaction like 2,3,4. There is no bulk loading of records. DB\nshould have HA setup in active passive streaming replication. We are doing\na test setup on a 8-core machine having 16 GB RAM. Actual HW will be\nbetter.\n\nNeed help in:\n1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We have\ntested with a simple Java code firing insert and commit in a loop on a\nsimple table with one column. We get 1200 rows per sec. If we increase\nthreads RPS decrease.\n\n2. We have tuned some DB params like shared_buffers, sync_commit off, are\nthere any other pointers to tune DB params?\n\n\nThanks.\n\nHello,We are working on development of an application with postgresql 9.6 as backend. Application as a whole is expected to give an throughput of 100k transactions per sec. The transactions are received by DB from component firing DMLs in ad-hoc fashion i.e. the commits are fired after random numbers of transaction like 2,3,4. There is no bulk loading of records. DB should have HA setup in active passive streaming replication. We are doing a test setup on a 8-core machine having 16 GB RAM. Actual HW will be better. Need help in:1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We have tested with a simple Java code firing insert and commit in a loop on a simple table with one column. We get 1200 rows per sec. If we increase threads RPS decrease.2. We have tuned some DB params like shared_buffers, sync_commit off, are there any other pointers to tune DB params?Thanks.",
"msg_date": "Thu, 1 Aug 2019 08:40:53 +0530",
"msg_from": "Shital A <[email protected]>",
"msg_from_op": true,
"msg_subject": "PSQL performance - TPS"
},
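One thing worth ruling out before tuning server parameters: with a commit after every insert, the loop is often dominated by per-statement round trips and per-commit overhead rather than raw insert throughput. A minimal sketch of the difference, using a hypothetical stand-in for the one-column test table (the Java-side equivalent is batching statements per transaction rather than committing each row):

    -- Hypothetical stand-in for the one-column test table
    CREATE TABLE IF NOT EXISTS tps_test (val text);

    -- One row per transaction (what the test loop effectively does) pays the
    -- commit overhead once per row; grouping many rows per COMMIT amortises it:
    BEGIN;
    INSERT INTO tps_test (val)
    SELECT 'row ' || g FROM generate_series(1, 1000) AS g;
    COMMIT;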
{
"msg_contents": "On 01/08/2019 15:10, Shital A wrote:\n> Hello,\n>\n> We are working on development of an application with postgresql 9.6 as \n> backend. Application as a whole is expected to give an throughput of \n> 100k transactions per sec. The transactions are received by DB from \n> component firing DMLs in ad-hoc fashion i.e. the commits are fired \n> after random numbers of transaction like 2,3,4. There is no bulk \n> loading of records. DB should have HA setup in active passive \n> streaming replication. We are doing a test setup on a 8-core machine \n> having 16 GB RAM. Actual HW will be better.\n>\n> Need help in:\n> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We \n> have tested with a simple Java code firing insert and commit in a loop \n> on a simple table with one column. We get 1200 rows per sec. If we \n> increase threads RPS decrease.\n>\n> 2. We have tuned some DB params like shared_buffers, sync_commit off, \n> are there any other pointers to tune DB params?\n>\n>\n> Thanks.\n\nCurious, why not use a more up-to-date version of Postgres, such 11.4? \nAs more recent versions tend to run faster and to be better optimised!\n\nYou also need to specify the operating system! Hopefully you are \nrunning a Linux or Unix O/S!\n\n\nCheers,\nGavin\n\n\n\n\n",
"msg_date": "Thu, 1 Aug 2019 17:15:51 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL performance - TPS"
},
{
"msg_contents": "Hello,\n\nVersion 9.6 is used because the components interacting with DB support this\nversion. OS is RHEL 7.6.\n\nThanks!\n\nOn Thu, 1 Aug 2019, 10:45 Gavin Flower, <[email protected]>\nwrote:\n\n> On 01/08/2019 15:10, Shital A wrote:\n> > Hello,\n> >\n> > We are working on development of an application with postgresql 9.6 as\n> > backend. Application as a whole is expected to give an throughput of\n> > 100k transactions per sec. The transactions are received by DB from\n> > component firing DMLs in ad-hoc fashion i.e. the commits are fired\n> > after random numbers of transaction like 2,3,4. There is no bulk\n> > loading of records. DB should have HA setup in active passive\n> > streaming replication. We are doing a test setup on a 8-core machine\n> > having 16 GB RAM. Actual HW will be better.\n> >\n> > Need help in:\n> > 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We\n> > have tested with a simple Java code firing insert and commit in a loop\n> > on a simple table with one column. We get 1200 rows per sec. If we\n> > increase threads RPS decrease.\n> >\n> > 2. We have tuned some DB params like shared_buffers, sync_commit off,\n> > are there any other pointers to tune DB params?\n> >\n> >\n> > Thanks.\n>\n> Curious, why not use a more up-to-date version of Postgres, such 11.4?\n> As more recent versions tend to run faster and to be better optimised!\n>\n> You also need to specify the operating system! Hopefully you are\n> running a Linux or Unix O/S!\n>\n>\n> Cheers,\n> Gavin\n>\n>\n>\n\nHello,Version 9.6 is used because the components interacting with DB support this version. OS is RHEL 7.6.Thanks! On Thu, 1 Aug 2019, 10:45 Gavin Flower, <[email protected]> wrote:On 01/08/2019 15:10, Shital A wrote:\n> Hello,\n>\n> We are working on development of an application with postgresql 9.6 as \n> backend. Application as a whole is expected to give an throughput of \n> 100k transactions per sec. The transactions are received by DB from \n> component firing DMLs in ad-hoc fashion i.e. the commits are fired \n> after random numbers of transaction like 2,3,4. There is no bulk \n> loading of records. DB should have HA setup in active passive \n> streaming replication. We are doing a test setup on a 8-core machine \n> having 16 GB RAM. Actual HW will be better.\n>\n> Need help in:\n> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We \n> have tested with a simple Java code firing insert and commit in a loop \n> on a simple table with one column. We get 1200 rows per sec. If we \n> increase threads RPS decrease.\n>\n> 2. We have tuned some DB params like shared_buffers, sync_commit off, \n> are there any other pointers to tune DB params?\n>\n>\n> Thanks.\n\nCurious, why not use a more up-to-date version of Postgres, such 11.4? \nAs more recent versions tend to run faster and to be better optimised!\n\nYou also need to specify the operating system! Hopefully you are \nrunning a Linux or Unix O/S!\n\n\nCheers,\nGavin",
"msg_date": "Thu, 1 Aug 2019 10:48:10 +0530",
"msg_from": "Shital A <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSQL performance - TPS"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 08:40:53 +0530, Shital A wrote:\n> Need help in:\n> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We have\n> tested with a simple Java code firing insert and commit in a loop on a\n> simple table with one column. We get 1200 rows per sec. If we increase\n> threads RPS decrease.\n> \n> 2. We have tuned some DB params like shared_buffers, sync_commit off, are\n> there any other pointers to tune DB params?\n\nIf you've set synchronous_commit = off, and you still get only 1200\ntransactions/sec, something else is off. Are you sure you set that?\n\nAre your clients in the same datacenter as your database? Otherwise it\ncould be that you're mostly seeing latency effects.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 10:21:28 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL performance - TPS"
},
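A quick way to double-check the question Andres raises, directly on the server (a sketch; ALTER SYSTEM needs superuser and, for this parameter, a reload is enough):

    -- What the server is actually using, and where the value comes from
    SHOW synchronous_commit;
    SELECT name, setting, source FROM pg_settings WHERE name = 'synchronous_commit';

    -- Set it cluster-wide and reload; no restart is needed for this parameter
    ALTER SYSTEM SET synchronous_commit = off;
    SELECT pg_reload_conf();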
{
"msg_contents": "I am not very surprised with these results. However, what’s the disk type?\nThat can matter quite a bit.\n\nOn Thu, 1 Aug 2019 at 10:51 PM, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2019-08-01 08:40:53 +0530, Shital A wrote:\n> > Need help in:\n> > 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We\n> have\n> > tested with a simple Java code firing insert and commit in a loop on a\n> > simple table with one column. We get 1200 rows per sec. If we increase\n> > threads RPS decrease.\n> >\n> > 2. We have tuned some DB params like shared_buffers, sync_commit off, are\n> > there any other pointers to tune DB params?\n>\n> If you've set synchronous_commit = off, and you still get only 1200\n> transactions/sec, something else is off. Are you sure you set that?\n>\n> Are your clients in the same datacenter as your database? Otherwise it\n> could be that you're mostly seeing latency effects.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\n\nI am not very surprised with these results. However, what’s the disk type? That can matter quite a bit.On Thu, 1 Aug 2019 at 10:51 PM, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2019-08-01 08:40:53 +0530, Shital A wrote:\n> Need help in:\n> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We have\n> tested with a simple Java code firing insert and commit in a loop on a\n> simple table with one column. We get 1200 rows per sec. If we increase\n> threads RPS decrease.\n> \n> 2. We have tuned some DB params like shared_buffers, sync_commit off, are\n> there any other pointers to tune DB params?\n\nIf you've set synchronous_commit = off, and you still get only 1200\ntransactions/sec, something else is off. Are you sure you set that?\n\nAre your clients in the same datacenter as your database? Otherwise it\ncould be that you're mostly seeing latency effects.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 1 Aug 2019 23:36:33 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL performance - TPS"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:\n> > If you've set synchronous_commit = off, and you still get only 1200\n> > transactions/sec, something else is off. Are you sure you set that?\n> I am not very surprised with these results. However, what’s the disk type?\n> That can matter quite a bit.\n\nWhy aren't you surprised? I can easily get 20k+ write transactions/sec\non my laptop, with synchronous_commit=off. With appropriate\nshared_buffers and other settings, the disk speed shouldn't matter that\nmuch for in insertion mostly workload.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 11:14:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL performance - TPS"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 2:15 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:\n> > > If you've set synchronous_commit = off, and you still get only 1200\n> > > transactions/sec, something else is off. Are you sure you set that?\n> > I am not very surprised with these results. However, what’s the disk\n> type?\n> > That can matter quite a bit.\n>\n>\nAlso a reminder that you should have a connection pooler in front of your\ndatabase such as PGBouncer. If you are churning a lot of connections you\ncould be hurting your throughput.\n\nOn Thu, Aug 1, 2019 at 2:15 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:\n> > If you've set synchronous_commit = off, and you still get only 1200\n> > transactions/sec, something else is off. Are you sure you set that?\n> I am not very surprised with these results. However, what’s the disk type?\n> That can matter quite a bit.Also a reminder that you should have a connection pooler in front of your database such as PGBouncer. If you are churning a lot of connections you could be hurting your throughput.",
"msg_date": "Thu, 1 Aug 2019 14:27:53 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL performance - TPS"
},
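Before adding a pooler it can help to confirm whether connection churn is actually part of the picture. A small sketch using the standard statistics views:

    -- How many backends are open and what they are doing
    SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count(*) DESC;

    -- Commits since the stats were last reset; sample twice and take the
    -- difference over the interval to estimate the real TPS
    SELECT datname, xact_commit, xact_rollback
    FROM   pg_stat_database
    WHERE  datname = current_database();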
{
"msg_contents": "On Thu, 1 Aug 2019, 23:58 Rick Otten, <[email protected]> wrote:\n\n>\n>\n> On Thu, Aug 1, 2019 at 2:15 PM Andres Freund <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> On 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:\n>> > > If you've set synchronous_commit = off, and you still get only 1200\n>> > > transactions/sec, something else is off. Are you sure you set that?\n>> > I am not very surprised with these results. However, what’s the disk\n>> type?\n>> > That can matter quite a bit.\n>>\n>>\n> Also a reminder that you should have a connection pooler in front of your\n> database such as PGBouncer. If you are churning a lot of connections you\n> could be hurting your throughput.\n>\n>\n>\n\nHello,\n\nYes, synchronous_commit is off on primary and standby.\n\nPrimary, standby and clients are in same datacentre.\n\nShared_buffers set to 25% of RAM , no much improvement if this is increased.\n\nOther params set are:\n\nEffective_cache_size 12GB\nMaintainance_work_mem 1GB\nWalk_buffers 16MB\nEffective_io_concurrency 200\nWork_mem 5242kB\nMin_wal_size 2GB\nMax_wal_size 4GB\nMax_worker_processes 8\nMax_parallel_workers_per_gather 8\nCheckpoint_completion_target 0.9\nRandom_page_cost 1.1\n\nWe have not configured connection pooler. Number of coonections are under\n20 for this testing.\n\n@Rick, 20k TPS on your system - is it with batching\n\nWant to know what configuration we are missing to achieve higher TPS. We\nare testing inserts on a simple table with just one text column.\n\n\nThanks !\n\n>\n\nOn Thu, 1 Aug 2019, 23:58 Rick Otten, <[email protected]> wrote:On Thu, Aug 1, 2019 at 2:15 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:\n> > If you've set synchronous_commit = off, and you still get only 1200\n> > transactions/sec, something else is off. Are you sure you set that?\n> I am not very surprised with these results. However, what’s the disk type?\n> That can matter quite a bit.Also a reminder that you should have a connection pooler in front of your database such as PGBouncer. If you are churning a lot of connections you could be hurting your throughput. Hello,Yes, synchronous_commit is off on primary and standby. Primary, standby and clients are in same datacentre. Shared_buffers set to 25% of RAM , no much improvement if this is increased.Other params set are:Effective_cache_size 12GBMaintainance_work_mem 1GBWalk_buffers 16MBEffective_io_concurrency 200Work_mem 5242kBMin_wal_size 2GBMax_wal_size 4GBMax_worker_processes 8Max_parallel_workers_per_gather 8Checkpoint_completion_target 0.9Random_page_cost 1.1We have not configured connection pooler. Number of coonections are under 20 for this testing. @Rick, 20k TPS on your system - is it with batching Want to know what configuration we are missing to achieve higher TPS. We are testing inserts on a simple table with just one text column. Thanks !",
"msg_date": "Fri, 2 Aug 2019 10:29:16 +0530",
"msg_from": "Shital A <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSQL performance - TPS"
},
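Several of the parameter names quoted above are easy to mistype (e.g. wal_buffers, maintenance_work_mem), so it is worth confirming what the server actually has in effect; a sketch using pg_settings:

    SELECT name, setting, unit, source
    FROM   pg_settings
    WHERE  name IN ('synchronous_commit', 'shared_buffers', 'wal_buffers',
                    'effective_cache_size', 'maintenance_work_mem', 'work_mem',
                    'max_wal_size', 'checkpoint_completion_target');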
{
"msg_contents": "> Application as a whole is expected to give an throughput of 100k\ntransactions per sec.\n> On this env(8core cpu, 16GB) what is the TPS that we can expect?\n\nas a reference - maybe you can reuse/adapt the \"TechEmpower Framework\nBenchmarks\" tests - and compare your PG9.6+hardware results.\n\nThe new TechEmpower Framework Benchmarks [2019-07-09 Round 18]\n* reference numbers:\nhttps://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=update\n* source code: https://github.com/TechEmpower/FrameworkBenchmarks\n* PG11 config:\nhttps://github.com/TechEmpower/FrameworkBenchmarks/blob/master/toolset/databases/postgres/postgresql.conf\n* java frameworks:\nhttps://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Java\n\n> We have tested with a simple Java code firing insert\n\nAs I see - There are lot of java framework - and sometimes 10x difference\nin performance :\nhttps://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=update\n\n\"Responses per second at 20 updates per request, Dell R440 Xeon Gold + 10\nGbE\"\n( \"Intel Xeon Gold 5120 CPU (14c28t) , 32 GB of memory, and an enterprise\nSSD. Dedicated Cisco 10-gigabit Ethernet switch\")\n* java + PG11 results: low:126 -> high:21807\n\n\"Responses per second at 20 updates per request, Azure D3v2 instances\"\n* java + PG11 results: low:329 -> high:2975\n\nbest,\n Imre\n\n\n\nShital A <[email protected]> ezt írta (időpont: 2019. aug. 1., Cs,\n5:11):\n\n> Hello,\n>\n> We are working on development of an application with postgresql 9.6 as\n> backend. Application as a whole is expected to give an throughput of 100k\n> transactions per sec. The transactions are received by DB from component\n> firing DMLs in ad-hoc fashion i.e. the commits are fired after random\n> numbers of transaction like 2,3,4. There is no bulk loading of records. DB\n> should have HA setup in active passive streaming replication. We are doing\n> a test setup on a 8-core machine having 16 GB RAM. Actual HW will be\n> better.\n>\n> Need help in:\n> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We\n> have tested with a simple Java code firing insert and commit in a loop on a\n> simple table with one column. We get 1200 rows per sec. If we increase\n> threads RPS decrease.\n>\n> 2. We have tuned some DB params like shared_buffers, sync_commit off, are\n> there any other pointers to tune DB params?\n>\n>\n> Thanks.\n>\n\n> Application as a whole is expected to give an throughput of 100k transactions per sec. > On this env(8core cpu, 16GB) what is the TPS that we can expect? 
as a reference - maybe you can reuse/adapt the \"TechEmpower Framework Benchmarks\" tests - and compare your PG9.6+hardware results.The new TechEmpower Framework Benchmarks [2019-07-09 Round 18] * reference numbers: https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=update* source code: https://github.com/TechEmpower/FrameworkBenchmarks* PG11 config: https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/toolset/databases/postgres/postgresql.conf* java frameworks: https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Java> We have tested with a simple Java code firing insert As I see - There are lot of java framework - and sometimes 10x difference in performance :https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=update\"Responses per second at 20 updates per request, Dell R440 Xeon Gold + 10 GbE\"( \"Intel Xeon Gold 5120 CPU (14c28t) , 32 GB of memory, and an enterprise SSD. Dedicated Cisco 10-gigabit Ethernet switch\")* java + PG11 results: low:126 -> high:21807\"Responses per second at 20 updates per request, Azure D3v2 instances\" * java + PG11 results: low:329 -> high:2975best, ImreShital A <[email protected]> ezt írta (időpont: 2019. aug. 1., Cs, 5:11):Hello,We are working on development of an application with postgresql 9.6 as backend. Application as a whole is expected to give an throughput of 100k transactions per sec. The transactions are received by DB from component firing DMLs in ad-hoc fashion i.e. the commits are fired after random numbers of transaction like 2,3,4. There is no bulk loading of records. DB should have HA setup in active passive streaming replication. We are doing a test setup on a 8-core machine having 16 GB RAM. Actual HW will be better. Need help in:1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We have tested with a simple Java code firing insert and commit in a loop on a simple table with one column. We get 1200 rows per sec. If we increase threads RPS decrease.2. We have tuned some DB params like shared_buffers, sync_commit off, are there any other pointers to tune DB params?Thanks.",
"msg_date": "Fri, 2 Aug 2019 13:04:10 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL performance - TPS"
}
] |
[
{
"msg_contents": "I stumbled across this question on SO: https://stackoverflow.com/questions/56517852\n\nDisregarding the part about Postgres 9.3, the example for Postgres 11 looks a bit confusing. \n\nThere is a script to setup test data in that question: \n\n==== start of script ====\n\n create table foo (\n foo_id integer not null,\n foo_name varchar(10),\n constraint foo_pkey primary key (foo_id) \n ); \n\n insert into foo\n (foo_id, foo_name) \n values\n (1, 'eeny'),\n (2, 'meeny'),\n (3, 'miny'),\n (4, 'moe'),\n (5, 'tiger'), \n (6, 'toe');\n\n create table foo_bar_baz (\n foo_id integer not null,\n bar_id integer not null,\n baz integer not null,\n constraint foo_bar_baz_pkey primary key (foo_id, bar_id, baz),\n constraint foo_bar_baz_fkey1 foreign key (foo_id)\n references foo (foo_id)\n ) partition by range (foo_id) \n ;\n\n create table if not exists foo_bar_baz_0 partition of foo_bar_baz for values from (0) to (1);\n create table if not exists foo_bar_baz_1 partition of foo_bar_baz for values from (1) to (2);\n create table if not exists foo_bar_baz_2 partition of foo_bar_baz for values from (2) to (3);\n create table if not exists foo_bar_baz_3 partition of foo_bar_baz for values from (3) to (4);\n create table if not exists foo_bar_baz_4 partition of foo_bar_baz for values from (4) to (5);\n create table if not exists foo_bar_baz_5 partition of foo_bar_baz for values from (5) to (6);\n\n with foos_and_bars as (\n select ((random() * 4) + 1)::int as foo_id, bar_id::int\n from generate_series(0, 1499) as t(bar_id)\n ), bazzes as (\n select baz::int\n from generate_series(1, 1500) as t(baz)\n )\n insert into foo_bar_baz (foo_id, bar_id, baz) \n select foo_id, bar_id, baz \n from bazzes as bz \n join foos_and_bars as fab on mod(bz.baz, fab.foo_id) = 0;\n\n==== end of script ====\n\nI see the some strange behaviour similar to to what is reported in the comments to that question: \n\nWhen I run the test query immediately after populating the tables with the sample data:\n\n explain analyze \n select count(*) \n from foo_bar_baz as fbb \n join foo on fbb.foo_id = foo.foo_id \n where foo.foo_name = 'eeny'\n\nI do see an \"Index Only Scan .... (never executed)\" in the plan for the irrelevant partitions: \n\n https://explain.depesz.com/s/AqlE\n\nHowever once I run \"analyze foo_bar_baz\" (or \"vacuum analyze\"), Postgres chooses to do a \"Parallel Seq Scan\" for each partition:\n\n https://explain.depesz.com/s/WwxE\n\nWhy does updating the statistics mess up (runtime) partition pruning? \n\n\nI played around with random_page_cost and that didn't change anything. \nI tried to create extended statistics on \"foo(id, name)\" so that the planner would no, that there is only one name per id. No change. \n\nI saw the above behaviour when running this on Windows 10 (my Laptop) or CentOS 7 (a test environment on a VM) \n\nOn the CentOS server default_statistics_target is set to 100, on my laptop it is set to 1000\n\nIn both cases the Postgres version was 11.4\n\nAny ideas? \n\nThomas\n\n\n",
"msg_date": "Fri, 2 Aug 2019 15:58:51 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "I too am a bit perplexed by why runtime partition pruning does not seem \nto work with this example. Anybody got any ideas of this?\n\nRegards,\nMichael Vitale\n\nThomas Kellerer wrote on 8/2/2019 9:58 AM:\n> I stumbled across this question on SO: https://stackoverflow.com/questions/56517852\n>\n> Disregarding the part about Postgres 9.3, the example for Postgres 11 looks a bit confusing.\n>\n> There is a script to setup test data in that question:\n>\n> ==== start of script ====\n>\n> create table foo (\n> foo_id integer not null,\n> foo_name varchar(10),\n> constraint foo_pkey primary key (foo_id)\n> );\n>\n> insert into foo\n> (foo_id, foo_name)\n> values\n> (1, 'eeny'),\n> (2, 'meeny'),\n> (3, 'miny'),\n> (4, 'moe'),\n> (5, 'tiger'),\n> (6, 'toe');\n>\n> create table foo_bar_baz (\n> foo_id integer not null,\n> bar_id integer not null,\n> baz integer not null,\n> constraint foo_bar_baz_pkey primary key (foo_id, bar_id, baz),\n> constraint foo_bar_baz_fkey1 foreign key (foo_id)\n> references foo (foo_id)\n> ) partition by range (foo_id)\n> ;\n>\n> create table if not exists foo_bar_baz_0 partition of foo_bar_baz for values from (0) to (1);\n> create table if not exists foo_bar_baz_1 partition of foo_bar_baz for values from (1) to (2);\n> create table if not exists foo_bar_baz_2 partition of foo_bar_baz for values from (2) to (3);\n> create table if not exists foo_bar_baz_3 partition of foo_bar_baz for values from (3) to (4);\n> create table if not exists foo_bar_baz_4 partition of foo_bar_baz for values from (4) to (5);\n> create table if not exists foo_bar_baz_5 partition of foo_bar_baz for values from (5) to (6);\n>\n> with foos_and_bars as (\n> select ((random() * 4) + 1)::int as foo_id, bar_id::int\n> from generate_series(0, 1499) as t(bar_id)\n> ), bazzes as (\n> select baz::int\n> from generate_series(1, 1500) as t(baz)\n> )\n> insert into foo_bar_baz (foo_id, bar_id, baz)\n> select foo_id, bar_id, baz\n> from bazzes as bz\n> join foos_and_bars as fab on mod(bz.baz, fab.foo_id) = 0;\n>\n> ==== end of script ====\n>\n> I see the some strange behaviour similar to to what is reported in the comments to that question:\n>\n> When I run the test query immediately after populating the tables with the sample data:\n>\n> explain analyze\n> select count(*)\n> from foo_bar_baz as fbb\n> join foo on fbb.foo_id = foo.foo_id\n> where foo.foo_name = 'eeny'\n>\n> I do see an \"Index Only Scan .... (never executed)\" in the plan for the irrelevant partitions:\n>\n> https://explain.depesz.com/s/AqlE\n>\n> However once I run \"analyze foo_bar_baz\" (or \"vacuum analyze\"), Postgres chooses to do a \"Parallel Seq Scan\" for each partition:\n>\n> https://explain.depesz.com/s/WwxE\n>\n> Why does updating the statistics mess up (runtime) partition pruning?\n>\n>\n> I played around with random_page_cost and that didn't change anything.\n> I tried to create extended statistics on \"foo(id, name)\" so that the planner would no, that there is only one name per id. No change.\n>\n> I saw the above behaviour when running this on Windows 10 (my Laptop) or CentOS 7 (a test environment on a VM)\n>\n> On the CentOS server default_statistics_target is set to 100, on my laptop it is set to 1000\n>\n> In both cases the Postgres version was 11.4\n>\n> Any ideas?\n>\n> Thomas\n>\n>\n\n\n\n",
"msg_date": "Sat, 3 Aug 2019 09:16:22 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "Hi,\n\n\nAm 03.08.19 um 15:16 schrieb MichaelDBA:\n> I too am a bit perplexed by why runtime partition pruning does not \n> seem to work with this example. Anybody got any ideas of this? \n\n\nplease don't top-posting.\n\nit's posible to rewrite the query to:\n\n\ntest=# explain analyse select count(*) from foo_bar_baz as fbb where \nfoo_id = (select foo_id from foo where foo_name = 'eeny');\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=15880.63..15880.64 rows=1 width=8) (actual \ntime=48.447..48.448 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Seq Scan on foo (cost=0.00..24.50 rows=6 width=4) (actual \ntime=0.243..0.246 rows=1 loops=1)\n Filter: ((foo_name)::text = 'eeny'::text)\n Rows Removed by Filter: 5\n -> Gather (cost=15855.92..15856.13 rows=2 width=8) (actual \ntime=48.376..51.468 rows=3 loops=1)\n Workers Planned: 2\n Params Evaluated: $0\n Workers Launched: 2\n -> Partial Aggregate (cost=14855.92..14855.93 rows=1 \nwidth=8) (actual time=42.600..42.600 rows=1 loops=3)\n -> Parallel Append (cost=0.00..13883.01 rows=389162 \nwidth=0) (actual time=0.139..34.914 rows=83500 loops=3)\n -> Parallel Bitmap Heap Scan on foo_bar_baz_0 \nfbb (cost=4.23..14.73 rows=6 width=0) (never executed)\n Recheck Cond: (foo_id = $0)\n -> Bitmap Index Scan on foo_bar_baz_0_pkey \n(cost=0.00..4.23 rows=10 width=0) (never executed)\n Index Cond: (foo_id = $0)\n -> Parallel Seq Scan on foo_bar_baz_2 fbb_2 \n(cost=0.00..3865.72 rows=178218 width=0) (never executed)\n Filter: (foo_id = $0)\n -> Parallel Seq Scan on foo_bar_baz_1 fbb_1 \n(cost=0.00..3195.62 rows=147250 width=0) (actual time=0.129..24.735 \nrows=83500 loops=3)\n Filter: (foo_id = $0)\n -> Parallel Seq Scan on foo_bar_baz_3 fbb_3 \n(cost=0.00..2334.49 rows=107559 width=0) (never executed)\n Filter: (foo_id = $0)\n -> Parallel Seq Scan on foo_bar_baz_4 fbb_4 \n(cost=0.00..1860.95 rows=85756 width=0) (never executed)\n Filter: (foo_id = $0)\n -> Parallel Seq Scan on foo_bar_baz_5 fbb_5 \n(cost=0.00..665.69 rows=30615 width=0) (never executed)\n Filter: (foo_id = $0)\n Planning Time: 12.648 ms\n Execution Time: 52.621 ms\n(27 rows)\n\ntest=*#\n\n\nI know, that's not a solution, but a workaround. :-(\n\n(pg 12beta2 and also with PostgreSQL 11.4 (2ndQPG 11.4r1.6.7))\n\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Sat, 3 Aug 2019 15:42:55 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "> it's posible to rewrite the query to:\n>\n>\n> test=# explain analyse select count(*) from foo_bar_baz as fbb where foo_id = (select foo_id from foo where foo_name = 'eeny');\n>\n> I know, that's not a solution, but a workaround. :-(\n\nYes, I discovered that as well.\n\nBut I'm more confused (or concerned) by the fact that the (original) query works correctly *without* statistics.\n\nThomas\n\n\n\n\n\n",
"msg_date": "Sat, 3 Aug 2019 16:06:57 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "\n\nAm 03.08.19 um 16:06 schrieb Thomas Kellerer:\n>> it's posible to rewrite the query to:\n>>\n>>\n>> test=# explain analyse select count(*) from foo_bar_baz as fbb where \n>> foo_id = (select foo_id from foo where foo_name = 'eeny');\n>>\n>> I know, that's not a solution, but a workaround. :-(\n>\n> Yes, I discovered that as well.\n>\n> But I'm more confused (or concerned) by the fact that the (original) \n> query works correctly *without* statistics.\n>\n> Thomas\n>\n>\n\ncan't reproduce that :-( (PG 11.4 Community)\n\n(all in a file and executed the explain immediately)\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Sat, 3 Aug 2019 17:18:19 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "Andreas Kretschmer <[email protected]> writes:\n> Am 03.08.19 um 16:06 schrieb Thomas Kellerer:\n>> But I'm more confused (or concerned) by the fact that the (original) \n>> query works correctly *without* statistics.\n\n> can't reproduce that :-( (PG 11.4 Community)\n\nYeah, I get the same plan with or without ANALYZE, too. In this example,\nhaving the ANALYZE stats barely moves the rowcount estimates for\nfoo_bar_baz at all, so it's not surprising that the plan doesn't change.\n(I do wonder how Thomas got a different outcome...)\n\nGiven the shape of the preferred plan:\n\n Finalize Aggregate (cost=15779.59..15779.60 rows=1 width=8) (actual time=160.329..160.330 rows=1 loops=1)\n -> Gather (cost=15779.38..15779.59 rows=2 width=8) (actual time=160.011..161.712 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial Aggregate (cost=14779.38..14779.39 rows=1 width=8) (actual time=154.675..154.675 rows=1 loops=3)\n -> Hash Join (cost=1.09..14612.90 rows=66590 width=0) (actual time=86.814..144.793 rows=100500 loops=3)\n Hash Cond: (fbb_1.foo_id = foo.foo_id)\n -> Parallel Append (cost=0.00..12822.21 rows=399537 width=4) (actual time=0.019..95.644 rows=318950 loops=3)\n -> Parallel Seq Scan on foo_bar_baz_1 fbb_1 (cost=0.00..3403.53 rows=177353 width=4) (actual time=0.012..18.881 rows=100500 loops=3)\n -> Parallel Seq Scan on foo_bar_baz_2 fbb_2 (cost=0.00..3115.53 rows=162353 width=4) (actual time=0.018..51.716 rows=276000 loops=1)\n -> Parallel Seq Scan on foo_bar_baz_3 fbb_3 (cost=0.00..2031.82 rows=105882 width=4) (actual time=0.011..16.854 rows=90000 loops=2)\n -> Parallel Seq Scan on foo_bar_baz_4 fbb_4 (cost=0.00..1584.00 rows=82500 width=4) (actual time=0.011..26.950 rows=140250 loops=1)\n -> Parallel Seq Scan on foo_bar_baz_5 fbb_5 (cost=0.00..667.65 rows=34765 width=4) (actual time=0.014..11.896 rows=59100 loops=1)\n -> Parallel Seq Scan on foo_bar_baz_0 fbb (cost=0.00..22.00 rows=1200 width=4) (actual time=0.001..0.001 rows=0 loops=1)\n -> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=0.038..0.038 rows=1 loops=3)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on foo (cost=0.00..1.07 rows=1 width=4) (actual time=0.021..0.023 rows=1 loops=3)\n Filter: ((foo_name)::text = 'eeny'::text)\n Rows Removed by Filter: 5\n\nit's obvious that no pruning can happen, run-time or otherwise,\nbecause the partitioned table is being scanned on the outside\nof the join --- so the target value of foo_id isn't available.\n\nWe can force the planner to its second best choice with\nset enable_hashjoin to 0;\n\nand then we get\n\n Aggregate (cost=31954.09..31954.10 rows=1 width=8) (actual time=420.158..420.158 rows=1 loops=1)\n -> Nested Loop (cost=0.00..31554.55 rows=159815 width=0) (actual time=0.058..389.974 rows=301500 loops=1)\n Join Filter: (fbb.foo_id = foo.foo_id)\n Rows Removed by Join Filter: 655350\n -> Seq Scan on foo (cost=0.00..1.07 rows=1 width=4) (actual time=0.025..0.028 rows=1 loops=1)\n Filter: ((foo_name)::text = 'eeny'::text)\n Rows Removed by Filter: 5\n -> Append (cost=0.00..19567.35 rows=958890 width=4) (actual time=0.026..280.510 rows=956850 loops=1)\n -> Seq Scan on foo_bar_baz_0 fbb (cost=0.00..30.40 rows=2040 width=4) (actual time=0.003..0.003 rows=0 loops=1)\n -> Seq Scan on foo_bar_baz_1 fbb_1 (cost=0.00..4645.00 rows=301500 width=4) (actual time=0.022..57.836 rows=301500 loops=1)\n -> Seq Scan on foo_bar_baz_2 fbb_2 (cost=0.00..4252.00 rows=276000 width=4) (actual time=0.019..51.834 rows=276000 loops=1)\n -> Seq Scan on 
foo_bar_baz_3 fbb_3 (cost=0.00..2773.00 rows=180000 width=4) (actual time=0.016..31.951 rows=180000 loops=1)\n -> Seq Scan on foo_bar_baz_4 fbb_4 (cost=0.00..2161.50 rows=140250 width=4) (actual time=0.015..24.392 rows=140250 loops=1)\n -> Seq Scan on foo_bar_baz_5 fbb_5 (cost=0.00..911.00 rows=59100 width=4) (actual time=0.012..10.252 rows=59100 loops=1)\n\nThis is a good deal slower, and the planner correctly estimates that it's\na good deal slower, so that's why it didn't get picked.\n\nBut ... why didn't any run-time pruning happen? Because the shape of the\nplan is still wrong: the join condition is being applied at the nestloop\nnode. If we'd pushed down the foo_id condition to the foo_bar_baz scan\nthen there'd be hope of pruning.\n\nI think the reason that that isn't happening is that the planner has\nnot been taught that run-time pruning is a thing, so it's not giving\nany cost preference to doing things in a way that would enable that.\nIt's not entirely clear what the cost estimate adjustments should be,\nbut obviously somebody had better work on that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2019 12:05:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "I too got the same plan (non runtime partition pruning plan) with or \nwithout the statistics. So it looks like the workaround until this is \nfixed is to re-arrange the query to do a subselect to force the runtime \npartition pruning as Andreas suggested, which I tested and indeed does \nwork for me too!\n\nRegards,\nMichael Vitale\n\nThomas Kellerer wrote on 8/3/2019 10:06 AM:\n>> it's posible to rewrite the query to:\n>>\n>>\n>> test=# explain analyse select count(*) from foo_bar_baz as fbb where \n>> foo_id = (select foo_id from foo where foo_name = 'eeny');\n>>\n>> I know, that's not a solution, but a workaround. :-(\n>\n> Yes, I discovered that as well.\n>\n> But I'm more confused (or concerned) by the fact that the (original) \n> query works correctly *without* statistics.\n>\n> Thomas\n>\n>\n>\n>\n>\n\n\n\n",
"msg_date": "Sat, 3 Aug 2019 12:49:14 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "Tom Lane schrieb am 03.08.2019 um 18:05:\n> Yeah, I get the same plan with or without ANALYZE, too. In this example,\n> having the ANALYZE stats barely moves the rowcount estimates for\n> foo_bar_baz at all, so it's not surprising that the plan doesn't change.\n> (I do wonder how Thomas got a different outcome...)\n\nI don't know why either ;) \n\nI am using a JDBC based SQL tool to run that - I don't know if that matters.\n\nI just tried this script with Postgres 12 beta2 and there I do not get \nthe initial plan with \"never executed\" (so the same behaviour as everybody\nelse seems to have).\n\nIf the reason why my initial plan is different than the \"analyzed\" plan \nlies in the configuration, I am happy to share my postgresql.conf if \nthat is of any interest.\n\nThomas\n\n\n\n\n",
"msg_date": "Mon, 5 Aug 2019 09:29:33 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 8:46 AM Thomas Kellerer <[email protected]> wrote:\n\n> I stumbled across this question on SO:\n> https://stackoverflow.com/questions/56517852\n>\n> Disregarding the part about Postgres 9.3, the example for Postgres 11\n> looks a bit confusing.\n>\n> There is a script to setup test data in that question:\n>\n> ==== start of script ====\n>\n> create table foo (\n> foo_id integer not null,\n> foo_name varchar(10),\n> constraint foo_pkey primary key (foo_id)\n> );\n>\n> insert into foo\n> (foo_id, foo_name)\n> values\n> (1, 'eeny'),\n> (2, 'meeny'),\n> (3, 'miny'),\n> (4, 'moe'),\n> (5, 'tiger'),\n> (6, 'toe');\n>\n> create table foo_bar_baz (\n> foo_id integer not null,\n> bar_id integer not null,\n> baz integer not null,\n> constraint foo_bar_baz_pkey primary key (foo_id, bar_id, baz),\n> constraint foo_bar_baz_fkey1 foreign key (foo_id)\n> references foo (foo_id)\n> ) partition by range (foo_id)\n> ;\n>\n> create table if not exists foo_bar_baz_0 partition of foo_bar_baz for\n> values from (0) to (1);\n> create table if not exists foo_bar_baz_1 partition of foo_bar_baz for\n> values from (1) to (2);\n> create table if not exists foo_bar_baz_2 partition of foo_bar_baz for\n> values from (2) to (3);\n> create table if not exists foo_bar_baz_3 partition of foo_bar_baz for\n> values from (3) to (4);\n> create table if not exists foo_bar_baz_4 partition of foo_bar_baz for\n> values from (4) to (5);\n> create table if not exists foo_bar_baz_5 partition of foo_bar_baz for\n> values from (5) to (6);\n>\n> with foos_and_bars as (\n> select ((random() * 4) + 1)::int as foo_id, bar_id::int\n> from generate_series(0, 1499) as t(bar_id)\n> ), bazzes as (\n> select baz::int\n> from generate_series(1, 1500) as t(baz)\n> )\n> insert into foo_bar_baz (foo_id, bar_id, baz)\n> select foo_id, bar_id, baz\n> from bazzes as bz\n> join foos_and_bars as fab on mod(bz.baz, fab.foo_id) = 0;\n>\n> ==== end of script ====\n>\n> I see the some strange behaviour similar to to what is reported in the\n> comments to that question:\n>\n> When I run the test query immediately after populating the tables with the\n> sample data:\n>\n> explain analyze\n> select count(*)\n> from foo_bar_baz as fbb\n> join foo on fbb.foo_id = foo.foo_id\n> where foo.foo_name = 'eeny'\n>\n> I do see an \"Index Only Scan .... (never executed)\" in the plan for the\n> irrelevant partitions:\n>\n> https://explain.depesz.com/s/AqlE\n>\n> However once I run \"analyze foo_bar_baz\" (or \"vacuum analyze\"), Postgres\n> chooses to do a \"Parallel Seq Scan\" for each partition:\n>\n> https://explain.depesz.com/s/WwxE\n>\n> Why does updating the statistics mess up (runtime) partition pruning?\n>\n>\n> I played around with random_page_cost and that didn't change anything.\n> I tried to create extended statistics on \"foo(id, name)\" so that the\n> planner would no, that there is only one name per id. No change.\n>\n> I saw the above behaviour when running this on Windows 10 (my Laptop) or\n> CentOS 7 (a test environment on a VM)\n>\n> On the CentOS server default_statistics_target is set to 100, on my laptop\n> it is set to 1000\n>\n> In both cases the Postgres version was 11.4\n>\n> Any ideas?\n>\n> Thomas\n>\n>\nRan into the same behaviour of the planner. 
The number of rows in the partitions influences the statistics being generated, and the statistics in turn influence the plan chosen.\n\nI managed to force the \"correct\" plan by manually setting the n_distinct statistics for the partitioned table.\nE.g.: alter table foo_bar_baz alter column foo_id set ( n_distinct=-1, n_distinct_inherited=-1); (a runnable sketch of this override follows this thread)\n\nWith a certain number of rows in the partitions the analyser sets the n_distinct value for the partitioned table to the number of unique partition keys, and the n_distinct value for the individual partitions to the number of unique partition keys in that partition. Unfortunately this causes the planner to pick a plan that doesn't allow for execution-time pruning, resulting in very slow execution times.\n\nRegards,\nSverre",
"msg_date": "Tue, 13 Aug 2019 09:02:32 +0200",
"msg_from": "Sverre Boschman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange runtime partition pruning behaviour with 11.4"
}
] |
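
The workaround Sverre describes can be reproduced against the test schema from Thomas's script earlier in the thread. The block below is only a minimal sketch of that override, assuming the foo / foo_bar_baz tables from the script already exist on a PostgreSQL 11 instance; the -1 value is the one used in the thread, and the pg_stats query is just a convenient way to see what ANALYZE currently estimates.

```
-- Minimal sketch of the n_distinct override discussed above, assuming the
-- foo / foo_bar_baz tables from the setup script in this thread already exist.

-- What ANALYZE currently estimates for the partitioning column:
SELECT tablename, attname, inherited, n_distinct
FROM pg_stats
WHERE tablename LIKE 'foo_bar_baz%'
  AND attname = 'foo_id';

-- Override the estimate; -1 means "as many distinct values as rows".
ALTER TABLE foo_bar_baz
    ALTER COLUMN foo_id SET (n_distinct = -1, n_distinct_inherited = -1);

-- The override only takes effect at the next ANALYZE.
ANALYZE foo_bar_baz;

-- Re-check the plan: with the forced statistics the pruned partitions
-- should show up as "(never executed)" again.
EXPLAIN ANALYZE
SELECT count(*)
FROM foo_bar_baz AS fbb
JOIN foo ON fbb.foo_id = foo.foo_id
WHERE foo.foo_name = 'eeny';
```
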
[
{
"msg_contents": "Hey,\nI have a very big query that consist from 3-4 subqueries that use windows\nfunctions. There is a chance that I'll need to rewrite the query but first\nI'm trying to search for other ways to improve it and I'll be happy to hear\nif one of u have an idea.\n\nBasically my table has the following structure : (objid,first_num,last_num)\nand each record is a range from the first number to the last one for that\nspecific obj. I'm trying to unite ranges that overlaps. For example :\nfor the following table :\nobjid first_num last_num\n1 5 7\n1 8 10\n2 4 6\n2 9 10\n\nI would like to get :\nobjid first_num last_num\n1 5 10\n2 4 6\n2 9 10\n\nI have a query that does it but takes about 4s for 1.5M records. I created\nan index on (objid,first_num,last_num) in order to use only index scan\ninstead of seq scan on this table. I wanted to here if u guys have any\nother ideas.\n\nThanks.\n\nHey,I have a very big query that consist from 3-4 subqueries that use windows functions. There is a chance that I'll need to rewrite the query but first I'm trying to search for other ways to improve it and I'll be happy to hear if one of u have an idea.Basically my table has the following structure : (objid,first_num,last_num) and each record is a range from the first number to the last one for that specific obj. I'm trying to unite ranges that overlaps. For example : for the following table : objid first_num last_num1 5 71 8 102 4 62 9 10I would like to get : objid first_num last_num1 5 102 4 62 9 10 I have a query that does it but takes about 4s for 1.5M records. I created an index on (objid,first_num,last_num) in order to use only index scan instead of seq scan on this table. I wanted to here if u guys have any other ideas.Thanks.",
"msg_date": "Mon, 5 Aug 2019 23:47:44 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "improving windows functions performance"
},
{
"msg_contents": "\n\nAm 05.08.19 um 22:47 schrieb Mariel Cherkassky:\n> Hey,\n> I have a very big query that consist from 3-4 subqueries that use \n> windows functions. There is a chance that I'll need to rewrite the \n> query but first I'm trying to search for other ways to improve it and \n> I'll be happy to hear if one of u have an idea.\n>\n> Basically my table has the following structure : \n> (objid,first_num,last_num) and each record is a range from the first \n> number to the last one for that specific obj. I'm trying to unite \n> ranges that overlaps. For example :\n> for the following table :\n> objid first_num last_num\n> 1 5 7\n> 1 8 10\n> 2 4 6\n> 2 9 10\n>\n> I would like to get :\n> objid first_num last_num\n> 1 5 10\n> 2 4 6\n> 2 9 10\n>\n> I have a query that does it but takes about 4s for 1.5M records. I \n> created an index on (objid,first_num,last_num) in order to use only \n> index scan instead of seq scan on this table. I wanted to here if u \n> guys have any other ideas.\n>\n\nyou should provide more information, for instance:\n\n* used version\n* table-structure\n* real query\n* execution plan (using explain analyse)\n\nAndreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Mon, 5 Aug 2019 23:15:36 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: improving windows functions performance"
}
] |
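
Mariel's actual query never appears in the thread, so the block below is only a generic sketch of the usual gaps-and-islands approach to collapsing overlapping (or touching) ranges with window functions; the table and column names simply follow her example, and whether this beats her existing query on 1.5M rows is untested.

```
-- Test data matching the example in the thread.
CREATE TEMP TABLE ranges (objid int, first_num int, last_num int);
INSERT INTO ranges VALUES (1, 5, 7), (1, 8, 10), (2, 4, 6), (2, 9, 10);

-- Gaps-and-islands: a new group starts whenever a range neither overlaps
-- nor touches the running maximum of all earlier ranges for the same objid.
SELECT objid,
       min(first_num) AS first_num,
       max(last_num)  AS last_num
FROM (
    SELECT objid, first_num, last_num,
           count(*) FILTER (WHERE first_num > prev_max + 1)
               OVER (PARTITION BY objid ORDER BY first_num, last_num) AS grp
    FROM (
        SELECT objid, first_num, last_num,
               max(last_num) OVER (PARTITION BY objid
                                   ORDER BY first_num, last_num
                                   ROWS BETWEEN UNBOUNDED PRECEDING
                                            AND 1 PRECEDING) AS prev_max
        FROM ranges
    ) AS running_max
) AS grouped
GROUP BY objid, grp
ORDER BY objid, first_num;
-- Returns (1, 5, 10), (2, 4, 6), (2, 9, 10), matching the expected output above.
```

The count(*) FILTER running total just numbers the islands; an equivalent formulation with lag() and a running sum() works the same way.
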
[
{
"msg_contents": "Hi,\n\nwe have created restricted view for our tables, so that we can allow \naccess to non-gdpr relevant data but hide everything else.\n\nFor exactly those views, the Query Planner uses the wrong indices, when \nexecuting exactly the same query, once it takes 0.1 s and on the views \nit takes nearly 18 sec (it does a full table scan, or uses the wrong \nindices).\n\nDo we have to GRANT additional rights? I see it's using some indices, \njust not the correct ones!\n\nHas anyone experienced the same issues? What can we do about that?\n\nThanks\nBR\nThomas\n\n\n",
"msg_date": "Thu, 08 Aug 2019 14:02:52 +0200",
"msg_from": "\"Thomas Rosenstein\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres not using correct indices for views."
},
{
"msg_contents": "\"Thomas Rosenstein\" <[email protected]> writes:\n> we have created restricted view for our tables, so that we can allow \n> access to non-gdpr relevant data but hide everything else.\n> For exactly those views, the Query Planner uses the wrong indices, when \n> executing exactly the same query, once it takes 0.1 s and on the views \n> it takes nearly 18 sec (it does a full table scan, or uses the wrong \n> indices).\n> Do we have to GRANT additional rights? I see it's using some indices, \n> just not the correct ones!\n\nDoes EXPLAIN show reasonable rowcount estimates when you query\ndirectly, but bad ones when you query via the views?\n\nIf so, a likely guess is that you're falling foul of the restrictions\nadded for CVE-2017-7484:\n\nAuthor: Peter Eisentraut <[email protected]>\nBranch: master Release: REL_10_BR [e2d4ef8de] 2017-05-08 09:26:32 -0400\nBranch: REL9_6_STABLE Release: REL9_6_3 [c33c42362] 2017-05-08 09:18:57 -0400\nBranch: REL9_5_STABLE Release: REL9_5_7 [d45cd7c0e] 2017-05-08 09:19:07 -0400\nBranch: REL9_4_STABLE Release: REL9_4_12 [3e5ea1f9b] 2017-05-08 09:19:15 -0400\nBranch: REL9_3_STABLE Release: REL9_3_17 [4f1b2089a] 2017-05-08 09:19:23 -0400\nBranch: REL9_2_STABLE Release: REL9_2_21 [d035c1b97] 2017-05-08 09:19:42 -0400\n\n Add security checks to selectivity estimation functions\n \n Some selectivity estimation functions run user-supplied operators over\n data obtained from pg_statistic without security checks, which allows\n those operators to leak pg_statistic data without having privileges on\n the underlying tables. Fix by checking that one of the following is\n satisfied: (1) the user has table or column privileges on the table\n underlying the pg_statistic data, or (2) the function implementing the\n user-supplied operator is leak-proof. If neither is satisfied, planning\n will proceed as if there are no statistics available.\n \n At least one of these is satisfied in most cases in practice. The only\n situations that are negatively impacted are user-defined or\n not-leak-proof operators on a security-barrier view.\n \n Reported-by: Robert Haas <[email protected]>\n Author: Peter Eisentraut <[email protected]>\n Author: Tom Lane <[email protected]>\n \n Security: CVE-2017-7484\n\n\nHowever, if you're not on the latest minor releases, you might\nfind that updating would fix this for you, because of\n\nAuthor: Dean Rasheed <[email protected]>\nBranch: master Release: REL_12_BR [a0905056f] 2019-05-06 11:54:32 +0100\nBranch: REL_11_STABLE Release: REL_11_3 [98dad4cd4] 2019-05-06 11:56:37 +0100\nBranch: REL_10_STABLE Release: REL_10_8 [ca74e3e0f] 2019-05-06 11:58:32 +0100\nBranch: REL9_6_STABLE Release: REL9_6_13 [71185228c] 2019-05-06 12:00:00 +0100\nBranch: REL9_5_STABLE Release: REL9_5_17 [01256815a] 2019-05-06 12:01:44 +0100\nBranch: REL9_4_STABLE Release: REL9_4_22 [3c0999909] 2019-05-06 12:05:05 +0100\n\n Use checkAsUser for selectivity estimator checks, if it's set.\n \n In examine_variable() and examine_simple_variable(), when checking the\n user's table and column privileges to determine whether to grant\n access to the pg_statistic data, use checkAsUser for the privilege\n checks, if it's set. This will be the case if we're accessing the\n table via a view, to indicate that we should perform privilege checks\n as the view owner rather than the current user.\n \n This change makes this planner check consistent with the check in the\n executor, so the planner will be able to make use of statistics if the\n table is accessible via the view. 
This fixes a performance regression\n introduced by commit e2d4ef8de8, which affects queries against\n non-security barrier views in the case where the user doesn't have\n privileges on the underlying table, but the view owner does.\n \n Note that it continues to provide the same safeguards controlling\n access to pg_statistic for direct table access (in which case\n checkAsUser won't be set) and for security barrier views, because of\n the nearby checks on rte->security_barrier and rte->securityQuals.\n \n Back-patch to all supported branches because e2d4ef8de8 was.\n \n Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Aug 2019 12:05:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using correct indices for views."
},
{
"msg_contents": "Hi,\n\nI'm upgraded to 10.10 from today (on the replicated instance - main db \nis still 10.5), but still have the issue.\n\nThe table is owned by the user \"creamfinance\", and the view is also \nowned by the same user - based on the text you quoted this should allow \nthe correct access.\n\nThe planner estimates the correct row counts, but still does the wrong \nplanning.\n\nWrong:\n\n```\nLimit (cost=1880359.00..1880359.03 rows=9 width=1508) (actual \ntime=25093.258..25093.270 rows=9 loops=1)\n -> Sort (cost=1880359.00..1884101.04 rows=1496816 width=1508) \n(actual time=25093.257..25093.257 rows=9 loops=1)\n Sort Key: p.customer_id DESC\n Sort Method: top-N heapsort Memory: 33kB\n -> Hash Join (cost=359555.11..1849150.95 rows=1496816 \nwidth=1508) (actual time=1081.081..24251.466 rows=543231 loops=1)\n Hash Cond: (p.customer_id = l.customer_id)\n Join Filter: ((p.date - '3 days'::interval day) <= \nl.duedate)\n Rows Removed by Join Filter: 596120\n -> Seq Scan on payments p (cost=0.00..393323.74 \nrows=10046437 width=228) (actual time=0.013..13053.366 rows=10054069 \nloops=1)\n -> Hash (cost=333367.49..333367.49 rows=153409 \nwidth=1272) (actual time=689.835..689.835 rows=156682 loops=1)\n Buckets: 32768 Batches: 8 Memory Usage: 7737kB\n -> Bitmap Heap Scan on loans l \n(cost=22732.48..331833.40 rows=153409 width=1272) (actual \ntime=64.142..398.893 rows=156682 loops=1)\n Recheck Cond: (location_id = 46)\n Heap Blocks: exact=105938\n -> Bitmap Index Scan on \nloans_location_id_repaid_desc_id_index (cost=0.00..22694.12 rows=153409 \nwidth=0) (actual time=41.324..41.324 rows=157794 loops=1)\n Index Cond: (location_id = 46)\n```\n\nCorrect:\n\n```\n Limit (cost=0.87..52.60 rows=9 width=1471)\n -> Nested Loop (cost=0.87..2961441.25 rows=515233 width=1471)\n -> Index Scan Backward using loans_customer_id_index on loans \n (cost=0.43..2215467.63 rows=153409 width=1257)\n Filter: (location_id = 46)\n -> Index Scan using payments_customer_id_idx on payments \n(cost=0.43..4.76 rows=10 width=206)\n Index Cond: (customer_id = loans.customer_id)\n Filter: ((date - '3 days'::interval day) <= \nloans.duedate)\n```\n\nThanks\n\nThomas\n\nOn 8 Aug 2019, at 18:05, Tom Lane wrote:\n\n> \"Thomas Rosenstein\" <[email protected]> writes:\n>> we have created restricted view for our tables, so that we can allow\n>> access to non-gdpr relevant data but hide everything else.\n>> For exactly those views, the Query Planner uses the wrong indices, \n>> when\n>> executing exactly the same query, once it takes 0.1 s and on the \n>> views\n>> it takes nearly 18 sec (it does a full table scan, or uses the wrong\n>> indices).\n>> Do we have to GRANT additional rights? 
I see it's using some indices,\n>> just not the correct ones!\n>\n> Does EXPLAIN show reasonable rowcount estimates when you query\n> directly, but bad ones when you query via the views?\n>\n> If so, a likely guess is that you're falling foul of the restrictions\n> added for CVE-2017-7484:\n>\n> Author: Peter Eisentraut <[email protected]>\n> Branch: master Release: REL_10_BR [e2d4ef8de] 2017-05-08 09:26:32 \n> -0400\n> Branch: REL9_6_STABLE Release: REL9_6_3 [c33c42362] 2017-05-08 \n> 09:18:57 -0400\n> Branch: REL9_5_STABLE Release: REL9_5_7 [d45cd7c0e] 2017-05-08 \n> 09:19:07 -0400\n> Branch: REL9_4_STABLE Release: REL9_4_12 [3e5ea1f9b] 2017-05-08 \n> 09:19:15 -0400\n> Branch: REL9_3_STABLE Release: REL9_3_17 [4f1b2089a] 2017-05-08 \n> 09:19:23 -0400\n> Branch: REL9_2_STABLE Release: REL9_2_21 [d035c1b97] 2017-05-08 \n> 09:19:42 -0400\n>\n> Add security checks to selectivity estimation functions\n>\n> Some selectivity estimation functions run user-supplied operators \n> over\n> data obtained from pg_statistic without security checks, which \n> allows\n> those operators to leak pg_statistic data without having \n> privileges on\n> the underlying tables. Fix by checking that one of the following \n> is\n> satisfied: (1) the user has table or column privileges on the \n> table\n> underlying the pg_statistic data, or (2) the function implementing \n> the\n> user-supplied operator is leak-proof. If neither is satisfied, \n> planning\n> will proceed as if there are no statistics available.\n>\n> At least one of these is satisfied in most cases in practice. The \n> only\n> situations that are negatively impacted are user-defined or\n> not-leak-proof operators on a security-barrier view.\n>\n> Reported-by: Robert Haas <[email protected]>\n> Author: Peter Eisentraut <[email protected]>\n> Author: Tom Lane <[email protected]>\n>\n> Security: CVE-2017-7484\n>\n>\n> However, if you're not on the latest minor releases, you might\n> find that updating would fix this for you, because of\n>\n> Author: Dean Rasheed <[email protected]>\n> Branch: master Release: REL_12_BR [a0905056f] 2019-05-06 11:54:32 \n> +0100\n> Branch: REL_11_STABLE Release: REL_11_3 [98dad4cd4] 2019-05-06 \n> 11:56:37 +0100\n> Branch: REL_10_STABLE Release: REL_10_8 [ca74e3e0f] 2019-05-06 \n> 11:58:32 +0100\n> Branch: REL9_6_STABLE Release: REL9_6_13 [71185228c] 2019-05-06 \n> 12:00:00 +0100\n> Branch: REL9_5_STABLE Release: REL9_5_17 [01256815a] 2019-05-06 \n> 12:01:44 +0100\n> Branch: REL9_4_STABLE Release: REL9_4_22 [3c0999909] 2019-05-06 \n> 12:05:05 +0100\n>\n> Use checkAsUser for selectivity estimator checks, if it's set.\n>\n> In examine_variable() and examine_simple_variable(), when checking \n> the\n> user's table and column privileges to determine whether to grant\n> access to the pg_statistic data, use checkAsUser for the privilege\n> checks, if it's set. This will be the case if we're accessing the\n> table via a view, to indicate that we should perform privilege \n> checks\n> as the view owner rather than the current user.\n>\n> This change makes this planner check consistent with the check in \n> the\n> executor, so the planner will be able to make use of statistics if \n> the\n> table is accessible via the view. 
This fixes a performance \n> regression\n> introduced by commit e2d4ef8de8, which affects queries against\n> non-security barrier views in the case where the user doesn't have\n> privileges on the underlying table, but the view owner does.\n>\n> Note that it continues to provide the same safeguards controlling\n> access to pg_statistic for direct table access (in which case\n> checkAsUser won't be set) and for security barrier views, because \n> of\n> the nearby checks on rte->security_barrier and rte->securityQuals.\n>\n> Back-patch to all supported branches because e2d4ef8de8 was.\n>\n> Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.\n>\n> \t\t\tregards, tom lane\n\n\n\n\n\n\n\nHi,\nI'm upgraded to 10.10 from today (on the replicated instance - main db is still 10.5), but still have the issue.\nThe table is owned by the user \"creamfinance\", and the view is also owned by the same user - based on the text you quoted this should allow the correct access.\nThe planner estimates the correct row counts, but still does the wrong planning.\nWrong:\nLimit (cost=1880359.00..1880359.03 rows=9 width=1508) (actual time=25093.258..25093.270 rows=9 loops=1)\n -> Sort (cost=1880359.00..1884101.04 rows=1496816 width=1508) (actual time=25093.257..25093.257 rows=9 loops=1)\n Sort Key: p.customer_id DESC\n Sort Method: top-N heapsort Memory: 33kB\n -> Hash Join (cost=359555.11..1849150.95 rows=1496816 width=1508) (actual time=1081.081..24251.466 rows=543231 loops=1)\n Hash Cond: (p.customer_id = l.customer_id)\n Join Filter: ((p.date - '3 days'::interval day) <= l.duedate)\n Rows Removed by Join Filter: 596120\n -> Seq Scan on payments p (cost=0.00..393323.74 rows=10046437 width=228) (actual time=0.013..13053.366 rows=10054069 loops=1)\n -> Hash (cost=333367.49..333367.49 rows=153409 width=1272) (actual time=689.835..689.835 rows=156682 loops=1)\n Buckets: 32768 Batches: 8 Memory Usage: 7737kB\n -> Bitmap Heap Scan on loans l (cost=22732.48..331833.40 rows=153409 width=1272) (actual time=64.142..398.893 rows=156682 loops=1)\n Recheck Cond: (location_id = 46)\n Heap Blocks: exact=105938\n -> Bitmap Index Scan on loans_location_id_repaid_desc_id_index (cost=0.00..22694.12 rows=153409 width=0) (actual time=41.324..41.324 rows=157794 loops=1)\n Index Cond: (location_id = 46)\n\nCorrect:\n Limit (cost=0.87..52.60 rows=9 width=1471)\n -> Nested Loop (cost=0.87..2961441.25 rows=515233 width=1471)\n -> Index Scan Backward using loans_customer_id_index on loans (cost=0.43..2215467.63 rows=153409 width=1257)\n Filter: (location_id = 46)\n -> Index Scan using payments_customer_id_idx on payments (cost=0.43..4.76 rows=10 width=206)\n Index Cond: (customer_id = loans.customer_id)\n Filter: ((date - '3 days'::interval day) <= loans.duedate)\n\nThanks\nThomas\nOn 8 Aug 2019, at 18:05, Tom Lane wrote:\n\n\"Thomas Rosenstein\" <[email protected]> writes:\nwe have created restricted view for our tables, so that we can allow\naccess to non-gdpr relevant data but hide everything else.\nFor exactly those views, the Query Planner uses the wrong indices, when\nexecuting exactly the same query, once it takes 0.1 s and on the views\nit takes nearly 18 sec (it does a full table scan, or uses the wrong\nindices).\nDo we have to GRANT additional rights? 
I see it's using some indices,\njust not the correct ones!\nDoes EXPLAIN show reasonable rowcount estimates when you query\ndirectly, but bad ones when you query via the views?\n\nIf so, a likely guess is that you're falling foul of the restrictions\nadded for CVE-2017-7484:\n\nAuthor: Peter Eisentraut <[email protected]>\nBranch: master Release: REL_10_BR [e2d4ef8de] 2017-05-08 09:26:32 -0400\nBranch: REL9_6_STABLE Release: REL9_6_3 [c33c42362] 2017-05-08 09:18:57 -0400\nBranch: REL9_5_STABLE Release: REL9_5_7 [d45cd7c0e] 2017-05-08 09:19:07 -0400\nBranch: REL9_4_STABLE Release: REL9_4_12 [3e5ea1f9b] 2017-05-08 09:19:15 -0400\nBranch: REL9_3_STABLE Release: REL9_3_17 [4f1b2089a] 2017-05-08 09:19:23 -0400\nBranch: REL9_2_STABLE Release: REL9_2_21 [d035c1b97] 2017-05-08 09:19:42 -0400\n\n Add security checks to selectivity estimation functions\n\n Some selectivity estimation functions run user-supplied operators over\n data obtained from pg_statistic without security checks, which allows\n those operators to leak pg_statistic data without having privileges on\n the underlying tables. Fix by checking that one of the following is\n satisfied: (1) the user has table or column privileges on the table\n underlying the pg_statistic data, or (2) the function implementing the\n user-supplied operator is leak-proof. If neither is satisfied, planning\n will proceed as if there are no statistics available.\n\n At least one of these is satisfied in most cases in practice. The only\n situations that are negatively impacted are user-defined or\n not-leak-proof operators on a security-barrier view.\n\n Reported-by: Robert Haas <[email protected]>\n Author: Peter Eisentraut <[email protected]>\n Author: Tom Lane <[email protected]>\n\n Security: CVE-2017-7484\n\n\nHowever, if you're not on the latest minor releases, you might\nfind that updating would fix this for you, because of\n\nAuthor: Dean Rasheed <[email protected]>\nBranch: master Release: REL_12_BR [a0905056f] 2019-05-06 11:54:32 +0100\nBranch: REL_11_STABLE Release: REL_11_3 [98dad4cd4] 2019-05-06 11:56:37 +0100\nBranch: REL_10_STABLE Release: REL_10_8 [ca74e3e0f] 2019-05-06 11:58:32 +0100\nBranch: REL9_6_STABLE Release: REL9_6_13 [71185228c] 2019-05-06 12:00:00 +0100\nBranch: REL9_5_STABLE Release: REL9_5_17 [01256815a] 2019-05-06 12:01:44 +0100\nBranch: REL9_4_STABLE Release: REL9_4_22 [3c0999909] 2019-05-06 12:05:05 +0100\n\n Use checkAsUser for selectivity estimator checks, if it's set.\n\n In examine_variable() and examine_simple_variable(), when checking the\n user's table and column privileges to determine whether to grant\n access to the pg_statistic data, use checkAsUser for the privilege\n checks, if it's set. This will be the case if we're accessing the\n table via a view, to indicate that we should perform privilege checks\n as the view owner rather than the current user.\n\n This change makes this planner check consistent with the check in the\n executor, so the planner will be able to make use of statistics if the\n table is accessible via the view. 
This fixes a performance regression\n introduced by commit e2d4ef8de8, which affects queries against\n non-security barrier views in the case where the user doesn't have\n privileges on the underlying table, but the view owner does.\n\n Note that it continues to provide the same safeguards controlling\n access to pg_statistic for direct table access (in which case\n checkAsUser won't be set) and for security barrier views, because of\n the nearby checks on rte->security_barrier and rte->securityQuals.\n\n Back-patch to all supported branches because e2d4ef8de8 was.\n\n Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 08 Aug 2019 22:04:31 +0200",
"msg_from": "\"Thomas Rosenstein\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres not using correct indices for views."
},
{
"msg_contents": "To add additional info, the same behaviour is exhibited with the owner, \nand the user which only has read priviledges on the view!\n\nOn 8 Aug 2019, at 22:04, Thomas Rosenstein wrote:\n\n> Hi,\n>\n> I'm upgraded to 10.10 from today (on the replicated instance - main db \n> is still 10.5), but still have the issue.\n>\n> The table is owned by the user \"creamfinance\", and the view is also \n> owned by the same user - based on the text you quoted this should \n> allow the correct access.\n>\n> The planner estimates the correct row counts, but still does the wrong \n> planning.\n>\n> Wrong:\n>\n> ```\n> Limit (cost=1880359.00..1880359.03 rows=9 width=1508) (actual \n> time=25093.258..25093.270 rows=9 loops=1)\n> -> Sort (cost=1880359.00..1884101.04 rows=1496816 width=1508) \n> (actual time=25093.257..25093.257 rows=9 loops=1)\n> Sort Key: p.customer_id DESC\n> Sort Method: top-N heapsort Memory: 33kB\n> -> Hash Join (cost=359555.11..1849150.95 rows=1496816 \n> width=1508) (actual time=1081.081..24251.466 rows=543231 loops=1)\n> Hash Cond: (p.customer_id = l.customer_id)\n> Join Filter: ((p.date - '3 days'::interval day) <= \n> l.duedate)\n> Rows Removed by Join Filter: 596120\n> -> Seq Scan on payments p (cost=0.00..393323.74 \n> rows=10046437 width=228) (actual time=0.013..13053.366 rows=10054069 \n> loops=1)\n> -> Hash (cost=333367.49..333367.49 rows=153409 \n> width=1272) (actual time=689.835..689.835 rows=156682 loops=1)\n> Buckets: 32768 Batches: 8 Memory Usage: 7737kB\n> -> Bitmap Heap Scan on loans l \n> (cost=22732.48..331833.40 rows=153409 width=1272) (actual \n> time=64.142..398.893 rows=156682 loops=1)\n> Recheck Cond: (location_id = 46)\n> Heap Blocks: exact=105938\n> -> Bitmap Index Scan on \n> loans_location_id_repaid_desc_id_index (cost=0.00..22694.12 \n> rows=153409 width=0) (actual time=41.324..41.324 rows=157794 loops=1)\n> Index Cond: (location_id = 46)\n> ```\n>\n> Correct:\n>\n> ```\n> Limit (cost=0.87..52.60 rows=9 width=1471)\n> -> Nested Loop (cost=0.87..2961441.25 rows=515233 width=1471)\n> -> Index Scan Backward using loans_customer_id_index on \n> loans (cost=0.43..2215467.63 rows=153409 width=1257)\n> Filter: (location_id = 46)\n> -> Index Scan using payments_customer_id_idx on payments \n> (cost=0.43..4.76 rows=10 width=206)\n> Index Cond: (customer_id = loans.customer_id)\n> Filter: ((date - '3 days'::interval day) <= \n> loans.duedate)\n> ```\n>\n> Thanks\n>\n> Thomas\n>\n> On 8 Aug 2019, at 18:05, Tom Lane wrote:\n>\n>> \"Thomas Rosenstein\" <[email protected]> writes:\n>>> we have created restricted view for our tables, so that we can allow\n>>> access to non-gdpr relevant data but hide everything else.\n>>> For exactly those views, the Query Planner uses the wrong indices, \n>>> when\n>>> executing exactly the same query, once it takes 0.1 s and on the \n>>> views\n>>> it takes nearly 18 sec (it does a full table scan, or uses the wrong\n>>> indices).\n>>> Do we have to GRANT additional rights? 
I see it's using some \n>>> indices,\n>>> just not the correct ones!\n>>\n>> Does EXPLAIN show reasonable rowcount estimates when you query\n>> directly, but bad ones when you query via the views?\n>>\n>> If so, a likely guess is that you're falling foul of the restrictions\n>> added for CVE-2017-7484:\n>>\n>> Author: Peter Eisentraut <[email protected]>\n>> Branch: master Release: REL_10_BR [e2d4ef8de] 2017-05-08 09:26:32 \n>> -0400\n>> Branch: REL9_6_STABLE Release: REL9_6_3 [c33c42362] 2017-05-08 \n>> 09:18:57 -0400\n>> Branch: REL9_5_STABLE Release: REL9_5_7 [d45cd7c0e] 2017-05-08 \n>> 09:19:07 -0400\n>> Branch: REL9_4_STABLE Release: REL9_4_12 [3e5ea1f9b] 2017-05-08 \n>> 09:19:15 -0400\n>> Branch: REL9_3_STABLE Release: REL9_3_17 [4f1b2089a] 2017-05-08 \n>> 09:19:23 -0400\n>> Branch: REL9_2_STABLE Release: REL9_2_21 [d035c1b97] 2017-05-08 \n>> 09:19:42 -0400\n>>\n>> Add security checks to selectivity estimation functions\n>>\n>> Some selectivity estimation functions run user-supplied operators \n>> over\n>> data obtained from pg_statistic without security checks, which \n>> allows\n>> those operators to leak pg_statistic data without having \n>> privileges on\n>> the underlying tables. Fix by checking that one of the following \n>> is\n>> satisfied: (1) the user has table or column privileges on the \n>> table\n>> underlying the pg_statistic data, or (2) the function \n>> implementing the\n>> user-supplied operator is leak-proof. If neither is satisfied, \n>> planning\n>> will proceed as if there are no statistics available.\n>>\n>> At least one of these is satisfied in most cases in practice. \n>> The only\n>> situations that are negatively impacted are user-defined or\n>> not-leak-proof operators on a security-barrier view.\n>>\n>> Reported-by: Robert Haas <[email protected]>\n>> Author: Peter Eisentraut <[email protected]>\n>> Author: Tom Lane <[email protected]>\n>>\n>> Security: CVE-2017-7484\n>>\n>>\n>> However, if you're not on the latest minor releases, you might\n>> find that updating would fix this for you, because of\n>>\n>> Author: Dean Rasheed <[email protected]>\n>> Branch: master Release: REL_12_BR [a0905056f] 2019-05-06 11:54:32 \n>> +0100\n>> Branch: REL_11_STABLE Release: REL_11_3 [98dad4cd4] 2019-05-06 \n>> 11:56:37 +0100\n>> Branch: REL_10_STABLE Release: REL_10_8 [ca74e3e0f] 2019-05-06 \n>> 11:58:32 +0100\n>> Branch: REL9_6_STABLE Release: REL9_6_13 [71185228c] 2019-05-06 \n>> 12:00:00 +0100\n>> Branch: REL9_5_STABLE Release: REL9_5_17 [01256815a] 2019-05-06 \n>> 12:01:44 +0100\n>> Branch: REL9_4_STABLE Release: REL9_4_22 [3c0999909] 2019-05-06 \n>> 12:05:05 +0100\n>>\n>> Use checkAsUser for selectivity estimator checks, if it's set.\n>>\n>> In examine_variable() and examine_simple_variable(), when \n>> checking the\n>> user's table and column privileges to determine whether to grant\n>> access to the pg_statistic data, use checkAsUser for the \n>> privilege\n>> checks, if it's set. This will be the case if we're accessing the\n>> table via a view, to indicate that we should perform privilege \n>> checks\n>> as the view owner rather than the current user.\n>>\n>> This change makes this planner check consistent with the check in \n>> the\n>> executor, so the planner will be able to make use of statistics \n>> if the\n>> table is accessible via the view. 
This fixes a performance \n>> regression\n>> introduced by commit e2d4ef8de8, which affects queries against\n>> non-security barrier views in the case where the user doesn't \n>> have\n>> privileges on the underlying table, but the view owner does.\n>>\n>> Note that it continues to provide the same safeguards controlling\n>> access to pg_statistic for direct table access (in which case\n>> checkAsUser won't be set) and for security barrier views, because \n>> of\n>> the nearby checks on rte->security_barrier and \n>> rte->securityQuals.\n>>\n>> Back-patch to all supported branches because e2d4ef8de8 was.\n>>\n>> Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.\n>>\n>> \t\t\tregards, tom lane\n\n\n\n\n\n\n\n\nTo add additional info, the same behaviour is exhibited with the owner, and the user which only has read priviledges on the view!\nOn 8 Aug 2019, at 22:04, Thomas Rosenstein wrote:\n\n\n\nHi,\nI'm upgraded to 10.10 from today (on the replicated instance - main db is still 10.5), but still have the issue.\nThe table is owned by the user \"creamfinance\", and the view is also owned by the same user - based on the text you quoted this should allow the correct access.\nThe planner estimates the correct row counts, but still does the wrong planning.\nWrong:\nLimit (cost=1880359.00..1880359.03 rows=9 width=1508) (actual time=25093.258..25093.270 rows=9 loops=1)\n -> Sort (cost=1880359.00..1884101.04 rows=1496816 width=1508) (actual time=25093.257..25093.257 rows=9 loops=1)\n Sort Key: p.customer_id DESC\n Sort Method: top-N heapsort Memory: 33kB\n -> Hash Join (cost=359555.11..1849150.95 rows=1496816 width=1508) (actual time=1081.081..24251.466 rows=543231 loops=1)\n Hash Cond: (p.customer_id = l.customer_id)\n Join Filter: ((p.date - '3 days'::interval day) <= l.duedate)\n Rows Removed by Join Filter: 596120\n -> Seq Scan on payments p (cost=0.00..393323.74 rows=10046437 width=228) (actual time=0.013..13053.366 rows=10054069 loops=1)\n -> Hash (cost=333367.49..333367.49 rows=153409 width=1272) (actual time=689.835..689.835 rows=156682 loops=1)\n Buckets: 32768 Batches: 8 Memory Usage: 7737kB\n -> Bitmap Heap Scan on loans l (cost=22732.48..331833.40 rows=153409 width=1272) (actual time=64.142..398.893 rows=156682 loops=1)\n Recheck Cond: (location_id = 46)\n Heap Blocks: exact=105938\n -> Bitmap Index Scan on loans_location_id_repaid_desc_id_index (cost=0.00..22694.12 rows=153409 width=0) (actual time=41.324..41.324 rows=157794 loops=1)\n Index Cond: (location_id = 46)\n\nCorrect:\n Limit (cost=0.87..52.60 rows=9 width=1471)\n -> Nested Loop (cost=0.87..2961441.25 rows=515233 width=1471)\n -> Index Scan Backward using loans_customer_id_index on loans (cost=0.43..2215467.63 rows=153409 width=1257)\n Filter: (location_id = 46)\n -> Index Scan using payments_customer_id_idx on payments (cost=0.43..4.76 rows=10 width=206)\n Index Cond: (customer_id = loans.customer_id)\n Filter: ((date - '3 days'::interval day) <= loans.duedate)\n\nThanks\nThomas\nOn 8 Aug 2019, at 18:05, Tom Lane wrote:\n\n\"Thomas Rosenstein\" <[email protected]> writes:\nwe have created restricted view for our tables, so that we can allow\naccess to non-gdpr relevant data but hide everything else.\nFor exactly those views, the Query Planner uses the wrong indices, when\nexecuting exactly the same query, once it takes 0.1 s and on the views\nit takes nearly 18 sec (it does a full table scan, or uses the wrong\nindices).\nDo we have to GRANT additional rights? 
I see it's using some indices,\njust not the correct ones!\nDoes EXPLAIN show reasonable rowcount estimates when you query\ndirectly, but bad ones when you query via the views?\n\nIf so, a likely guess is that you're falling foul of the restrictions\nadded for CVE-2017-7484:\n\nAuthor: Peter Eisentraut <[email protected]>\nBranch: master Release: REL_10_BR [e2d4ef8de] 2017-05-08 09:26:32 -0400\nBranch: REL9_6_STABLE Release: REL9_6_3 [c33c42362] 2017-05-08 09:18:57 -0400\nBranch: REL9_5_STABLE Release: REL9_5_7 [d45cd7c0e] 2017-05-08 09:19:07 -0400\nBranch: REL9_4_STABLE Release: REL9_4_12 [3e5ea1f9b] 2017-05-08 09:19:15 -0400\nBranch: REL9_3_STABLE Release: REL9_3_17 [4f1b2089a] 2017-05-08 09:19:23 -0400\nBranch: REL9_2_STABLE Release: REL9_2_21 [d035c1b97] 2017-05-08 09:19:42 -0400\n\n Add security checks to selectivity estimation functions\n\n Some selectivity estimation functions run user-supplied operators over\n data obtained from pg_statistic without security checks, which allows\n those operators to leak pg_statistic data without having privileges on\n the underlying tables. Fix by checking that one of the following is\n satisfied: (1) the user has table or column privileges on the table\n underlying the pg_statistic data, or (2) the function implementing the\n user-supplied operator is leak-proof. If neither is satisfied, planning\n will proceed as if there are no statistics available.\n\n At least one of these is satisfied in most cases in practice. The only\n situations that are negatively impacted are user-defined or\n not-leak-proof operators on a security-barrier view.\n\n Reported-by: Robert Haas <[email protected]>\n Author: Peter Eisentraut <[email protected]>\n Author: Tom Lane <[email protected]>\n\n Security: CVE-2017-7484\n\n\nHowever, if you're not on the latest minor releases, you might\nfind that updating would fix this for you, because of\n\nAuthor: Dean Rasheed <[email protected]>\nBranch: master Release: REL_12_BR [a0905056f] 2019-05-06 11:54:32 +0100\nBranch: REL_11_STABLE Release: REL_11_3 [98dad4cd4] 2019-05-06 11:56:37 +0100\nBranch: REL_10_STABLE Release: REL_10_8 [ca74e3e0f] 2019-05-06 11:58:32 +0100\nBranch: REL9_6_STABLE Release: REL9_6_13 [71185228c] 2019-05-06 12:00:00 +0100\nBranch: REL9_5_STABLE Release: REL9_5_17 [01256815a] 2019-05-06 12:01:44 +0100\nBranch: REL9_4_STABLE Release: REL9_4_22 [3c0999909] 2019-05-06 12:05:05 +0100\n\n Use checkAsUser for selectivity estimator checks, if it's set.\n\n In examine_variable() and examine_simple_variable(), when checking the\n user's table and column privileges to determine whether to grant\n access to the pg_statistic data, use checkAsUser for the privilege\n checks, if it's set. This will be the case if we're accessing the\n table via a view, to indicate that we should perform privilege checks\n as the view owner rather than the current user.\n\n This change makes this planner check consistent with the check in the\n executor, so the planner will be able to make use of statistics if the\n table is accessible via the view. 
This fixes a performance regression\n introduced by commit e2d4ef8de8, which affects queries against\n non-security barrier views in the case where the user doesn't have\n privileges on the underlying table, but the view owner does.\n\n Note that it continues to provide the same safeguards controlling\n access to pg_statistic for direct table access (in which case\n checkAsUser won't be set) and for security barrier views, because of\n the nearby checks on rte->security_barrier and rte->securityQuals.\n\n Back-patch to all supported branches because e2d4ef8de8 was.\n\n Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 08 Aug 2019 22:30:18 +0200",
"msg_from": "\"Thomas Rosenstein\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres not using correct indices for views."
},
{
"msg_contents": "\"Thomas Rosenstein\" <[email protected]> writes:\n> The planner estimates the correct row counts, but still does the wrong \n> planning.\n\nHm, I'm not exactly convinced. You show\n\n> Wrong:\n> -> Hash Join (cost=359555.11..1849150.95 rows=1496816 \n> width=1508) (actual time=1081.081..24251.466 rows=543231 loops=1)\n> Hash Cond: (p.customer_id = l.customer_id)\n> Join Filter: ((p.date - '3 days'::interval day) <= \n> l.duedate)\n> Rows Removed by Join Filter: 596120\n\n> Correct:\n> -> Nested Loop (cost=0.87..2961441.25 rows=515233 width=1471)\n\nThe join size estimate seems a lot closer to being correct in the\nsecond case, which could lend support to the idea that statistics\naren't being applied in the first case.\n\nHowever ... it sort of looks like the planner didn't even consider\nthe second plan shape in the \"wrong\" case. If it had, then even\nif it costed it 3X more than it did in the \"right\" case, the second\nplan would still have won out by orders of magnitude. So there's\nsomething else going on.\n\nCan you show the actual query and table and view definitions?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Aug 2019 18:45:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using correct indices for views."
},
{
"msg_contents": "[ re-adding list ]\n\n\"Thomas Rosenstein\" <[email protected]> writes:\n> On 9 Aug 2019, at 0:45, Tom Lane wrote:\n>> However ... it sort of looks like the planner didn't even consider\n>> the second plan shape in the \"wrong\" case. If it had, then even\n>> if it costed it 3X more than it did in the \"right\" case, the second\n>> plan would still have won out by orders of magnitude. So there's\n>> something else going on.\n>> \n>> Can you show the actual query and table and view definitions?\n\n> View definition:\n> SELECT l.id,\n> l.created_at,\n> ...\n> togdpr(l.comment) AS comment,\n> ...\n> FROM loans l;\n\nAh-hah. I'd been thinking about permissions on the table and\nview, but here's the other moving part: functions in the view.\nI bet you were incautious about making this function definition\nand allowed togdpr() to be marked volatile --- which it will\nbe by default. That inhibits a lot of optimizations.\n\nI'm guessing about what that function does, but if you could\nsafely mark it stable or even immutable, I bet this view would\nbehave better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Aug 2019 11:16:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using correct indices for views."
},
{
"msg_contents": "> [ re-adding list ]\n>\n> \"Thomas Rosenstein\" <[email protected]> writes:\n>> On 9 Aug 2019, at 0:45, Tom Lane wrote:\n>>> However ... it sort of looks like the planner didn't even consider\n>>> the second plan shape in the \"wrong\" case. If it had, then even\n>>> if it costed it 3X more than it did in the \"right\" case, the second\n>>> plan would still have won out by orders of magnitude. So there's\n>>> something else going on.\n>>>\n>>> Can you show the actual query and table and view definitions?\n>\n>> View definition:\n>> SELECT l.id,\n>> l.created_at,\n>> ...\n>> togdpr(l.comment) AS comment,\n>> ...\n>> FROM loans l;\n>\n> Ah-hah. I'd been thinking about permissions on the table and\n> view, but here's the other moving part: functions in the view.\n> I bet you were incautious about making this function definition\n> and allowed togdpr() to be marked volatile --- which it will\n> be by default. That inhibits a lot of optimizations.\n>\n> I'm guessing about what that function does, but if you could\n> safely mark it stable or even immutable, I bet this view would\n> behave better.\n>\n> \t\t\tregards, tom lane\n\nYep that was IT! Perfect, thank you soo much!\n\nWhy does it inhibit functionalities like using the correct index, if the \nfunction is only in the select?\nCould that still be improved from pg side?\n\nThanks again!\n\n\n",
"msg_date": "Sat, 10 Aug 2019 12:05:26 +0200",
"msg_from": "\"Thomas Rosenstein\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres not using correct indices for views."
},
{
"msg_contents": "What a nice catch!\n\nSent from my iPad\n\nOn Aug 10, 2019, at 6:05 AM, Thomas Rosenstein <[email protected]> wrote:\n\n>> [ re-adding list ]\n>> \n>> \"Thomas Rosenstein\" <[email protected]> writes:\n>>>> On 9 Aug 2019, at 0:45, Tom Lane wrote:\n>>>> However ... it sort of looks like the planner didn't even consider\n>>>> the second plan shape in the \"wrong\" case. If it had, then even\n>>>> if it costed it 3X more than it did in the \"right\" case, the second\n>>>> plan would still have won out by orders of magnitude. So there's\n>>>> something else going on.\n>>>> \n>>>> Can you show the actual query and table and view definitions?\n>> \n>>> View definition:\n>>> SELECT l.id,\n>>> l.created_at,\n>>> ...\n>>> togdpr(l.comment) AS comment,\n>>> ...\n>>> FROM loans l;\n>> \n>> Ah-hah. I'd been thinking about permissions on the table and\n>> view, but here's the other moving part: functions in the view.\n>> I bet you were incautious about making this function definition\n>> and allowed togdpr() to be marked volatile --- which it will\n>> be by default. That inhibits a lot of optimizations.\n>> \n>> I'm guessing about what that function does, but if you could\n>> safely mark it stable or even immutable, I bet this view would\n>> behave better.\n>> \n>> regards, tom lane\n> \n> Yep that was IT! Perfect, thank you soo much!\n> \n> Why does it inhibit functionalities like using the correct index, if the function is only in the select?\n> Could that still be improved from pg side?\n> \n> Thanks again!\n> \n> \n\n\n\n",
"msg_date": "Sat, 10 Aug 2019 07:11:58 -0400",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using correct indices for views."
},
{
"msg_contents": "\"Thomas Rosenstein\" <[email protected]> writes:\n> On 9 Aug 2019, at 0:45, Tom Lane wrote:\n>> I'm guessing about what that function does, but if you could\n>> safely mark it stable or even immutable, I bet this view would\n>> behave better.\n\n> Yep that was IT! Perfect, thank you soo much!\n> Why does it inhibit functionalities like using the correct index, if the \n> function is only in the select?\n> Could that still be improved from pg side?\n\nPossibly, but there's a lot of work between here and there, and it's\nlimited by how much we want to change the semantics around volatile\nfunctions. The core problem that's breaking this case for you is\nthat we won't flatten a view (i.e., pull up the sub-SELECT into the\nparent query) if its targetlist has volatile functions, for fear\nof changing the number of times such functions get invoked.\n\nNow, we're not totally consistent about that anyway --- for example,\nthe code is willing to push down qual expressions into an un-flattened\nsub-SELECT, which could remove rows from the output of the sub-SELECT's\nFROM and thereby reduce the number of calls of any volatile functions\nin its tlist. (That particular behavior is very ancient, and I wonder\nwhether we'd reject it if it were proposed today.)\n\nThe thing that's missing to make this better is to be willing to\npush down join quals not just restriction quals. That would require\nbeing able to make \"parameterized paths\" for subqueries, which is\nsomething that's on the radar screen but nobody's really worked on it.\nThere are substantial concerns about whether it'd make subquery planning\nnoticeably more expensive.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Aug 2019 16:31:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using correct indices for views."
}
] |
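
For reference, the fix Tom identifies comes down to labelling the view's function with the right volatility. The real togdpr() body never appears in the thread, so the function below is a made-up stand-in used only to illustrate the marking; relabelling a function IMMUTABLE (or STABLE) is only safe if its result genuinely depends on nothing but its arguments.

```
-- Hypothetical stand-in for togdpr(); the real implementation was not posted.
-- The point is only the volatility label: without it, functions default to
-- VOLATILE, which prevents the view from being flattened into the outer query.
CREATE OR REPLACE FUNCTION togdpr(txt text) RETURNS text
    LANGUAGE sql
    IMMUTABLE
AS $$ SELECT CASE WHEN txt IS NULL THEN NULL ELSE '<redacted>' END $$;

-- An existing function can be relabelled in place instead:
ALTER FUNCTION togdpr(text) IMMUTABLE;

-- Verify the marking: provolatile is 'i' (immutable), 's' (stable) or 'v' (volatile).
SELECT proname, provolatile
FROM pg_proc
WHERE proname = 'togdpr';
```
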
[
{
"msg_contents": "Hi Guys,\n\nI’m at a bit of a loss where I can go with the following 2 queries\nthat are over the same data structure (DDL attached) under postgresql\nPostgreSQL 9.5.16 on x86_64-pc-linux-gnu (Debian 9.5.16-1.pgdg90+1),\ncompiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit and\ncould do with a second set of eyes if someone would oblige.\n\nI’ve attached Query1.txt and Query2.txt along with the DDL for the\ntables and indicies and execution plans.\n\nOn our production environment we’re running at about 2 seconds (with\nthe cache warm); I’m getting a comparable speed on my playbox. It\nseems to me like the Bitmap Heap Scan on proposal is the issue because\nthe recheck is throwing away enormous amounts of data. The\nhas_been_anonymised flag on the proposal is effectively a soft-delete;\nso I’ve tried adding something like :\n\nCREATE INDEX ON proposal.proposal (system_id, legacy_organisation_id, reference)\nWHERE has_been_anonymised = false;\n\nWhich I was hoping would shrink the size of the index significantly\nand encourage an index scan rather than bitmap, however it didn’t have\nthat effect. For reference:\n\nHas_been_anonymised false: 1534790\nHas_been_anonymised true: 7072192\n\nRow counts over the whole table in question are :\nProposal.proposal: 8606982 2340 MB\nProposal.note: 2624423 1638 MB\n\nPresumably I could partition proposal on has_been_anonymised, however\nthe row counts seem low enough that it feels a bit like overkill? We\nalso need referential integrity so I'll need to wait until that's in\n(I think it's coming in PG12?)\n\nIf I decrease the number of legacy_organisation_id’s that are being\nused then the query performance gets much better, but presumably\nthat’s because there’s a smaller dataset.\n\nAny thoughts or ideas?\n\nThanks\nRob\n\n-- \n <https://codeweavers.net>\n\n\nA big Get Focused ‘thank you’ \n<https://codeweavers.net/company-blog/a-big-get-focused-thank-you>\nWhy you \nshould partner with an Agile company \n<https://codeweavers.net/company-blog/why-you-should-partner-with-an-agile-company>\n\n\n*\n*\n*Phone:* 0800 021 0888 Email: [email protected] \n<mailto:[email protected]>\nCodeweavers Ltd | Barn 4 | Dunston \nBusiness Village | Dunston | ST18 9AB\nRegistered in England and Wales No. \n04092394 | VAT registration no. 974 9705 63 \n\n\n\n \n<https://twitter.com/Codeweavers_Ltd> \n<https://www.facebook.com/Codeweavers.Ltd/> \n<https://www.linkedin.com/company/codeweavers-limited>",
"msg_date": "Fri, 9 Aug 2019 09:41:57 +0100",
"msg_from": "Rob Emery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bitmap heap scan performance"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 4:42 AM Rob Emery <[email protected]> wrote:\n\n\n>\n> It\n> seems to me like the Bitmap Heap Scan on proposal is the issue because\n> the recheck is throwing away enormous amounts of data.\n\n\nHave you tried increasing work_mem? The probable reason for the recheck is\nthat your bitmap overflows the allowed memory, and then switches\nfrom storing every tid to storing just the block numbers. As indicated by\nthe lossy part of \"Heap Blocks: exact=3983 lossy=27989\"\n\nThe\n> has_been_anonymised flag on the proposal is effectively a soft-delete;\n> so I’ve tried adding something like :\n>\n> CREATE INDEX ON proposal.proposal (system_id, legacy_organisation_id,\n> reference)\n> WHERE has_been_anonymised = false;\n>\n> Which I was hoping would shrink the size of the index significantly\n>\n\nThe partial index should be smaller, but when comparing to the index with\n\"has_been_anonymised\" as the leading column, it won't make a lot of\ndifference. You only have to scan a smaller part of the larger index, and\nthe sizes of part of the index you have to scan in each case will be\nroughly comparable.\n\n\n> and encourage an index scan rather than bitmap, however it didn’t have\n> that effect.\n\n\nTo encourage index scans over bitmap scans, you can increase\neffective_cache_size. Or to really force the issue, you can \"set\nenable_bitmapscan=off\" but that is something you would usually do locally\nfor experimental purposes, not do it in production's config settings.\n\nCheers,\n\nJeff\n\nOn Fri, Aug 9, 2019 at 4:42 AM Rob Emery <[email protected]> wrote: \nIt\nseems to me like the Bitmap Heap Scan on proposal is the issue because\nthe recheck is throwing away enormous amounts of data.Have you tried increasing work_mem? The probable reason for the recheck is that your bitmap overflows the allowed memory, and then switches from storing every tid to storing just the block numbers. As indicated by the lossy part of \"Heap Blocks: exact=3983 lossy=27989\" The\nhas_been_anonymised flag on the proposal is effectively a soft-delete;\nso I’ve tried adding something like :\n\nCREATE INDEX ON proposal.proposal (system_id, legacy_organisation_id, reference)\nWHERE has_been_anonymised = false;\n\nWhich I was hoping would shrink the size of the index significantly The partial index should be smaller, but when comparing to the index with \"has_been_anonymised\" as the leading column, it won't make a lot of difference. You only have to scan a smaller part of the larger index, and the sizes of part of the index you have to scan in each case will be roughly comparable. \nand encourage an index scan rather than bitmap, however it didn’t have\nthat effect. To encourage index scans over bitmap scans, you can increase effective_cache_size. Or to really force the issue, you can \"set enable_bitmapscan=off\" but that is something you would usually do locally for experimental purposes, not do it in production's config settings.Cheers,Jeff",
"msg_date": "Fri, 9 Aug 2019 08:30:16 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap heap scan performance"
},
{
"msg_contents": "Aha!\n\nThat's a great hint, we had that set down to an obscenely low value\ndue to our max_connections setting being quite high. I've tweaked it\nback up to 4MB for now and it's definitely had a marked improvement!\n\nMany Thanks,\nRob\n\nOn 09/08/2019, Jeff Janes <[email protected]> wrote:\n> On Fri, Aug 9, 2019 at 4:42 AM Rob Emery <[email protected]> wrote:\n>\n>\n>>\n>> It\n>> seems to me like the Bitmap Heap Scan on proposal is the issue because\n>> the recheck is throwing away enormous amounts of data.\n>\n>\n> Have you tried increasing work_mem? The probable reason for the recheck is\n> that your bitmap overflows the allowed memory, and then switches\n> from storing every tid to storing just the block numbers. As indicated by\n> the lossy part of \"Heap Blocks: exact=3983 lossy=27989\"\n>\n> The\n>> has_been_anonymised flag on the proposal is effectively a soft-delete;\n>> so I’ve tried adding something like :\n>>\n>> CREATE INDEX ON proposal.proposal (system_id, legacy_organisation_id,\n>> reference)\n>> WHERE has_been_anonymised = false;\n>>\n>> Which I was hoping would shrink the size of the index significantly\n>>\n>\n> The partial index should be smaller, but when comparing to the index with\n> \"has_been_anonymised\" as the leading column, it won't make a lot of\n> difference. You only have to scan a smaller part of the larger index, and\n> the sizes of part of the index you have to scan in each case will be\n> roughly comparable.\n>\n>\n>> and encourage an index scan rather than bitmap, however it didn’t have\n>> that effect.\n>\n>\n> To encourage index scans over bitmap scans, you can increase\n> effective_cache_size. Or to really force the issue, you can \"set\n> enable_bitmapscan=off\" but that is something you would usually do locally\n> for experimental purposes, not do it in production's config settings.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n-- \nRobert Emery\nInfrastructure Director\n\nE: [email protected] | T: 01785 711633 | W: www.codeweavers.net\n\n-- \n <https://codeweavers.net>\n\n\nA big Get Focused ‘thank you’ \n<https://codeweavers.net/company-blog/a-big-get-focused-thank-you>\nWhy you \nshould partner with an Agile company \n<https://codeweavers.net/company-blog/why-you-should-partner-with-an-agile-company>\n\n\n*\n*\n*Phone:* 0800 021 0888 Email: [email protected] \n<mailto:[email protected]>\nCodeweavers Ltd | Barn 4 | Dunston \nBusiness Village | Dunston | ST18 9AB\nRegistered in England and Wales No. \n04092394 | VAT registration no. 974 9705 63 \n\n\n\n \n<https://twitter.com/Codeweavers_Ltd> \n<https://www.facebook.com/Codeweavers.Ltd/> \n<https://www.linkedin.com/company/codeweavers-limited>\n\n\n",
"msg_date": "Mon, 12 Aug 2019 14:00:42 +0100",
"msg_from": "Rob Emery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bitmap heap scan performance"
},
{
"msg_contents": ">\n> Presumably I could partition proposal on has_been_anonymised, however\n> the row counts seem low enough that it feels a bit like overkill? We\n> also need referential integrity so I'll need to wait until that's in\n> (I think it's coming in PG12?)\n>\n> If I decrease the number of legacy_organisation_id’s that are being\n> used then the query performance gets much better, but presumably\n> that’s because there’s a smaller dataset.\n>\n\nWhat are the actual counts that your queries are returning?\n\nFor your first query at least, are you sure your issue is not simply that\nyou have no index on proposal.proposal.reference? Because the entry_time\nfilter is highly selective (and that part of the query only took 180ms), I\nwould think the planner would first filter on the note table, then join\nback to proposal.proposal using an index scan on reference. But you have\nno index there. You might even consider an index on (reference) WHERE\nhas_been_anonymised = false?\n\nAlso, one of your challenges seems to be that all of your indexed fields\nare low cardinality. Rather than partitioning on has_been_anonymised,\nperhaps you could consider partitioning on system_id and sub-partition on\nlegacy_organisation_id? It depends on if your attached queries are always\nthe standard pattern or not though. This is something you might play\naround with.\n\nAnother option is to try yet further specificity in your partial index\nconditions, and also to only then index your primary key. For example:\n\nCREATE INDEX ON proposal.proposal (id)\nWHERE has_been_anonymised = false AND system_id = 11;\n\nI'm curious if any of these ideas would make a difference.\n\nThanks,\nJeremy\n\nPresumably I could partition proposal on has_been_anonymised, however\nthe row counts seem low enough that it feels a bit like overkill? We\nalso need referential integrity so I'll need to wait until that's in\n(I think it's coming in PG12?)\n\nIf I decrease the number of legacy_organisation_id’s that are being\nused then the query performance gets much better, but presumably\nthat’s because there’s a smaller dataset.What are the actual counts that your queries are returning?For your first query at least, are you sure your issue is not simply that you have no index on proposal.proposal.reference? Because the entry_time filter is highly selective (and that part of the query only took 180ms), I would think the planner would first filter on the note table, then join back to proposal.proposal using an index scan on reference. But you have no index there. You might even consider an index on (reference) WHERE has_been_anonymised = false?Also, one of your challenges seems to be that all of your indexed fields are low cardinality. Rather than partitioning on has_been_anonymised, perhaps you could consider partitioning on system_id and sub-partition on legacy_organisation_id? It depends on if your attached queries are always the standard pattern or not though. This is something you might play around with.Another option is to try yet further specificity in your partial index conditions, and also to only then index your primary key. For example:CREATE INDEX ON proposal.proposal (id)WHERE has_been_anonymised = false AND system_id = 11;I'm curious if any of these ideas would make a difference.Thanks,Jeremy",
"msg_date": "Mon, 12 Aug 2019 08:29:21 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap heap scan performance"
}
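A minimal sketch of the partial indexes suggested above, assuming the proposal.proposal table and column names quoted in this thread (the system_id value 11 is only the example value used in the reply):

    -- Index to support the join back from the note table on reference,
    -- restricted to the rows the queries actually touch
    CREATE INDEX ON proposal.proposal (reference)
        WHERE has_been_anonymised = false;

    -- The more specific partial index from the example in the reply
    CREATE INDEX ON proposal.proposal (id)
        WHERE has_been_anonymised = false AND system_id = 11;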
] |
[
{
"msg_contents": "Hello,\n\nWe have partitioned tables in two levels. Both stages are partitioned in\nranges method. We see that planner and executor time was 10 time slower when\nwe asked main table rather than partitioned. My question is did planner and\nexecutor are working optimal? I have doubts about it. Let's consider that\nsituation, because I think that in some ways (I think in more cases) planner\nshoudl be more optimal. When we have table partitioned by key which is in\n\"where\" clause we have guarantee that all rows we can find just in ONE\npartition: contained in range or in default if exists. Planner show that\nquery should be executed just in one partition, but it takes a lot of time.\nSo my direct question is if planner is stopping searching for another\npartition when he found one correct? Are partition ranges stored sorted and\nsearching method is optimal and is stopped after first (and only) hit?\n\nI've noticed that increasing numbers of partition proportionally increase\nplanner time. Additionally having index on column that you are searching for\nis adding extra time (one index add +/- 100% time for table). It's\nunderstandable but optimizing planner and executor by ideas I wrote on first\nparagraph automatically decrease time for searching indexes.\n\nReproduction:\n1. ADD MAIN TABLE\n\n-- Table: public.book\n\n--DROP TABLE public.book;\n\nCREATE TABLE public.book\n(\n id bigserial,\n id_owner bigint NOT NULL,\n added date NOT NULL\n) PARTITION BY RANGE (id_owner) \nWITH (\n OIDS = FALSE\n)\nTABLESPACE pg_default;\n\nALTER TABLE public.book\n OWNER to postgres;\n\n2. ADD PARTITIONS (run first \"a\" variant, then drop table book, reconstruct\nand run \"variant\"):\n a. 1200 partitions:\nhttps://gist.github.com/piotrwlodarczyk/4faa05729d1bdd3b5f5738a2a3faabc0 \n b. 6000 partitions:\nhttps://gist.github.com/piotrwlodarczyk/2747e0984f521768f5d36ab2b382ea36 \n \n3. ANALYZE ON MAIN TABLE:\n EXPLAIN ANALYZE SELECT * FROM public.book WHERE id_owner = 4;\n a. My result for 1200 partitions:\nhttps://gist.github.com/piotrwlodarczyk/500f20a0b6e2cac6d36ab88d4fea2c00 \n b. My result for 6000 partitions:\nhttps://gist.github.com/piotrwlodarczyk/277687b21201340377116a18a3dd8be8\n\n4. ANALYZE ON PARTITIONED TABLE (only on first level):\n EXPLAIN ANALYZE SELECT * FROM public.book WHERE id_owner = 4;\n a. My result for 1200:\nhttps://gist.github.com/piotrwlodarczyk/4285907c68b34b486cbf39eb8ae5cf92\n b. My result for 6000:\nhttps://gist.github.com/piotrwlodarczyk/c157cc9321b6e1a1d0f900310f14f1cc\n\n4. CONCLUSIONS\n Planner time for select on public.book (main table) 1200 was 469.416 ms,\nfor 6000 was 2530.179 ms. It looks like time is linear to partition count.\nThat makes me sure that all partitions are checked instead of searching for\nfirst that equals. Intentionally I've searching id_owner = 4 to make sure\nthat in both cases first partition should by marked as correct and planer\ntime should be constant. What is intereting too that real execution time was\n+/- equal in both cases. Is executor working better than planner?\n When we're asking on first level partition directly - time for planner\n1200 is 58.736 ms, for 6000: 60.555 ms. We can say it's equal. Why? Because\nplanner don't have to search for another matching partitions because first\nfound can match. It's guaranteed by rule that say ranges in partitions\ncannot override. Execution time in this case is 50 times faster!",
"msg_date": "Mon, 12 Aug 2019 12:37:19 +0000",
"msg_from": "=?iso-8859-2?Q?Piotr_W=B3odarczyk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner performance in partitions"
},
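The partition DDL itself is only linked as gists above; as a hedged illustration of the setup being measured (range bounds here are assumptions, not copied from the gists), the partitions would look roughly like the following, and the timings discussed below come from the EXPLAIN ANALYZE statement in step 3:

    -- Illustrative range partitions of public.book (bounds are assumed)
    CREATE TABLE public.book_1 PARTITION OF public.book
        FOR VALUES FROM (1) TO (101);
    CREATE TABLE public.book_2 PARTITION OF public.book
        FOR VALUES FROM (101) TO (201);
    -- ... repeated up to 1200 or 6000 partitions

    -- The statement whose planning time is being compared
    EXPLAIN ANALYZE SELECT * FROM public.book WHERE id_owner = 4;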
{
"msg_contents": "\"It is also important to consider the overhead of partitioning during query\nplanning and execution. The query planner is generally able to handle\npartition hierarchies with *up to a few hundred partitions fairly well*,\nprovided that typical queries allow the query planner to prune all but a\nsmall number of partitions. Planning times become longer and memory\nconsumption becomes higher as more partitions are added.\" (emphasis added)\n\n--https://www.postgresql.org/docs/current/ddl-partitioning.html\n\n\"It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added.\" (emphasis added)--https://www.postgresql.org/docs/current/ddl-partitioning.html",
"msg_date": "Mon, 12 Aug 2019 13:05:25 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner performance in partitions"
},
{
"msg_contents": "Queries against tables with a lot of partitions (> 1000) start to incur \nan increasing planning time duration even with the current version, \nV11. V12 purportedly has fixed this problem, allowing thousands of \npartitioned tables without a heavy planning cost. Can't seem to find \nthe threads on this topic, but there are out there. I personally noted \na gigantic increase in planning time once I got past 1500 partitioned \ntables in V11.\n\nOn another note, hopefully they have fixed runtime partition pruning in \nV12 since V11 introduced it but some query plans don't use it, so you \nhave to reconstruct some queries to sub queries to make it work correctly.\n\nRegards,\nMichael Vitale\n\n\nMichael Lewis wrote on 8/12/2019 3:05 PM:\n> \"It is also important to consider the overhead of partitioning during \n> query planning and execution. The query planner is generally able to \n> handle partition hierarchies with */up to a few hundred partitions \n> fairly well/*, provided that typical queries allow the query planner \n> to prune all but a small number of partitions. Planning times become \n> longer and memory consumption becomes higher as more partitions are \n> added.\" (emphasis added)\n>\n> --https://www.postgresql.org/docs/current/ddl-partitioning.html\n\n\n\n\nQueries against tables \nwith a lot of partitions (> 1000) start to incur an increasing \nplanning time duration even with the current version, V11. V12 \npurportedly has fixed this problem, allowing thousands of partitioned \ntables without a heavy planning cost. Can't seem to find the threads on\n this topic, but there are out there. I personally noted a gigantic \nincrease in planning time once I got past 1500 partitioned tables in \nV11.\n\nOn another note, hopefully they have fixed runtime partition pruning in \nV12 since V11 introduced it but some query plans don't use it, so you \nhave to reconstruct some queries to sub queries to make it work \ncorrectly.\n\nRegards,\nMichael Vitale\n\n\nMichael Lewis wrote on 8/12/2019 3:05 PM:\n\n\n\"It is also important to consider the overhead of \npartitioning during query planning and execution. The query planner is \ngenerally able to handle partition hierarchies with up to a few \nhundred partitions fairly well, provided that typical queries \nallow the query planner to prune all but a small number of partitions. \nPlanning times become longer and memory consumption becomes higher as \nmore partitions are added.\" (emphasis added)--https://www.postgresql.org/docs/current/ddl-partitioning.html",
"msg_date": "Mon, 12 Aug 2019 15:24:58 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner performance in partitions"
},
{
"msg_contents": "@Michael Lewis: I know documentation. I'm just considerations about possible performance tricks in current production version. I've tested this on V12 on another computer and I can say that I'm impressed. I've checked on 1200 partitions and times are:\n\nPostgreSQL11.5:\n• select on main partition (public.book): planner: 60ms, execution: 5ms\n• select on partitioned table (public.book_1-1000): planner: 2.7 ms, execution: 2,4 ms\nPostgreSQL 12B3:\n• select on main partition (public.book): planner: 2,5ms , execution: 1,2ms\n• select on partitioned table (public.book_1-1000): planner: 2.5 ms, execution: 1,2 ms\n\nSo looking at above results we have two options:\n• Wait for 12.0 stable version 😉\n• Wait for patches to 11 – PostgreSQL Team: can You do this? 😊\n\nPozdrawiam,\nPiotr Włodarczyk\n\nOd: MichaelDBA\nWysłano: poniedziałek, 12 sierpnia 2019 21:25\nDo: Michael Lewis\nDW: Piotr Włodarczyk; [email protected]\nTemat: Re: Planner performance in partitions\n\nQueries against tables with a lot of partitions (> 1000) start to incur an increasing planning time duration even with the current version, V11. V12 purportedly has fixed this problem, allowing thousands of partitioned tables without a heavy planning cost. Can't seem to find the threads on this topic, but there are out there. I personally noted a gigantic increase in planning time once I got past 1500 partitioned tables in V11.\n\nOn another note, hopefully they have fixed runtime partition pruning in V12 since V11 introduced it but some query plans don't use it, so you have to reconstruct some queries to sub queries to make it work correctly.\n\nRegards,\nMichael Vitale\n\n\nMichael Lewis wrote on 8/12/2019 3:05 PM:\n\n\"It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added.\" (emphasis added)\n\n--https://www.postgresql.org/docs/current/ddl-partitioning.html\n\n\n\n@Michael Lewis: I know documentation. I'm just considerations about possible performance tricks in current production version. I've tested this on V12 on another computer and I can say that I'm impressed. I've checked on 1200 partitions and times are: PostgreSQL11.5:select on main partition (public.book): planner: 60ms, execution: 5msselect on partitioned table (public.book_1-1000): planner: 2.7 ms, execution: 2,4 msPostgreSQL 12B3:select on main partition (public.book): planner: 2,5ms , execution: 1,2msselect on partitioned table (public.book_1-1000): planner: 2.5 ms, execution: 1,2 ms So looking at above results we have two options:Wait for 12.0 stable version 😉Wait for patches to 11 – PostgreSQL Team: can You do this? 😊 Pozdrawiam,Piotr Włodarczyk Od: MichaelDBAWysłano: poniedziałek, 12 sierpnia 2019 21:25Do: Michael LewisDW: Piotr Włodarczyk; [email protected]: Re: Planner performance in partitions Queries against tables with a lot of partitions (> 1000) start to incur an increasing planning time duration even with the current version, V11. V12 purportedly has fixed this problem, allowing thousands of partitioned tables without a heavy planning cost. Can't seem to find the threads on this topic, but there are out there. 
I personally noted a gigantic increase in planning time once I got past 1500 partitioned tables in V11.On another note, hopefully they have fixed runtime partition pruning in V12 since V11 introduced it but some query plans don't use it, so you have to reconstruct some queries to sub queries to make it work correctly.Regards,Michael VitaleMichael Lewis wrote on 8/12/2019 3:05 PM:\"It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added.\" (emphasis added) --https://www.postgresql.org/docs/current/ddl-partitioning.html",
"msg_date": "Mon, 12 Aug 2019 22:03:48 +0200",
"msg_from": "=?utf-8?Q?Piotr_W=C5=82odarczyk?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "ODP: Planner performance in partitions"
},
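A hedged sketch of the two query shapes behind the timings above; the partition name is an assumption based on the "public.book_1-1000" label used in the thread:

    -- Through the partitioned parent: pre-v12 planners consider every partition
    EXPLAIN ANALYZE SELECT * FROM public.book WHERE id_owner = 4;

    -- Against one first-level partition directly: planning time stays roughly constant
    EXPLAIN ANALYZE SELECT * FROM public.book_1_1000 WHERE id_owner = 4;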
{
"msg_contents": "Thanks for clarifying your position and sharing the results you have seen.\nThat is impressive indeed.\n\nIt seems likely that waiting for v12 is needed since feature are not back\npatched. Perhaps one of the contributors will confirm, but that is my\nexpectation.\n\nThanks for clarifying your position and sharing the results you have seen. That is impressive indeed.It seems likely that waiting for v12 is needed since feature are not back patched. Perhaps one of the contributors will confirm, but that is my expectation.",
"msg_date": "Mon, 12 Aug 2019 14:25:47 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner performance in partitions"
},
{
"msg_contents": "On Tue, 13 Aug 2019 at 08:03, Piotr Włodarczyk\n<[email protected]> wrote:\n> PostgreSQL11.5:\n>\n> select on main partition (public.book): planner: 60ms, execution: 5ms\n> select on partitioned table (public.book_1-1000): planner: 2.7 ms, execution: 2,4 ms\n>\n> PostgreSQL 12B3:\n>\n> select on main partition (public.book): planner: 2,5ms , execution: 1,2ms\n> select on partitioned table (public.book_1-1000): planner: 2.5 ms, execution: 1,2 ms\n>\n> So looking at above results we have two options:\n>\n> Wait for 12.0 stable version\n> Wait for patches to 11 – PostgreSQL Team: can You do this?\n\nYou'll need to either reduce the number of partitions down to\nsomething realistic or wait for 12.0.\n\nThe work done to speed up the planner with partitioned tables for v12\nwon't be going into v11.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 13 Aug 2019 10:25:23 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner performance in partitions"
},
{
"msg_contents": "Was there a reason to exceed 100-500 partitions in real life that pushed\nyou to do this test? Is there some issue you see when using 100 partitions\nthat is solved or reduced in severity by increasing to 1200 or 6000\npartitions?\n\nWas there a reason to exceed 100-500 partitions in real life that pushed you to do this test? Is there some issue you see when using 100 partitions that is solved or reduced in severity by increasing to 1200 or 6000 partitions?",
"msg_date": "Mon, 12 Aug 2019 16:37:26 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner performance in partitions"
},
{
"msg_contents": "As you wrote we have about 400/500 partitions in real life. So time problem is much smaller, but still it is and in one place of aur application we have decided to help DB and we're indicating in query exact partition we need. What pushed me to do this test? Just curiosity I think. After I saw in pg_locks that all partitions which was selected in uncommitted transaction have ACCESS SHARED i've started thinking about efficiency. And that way here we are. Why we need some hundred partitions? It’s because our main table (public.book) have hundreds of millions records. It’s not maintainable. VACUUM never ends, space on device is huge and we cannot take database down for longer that 2-3 hours, what is too short to maintain them manually. So we've partitioned them on two levels. First on id_owner (which is in every query) and the second level based on date. It’ll help as detach partitions with old data we no longer need. \n\n \nPozdrawiam,\nPiotr Włodarczyk\n\nOd: Michael Lewis\nWysłano: wtorek, 13 sierpnia 2019 00:37\nDo: David Rowley\nDW: Piotr Włodarczyk; MichaelDBA; Piotr Włodarczyk; [email protected]\nTemat: Re: Planner performance in partitions\n\nWas there a reason to exceed 100-500 partitions in real life that pushed you to do this test? Is there some issue you see when using 100 partitions that is solved or reduced in severity by increasing to 1200 or 6000 partitions?\n\n\nAs you wrote we have about 400/500 partitions in real life. So time problem is much smaller, but still it is and in one place of aur application we have decided to help DB and we're indicating in query exact partition we need. What pushed me to do this test? Just curiosity I think. After I saw in pg_locks that all partitions which was selected in uncommitted transaction have ACCESS SHARED i've started thinking about efficiency. And that way here we are. Why we need some hundred partitions? It’s because our main table (public.book) have hundreds of millions records. It’s not maintainable. VACUUM never ends, space on device is huge and we cannot take database down for longer that 2-3 hours, what is too short to maintain them manually. So we've partitioned them on two levels. First on id_owner (which is in every query) and the second level based on date. It’ll help as detach partitions with old data we no longer need. Pozdrawiam,Piotr Włodarczyk Od: Michael LewisWysłano: wtorek, 13 sierpnia 2019 00:37Do: David RowleyDW: Piotr Włodarczyk; MichaelDBA; Piotr Włodarczyk; [email protected]: Re: Planner performance in partitions Was there a reason to exceed 100-500 partitions in real life that pushed you to do this test? Is there some issue you see when using 100 partitions that is solved or reduced in severity by increasing to 1200 or 6000 partitions?",
"msg_date": "Tue, 13 Aug 2019 08:29:06 +0200",
"msg_from": "=?utf-8?Q?Piotr_W=C5=82odarczyk?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "ODP: Planner performance in partitions"
}
] |
[
{
"msg_contents": "Hey guys,\n\nSo I have two tables: users and events. It is very common for my \napplication to request the last user event.\n\nUsually, what I'll do is get the user, and then SELECT * from events \nWHERE user_id = :user order by timestamp_inc desc LIMIT 1.\n\nI have a big problem, however:\n\nMy app uses a ORM for SQL execution and generation and it cant create \nsubselects at all. The Ideal solution for me would be a view which has \nall the users last events.\n\nI tried:\n\ncreating a view (last_user_event_1) on \"SELECT DISTINCT ON (user_id) * \nFROM events ORDER BY user_id, timestamp_inc DESC\" and another one \n(last_user_event_2) which is a view on users with a lateral join on the \nlast event.\n\nRunning the query with lateral join by itself is very fast, and exactly \nwhat I need. It usually runs < 1ms. The one with \"distinct on (user_id)\" \ntakes around 20ms to complete which is just too slow for my needs.\n\nMy problem is that when I run a query JOINing users with \nlast_user_event_2, it takes about 2 seconds:\n\nThis is the explain output from joining users with \"last_user_event_2\":\n\nhttps://explain.depesz.com/s/oyEp\n\nAnd this is with \"last_user_event_1\":\n\nhttps://explain.depesz.com/s/hWwF\n\nAny help would be greatly appreciated.\n\n\n",
"msg_date": "Mon, 12 Aug 2019 17:57:46 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Last event per user"
},
{
"msg_contents": "The obfuscation makes it difficult to guess at the query you are writing\nand the schema you are using. Can you provide any additional information\nwithout revealing sensitive info?\n\n1) Do you have an index on ( user_id ASC, timestamp_inc DESC ) ?\n2) Sub-queries can't be re-written inline by the optimizer when there is an\naggregate inside the subquery, and I think DISTINCT ON would behave the\nsame. So, that might explain the significant change in behavior when the\nlateral is used. I am guessing at how you wrote the two versions of the\nview though.\n\nObviously not best design, but you could insert events as \"is_latest\" and\nupdate any prior events for that user via trigger as is_latest = false.\n\nThe obfuscation makes it difficult to guess at the query you are writing and the schema you are using. Can you provide any additional information without revealing sensitive info?1) Do you have an index on ( user_id ASC, timestamp_inc DESC ) ?2) Sub-queries can't be re-written inline by the optimizer when there is an aggregate inside the subquery, and I think DISTINCT ON would behave the same. So, that might explain the significant change in behavior when the lateral is used. I am guessing at how you wrote the two versions of the view though.Obviously not best design, but you could insert events as \"is_latest\" and update any prior events for that user via trigger as is_latest = false.",
"msg_date": "Mon, 12 Aug 2019 15:56:31 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last event per user"
},
{
"msg_contents": "> The obfuscation makes it difficult to guess at the query you are writing and the schema you are using. Can you provide any additional information without revealing sensitive info?\n> \n> 1) Do you have an index on ( user_id ASC, timestamp_inc DESC ) ? \n> 2) Sub-queries can't be re-written inline by the optimizer when there is an aggregate inside the subquery, and I think DISTINCT ON would behave the same. So, that might explain the significant change in behavior when the lateral is used. I am guessing at how you wrote the two versions of the view though. \n> \n> Obviously not best design, but you could insert events as \"is_latest\" and update any prior events for that user via trigger as is_latest = false.\n\nThanks for the reply! \n\nthe schema is basically this (simplified): \n\ntable users (user_id,user_group,user_name) \n\ntable events\n(user_id,user_group,event_id,timestamp_inc,event_description) \n\nViews: \n\n\"last_user_event_2\" \n\nSELECT e.* \n\n FROM users u \n\n JOIN LATERAL (SELECT * \n\n FROM events \n\n WHERE user_id = u.user_id \n\n AND user_group = u.user_group \n\n ORDER BY timestamp_inc DESC \n\n LIMIT 1 ) e ON TRUE \n\n\"last_user_event_1\" \n\nSELECT DISTINCT ON (user_id) \n\n * \n\n FROM events \n\n ORDER BY user_id, timestamp_inc DESC \n\nThe query itself is: \n\nSELECT * \n\n FROM users u \n\n JOIN last_user_event_(1|2) e USING (user_id,user_group) \n\nThis explain plan: https://explain.depesz.com/s/oyEp is what Postgres\nuses with \"last_user_event_2\" and https://explain.depesz.com/s/hWwF,\n\"last_user_event_1\" \n\nI do have a btree index on user_id,user_group,timestamp_inc DESC.\n\n\nThe obfuscation makes it difficult to guess at the query you are writing and the schema you are using. Can you provide any additional information without revealing sensitive info?\n \n1) Do you have an index on ( user_id ASC, timestamp_inc DESC ) ?\n2) Sub-queries can't be re-written inline by the optimizer when there is an aggregate inside the subquery, and I think DISTINCT ON would behave the same. So, that might explain the significant change in behavior when the lateral is used. I am guessing at how you wrote the two versions of the view though.\n\n \nObviously not best design, but you could insert events as \"is_latest\" and update any prior events for that user via trigger as is_latest = false.\n\nThanks for the reply!\nthe schema is basically this (simplified):\ntable users (user_id,user_group,user_name)\ntable events (user_id,user_group,event_id,timestamp_inc,event_description)\nViews:\n\"last_user_event_2\"\nSELECT e.*\n FROM users u\n JOIN LATERAL (SELECT * \n FROM events \n WHERE user_id = u.user_id \n AND user_group = u.user_group \n ORDER BY timestamp_inc DESC \n LIMIT 1 ) e ON TRUE\n\n\"last_user_event_1\"\n\nSELECT DISTINCT ON (user_id)\n *\n FROM events\n ORDER BY user_id, timestamp_inc DESC\n \nThe query itself is:\n\nSELECT * \n FROM users u\n JOIN last_user_event_(1|2) e USING (user_id,user_group)\n\n\nThis explain plan: https://explain.depesz.com/s/oyEp is what Postgres uses with \"last_user_event_2\" and https://explain.depesz.com/s/hWwF, \"last_user_event_1\"\n\nI do have a btree index on user_id,user_group,timestamp_inc DESC.",
"msg_date": "Mon, 12 Aug 2019 19:28:33 -0300",
"msg_from": "=?UTF-8?Q?Lu=C3=ADs_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last event per user"
},
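For reference, a sketch of the index described in the last line above (names as in the simplified schema; an illustration rather than the poster's exact DDL):

    CREATE INDEX ON events (user_id, user_group, timestamp_inc DESC);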
{
"msg_contents": "It seems like it should be-\nSELECT * FROM users u JOIN last_user_event_1 e USING (user_id,user_group);\n--OR--\nSELECT * FROM last_user_event_2 e;\n\nfor them to produce the same result set, since the last_user_event_2\nalready (could) have users info in it very simply by select * instead of\ne.* in that view definition.\n\nAre there other important joins/where/order by/limits that would be on this\n\"main query\" that is just SELECT * FROM ____ right now which you have\ndropped to try to simplify the example?\n\nIt seems like it should be-SELECT * FROM users u JOIN last_user_event_1 e USING (user_id,user_group);--OR--SELECT * FROM last_user_event_2 e;for them to produce the same result set, since the last_user_event_2 already (could) have users info in it very simply by select * instead of e.* in that view definition.Are there other important joins/where/order by/limits that would be on this \"main query\" that is just SELECT * FROM ____ right now which you have dropped to try to simplify the example?",
"msg_date": "Mon, 12 Aug 2019 16:35:44 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last event per user"
},
{
"msg_contents": "> It seems like it should be-\n> \n> SELECT * FROM users u JOIN last_user_event_1 e USING (user_id,user_group); \n> --OR-- \n> SELECT * FROM last_user_event_2 e; \n> \n> for them to produce the same result set, since the last_user_event_2 already (could) have users info in it very simply by select * instead of e.* in that view definition. \n> \n> Are there other important joins/where/order by/limits that would be on this \"main query\" that is just SELECT * FROM ____ right now which you have dropped to try to simplify the example?\n\nYou're right about the queries, I made a mistake. \n\nYes, I'm going to filter them by user_id and user_group, possibly (but\nnot likely) using LIMIT 1. In the explain examples I am using user_id =\n1272897 and user_group = 19117\n\n\nIt seems like it should be-\nSELECT * FROM users u JOIN last_user_event_1 e USING (user_id,user_group);\n--OR--\nSELECT * FROM last_user_event_2 e;\n \nfor them to produce the same result set, since the last_user_event_2 already (could) have users info in it very simply by select * instead of e.* in that view definition.\n \nAre there other important joins/where/order by/limits that would be on this \"main query\" that is just SELECT * FROM ____ right now which you have dropped to try to simplify the example?\n\nYou're right about the queries, I made a mistake.\nYes, I'm going to filter them by user_id and user_group, possibly (but not likely) using LIMIT 1. In the explain examples I am using user_id = 1272897 and user_group = 19117",
"msg_date": "Mon, 12 Aug 2019 19:43:54 -0300",
"msg_from": "=?UTF-8?Q?Lu=C3=ADs_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:"
},
{
"msg_contents": "If you modify last_user_event_2 to select user and event info in the view,\nand just put there where clause directly on the view which is not joined to\nanything, instead of on the \"extra copy\" of the users table like you were\nshowing previously, I would expect that the performance should be excellent.\n\n>\n\nIf you modify last_user_event_2 to select user and event info in the view, and just put there where clause directly on the view which is not joined to anything, instead of on the \"extra copy\" of the users table like you were showing previously, I would expect that the performance should be excellent.",
"msg_date": "Mon, 12 Aug 2019 16:53:41 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:"
},
{
"msg_contents": "> If you modify last_user_event_2 to select user and event info in the view, and just put there where clause directly on the view which is not joined to anything, instead of on the \"extra copy\" of the users table like you were showing previously, I would expect that the performance should be excellent.\n\nBut I need user_id and user_group to be outside of the view definition.\nuser_id and user_group are dynamic values, as in, I need to call this\nquery multiple times for different user_ids and user_groups .\n\n\nIf you modify last_user_event_2 to select user and event info in the view, and just put there where clause directly on the view which is not joined to anything, instead of on the \"extra copy\" of the users table like you were showing previously, I would expect that the performance should be excellent.\n\nBut I need user_id and user_group to be outside of the view definition. user_id and user_group are dynamic values, as in, I need to call this query multiple times for different user_ids and user_groups .",
"msg_date": "Mon, 12 Aug 2019 20:03:54 -0300",
"msg_from": "=?UTF-8?Q?Lu=C3=ADs_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last event per user"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 5:03 PM Luís Roberto Weck <\[email protected]> wrote:\n\n> If you modify last_user_event_2 to select user and event info in the view,\n> and just put there where clause directly on the view which is not joined to\n> anything, instead of on the \"extra copy\" of the users table like you were\n> showing previously, I would expect that the performance should be excellent.\n>\n> But I need user_id and user_group to be outside of the view definition.\n> user_id and user_group are dynamic values, as in, I need to call this query\n> multiple times for different user_ids and user_groups .\n>\n\nI don't follow. Perhaps there is something within the limitations of the\nORM layer that I am not expecting. If you have this view-\n\n\"last_user_event_2\"\n\nSELECT u.*, e.*\n\n FROM users u\n\n JOIN LATERAL (SELECT *\n\n FROM events\n\n WHERE user_id = u.user_id\n\n AND user_group = u.user_group\n\n ORDER BY timestamp_inc DESC\n\n LIMIT 1 ) e ON TRUE\n\n\nAnd you execute a query like this-\nSELECT * FROM last_user_event_2 e WHERE user_id = 1272897 and user_group =\n19117;\n\nThen I would expect very good performance.\n\nOn Mon, Aug 12, 2019 at 5:03 PM Luís Roberto Weck <[email protected]> wrote:\n\nIf you modify last_user_event_2 to select user and event info in the view, and just put there where clause directly on the view which is not joined to anything, instead of on the \"extra copy\" of the users table like you were showing previously, I would expect that the performance should be excellent.\n\nBut I need user_id and user_group to be outside of the view definition. user_id and user_group are dynamic values, as in, I need to call this query multiple times for different user_ids and user_groups .I don't follow. Perhaps there is something within the limitations of the ORM layer that I am not expecting. If you have this view-\"last_user_event_2\"SELECT u.*, e.* FROM users u JOIN LATERAL (SELECT * FROM events WHERE user_id = u.user_id AND user_group = u.user_group ORDER BY timestamp_inc DESC LIMIT 1 ) e ON TRUEAnd you execute a query like this-SELECT * FROM last_user_event_2 e WHERE user_id = 1272897 and user_group = 19117;Then I would expect very good performance.",
"msg_date": "Mon, 12 Aug 2019 17:09:53 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last event per user"
},
{
"msg_contents": "> On Mon, Aug 12, 2019 at 5:03 PM Luís Roberto Weck \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n>> If you modify last_user_event_2 to select user and event info in\n>> the view, and just put there where clause directly on the view\n>> which is not joined to anything, instead of on the \"extra copy\"\n>> of the users table like you were showing previously, I would\n>> expect that the performance should be excellent.\n>>\n> But I need user_id and user_group to be outside of the view\n> definition. user_id and user_group are dynamic values, as in, I\n> need to call this query multiple times for different user_ids and\n> user_groups .\n>\n>\n> I don't follow. Perhaps there is something within the limitations of \n> the ORM layer that I am not expecting. If you have this view-\n>\n> \"last_user_event_2\"\n>\n> SELECT u.*, e.*\n>\n> FROM users u\n>\n> JOIN LATERAL (SELECT *\n>\n> FROM events\n>\n> WHERE user_id = u.user_id\n>\n> AND user_group = u.user_group\n>\n> ORDER BY timestamp_inc DESC\n>\n> LIMIT 1 ) e ON TRUE\n>\n>\n> And you execute a query like this-\n>\n> SELECT * FROM last_user_event_2 e WHERE user_id = 1272897 and \n> user_group = 19117;\n>\n> Then I would expect very good performance.\n\nYou're right, thanks! I just had to do a little adjustment on the \nlateral join. Since both users and events have user_id and user_group, \nPostgreSQL complains that I can't have more than one column with the \nsame name. I fixed it by changing the LATERAL condition from \"ON TRUE\" \nto \"USING (user_id,user_group)\" (which I didn't even knew I could do).\n\n\n\n\n\n\n\n\nOn Mon, Aug 12, 2019 at 5:03 PM Luís Roberto Weck\n <[email protected]>\n wrote:\n\n\n\n\n\nIf you modify last_user_event_2 to select user and\n event info in the view, and just put there where\n clause directly on the view which is not joined to\n anything, instead of on the \"extra copy\" of the users\n table like you were showing previously, I would expect\n that the performance should be excellent.\n\nBut I need user_id and user_group to be outside of the\n view definition. user_id and user_group are dynamic\n values, as in, I need to call this query multiple times\n for different user_ids and user_groups .\n\n\n\n\nI don't follow. Perhaps there is something within the\n limitations of the ORM layer that I am not expecting. If you\n have this view-\n\n\"last_user_event_2\"\n\nSELECT u.*, e.*\n FROM users u\n JOIN LATERAL\n (SELECT *\n \n FROM events\n \n WHERE user_id = u.user_id\n \n AND user_group = u.user_group \n \n ORDER BY timestamp_inc DESC\n \n LIMIT 1 ) e ON TRUE\n\n\nAnd you execute a query like this-\n\n SELECT * FROM last_user_event_2 e WHERE user_id = 1272897\n and user_group = 19117;\n\n\nThen I would expect very good performance.\n\n\n\n\n\n You're right, thanks! I just had to do a little adjustment on the\n lateral join. Since both users and events have user_id and\n user_group, PostgreSQL complains that I can't have more than one\n column with the same name. I fixed it by changing the LATERAL\n condition from \"ON TRUE\" to \"USING (user_id,user_group)\" (which I\n didn't even knew I could do).",
"msg_date": "Tue, 13 Aug 2019 08:34:58 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last event per user"
}
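Pulling the thread's conclusion together, a hedged sketch of the final shape of the view and how it is queried (column lists simplified; an illustration rather than the poster's exact DDL):

    CREATE VIEW last_user_event_2 AS
    SELECT *
      FROM users u
      JOIN LATERAL (SELECT *
                      FROM events
                     WHERE events.user_id = u.user_id
                       AND events.user_group = u.user_group
                     ORDER BY timestamp_inc DESC
                     LIMIT 1) e USING (user_id, user_group);

    -- Filtered per user, with the example values from the thread
    SELECT * FROM last_user_event_2 WHERE user_id = 1272897 AND user_group = 19117;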
] |
[
{
"msg_contents": "Hi team ,\nI am getting the below error while fetching the data from Oracle 12c using ora2pg.\n\nDBD::Oracle::st fetchall_arrayref failed: ORA-24345: A Truncation or null fetch error occurred (DBD SUCCESS_WITH_INFO: OCIStmtFetch, LongReadLen too small and/or LongTruncOk not set)ERROR no statement executing (perhaps you need to call execute first) [for Statement \"SELECT \"USERS_ID\",\"NAME\",\"USERS\" FROM \"GBOPSUI\".\"USER_GROUP_USERS_V5\" a\"] at /usr/local/share/perl5/Ora2Pg.pm line 14110.\n\n\nInitially did not have LongReadLen set, so I thought this was the cause. But, I have set LongReadLen, on the db handle, equal to 90000000.\n\nThanks,\nDaulat\n\n\n\n\n\n\n\n\n\n\nHi team ,\nI am getting the below error while fetching the data from Oracle 12c using ora2pg.\n \nDBD::Oracle::st fetchall_arrayref failed: ORA-24345: A Truncation or null fetch error occurred (DBD SUCCESS_WITH_INFO: OCIStmtFetch, LongReadLen\n too small and/or LongTruncOk not set)ERROR no statement executing (perhaps you need to call execute first) [for Statement \"SELECT \"USERS_ID\",\"NAME\",\"USERS\" FROM \"GBOPSUI\".\"USER_GROUP_USERS_V5\" a\"] at /usr/local/share/perl5/Ora2Pg.pm\n line 14110.\n \n \nInitially did not have LongReadLen set, so I thought this was the cause. But, I have set LongReadLen, on the db handle, equal to 90000000.\n \nThanks,\nDaulat",
"msg_date": "Tue, 13 Aug 2019 08:23:11 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "ORA-24345: A Truncation or null fetch error occurred -ora2pg"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram <[email protected]> wrote:\n> Initially did not have LongReadLen set, so I thought this was the cause. But, I have set LongReadLen, on the db handle, equal to 90000000.\n\nApparently this is an oracle problem because it acceppted data longer\nthan its type, so my guess would be that in your table you have a\nchar(n) column that could be enlarged before the migration.\n<https://support.oracle.com/knowledge/Siebel/476591_1.html>\nHope this helps.\nAnd please report the version of ora2pg when asking for help.\n\nLuca\n\n\n",
"msg_date": "Tue, 13 Aug 2019 17:02:17 +0200",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg"
},
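If the culprit is indeed an undersized CHAR(n) column, one way to look for candidates on the Oracle side before migration is a query like the following (a hedged sketch; the owner value follows the schema name used in this thread):

    SELECT table_name, column_name, data_type, data_length
      FROM dba_tab_columns
     WHERE owner = 'GBOPSUI'
       AND data_type = 'CHAR';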
{
"msg_contents": "H,\r\n\r\nWe are using below the ora2pg version and the data types for tables.\r\n\r\nbash-4.2$ ora2pg -v\r\nOra2Pg v20.0\r\nbash-4.2$\r\n\r\nSQL> SELECT distinct data_type FROM dba_tab_columns WHERE owner='GBOP;\r\n\r\nDATA_TYPE\r\n--------------------------------------------------------------------------------\r\nTIMESTAMP(6)\r\nFLOAT\r\nCLOB\r\nNUMBER\r\nCHAR\r\nDATE\r\nVARCHAR2\r\nBLOB\r\n\r\nSQL>\r\n\r\nWe are getting the same issue for tables which are having blob, clob and char data types.\r\n\r\nThanks,\r\nDaulat\r\n\r\n-----Original Message-----\r\nFrom: Luca Ferrari <[email protected]> \r\nSent: Tuesday, August 13, 2019 8:32 PM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg\r\n\r\nOn Tue, Aug 13, 2019 at 10:23 AM Daulat Ram <[email protected]> wrote:\r\n> Initially did not have LongReadLen set, so I thought this was the cause. But, I have set LongReadLen, on the db handle, equal to 90000000.\r\n\r\nApparently this is an oracle problem because it acceppted data longer than its type, so my guess would be that in your table you have a\r\nchar(n) column that could be enlarged before the migration.\r\n<https://support.oracle.com/knowledge/Siebel/476591_1.html>\r\nHope this helps.\r\nAnd please report the version of ora2pg when asking for help.\r\n\r\nLuca\r\n",
"msg_date": "Tue, 13 Aug 2019 17:34:22 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: ORA-24345: A Truncation or null fetch error occurred -ora2pg"
},
{
"msg_contents": "On 8/13/19 10:34 AM, Daulat Ram wrote:\n> H,\n> \n> We are using below the ora2pg version and the data types for tables.\n> \n> bash-4.2$ ora2pg -v\n> Ora2Pg v20.0\n> bash-4.2$\n> \n> SQL> SELECT distinct data_type FROM dba_tab_columns WHERE owner='GBOP;\n> \n> DATA_TYPE\n> --------------------------------------------------------------------------------\n> TIMESTAMP(6)\n> FLOAT\n> CLOB\n> NUMBER\n> CHAR\n> DATE\n> VARCHAR2\n> BLOB\n> \n> SQL>\n> \n> We are getting the same issue for tables which are having blob, clob and char data types.\n\nThe ora2pg issue below seems to have more information on this:\n\nhttps://github.com/darold/ora2pg/issues/342\n\n> \n> Thanks,\n> Daulat\n> \n> -----Original Message-----\n> From: Luca Ferrari <[email protected]>\n> Sent: Tuesday, August 13, 2019 8:32 PM\n> To: Daulat Ram <[email protected]>\n> Cc: [email protected]; [email protected]\n> Subject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg\n> \n> On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram <[email protected]> wrote:\n>> Initially did not have LongReadLen set, so I thought this was the cause. But, I have set LongReadLen, on the db handle, equal to 90000000.\n> \n> Apparently this is an oracle problem because it acceppted data longer than its type, so my guess would be that in your table you have a\n> char(n) column that could be enlarged before the migration.\n> <https://support.oracle.com/knowledge/Siebel/476591_1.html>\n> Hope this helps.\n> And please report the version of ora2pg when asking for help.\n> \n> Luca\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n",
"msg_date": "Tue, 13 Aug 2019 10:57:05 -0700",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg"
},
{
"msg_contents": "Hi Adrian ,\r\n\r\nWe have the below output. What we need to change. \r\n\r\nbash-4.2$ ora2pg -c ora2pg.bidder.conf -t SHOW_ENCODING\r\n\r\nCurrent encoding settings that will be used by Ora2Pg:\r\n Oracle NLS_LANG AMERICAN_AMERICA.AL32UTF8\r\n Oracle NLS_NCHAR AL32UTF8\r\n Oracle NLS_TIMESTAMP_FORMAT YYYY-MM-DD HH24:MI:SS.FF6\r\n Oracle NLS_DATE_FORMAT YYYY-MM-DD HH24:MI:SS\r\n PostgreSQL CLIENT_ENCODING UTF8\r\n Perl output encoding ''\r\nShowing current Oracle encoding and possible PostgreSQL client encoding:\r\n Oracle NLS_LANG AMERICAN_AMERICA.WE8MSWIN1252\r\n Oracle NLS_NCHAR WE8MSWIN1252\r\n Oracle NLS_TIMESTAMP_FORMAT YYYY-MM-DD HH24:MI:SS.FF6\r\n Oracle NLS_DATE_FORMAT YYYY-MM-DD HH24:MI:SS\r\n PostgreSQL CLIENT_ENCODING WIN1252\r\nbash-4.2$\r\n\r\nthanks\r\n\r\n \r\n-----Original Message-----\r\nFrom: Adrian Klaver <[email protected]> \r\nSent: Tuesday, August 13, 2019 11:27 PM\r\nTo: Daulat Ram <[email protected]>; Luca Ferrari <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg\r\n\r\nOn 8/13/19 10:34 AM, Daulat Ram wrote:\r\n> H,\r\n> \r\n> We are using below the ora2pg version and the data types for tables.\r\n> \r\n> bash-4.2$ ora2pg -v\r\n> Ora2Pg v20.0\r\n> bash-4.2$\r\n> \r\n> SQL> SELECT distinct data_type FROM dba_tab_columns WHERE owner='GBOP;\r\n> \r\n> DATA_TYPE\r\n> --------------------------------------------------------------------------------\r\n> TIMESTAMP(6)\r\n> FLOAT\r\n> CLOB\r\n> NUMBER\r\n> CHAR\r\n> DATE\r\n> VARCHAR2\r\n> BLOB\r\n> \r\n> SQL>\r\n> \r\n> We are getting the same issue for tables which are having blob, clob and char data types.\r\n\r\nThe ora2pg issue below seems to have more information on this:\r\n\r\nhttps://github.com/darold/ora2pg/issues/342\r\n\r\n> \r\n> Thanks,\r\n> Daulat\r\n> \r\n> -----Original Message-----\r\n> From: Luca Ferrari <[email protected]>\r\n> Sent: Tuesday, August 13, 2019 8:32 PM\r\n> To: Daulat Ram <[email protected]>\r\n> Cc: [email protected]; [email protected]\r\n> Subject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg\r\n> \r\n> On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram <[email protected]> wrote:\r\n>> Initially did not have LongReadLen set, so I thought this was the cause. But, I have set LongReadLen, on the db handle, equal to 90000000.\r\n> \r\n> Apparently this is an oracle problem because it acceppted data longer than its type, so my guess would be that in your table you have a\r\n> char(n) column that could be enlarged before the migration.\r\n> <https://support.oracle.com/knowledge/Siebel/476591_1.html>\r\n> Hope this helps.\r\n> And please report the version of ora2pg when asking for help.\r\n> \r\n> Luca\r\n> \r\n\r\n\r\n-- \r\nAdrian Klaver\r\[email protected]\r\n",
"msg_date": "Wed, 14 Aug 2019 09:39:43 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: ORA-24345: A Truncation or null fetch error occurred -ora2pg"
},
{
"msg_contents": "On 8/14/19 2:39 AM, Daulat Ram wrote:\n> Hi Adrian ,\n> \n> We have the below output. What we need to change.\n\nI am not an ora2pg user so I don't know what to suggest for below. I \nwould say the best thing to do would be to file an issue here:\n\nhttps://github.com/darold/ora2pg/issues\n\nAlong with the original error message include the below and the \nsettings, if any, for NLS_*, CLIENT_ENCODING from your ora2pg.conf file.\n\n> \n> bash-4.2$ ora2pg -c ora2pg.bidder.conf -t SHOW_ENCODING\n> \n> Current encoding settings that will be used by Ora2Pg:\n> Oracle NLS_LANG AMERICAN_AMERICA.AL32UTF8\n> Oracle NLS_NCHAR AL32UTF8\n> Oracle NLS_TIMESTAMP_FORMAT YYYY-MM-DD HH24:MI:SS.FF6\n> Oracle NLS_DATE_FORMAT YYYY-MM-DD HH24:MI:SS\n> PostgreSQL CLIENT_ENCODING UTF8\n> Perl output encoding ''\n> Showing current Oracle encoding and possible PostgreSQL client encoding:\n> Oracle NLS_LANG AMERICAN_AMERICA.WE8MSWIN1252\n> Oracle NLS_NCHAR WE8MSWIN1252\n> Oracle NLS_TIMESTAMP_FORMAT YYYY-MM-DD HH24:MI:SS.FF6\n> Oracle NLS_DATE_FORMAT YYYY-MM-DD HH24:MI:SS\n> PostgreSQL CLIENT_ENCODING WIN1252\n> bash-4.2$\n> \n> thanks\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n",
"msg_date": "Wed, 14 Aug 2019 07:13:43 -0700",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg"
}
] |
[
{
"msg_contents": "hello,\r\n\r\nWe used benchmarksql 4.1.0 to test the performance of PG12 beta TPCC.\r\n\r\n\r\nWe found performance bottlenecks on lock transactionid.\r\n\r\nhelp,\r\n\r\nWe hope to find a way to solve this bottleneck in order to improve performance.\r\n\r\nHere are some of our configurations. Including operating system, PG database, and benchmark SQL:\r\n\r\ndatabase:\r\n\r\npgsql 12 beta2\r\n\r\nLock queues at runtime and top in the attachment。\r\n\r\noperating system:\r\n\r\nuname -a\r\n\r\nLinux localhost.localdomain 4.20.0 #1 SMP Mon Mar 11 23:13:55 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n$ cat /etc/issue\r\n\r\n\\S\r\n\r\nKernel \\r on an \\m\r\n\r\n$ cat /proc/version \r\n\r\nLinux version 4.20.0 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)) #1 SMP Mon Mar 11 23:13:55 EDT 2019\r\n\r\n$ lscpu\r\n\r\nArchitecture: x86_64\r\n\r\nCPU op-mode(s): 32-bit, 64-bit\r\n\r\nByte Order: Little Endian\r\n\r\nCPU(s): 112\r\n\r\nOn-line CPU(s) list: 0-111\r\n\r\nThread(s) per core: 2\r\n\r\nCore(s) per socket: 28\r\n\r\nSocket(s): 2\r\n\r\nNUMA node(s): 2\r\n\r\nVendor ID: GenuineIntel\r\n\r\nCPU family: 6\r\n\r\nModel: 85\r\n\r\nModel name: Intel(R) Xeon(R) Platinum 8280L CPU @ 2.60GHz\r\n\r\nStepping: 5\r\n\r\nCPU MHz: 1000.006\r\n\r\nCPU max MHz: 3900.0000\r\n\r\nCPU min MHz: 1000.0000\r\n\r\nBogoMIPS: 5200.00\r\n\r\nVirtualization: VT-x\r\n\r\nL1d cache: 32K\r\n\r\nL1i cache: 32K\r\n\r\nL2 cache: 1024K\r\n\r\nL3 cache: 39424K\r\n\r\nNUMA node0 CPU(s): 0-27,56-83\r\n\r\nNUMA node1 CPU(s): 28-55,84-111\r\n\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke flush_l1d arch_capabilities\r\n\r\n\r\n\r\nbenchmarksql:\r\n\r\nversion:4.1.0\r\n\r\n\r\nresult:\r\n\r\n\r\n\r\n\r\n driver=org.postgresql.Driver\r\n\r\n conn=jdbc:postgresql://localhost:5432/postgres\r\n\r\n user=SYSTEM\r\n\r\n \r\n\r\n warehouses=100\r\n\r\n terminals=300\r\n\r\n runMins=30\r\n\r\n limitTxnsPerMin=0\r\n\r\n \r\n\r\n newOrderWeight=45\r\n\r\n paymentWeight=43\r\n\r\n orderStatusWeight=4\r\n\r\n deliveryWeight=4\r\n\r\n stockLevelWeight=4\r\n\r\n Measured tpmC (NewOrders) = 543598.69\r\n\r\n Measured tpmTOTAL = 1205694.86\r\n\r\n Session Start = 2019-07-31 01:27:22\r\n\r\n Session End = 2019-07-31 01:57:23\r\n\r\n Transaction Count = 36191623",
"msg_date": "Tue, 13 Aug 2019 22:15:09 +0800",
"msg_from": "\"=?gb18030?B?zfXI9Omq?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance bottlenecks on lock transactionid"
},
{
"msg_contents": "王若楠 wrote:\n> We used benchmarksql 4.1.0 to test the performance of PG12 beta TPCC.\n> We found performance bottlenecks on lock transactionid.\n\nYou included an attachment with results from the \"pg_locks\" view\nwhere \"granted\" is FALSE for all entries.\n\nI'll assume that these are not *all* the entries in the view, right?\n\nSince the locks are waiting for different transaction IDs, I'd\nassume that this is just a case of contention: many transactions are\ntrying to modify the same rows concurrently.\n\nThis is to be expected.\nPerhaps your benchmark is running with too many connections on\ntoo few table rows?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 14 Aug 2019 09:31:29 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance bottlenecks on lock transactionid"
}
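To watch this kind of contention while the benchmark runs, one option is to list blocked backends together with the sessions blocking them; a sketch using pg_blocking_pids(), which is available since PostgreSQL 9.6:

    SELECT pid,
           pg_blocking_pids(pid) AS blocked_by,
           wait_event_type,
           wait_event,
           state,
           left(query, 60) AS query
      FROM pg_stat_activity
     WHERE cardinality(pg_blocking_pids(pid)) > 0;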
] |
[
{
"msg_contents": "hello,Laurenz Albe\r\n\r\nYes, pg_locks is only an item that does not get a lock in the view. The test data is 300 warehouses connections, and the CPU is only about 60%. I think the lock becomes a performance bottleneck at this time. I want to find a way to reduce the lock waiting and improve the performance.\r\n \r\n\r\n------------------ 原始邮件 ------------------\r\n发件人: \"Laurenz Albe\" <[email protected]>;\r\n发送时间: 2019年8月14日(星期三) 15:31\r\n收件人: \"王若楠\" <[email protected]>;\"pgsql-performance\" <[email protected]>;\r\n主题: Re: performance bottlenecks on lock transactionid\r\n\r\n\r\n\r\n王若楠 wrote:\r\n> We used benchmarksql 4.1.0 to test the performance of PG12 beta TPCC.\r\n> We found performance bottlenecks on lock transactionid.\r\n\r\nYou included an attachment with results from the \"pg_locks\" view\r\nwhere \"granted\" is FALSE for all entries.\r\n\r\nI'll assume that these are not *all* the entries in the view, right?\r\n\r\nSince the locks are waiting for different transaction IDs, I'd\r\nassume that this is just a case of contention: many transactions are\r\ntrying to modify the same rows concurrently.\r\n\r\nThis is to be expected.\r\nPerhaps your benchmark is running with too many connections on\r\ntoo few table rows?\r\n\r\nYours,\r\nLaurenz Albe\r\n-- \r\nCybertec | https://www.cybertec-postgresql.com\nhello,Laurenz AlbeYes, pg_locks is only an item that does not get a lock in the view. The test data is 300 warehouses connections, and the CPU is only about 60%. I think the lock becomes a performance bottleneck at this time. I want to find a way to reduce the lock waiting and improve the performance.\n------------------ 原始邮件 ------------------发件人: \"Laurenz Albe\" <[email protected]>;发送时间: 2019年8月14日(星期三) 15:31收件人: \"王若楠\" <[email protected]>;\"pgsql-performance\" <[email protected]>;主题: Re: performance bottlenecks on lock transactionid王若楠 wrote:> We used benchmarksql 4.1.0 to test the performance of PG12 beta TPCC.> We found performance bottlenecks on lock transactionid.You included an attachment with results from the \"pg_locks\" viewwhere \"granted\" is FALSE for all entries.I'll assume that these are not *all* the entries in the view, right?Since the locks are waiting for different transaction IDs, I'dassume that this is just a case of contention: many transactions aretrying to modify the same rows concurrently.This is to be expected.Perhaps your benchmark is running with too many connections ontoo few table rows?Yours,Laurenz Albe-- Cybertec | https://www.cybertec-postgresql.com",
"msg_date": "Wed, 14 Aug 2019 15:57:04 +0800",
"msg_from": "\"=?utf-8?B?546L6Iul5qWg?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "re:Re: performance bottlenecks on lock transactionid"
},
{
"msg_contents": "王若楠 wrote:\n> I want to find a way to reduce the lock waiting and improve the\n> performance.\n\nYou either have to make the transactions shorter, or you let the\ndifferent clients modify different rows, so that they don't lock each\nother.\n\nThat concurrent writers on the same data lock each other is\nunavoidable, and all database management systems I know do it the same\nway.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 14 Aug 2019 12:25:58 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: performance bottlenecks on lock transactionid"
}
] |
[
{
"msg_contents": "Hello!\n\nI was wondering if there is any external tool or extention to PostgreSQL >\n10 (EC2 or RDS) which would:\n\n - Collect the TOP X statements (queries or DML) used in the last Y days\n IN PRODUCTION;\n - Everytime there is a change in the DATABASE Structure (Migrations,\n properties, Tables, Indexes) run these TOP X statements against a\n PERFORMANCE TEST DATABASE (Which would be very similar to production, on\n both HARDWARE and DATABASE SIZE)\n - Collect the metrics for these tests\n - Plot the results over time so that the PRODUCT team can be sure how\n the performance of the DATABASE is being affected by the chances over time\n (also as the number of rows changes). Something like this:\n\n[image: image.png]\nQuery vs new change to the DATABASE (+ its size). So that we can see which\nmigration affect the database the most.\n\nIs there such a thing? It doesn't need to be open source nor free. But it\nwould be good if it is :)\n\nIf the answer is \"NO\", would it be possible to create such a tool and make\nit available as OOS? I think so, but I would need some professional help\n(and would pay for this).\n\nThanks",
"msg_date": "Wed, 14 Aug 2019 13:18:29 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Continuous Performance Test"
},
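As the replies note, there may be no single packaged tool for all of this, but the "top X statements" part is commonly approximated with pg_stat_statements; a hedged sketch follows (column names as in PostgreSQL 10 to 12; the extension has to be added to shared_preload_libraries and created in the database first):

    -- once per database
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- the 50 most expensive statements by cumulative execution time
    SELECT queryid, calls, total_time, mean_time, rows, left(query, 80) AS query
      FROM pg_stat_statements
     ORDER BY total_time DESC
     LIMIT 50;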
{
"msg_contents": "---------- Forwarded message ---------\nFrom: Jean Baro <[email protected]>\nDate: Wed, 14 Aug 2019, 13:18\nSubject: Continuous Performance Test\nTo: <[email protected]>\n\n\nHello!\n\nI was wondering if there is any external tool or extention to PostgreSQL >\n10 (EC2 or RDS) which would:\n\n - Collect the TOP X statements (queries or DML) used in the last Y days\n IN PRODUCTION;\n - Everytime there is a change in the DATABASE Structure (Migrations,\n properties, Tables, Indexes) run these TOP X statements against a\n PERFORMANCE TEST DATABASE (Which would be very similar to production, on\n both HARDWARE and DATABASE SIZE)\n - Collect the metrics for these tests\n - Plot the results over time so that the PRODUCT team can be sure how\n the performance of the DATABASE is being affected by the chances over time\n (also as the number of rows changes). Something like this:\n\n[image: image.png]\nQuery vs new change to the DATABASE (+ its size). So that we can see which\nmigration affect the database the most.\n\nIs there such a thing? It doesn't need to be open source nor free. But it\nwould be good if it is :)\n\nIf the answer is \"NO\", would it be possible to create such a tool and make\nit available as OOS? I think so, but I would need some professional help\n(and would pay for this).\n\nThanks",
"msg_date": "Wed, 14 Aug 2019 14:56:15 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Continuous Performance Test"
},
{
"msg_contents": "I’m not aware of such a tool, but it would certainly be useful.\r\n\r\nInstead of trying to tie this directly to production changes (at which point it’s already too late), I encourage you to tie it into your QA/deployment process. If you have a reliable benchmark (a *great* accomplishment!) then you should run that as part of your pre-deployment process. Obviously that won’t reflect ad-hoc changes to your production environment, but frankly you don’t want ad-hoc in production except in an emergency (and even then it’s best to have a formal production change deployment process so you can track exactly what happened when).\r\n\r\nIt does occur to me that it would be handy to be able to flag CloudWatch, Performance Insights, and Enhanced Monitoring when a deployment is done. That would make it easier to see the actual impact of a deployment on production. Let me know if you’re interested in doing a PFR for that.\r\n\r\nFrom: Jean Baro <[email protected]>\r\nDate: Wednesday, August 14, 2019 at 11:19 AM\r\nTo: \"[email protected]\" <[email protected]>\r\nSubject: Continuous Performance Test\r\n\r\nHello!\r\n\r\nI was wondering if there is any external tool or extention to PostgreSQL > 10 (EC2 or RDS) which would:\r\n\r\n * Collect the TOP X statements (queries or DML) used in the last Y days IN PRODUCTION;\r\n * Everytime there is a change in the DATABASE Structure (Migrations, properties, Tables, Indexes) run these TOP X statements against a PERFORMANCE TEST DATABASE (Which would be very similar to production, on both HARDWARE and DATABASE SIZE)\r\n * Collect the metrics for these tests\r\n * Plot the results over time so that the PRODUCT team can be sure how the performance of the DATABASE is being affected by the chances over time (also as the number of rows changes). Something like this:\r\n[cid:[email protected]]\r\nQuery vs new change to the DATABASE (+ its size). So that we can see which migration affect the database the most.\r\n\r\nIs there such a thing? It doesn't need to be open source nor free. But it would be good if it is :)\r\n\r\nIf the answer is \"NO\", would it be possible to create such a tool and make it available as OOS? I think so, but I would need some professional help (and would pay for this).\r\n\r\nThanks",
"msg_date": "Wed, 14 Aug 2019 20:57:06 +0000",
"msg_from": "\"Nasby, Jim\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Continuous Performance Test"
}
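To tie the replay into a QA/deployment step as suggested above, one simple approach is to snapshot pg_stat_statements from each performance-test run into a history table keyed by a deployment tag, which provides the data to "plot over time". A sketch only; the table name, column layout, and tag value below are made up:

```sql
-- Hypothetical history table for per-release benchmark results.
CREATE TABLE IF NOT EXISTS perf_history (
    captured_at timestamptz NOT NULL DEFAULT now(),
    deploy_tag  text        NOT NULL,          -- e.g. migration or release identifier
    queryid     bigint      NOT NULL,
    calls       bigint      NOT NULL,
    total_ms    double precision NOT NULL,
    mean_ms     double precision NOT NULL,
    query       text
);

-- After each benchmark pass against the performance-test database:
INSERT INTO perf_history (deploy_tag, queryid, calls, total_ms, mean_ms, query)
SELECT 'release-2019-08', queryid, calls, total_time, mean_time, query
FROM pg_stat_statements;
```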
] |
[
{
"msg_contents": "Ooops, looks like I need to add email rules for this community list! You can ignore my AWS-specific comments in the last paragraph. 😊\r\n\r\nFrom: \"Nasby, Jim\" <[email protected]>\r\nDate: Wednesday, August 14, 2019 at 3:57 PM\r\nTo: Jean Baro <[email protected]>, \"[email protected]\" <[email protected]>\r\nSubject: [UNVERIFIED SENDER] Re: Continuous Performance Test\r\n\r\nI’m not aware of such a tool, but it would certainly be useful.\r\n\r\nInstead of trying to tie this directly to production changes (at which point it’s already too late), I encourage you to tie it into your QA/deployment process. If you have a reliable benchmark (a *great* accomplishment!) then you should run that as part of your pre-deployment process. Obviously that won’t reflect ad-hoc changes to your production environment, but frankly you don’t want ad-hoc in production except in an emergency (and even then it’s best to have a formal production change deployment process so you can track exactly what happened when).\r\n\r\nIt does occur to me that it would be handy to be able to flag CloudWatch, Performance Insights, and Enhanced Monitoring when a deployment is done. That would make it easier to see the actual impact of a deployment on production. Let me know if you’re interested in doing a PFR for that.\r\n\r\nFrom: Jean Baro <[email protected]>\r\nDate: Wednesday, August 14, 2019 at 11:19 AM\r\nTo: \"[email protected]\" <[email protected]>\r\nSubject: Continuous Performance Test\r\n\r\nHello!\r\n\r\nI was wondering if there is any external tool or extention to PostgreSQL > 10 (EC2 or RDS) which would:\r\n\r\n * Collect the TOP X statements (queries or DML) used in the last Y days IN PRODUCTION;\r\n * Everytime there is a change in the DATABASE Structure (Migrations, properties, Tables, Indexes) run these TOP X statements against a PERFORMANCE TEST DATABASE (Which would be very similar to production, on both HARDWARE and DATABASE SIZE)\r\n * Collect the metrics for these tests\r\n * Plot the results over time so that the PRODUCT team can be sure how the performance of the DATABASE is being affected by the chances over time (also as the number of rows changes). Something like this:\r\n[cid:[email protected]]\r\nQuery vs new change to the DATABASE (+ its size). So that we can see which migration affect the database the most.\r\n\r\nIs there such a thing? It doesn't need to be open source nor free. But it would be good if it is :)\r\n\r\nIf the answer is \"NO\", would it be possible to create such a tool and make it available as OOS? I think so, but I would need some professional help (and would pay for this).\r\n\r\nThanks",
"msg_date": "Wed, 14 Aug 2019 20:59:49 +0000",
"msg_from": "\"Nasby, Jim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: Continuous Performance Test"
},
{
"msg_contents": "No problem!\n\nIf I find anything Ill let you know.\n\nThanks\n\nOn Wed, 14 Aug 2019, 17:59 Nasby, Jim, <[email protected]> wrote:\n\n> Ooops, looks like I need to add email rules for this community list! You\n> can ignore my AWS-specific comments in the last paragraph. 😊\n>\n>\n>\n> *From: *\"Nasby, Jim\" <[email protected]>\n> *Date: *Wednesday, August 14, 2019 at 3:57 PM\n> *To: *Jean Baro <[email protected]>, \"[email protected]\" <\n> [email protected]>\n> *Subject: *[UNVERIFIED SENDER] Re: Continuous Performance Test\n>\n>\n>\n> I’m not aware of such a tool, but it would certainly be useful.\n>\n>\n>\n> Instead of trying to tie this directly to production changes (at which\n> point it’s already too late), I encourage you to tie it into your\n> QA/deployment process. If you have a reliable benchmark (a **great**\n> accomplishment!) then you should run that as part of your pre-deployment\n> process. Obviously that won’t reflect ad-hoc changes to your production\n> environment, but frankly you don’t want ad-hoc in production except in an\n> emergency (and even then it’s best to have a formal production change\n> deployment process so you can track exactly what happened when).\n>\n>\n>\n> It does occur to me that it would be handy to be able to flag CloudWatch,\n> Performance Insights, and Enhanced Monitoring when a deployment is done.\n> That would make it easier to see the actual impact of a deployment on\n> production. Let me know if you’re interested in doing a PFR for that.\n>\n>\n>\n> *From: *Jean Baro <[email protected]>\n> *Date: *Wednesday, August 14, 2019 at 11:19 AM\n> *To: *\"[email protected]\" <[email protected]\n> >\n> *Subject: *Continuous Performance Test\n>\n>\n>\n> Hello!\n>\n>\n>\n> I was wondering if there is any external tool or extention to PostgreSQL >\n> 10 (EC2 or RDS) which would:\n>\n> - Collect the TOP X statements (queries or DML) used in the last Y\n> days IN PRODUCTION;\n> - Everytime there is a change in the DATABASE Structure (Migrations,\n> properties, Tables, Indexes) run these TOP X statements against a\n> PERFORMANCE TEST DATABASE (Which would be very similar to production, on\n> both HARDWARE and DATABASE SIZE)\n> - Collect the metrics for these tests\n> - Plot the results over time so that the PRODUCT team can be sure how\n> the performance of the DATABASE is being affected by the chances over time\n> (also as the number of rows changes). Something like this:\n>\n> Query vs new change to the DATABASE (+ its size). So that we can see which\n> migration affect the database the most.\n>\n>\n>\n> Is there such a thing? It doesn't need to be open source nor free. But it\n> would be good if it is :)\n>\n>\n>\n> If the answer is \"NO\", would it be possible to create such a tool and make\n> it available as OOS? I think so, but I would need some professional help\n> (and would pay for this).\n>\n>\n>\n> Thanks\n>",
"msg_date": "Thu, 15 Aug 2019 22:33:04 -0300",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: Continuous Performance Test"
}
] |
[
{
"msg_contents": "Hey,\nI upgraded my pg9.6 cluster to pg11.2.\nAs it seems after the upgrade the duration of the same flow in my\napplication raised from 13 minutes to 19 minutes.\n\nThe test I did :\n1.reset pg_stat_statements\n2.run the applicative flow\n3.collect everything from pg_stat_statements\n\nI did this test on the env before the upgrade and after the upgrade. I took\nthe sum of the total_time in pg_stat_statements and sumed it up.\n\nmy env settings :\n60GB RAM\n16CPU\nregular HD\n\npostgresql.conf settings :\nmax_wal_size = 2GB\nmin_wal_size = 1GB\nwal_buffers = 16MB\ncheckpoint_completion_target = 0.9\ncheckpoint_timeout = 30min\nstandard_conforming_strings = off\nmax_locks_per_transaction = 5000\nmax_connections = 500\nrandom_page_cost = 4\ndeadlock_timeout = 5s\nshared_preload_libraries = 'pg_stat_statements'\ntrack_activity_query_size = 32764\nlog_directory = 'pg_log'\nenable_partitionwise_join = on\nenable_partitionwise_aggregate = on\nmax_worker_processes = 16 # (change requires restart)\nmax_parallel_maintenance_workers = 8 # taken from max_parallel_workers\nmax_parallel_workers_per_gather = 8 # taken from max_parallel_workers\nmax_parallel_workers = 16\nmaintenance_work_mem = 333MB\nwork_mem = 60MB\nshared_buffers = 15129MB\neffective_cache_size = 30259MB\n\nThe conf file was used in 9.6 (without all the new parallel settings).\nNow the same queries run in both tests because it was the same flow. I will\nbe happy to hear if u have any ideas without involving any queries changes..\n\nHey,I upgraded my pg9.6 cluster to pg11.2.As it seems after the upgrade the duration of the same flow in my application raised from 13 minutes to 19 minutes. The test I did : 1.reset pg_stat_statements2.run the applicative flow3.collect everything from pg_stat_statementsI did this test on the env before the upgrade and after the upgrade. I took the sum of the total_time in pg_stat_statements and sumed it up. my env settings : 60GB RAM16CPUregular HDpostgresql.conf settings : max_wal_size = 2GBmin_wal_size = 1GBwal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minstandard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 500random_page_cost = 4deadlock_timeout = 5sshared_preload_libraries = 'pg_stat_statements'track_activity_query_size = 32764log_directory = 'pg_log'enable_partitionwise_join = onenable_partitionwise_aggregate = onmax_worker_processes = 16 # (change requires restart)max_parallel_maintenance_workers = 8 # taken from max_parallel_workersmax_parallel_workers_per_gather = 8 # taken from max_parallel_workersmax_parallel_workers = 16maintenance_work_mem = 333MBwork_mem = 60MBshared_buffers = 15129MBeffective_cache_size = 30259MBThe conf file was used in 9.6 (without all the new parallel settings).Now the same queries run in both tests because it was the same flow. I will be happy to hear if u have any ideas without involving any queries changes..",
"msg_date": "Sun, 18 Aug 2019 09:56:35 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "UPGRADE TO PG11 CAUSED DEGREDATION IN PERFORMANCE"
},
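For reference, the reset / run / collect procedure described above can be expressed directly in SQL; a sketch, assuming pg_stat_statements is preloaded as in the posted configuration (total_time is in milliseconds on PG 11):

```sql
-- 1. Before the applicative flow: clear the counters.
SELECT pg_stat_statements_reset();

-- 2. Run the applicative flow from the application.

-- 3. Afterwards: sum the accumulated execution time.
SELECT round(sum(total_time)::numeric, 2) AS total_ms,
       sum(calls)                         AS total_calls
FROM pg_stat_statements;
```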
{
"msg_contents": "Hi\n\nne 18. 8. 2019 v 8:57 odesílatel Mariel Cherkassky <\[email protected]> napsal:\n\n> Hey,\n> I upgraded my pg9.6 cluster to pg11.2.\n> As it seems after the upgrade the duration of the same flow in my\n> application raised from 13 minutes to 19 minutes.\n>\n> The test I did :\n> 1.reset pg_stat_statements\n> 2.run the applicative flow\n> 3.collect everything from pg_stat_statements\n>\n> I did this test on the env before the upgrade and after the upgrade. I\n> took the sum of the total_time in pg_stat_statements and sumed it up.\n>\n>\nfirst, did you run VACUUM ANALYZE after upgrade?\n\nRegards\n\nPavel\n\n\n> my env settings :\n> 60GB RAM\n> 16CPU\n> regular HD\n>\n> postgresql.conf settings :\n> max_wal_size = 2GB\n> min_wal_size = 1GB\n> wal_buffers = 16MB\n> checkpoint_completion_target = 0.9\n> checkpoint_timeout = 30min\n> standard_conforming_strings = off\n> max_locks_per_transaction = 5000\n> max_connections = 500\n> random_page_cost = 4\n> deadlock_timeout = 5s\n> shared_preload_libraries = 'pg_stat_statements'\n> track_activity_query_size = 32764\n> log_directory = 'pg_log'\n> enable_partitionwise_join = on\n> enable_partitionwise_aggregate = on\n> max_worker_processes = 16 # (change requires restart)\n> max_parallel_maintenance_workers = 8 # taken from max_parallel_workers\n> max_parallel_workers_per_gather = 8 # taken from max_parallel_workers\n> max_parallel_workers = 16\n> maintenance_work_mem = 333MB\n> work_mem = 60MB\n> shared_buffers = 15129MB\n> effective_cache_size = 30259MB\n>\n> The conf file was used in 9.6 (without all the new parallel settings).\n> Now the same queries run in both tests because it was the same flow. I\n> will be happy to hear if u have any ideas without involving any queries\n> changes..\n>\n\nHine 18. 8. 2019 v 8:57 odesílatel Mariel Cherkassky <[email protected]> napsal:Hey,I upgraded my pg9.6 cluster to pg11.2.As it seems after the upgrade the duration of the same flow in my application raised from 13 minutes to 19 minutes. The test I did : 1.reset pg_stat_statements2.run the applicative flow3.collect everything from pg_stat_statementsI did this test on the env before the upgrade and after the upgrade. I took the sum of the total_time in pg_stat_statements and sumed it up. first, did you run VACUUM ANALYZE after upgrade?RegardsPavel my env settings : 60GB RAM16CPUregular HDpostgresql.conf settings : max_wal_size = 2GBmin_wal_size = 1GBwal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minstandard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 500random_page_cost = 4deadlock_timeout = 5sshared_preload_libraries = 'pg_stat_statements'track_activity_query_size = 32764log_directory = 'pg_log'enable_partitionwise_join = onenable_partitionwise_aggregate = onmax_worker_processes = 16 # (change requires restart)max_parallel_maintenance_workers = 8 # taken from max_parallel_workersmax_parallel_workers_per_gather = 8 # taken from max_parallel_workersmax_parallel_workers = 16maintenance_work_mem = 333MBwork_mem = 60MBshared_buffers = 15129MBeffective_cache_size = 30259MBThe conf file was used in 9.6 (without all the new parallel settings).Now the same queries run in both tests because it was the same flow. I will be happy to hear if u have any ideas without involving any queries changes..",
"msg_date": "Sun, 18 Aug 2019 10:41:11 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPGRADE TO PG11 CAUSED DEGREDATION IN PERFORMANCE"
},
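Pavel's question matters because an upgrade done with pg_upgrade does not carry planner statistics over to the new cluster, so every table starts out with no statistics at all. A minimal way to rebuild them (vacuumdb --all --analyze-in-stages from the shell is the scripted equivalent):

```sql
-- Rebuild planner statistics (and visibility maps) for the current database.
VACUUM ANALYZE;
```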
{
"msg_contents": "On Sun, Aug 18, 2019 at 1:57 AM Mariel Cherkassky\n<[email protected]> wrote:\n>\n> Hey,\n> I upgraded my pg9.6 cluster to pg11.2.\n> As it seems after the upgrade the duration of the same flow in my application raised from 13 minutes to 19 minutes.\n>\n> The test I did :\n> 1.reset pg_stat_statements\n> 2.run the applicative flow\n> 3.collect everything from pg_stat_statements\n>\n> I did this test on the env before the upgrade and after the upgrade. I took the sum of the total_time in pg_stat_statements and sumed it up.\n\nSince you have performance data, do you see any trends? For example,\nis it generalized performance issues or are there specific queries\nthat have degraded? We would need more specific detail before being\nable to give better advice on how to fix performance issue.\n\nmerlin\n\n\n",
"msg_date": "Tue, 3 Sep 2019 12:34:26 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPGRADE TO PG11 CAUSED DEGREDATION IN PERFORMANCE"
}
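Since the per-statement data is already being collected, one way to answer Merlin's question is to look at which statements account for most of the extra time rather than at the summed total alone. A sketch against PG 11's pg_stat_statements; the LIMIT and the 80-character truncation are arbitrary:

```sql
-- Statements ranked by their share of total execution time in the run.
SELECT queryid,
       calls,
       round(total_time::numeric, 2) AS total_ms,
       round((100 * total_time / sum(total_time) OVER ())::numeric, 1) AS pct_of_run,
       left(query, 80) AS query_start
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 25;
```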
] |
[
{
"msg_contents": "Hey,\nI'm trying to understand when using hash partitions can be better than\nusing list partition when the partition column is bigint. I understand that\nIf my partition column has many distinct values then If I'll use a list\npartitions I might have a lot of partitions. On the other hand, with hash\npartitions on that column I can combine a few partition keys to the same\npartition.\n\nI understand that maintenance on more partitions is harder but the\nperformance with list partition should be faster because we will have less\nrecords in each table. Is there any reason hash partitions will be better\nthan list partitions in aspect of peromance ?\n\nThanks !\n\nHey,I'm trying to understand when using hash partitions can be better than using list partition when the partition column is bigint. I understand that If my partition column has many distinct values then If I'll use a list partitions I might have a lot of partitions. On the other hand, with hash partitions on that column I can combine a few partition keys to the same partition.I understand that maintenance on more partitions is harder but the performance with list partition should be faster because we will have less records in each table. Is there any reason hash partitions will be better than list partitions in aspect of peromance ?Thanks !",
"msg_date": "Mon, 19 Aug 2019 15:32:10 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg11 list partitions vs hash partitions"
}
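For concreteness, a sketch of the two layouts being compared, in PostgreSQL 11 syntax; the table, column, and partition names are illustrative only:

```sql
-- List partitioning: each partition owns an explicit set of key values,
-- so the partition count grows with the number of distinct keys split out.
CREATE TABLE measurements_list (key_id bigint, payload text)
    PARTITION BY LIST (key_id);
CREATE TABLE measurements_list_a PARTITION OF measurements_list
    FOR VALUES IN (1, 2, 3);
CREATE TABLE measurements_list_b PARTITION OF measurements_list
    FOR VALUES IN (4, 5);

-- Hash partitioning: a fixed number of partitions, with many distinct keys
-- hashed into each one.
CREATE TABLE measurements_hash (key_id bigint, payload text)
    PARTITION BY HASH (key_id);
CREATE TABLE measurements_hash_0 PARTITION OF measurements_hash
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE measurements_hash_1 PARTITION OF measurements_hash
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE measurements_hash_2 PARTITION OF measurements_hash
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE measurements_hash_3 PARTITION OF measurements_hash
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```

In both cases a query that filters on the partition key is pruned to a single partition, so the per-partition row count, rather than the partitioning method itself, is usually what drives any performance difference.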
] |
[
{
"msg_contents": "Hello,\nI'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n64-bit\" and I have a query that runs several times per user action\n(9-10 times).\nThe query takes a long time to execute, specially at first, due to\ncold caches I think, but the performance varies greatly during a run\nof the application (while applying the said action by the user several\ntimes).\n\nMy tables are only getting bigger with time, not much DELETEs and even\nless UPDATEs as far as I can tell.\n\nProblematic query:\n\nEXPLAIN (ANALYZE,BUFFERS)\nSELECT DISTINCT ON (results.attribute_id) results.timestamp,\nresults.data FROM results\n JOIN scheduler_operation_executions ON\nscheduler_operation_executions.id = results.operation_execution_id\n JOIN scheduler_task_executions ON scheduler_task_executions.id =\nscheduler_operation_executions.task_execution_id\nWHERE scheduler_task_executions.device_id = 97\n AND results.data <> '<NullData/>'\n AND results.data IS NOT NULL\n AND results.object_id = 1955\n AND results.attribute_id IN (4, 5) -- possibly a longer list here\n AND results.data_access_result = 'SUCCESS'\nORDER BY results.attribute_id, results.timestamp DESC\nLIMIT 2 -- limit by the length of the attributes list\n\nIn words: I want the latest (ORDER BY results.timestamp DESC) results\nof a device (scheduler_task_executions.device_id = 97 - hence the\njoins results -> scheduler_operation_executions ->\nscheduler_task_executions)\nfor a given object and attributes with some additional constraints on\nthe data column. But I only want the latest attributes for which we\nhave results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n\nFirst run: https://explain.depesz.com/s/qh4C\nLimit (cost=157282.39..157290.29 rows=2 width=54) (actual\ntime=44068.166..44086.970 rows=2 loops=1)\n Buffers: shared hit=215928 read=85139\n -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\ntime=44068.164..44069.301 rows=2 loops=1)\n Buffers: shared hit=215928 read=85139\n -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n(actual time=44068.161..44068.464 rows=2052 loops=1)\n Sort Key: results.attribute_id, results.\"timestamp\" DESC\n Sort Method: quicksort Memory: 641kB\n Buffers: shared hit=215928 read=85139\n -> Gather (cost=62853.04..157098.57 rows=3162\nwidth=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=215928 read=85139\n -> Nested Loop (cost=61853.04..155782.37\nrows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\nloops=3)\n Buffers: shared hit=215928 read=85139\n -> Parallel Hash Join\n(cost=61852.61..143316.27 rows=24085 width=4) (actual\ntime=23271.275..40018.451 rows=19756 loops=3)\n Hash Cond:\n(scheduler_operation_executions.task_execution_id =\nscheduler_task_executions.id)\n Buffers: shared hit=6057 read=85139\n -> Parallel Seq Scan on\nscheduler_operation_executions (cost=0.00..74945.82 rows=2482982\nwidth=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n Buffers: shared hit=2996 read=47120\n -> Parallel Hash\n(cost=61652.25..61652.25 rows=16029 width=4) (actual\ntime=23253.337..23253.337 rows=13558 loops=3)\n Buckets: 65536 Batches: 1\nMemory Usage: 2144kB\n Buffers: shared hit=2977 read=38019\n -> Parallel Seq Scan on\nscheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n(actual time=25.939..23222.174 rows=13558 loops=3)\n Filter: (device_id = 97)\n Rows Removed by Filter: 1308337\n Buffers: shared hit=2977 read=38019\n -> Index Scan 
using\nindex_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\nwidth=58) (actual time=0.191..0.191 rows=0 loops=59269)\n Index Cond: (operation_execution_id =\nscheduler_operation_executions.id)\n Filter: ((data IS NOT NULL) AND (data\n<> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\nAND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n Rows Removed by Filter: 0\n Buffers: shared hit=209871\nPlanning Time: 29.295 ms\nExecution Time: 44087.365 ms\n\n\nSecond run: https://explain.depesz.com/s/uy9f\nLimit (cost=157282.39..157290.29 rows=2 width=54) (actual\ntime=789.363..810.440 rows=2 loops=1)\n Buffers: shared hit=216312 read=84755\n -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\ntime=789.361..789.535 rows=2 loops=1)\n Buffers: shared hit=216312 read=84755\n -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n(actual time=789.361..789.418 rows=2052 loops=1)\n Sort Key: results.attribute_id, results.\"timestamp\" DESC\n Sort Method: quicksort Memory: 641kB\n Buffers: shared hit=216312 read=84755\n -> Gather (cost=62853.04..157098.57 rows=3162\nwidth=54) (actual time=290.356..808.454 rows=4102 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=216312 read=84755\n -> Nested Loop (cost=61853.04..155782.37\nrows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n Buffers: shared hit=216312 read=84755\n -> Parallel Hash Join\n(cost=61852.61..143316.27 rows=24085 width=4) (actual\ntime=237.966..677.975 rows=19756 loops=3)\n Hash Cond:\n(scheduler_operation_executions.task_execution_id =\nscheduler_task_executions.id)\n Buffers: shared hit=6441 read=84755\n -> Parallel Seq Scan on\nscheduler_operation_executions (cost=0.00..74945.82 rows=2482982\nwidth=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n Buffers: shared hit=3188 read=46928\n -> Parallel Hash\n(cost=61652.25..61652.25 rows=16029 width=4) (actual\ntime=236.631..236.631 rows=13558 loops=3)\n Buckets: 65536 Batches: 1\nMemory Usage: 2144kB\n Buffers: shared hit=3169 read=37827\n -> Parallel Seq Scan on\nscheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n(actual time=0.132..232.758 rows=13558 loops=3)\n Filter: (device_id = 97)\n Rows Removed by Filter: 1308337\n Buffers: shared hit=3169 read=37827\n -> Index Scan using\nindex_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\nwidth=58) (actual time=0.003..0.003 rows=0 loops=59269)\n Index Cond: (operation_execution_id =\nscheduler_operation_executions.id)\n Filter: ((data IS NOT NULL) AND (data\n<> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\nAND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n Rows Removed by Filter: 0\n Buffers: shared hit=209871\nPlanning Time: 1.787 ms\nExecution Time: 810.634 ms\n\nYou can see that the second run takes less than one second to run...\nwhich is 43 seconds better than the first try, just by re-running the\nquery.\nOther runs take maybe 1s, 3s, still a long time.\n\nHow can I improve it to be consistently fast (is it possible to get to\nseveral milliseconds?)?\nWhat I don't really understand is why the nested loop has 3 loops\n(three joined tables)?\nAnd why does the first index scan indicate ~60k loops? And does it\nreally work? It doesn't seem to filter out any rows.\n\nShould I add an index only on (attribute_id, object_id)? 
And maybe\ndata_access_result?\nDoes it make sens to add it on a text column (results.data)?\n\nMy tables:\nhttps://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n\nAs you can see from the gist the foreign keys are indexed. Other\nindices were added to speed up other queries.\nOther relevant information (my tables have 3+ millions of rows, not\nvery big I think?), additional info with regards to size also included\nbelow.\nThis query has poor performance on two PCs (both running off of HDDs)\nso I think it has more to do with my indices and query than Postgres\nconfig & hardware, will post those if necessary.\n\n\nSize info:\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\nrelname IN ('results', 'scheduler_operation_executions',\n'scheduler_task_executions');\n-[ RECORD 1 ]--+-------------------------------\nrelname | results\nrelpages | 65922\nreltuples | 3.17104e+06\nrelallvisible | 65922\nrelkind | r\nrelnatts | 9\nrelhassubclass | f\nreloptions |\npg_table_size | 588791808\n-[ RECORD 2 ]--+-------------------------------\nrelname | scheduler_operation_executions\nrelpages | 50116\nreltuples | 5.95916e+06\nrelallvisible | 50116\nrelkind | r\nrelnatts | 8\nrelhassubclass | f\nreloptions |\npg_table_size | 410697728\n-[ RECORD 3 ]--+-------------------------------\nrelname | scheduler_task_executions\nrelpages | 40996\nreltuples | 3.966e+06\nrelallvisible | 40996\nrelkind | r\nrelnatts | 12\nrelhassubclass | f\nreloptions |\npg_table_size | 335970304\n\nThanks for your time!\n\n--\nBarbu Paul - Gheorghe\n\n\n",
"msg_date": "Tue, 20 Aug 2019 16:54:18 +0300",
"msg_from": "Barbu Paul - Gheorghe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Erratically behaving query needs optimization"
},
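Regarding the question above about an index on (attribute_id, object_id): a hedged sketch of a composite, partial index that matches both the WHERE clause and the DISTINCT ON / ORDER BY columns. Whether the planner can use it depends on how the join to the device is driven, so this is a candidate to test rather than a definitive fix; the index name is made up:

```sql
-- Untested candidate: supports the object_id/attribute_id filter and returns
-- rows already ordered by "timestamp" DESC within each attribute.
CREATE INDEX CONCURRENTLY results_object_attr_ts_idx
    ON results (object_id, attribute_id, "timestamp" DESC)
    WHERE data_access_result = 'SUCCESS'
      AND data IS NOT NULL
      AND data <> '<NullData/>';
```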
{
"msg_contents": "Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n> Hello,\n> I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n> 64-bit\" and I have a query that runs several times per user action\n> (9-10 times).\n> The query takes a long time to execute, specially at first, due to\n> cold caches I think, but the performance varies greatly during a run\n> of the application (while applying the said action by the user several\n> times).\n>\n> My tables are only getting bigger with time, not much DELETEs and even\n> less UPDATEs as far as I can tell.\n>\n> Problematic query:\n>\n> EXPLAIN (ANALYZE,BUFFERS)\n> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n> results.data FROM results\n> JOIN scheduler_operation_executions ON\n> scheduler_operation_executions.id = results.operation_execution_id\n> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n> scheduler_operation_executions.task_execution_id\n> WHERE scheduler_task_executions.device_id = 97\n> AND results.data <> '<NullData/>'\n> AND results.data IS NOT NULL\n> AND results.object_id = 1955\n> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> AND results.data_access_result = 'SUCCESS'\n> ORDER BY results.attribute_id, results.timestamp DESC\n> LIMIT 2 -- limit by the length of the attributes list\n>\n> In words: I want the latest (ORDER BY results.timestamp DESC) results\n> of a device (scheduler_task_executions.device_id = 97 - hence the\n> joins results -> scheduler_operation_executions ->\n> scheduler_task_executions)\n> for a given object and attributes with some additional constraints on\n> the data column. But I only want the latest attributes for which we\n> have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n>\n> First run: https://explain.depesz.com/s/qh4C\n> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> time=44068.166..44086.970 rows=2 loops=1)\n> Buffers: shared hit=215928 read=85139\n> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> time=44068.164..44069.301 rows=2 loops=1)\n> Buffers: shared hit=215928 read=85139\n> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> (actual time=44068.161..44068.464 rows=2052 loops=1)\n> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> Sort Method: quicksort Memory: 641kB\n> Buffers: shared hit=215928 read=85139\n> -> Gather (cost=62853.04..157098.57 rows=3162\n> width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=215928 read=85139\n> -> Nested Loop (cost=61853.04..155782.37\n> rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n> loops=3)\n> Buffers: shared hit=215928 read=85139\n> -> Parallel Hash Join\n> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> time=23271.275..40018.451 rows=19756 loops=3)\n> Hash Cond:\n> (scheduler_operation_executions.task_execution_id =\n> scheduler_task_executions.id)\n> Buffers: shared hit=6057 read=85139\n> -> Parallel Seq Scan on\n> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n> Buffers: shared hit=2996 read=47120\n> -> Parallel Hash\n> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> time=23253.337..23253.337 rows=13558 loops=3)\n> Buckets: 65536 Batches: 1\n> Memory Usage: 2144kB\n> Buffers: shared hit=2977 read=38019\n> -> Parallel Seq Scan on\n> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> (actual 
time=25.939..23222.174 rows=13558 loops=3)\n> Filter: (device_id = 97)\n> Rows Removed by Filter: 1308337\n> Buffers: shared hit=2977 read=38019\n> -> Index Scan using\n> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n> Index Cond: (operation_execution_id =\n> scheduler_operation_executions.id)\n> Filter: ((data IS NOT NULL) AND (data\n> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> Rows Removed by Filter: 0\n> Buffers: shared hit=209871\n> Planning Time: 29.295 ms\n> Execution Time: 44087.365 ms\n>\n>\n> Second run: https://explain.depesz.com/s/uy9f\n> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> time=789.363..810.440 rows=2 loops=1)\n> Buffers: shared hit=216312 read=84755\n> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> time=789.361..789.535 rows=2 loops=1)\n> Buffers: shared hit=216312 read=84755\n> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> (actual time=789.361..789.418 rows=2052 loops=1)\n> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> Sort Method: quicksort Memory: 641kB\n> Buffers: shared hit=216312 read=84755\n> -> Gather (cost=62853.04..157098.57 rows=3162\n> width=54) (actual time=290.356..808.454 rows=4102 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=216312 read=84755\n> -> Nested Loop (cost=61853.04..155782.37\n> rows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n> Buffers: shared hit=216312 read=84755\n> -> Parallel Hash Join\n> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> time=237.966..677.975 rows=19756 loops=3)\n> Hash Cond:\n> (scheduler_operation_executions.task_execution_id =\n> scheduler_task_executions.id)\n> Buffers: shared hit=6441 read=84755\n> -> Parallel Seq Scan on\n> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n> Buffers: shared hit=3188 read=46928\n> -> Parallel Hash\n> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> time=236.631..236.631 rows=13558 loops=3)\n> Buckets: 65536 Batches: 1\n> Memory Usage: 2144kB\n> Buffers: shared hit=3169 read=37827\n> -> Parallel Seq Scan on\n> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> (actual time=0.132..232.758 rows=13558 loops=3)\n> Filter: (device_id = 97)\n> Rows Removed by Filter: 1308337\n> Buffers: shared hit=3169 read=37827\n> -> Index Scan using\n> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> Index Cond: (operation_execution_id =\n> scheduler_operation_executions.id)\n> Filter: ((data IS NOT NULL) AND (data\n> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> Rows Removed by Filter: 0\n> Buffers: shared hit=209871\n> Planning Time: 1.787 ms\n> Execution Time: 810.634 ms\n>\n> You can see that the second run takes less than one second to run...\n> which is 43 seconds better than the first try, just by re-running the\n> query.\n> Other runs take maybe 1s, 3s, still a long time.\n>\n> How can I improve it to be consistently fast (is it possible to get to\n> several milliseconds?)?\n> What I don't really understand is why the nested loop has 3 loops\n> (three joined tables)?\n> And why does the first index scan indicate ~60k 
loops? And does it\n> really work? It doesn't seem to filter out any rows.\n>\n> Should I add an index only on (attribute_id, object_id)? And maybe\n> data_access_result?\n> Does it make sens to add it on a text column (results.data)?\n>\n> My tables:\n> https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n>\n> As you can see from the gist the foreign keys are indexed. Other\n> indices were added to speed up other queries.\n> Other relevant information (my tables have 3+ millions of rows, not\n> very big I think?), additional info with regards to size also included\n> below.\n> This query has poor performance on two PCs (both running off of HDDs)\n> so I think it has more to do with my indices and query than Postgres\n> config & hardware, will post those if necessary.\n>\n>\n> Size info:\n> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n> relname IN ('results', 'scheduler_operation_executions',\n> 'scheduler_task_executions');\n> -[ RECORD 1 ]--+-------------------------------\n> relname | results\n> relpages | 65922\n> reltuples | 3.17104e+06\n> relallvisible | 65922\n> relkind | r\n> relnatts | 9\n> relhassubclass | f\n> reloptions |\n> pg_table_size | 588791808\n> -[ RECORD 2 ]--+-------------------------------\n> relname | scheduler_operation_executions\n> relpages | 50116\n> reltuples | 5.95916e+06\n> relallvisible | 50116\n> relkind | r\n> relnatts | 8\n> relhassubclass | f\n> reloptions |\n> pg_table_size | 410697728\n> -[ RECORD 3 ]--+-------------------------------\n> relname | scheduler_task_executions\n> relpages | 40996\n> reltuples | 3.966e+06\n> relallvisible | 40996\n> relkind | r\n> relnatts | 12\n> relhassubclass | f\n> reloptions |\n> pg_table_size | 335970304\n>\n> Thanks for your time!\n>\n> --\n> Barbu Paul - Gheorghe\n>\nCan you create an index on scheduler_task_executions.device_id and run \nit again?\n\n\n",
"msg_date": "Tue, 20 Aug 2019 11:58:30 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
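The suggestion above, spelled out; the index name matches the one that appears in the follow-up plan, and CONCURRENTLY avoids blocking writes while it builds:

```sql
CREATE INDEX CONCURRENTLY scheduler_task_executions_device_id_idx
    ON scheduler_task_executions (device_id);
```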
{
"msg_contents": "Yes, adding another index might help reduce the number of rows filtered \n--> Rows Removed by Filter: 1308337\n\nAlso, make sure you run vacuum analyze on this query.\n\nRegards,\nMichael Vitale\n\nLuís Roberto Weck wrote on 8/20/2019 10:58 AM:\n> Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n>> Hello,\n>> I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n>> 64-bit\" and I have a query that runs several times per user action\n>> (9-10 times).\n>> The query takes a long time to execute, specially at first, due to\n>> cold caches I think, but the performance varies greatly during a run\n>> of the application (while applying the said action by the user several\n>> times).\n>>\n>> My tables are only getting bigger with time, not much DELETEs and even\n>> less UPDATEs as far as I can tell.\n>>\n>> Problematic query:\n>>\n>> EXPLAIN (ANALYZE,BUFFERS)\n>> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n>> results.data FROM results\n>> JOIN scheduler_operation_executions ON\n>> scheduler_operation_executions.id = results.operation_execution_id\n>> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n>> scheduler_operation_executions.task_execution_id\n>> WHERE scheduler_task_executions.device_id = 97\n>> AND results.data <> '<NullData/>'\n>> AND results.data IS NOT NULL\n>> AND results.object_id = 1955\n>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n>> AND results.data_access_result = 'SUCCESS'\n>> ORDER BY results.attribute_id, results.timestamp DESC\n>> LIMIT 2 -- limit by the length of the attributes list\n>>\n>> In words: I want the latest (ORDER BY results.timestamp DESC) results\n>> of a device (scheduler_task_executions.device_id = 97 - hence the\n>> joins results -> scheduler_operation_executions ->\n>> scheduler_task_executions)\n>> for a given object and attributes with some additional constraints on\n>> the data column. 
But I only want the latest attributes for which we\n>> have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n>>\n>> First run: https://explain.depesz.com/s/qh4C\n>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n>> time=44068.166..44086.970 rows=2 loops=1)\n>> Buffers: shared hit=215928 read=85139\n>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n>> time=44068.164..44069.301 rows=2 loops=1)\n>> Buffers: shared hit=215928 read=85139\n>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n>> (actual time=44068.161..44068.464 rows=2052 loops=1)\n>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n>> Sort Method: quicksort Memory: 641kB\n>> Buffers: shared hit=215928 read=85139\n>> -> Gather (cost=62853.04..157098.57 rows=3162\n>> width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=215928 read=85139\n>> -> Nested Loop (cost=61853.04..155782.37\n>> rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n>> loops=3)\n>> Buffers: shared hit=215928 read=85139\n>> -> Parallel Hash Join\n>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n>> time=23271.275..40018.451 rows=19756 loops=3)\n>> Hash Cond:\n>> (scheduler_operation_executions.task_execution_id =\n>> scheduler_task_executions.id)\n>> Buffers: shared hit=6057 read=85139\n>> -> Parallel Seq Scan on\n>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n>> width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n>> Buffers: shared hit=2996 \n>> read=47120\n>> -> Parallel Hash\n>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n>> time=23253.337..23253.337 rows=13558 loops=3)\n>> Buckets: 65536 Batches: 1\n>> Memory Usage: 2144kB\n>> Buffers: shared hit=2977 \n>> read=38019\n>> -> Parallel Seq Scan on\n>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n>> (actual time=25.939..23222.174 rows=13558 loops=3)\n>> Filter: (device_id = 97)\n>> Rows Removed by Filter: \n>> 1308337\n>> Buffers: shared hit=2977 \n>> read=38019\n>> -> Index Scan using\n>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n>> width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n>> Index Cond: (operation_execution_id =\n>> scheduler_operation_executions.id)\n>> Filter: ((data IS NOT NULL) AND (data\n>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n>> Rows Removed by Filter: 0\n>> Buffers: shared hit=209871\n>> Planning Time: 29.295 ms\n>> Execution Time: 44087.365 ms\n>>\n>>\n>> Second run: https://explain.depesz.com/s/uy9f\n>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n>> time=789.363..810.440 rows=2 loops=1)\n>> Buffers: shared hit=216312 read=84755\n>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n>> time=789.361..789.535 rows=2 loops=1)\n>> Buffers: shared hit=216312 read=84755\n>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n>> (actual time=789.361..789.418 rows=2052 loops=1)\n>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n>> Sort Method: quicksort Memory: 641kB\n>> Buffers: shared hit=216312 read=84755\n>> -> Gather (cost=62853.04..157098.57 rows=3162\n>> width=54) (actual time=290.356..808.454 rows=4102 loops=1)\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=216312 read=84755\n>> -> Nested Loop (cost=61853.04..155782.37\n>> rows=1318 width=54) (actual 
time=238.313..735.472 rows=1367 loops=3)\n>> Buffers: shared hit=216312 read=84755\n>> -> Parallel Hash Join\n>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n>> time=237.966..677.975 rows=19756 loops=3)\n>> Hash Cond:\n>> (scheduler_operation_executions.task_execution_id =\n>> scheduler_task_executions.id)\n>> Buffers: shared hit=6441 read=84755\n>> -> Parallel Seq Scan on\n>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n>> width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n>> Buffers: shared hit=3188 \n>> read=46928\n>> -> Parallel Hash\n>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n>> time=236.631..236.631 rows=13558 loops=3)\n>> Buckets: 65536 Batches: 1\n>> Memory Usage: 2144kB\n>> Buffers: shared hit=3169 \n>> read=37827\n>> -> Parallel Seq Scan on\n>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n>> (actual time=0.132..232.758 rows=13558 loops=3)\n>> Filter: (device_id = 97)\n>> Rows Removed by Filter: \n>> 1308337\n>> Buffers: shared hit=3169 \n>> read=37827\n>> -> Index Scan using\n>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n>> Index Cond: (operation_execution_id =\n>> scheduler_operation_executions.id)\n>> Filter: ((data IS NOT NULL) AND (data\n>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n>> Rows Removed by Filter: 0\n>> Buffers: shared hit=209871\n>> Planning Time: 1.787 ms\n>> Execution Time: 810.634 ms\n>>\n>> You can see that the second run takes less than one second to run...\n>> which is 43 seconds better than the first try, just by re-running the\n>> query.\n>> Other runs take maybe 1s, 3s, still a long time.\n>>\n>> How can I improve it to be consistently fast (is it possible to get to\n>> several milliseconds?)?\n>> What I don't really understand is why the nested loop has 3 loops\n>> (three joined tables)?\n>> And why does the first index scan indicate ~60k loops? And does it\n>> really work? It doesn't seem to filter out any rows.\n>>\n>> Should I add an index only on (attribute_id, object_id)? And maybe\n>> data_access_result?\n>> Does it make sens to add it on a text column (results.data)?\n>>\n>> My tables:\n>> https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt \n>>\n>>\n>> As you can see from the gist the foreign keys are indexed. 
Other\n>> indices were added to speed up other queries.\n>> Other relevant information (my tables have 3+ millions of rows, not\n>> very big I think?), additional info with regards to size also included\n>> below.\n>> This query has poor performance on two PCs (both running off of HDDs)\n>> so I think it has more to do with my indices and query than Postgres\n>> config & hardware, will post those if necessary.\n>>\n>>\n>> Size info:\n>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n>> relname IN ('results', 'scheduler_operation_executions',\n>> 'scheduler_task_executions');\n>> -[ RECORD 1 ]--+-------------------------------\n>> relname | results\n>> relpages | 65922\n>> reltuples | 3.17104e+06\n>> relallvisible | 65922\n>> relkind | r\n>> relnatts | 9\n>> relhassubclass | f\n>> reloptions |\n>> pg_table_size | 588791808\n>> -[ RECORD 2 ]--+-------------------------------\n>> relname | scheduler_operation_executions\n>> relpages | 50116\n>> reltuples | 5.95916e+06\n>> relallvisible | 50116\n>> relkind | r\n>> relnatts | 8\n>> relhassubclass | f\n>> reloptions |\n>> pg_table_size | 410697728\n>> -[ RECORD 3 ]--+-------------------------------\n>> relname | scheduler_task_executions\n>> relpages | 40996\n>> reltuples | 3.966e+06\n>> relallvisible | 40996\n>> relkind | r\n>> relnatts | 12\n>> relhassubclass | f\n>> reloptions |\n>> pg_table_size | 335970304\n>>\n>> Thanks for your time!\n>>\n>> -- \n>> Barbu Paul - Gheorghe\n>>\n> Can you create an index on scheduler_task_executions.device_id and run \n> it again?\n>\n>\n\n\n\n",
"msg_date": "Tue, 20 Aug 2019 11:07:11 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
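A minimal form of the maintenance step suggested above, limited to the three tables involved in the query:

```sql
VACUUM ANALYZE results;
VACUUM ANALYZE scheduler_operation_executions;
VACUUM ANALYZE scheduler_task_executions;
```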
{
"msg_contents": "I wonder how I missed that... probabily because of the \"WHERE\" clause\nin what I already had.\n\nI indexed by scheduler_task_executions.device_id and the new plan is\nas follows: https://explain.depesz.com/s/cQRq\n\nCan it be further improved?\n\nLimit (cost=138511.45..138519.36 rows=2 width=54) (actual\ntime=598.703..618.524 rows=2 loops=1)\n Buffers: shared hit=242389 read=44098\n -> Unique (cost=138511.45..138527.26 rows=4 width=54) (actual\ntime=598.701..598.878 rows=2 loops=1)\n Buffers: shared hit=242389 read=44098\n -> Sort (cost=138511.45..138519.36 rows=3162 width=54)\n(actual time=598.699..598.767 rows=2052 loops=1)\n Sort Key: results.attribute_id, results.\"timestamp\" DESC\n Sort Method: quicksort Memory: 641kB\n Buffers: shared hit=242389 read=44098\n -> Gather (cost=44082.11..138327.64 rows=3162\nwidth=54) (actual time=117.548..616.456 rows=4102 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=242389 read=44098\n -> Nested Loop (cost=43082.11..137011.44\nrows=1318 width=54) (actual time=47.436..525.664 rows=1367 loops=3)\n Buffers: shared hit=242389 read=44098\n -> Parallel Hash Join\n(cost=43081.68..124545.34 rows=24085 width=4) (actual\ntime=33.099..469.958 rows=19756 loops=3)\n Hash Cond:\n(scheduler_operation_executions.task_execution_id =\nscheduler_task_executions.id)\n Buffers: shared hit=32518 read=44098\n -> Parallel Seq Scan on\nscheduler_operation_executions (cost=0.00..74945.82 rows=2482982\nwidth=8) (actual time=8.493..245.190 rows=1986887 loops=3)\n Buffers: shared hit=6018 read=44098\n -> Parallel Hash\n(cost=42881.33..42881.33 rows=16028 width=4) (actual\ntime=23.272..23.272 rows=13558 loops=3)\n Buckets: 65536 Batches: 1\nMemory Usage: 2112kB\n Buffers: shared hit=26416\n -> Parallel Bitmap Heap Scan on\nscheduler_task_executions (cost=722.55..42881.33 rows=16028 width=4)\n(actual time=27.290..61.563 rows=40675 loops=1)\n Recheck Cond: (device_id = 97)\n Heap Blocks: exact=26302\n Buffers: shared hit=26416\n -> Bitmap Index Scan on\nscheduler_task_executions_device_id_idx (cost=0.00..712.93 rows=38467\nwidth=0) (actual time=17.087..17.087 rows=40675 loops=1)\n Index Cond: (device_id = 97)\n Buffers: shared hit=114\n -> Index Scan using\nindex_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\nwidth=58) (actual time=0.003..0.003 rows=0 loops=59269)\n Index Cond: (operation_execution_id =\nscheduler_operation_executions.id)\n Filter: ((data IS NOT NULL) AND (data\n<> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\nAND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n Rows Removed by Filter: 0\n Buffers: shared hit=209871\nPlanning Time: 2.327 ms\nExecution Time: 618.935 ms\n\nOn Tue, Aug 20, 2019 at 5:54 PM Luís Roberto Weck\n<[email protected]> wrote:\n>\n> Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n> > Hello,\n> > I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n> > 64-bit\" and I have a query that runs several times per user action\n> > (9-10 times).\n> > The query takes a long time to execute, specially at first, due to\n> > cold caches I think, but the performance varies greatly during a run\n> > of the application (while applying the said action by the user several\n> > times).\n> >\n> > My tables are only getting bigger with time, not much DELETEs and even\n> > less UPDATEs as far as I can tell.\n> >\n> > Problematic query:\n> >\n> > EXPLAIN (ANALYZE,BUFFERS)\n> > SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n> > 
results.data FROM results\n> > JOIN scheduler_operation_executions ON\n> > scheduler_operation_executions.id = results.operation_execution_id\n> > JOIN scheduler_task_executions ON scheduler_task_executions.id =\n> > scheduler_operation_executions.task_execution_id\n> > WHERE scheduler_task_executions.device_id = 97\n> > AND results.data <> '<NullData/>'\n> > AND results.data IS NOT NULL\n> > AND results.object_id = 1955\n> > AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> > AND results.data_access_result = 'SUCCESS'\n> > ORDER BY results.attribute_id, results.timestamp DESC\n> > LIMIT 2 -- limit by the length of the attributes list\n> >\n> > In words: I want the latest (ORDER BY results.timestamp DESC) results\n> > of a device (scheduler_task_executions.device_id = 97 - hence the\n> > joins results -> scheduler_operation_executions ->\n> > scheduler_task_executions)\n> > for a given object and attributes with some additional constraints on\n> > the data column. But I only want the latest attributes for which we\n> > have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n> >\n> > First run: https://explain.depesz.com/s/qh4C\n> > Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> > time=44068.166..44086.970 rows=2 loops=1)\n> > Buffers: shared hit=215928 read=85139\n> > -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> > time=44068.164..44069.301 rows=2 loops=1)\n> > Buffers: shared hit=215928 read=85139\n> > -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> > (actual time=44068.161..44068.464 rows=2052 loops=1)\n> > Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> > Sort Method: quicksort Memory: 641kB\n> > Buffers: shared hit=215928 read=85139\n> > -> Gather (cost=62853.04..157098.57 rows=3162\n> > width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n> > Workers Planned: 2\n> > Workers Launched: 2\n> > Buffers: shared hit=215928 read=85139\n> > -> Nested Loop (cost=61853.04..155782.37\n> > rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n> > loops=3)\n> > Buffers: shared hit=215928 read=85139\n> > -> Parallel Hash Join\n> > (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> > time=23271.275..40018.451 rows=19756 loops=3)\n> > Hash Cond:\n> > (scheduler_operation_executions.task_execution_id =\n> > scheduler_task_executions.id)\n> > Buffers: shared hit=6057 read=85139\n> > -> Parallel Seq Scan on\n> > scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> > width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n> > Buffers: shared hit=2996 read=47120\n> > -> Parallel Hash\n> > (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> > time=23253.337..23253.337 rows=13558 loops=3)\n> > Buckets: 65536 Batches: 1\n> > Memory Usage: 2144kB\n> > Buffers: shared hit=2977 read=38019\n> > -> Parallel Seq Scan on\n> > scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> > (actual time=25.939..23222.174 rows=13558 loops=3)\n> > Filter: (device_id = 97)\n> > Rows Removed by Filter: 1308337\n> > Buffers: shared hit=2977 read=38019\n> > -> Index Scan using\n> > index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> > width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n> > Index Cond: (operation_execution_id =\n> > scheduler_operation_executions.id)\n> > Filter: ((data IS NOT NULL) AND (data\n> > <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> > AND (object_id = 1955) AND (data_access_result = 
'SUCCESS'::text))\n> > Rows Removed by Filter: 0\n> > Buffers: shared hit=209871\n> > Planning Time: 29.295 ms\n> > Execution Time: 44087.365 ms\n> >\n> >\n> > Second run: https://explain.depesz.com/s/uy9f\n> > Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> > time=789.363..810.440 rows=2 loops=1)\n> > Buffers: shared hit=216312 read=84755\n> > -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> > time=789.361..789.535 rows=2 loops=1)\n> > Buffers: shared hit=216312 read=84755\n> > -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> > (actual time=789.361..789.418 rows=2052 loops=1)\n> > Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> > Sort Method: quicksort Memory: 641kB\n> > Buffers: shared hit=216312 read=84755\n> > -> Gather (cost=62853.04..157098.57 rows=3162\n> > width=54) (actual time=290.356..808.454 rows=4102 loops=1)\n> > Workers Planned: 2\n> > Workers Launched: 2\n> > Buffers: shared hit=216312 read=84755\n> > -> Nested Loop (cost=61853.04..155782.37\n> > rows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n> > Buffers: shared hit=216312 read=84755\n> > -> Parallel Hash Join\n> > (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> > time=237.966..677.975 rows=19756 loops=3)\n> > Hash Cond:\n> > (scheduler_operation_executions.task_execution_id =\n> > scheduler_task_executions.id)\n> > Buffers: shared hit=6441 read=84755\n> > -> Parallel Seq Scan on\n> > scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> > width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n> > Buffers: shared hit=3188 read=46928\n> > -> Parallel Hash\n> > (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> > time=236.631..236.631 rows=13558 loops=3)\n> > Buckets: 65536 Batches: 1\n> > Memory Usage: 2144kB\n> > Buffers: shared hit=3169 read=37827\n> > -> Parallel Seq Scan on\n> > scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> > (actual time=0.132..232.758 rows=13558 loops=3)\n> > Filter: (device_id = 97)\n> > Rows Removed by Filter: 1308337\n> > Buffers: shared hit=3169 read=37827\n> > -> Index Scan using\n> > index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> > width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> > Index Cond: (operation_execution_id =\n> > scheduler_operation_executions.id)\n> > Filter: ((data IS NOT NULL) AND (data\n> > <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> > AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> > Rows Removed by Filter: 0\n> > Buffers: shared hit=209871\n> > Planning Time: 1.787 ms\n> > Execution Time: 810.634 ms\n> >\n> > You can see that the second run takes less than one second to run...\n> > which is 43 seconds better than the first try, just by re-running the\n> > query.\n> > Other runs take maybe 1s, 3s, still a long time.\n> >\n> > How can I improve it to be consistently fast (is it possible to get to\n> > several milliseconds?)?\n> > What I don't really understand is why the nested loop has 3 loops\n> > (three joined tables)?\n> > And why does the first index scan indicate ~60k loops? And does it\n> > really work? It doesn't seem to filter out any rows.\n> >\n> > Should I add an index only on (attribute_id, object_id)? 
And maybe\n> > data_access_result?\n> > Does it make sens to add it on a text column (results.data)?\n> >\n> > My tables:\n> > https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n> >\n> > As you can see from the gist the foreign keys are indexed. Other\n> > indices were added to speed up other queries.\n> > Other relevant information (my tables have 3+ millions of rows, not\n> > very big I think?), additional info with regards to size also included\n> > below.\n> > This query has poor performance on two PCs (both running off of HDDs)\n> > so I think it has more to do with my indices and query than Postgres\n> > config & hardware, will post those if necessary.\n> >\n> >\n> > Size info:\n> > SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n> > relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n> > relname IN ('results', 'scheduler_operation_executions',\n> > 'scheduler_task_executions');\n> > -[ RECORD 1 ]--+-------------------------------\n> > relname | results\n> > relpages | 65922\n> > reltuples | 3.17104e+06\n> > relallvisible | 65922\n> > relkind | r\n> > relnatts | 9\n> > relhassubclass | f\n> > reloptions |\n> > pg_table_size | 588791808\n> > -[ RECORD 2 ]--+-------------------------------\n> > relname | scheduler_operation_executions\n> > relpages | 50116\n> > reltuples | 5.95916e+06\n> > relallvisible | 50116\n> > relkind | r\n> > relnatts | 8\n> > relhassubclass | f\n> > reloptions |\n> > pg_table_size | 410697728\n> > -[ RECORD 3 ]--+-------------------------------\n> > relname | scheduler_task_executions\n> > relpages | 40996\n> > reltuples | 3.966e+06\n> > relallvisible | 40996\n> > relkind | r\n> > relnatts | 12\n> > relhassubclass | f\n> > reloptions |\n> > pg_table_size | 335970304\n> >\n> > Thanks for your time!\n> >\n> > --\n> > Barbu Paul - Gheorghe\n> >\n> Can you create an index on scheduler_task_executions.device_id and run\n> it again?\n\n\n\n-- \n\nBarbu Paul - Gheorghe\n\n\n",
"msg_date": "Wed, 21 Aug 2019 10:30:36 +0300",
"msg_from": "Barbu Paul - Gheorghe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
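One further idea for the "can it be further improved?" question above, given that most of the remaining time is the hash join that scans scheduler_operation_executions: fetch the newest qualifying row per attribute with a LATERAL subquery, so each attribute can stop at its first match. This is an untested sketch; it assumes something like the hypothetical results (object_id, attribute_id, "timestamp" DESC) index mentioned earlier and can still be slow if many recent results rows do not belong to the requested device:

```sql
SELECT r.attribute_id, r."timestamp", r.data
FROM unnest(ARRAY[4, 5]) AS a(attribute_id)          -- the attribute list
CROSS JOIN LATERAL (
    SELECT results.attribute_id, results."timestamp", results.data
    FROM results
    JOIN scheduler_operation_executions soe
      ON soe.id = results.operation_execution_id
    JOIN scheduler_task_executions ste
      ON ste.id = soe.task_execution_id
    WHERE ste.device_id = 97
      AND results.object_id = 1955
      AND results.attribute_id = a.attribute_id
      AND results.data_access_result = 'SUCCESS'
      AND results.data IS NOT NULL
      AND results.data <> '<NullData/>'
    ORDER BY results."timestamp" DESC
    LIMIT 1                                           -- newest row per attribute
) AS r;
```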
{
"msg_contents": "Em 21/08/2019 04:30, Barbu Paul - Gheorghe escreveu:\n> I wonder how I missed that... probabily because of the \"WHERE\" clause\n> in what I already had.\n>\n> I indexed by scheduler_task_executions.device_id and the new plan is\n> as follows: https://explain.depesz.com/s/cQRq\n>\n> Can it be further improved?\n>\n> Limit (cost=138511.45..138519.36 rows=2 width=54) (actual\n> time=598.703..618.524 rows=2 loops=1)\n> Buffers: shared hit=242389 read=44098\n> -> Unique (cost=138511.45..138527.26 rows=4 width=54) (actual\n> time=598.701..598.878 rows=2 loops=1)\n> Buffers: shared hit=242389 read=44098\n> -> Sort (cost=138511.45..138519.36 rows=3162 width=54)\n> (actual time=598.699..598.767 rows=2052 loops=1)\n> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> Sort Method: quicksort Memory: 641kB\n> Buffers: shared hit=242389 read=44098\n> -> Gather (cost=44082.11..138327.64 rows=3162\n> width=54) (actual time=117.548..616.456 rows=4102 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=242389 read=44098\n> -> Nested Loop (cost=43082.11..137011.44\n> rows=1318 width=54) (actual time=47.436..525.664 rows=1367 loops=3)\n> Buffers: shared hit=242389 read=44098\n> -> Parallel Hash Join\n> (cost=43081.68..124545.34 rows=24085 width=4) (actual\n> time=33.099..469.958 rows=19756 loops=3)\n> Hash Cond:\n> (scheduler_operation_executions.task_execution_id =\n> scheduler_task_executions.id)\n> Buffers: shared hit=32518 read=44098\n> -> Parallel Seq Scan on\n> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> width=8) (actual time=8.493..245.190 rows=1986887 loops=3)\n> Buffers: shared hit=6018 read=44098\n> -> Parallel Hash\n> (cost=42881.33..42881.33 rows=16028 width=4) (actual\n> time=23.272..23.272 rows=13558 loops=3)\n> Buckets: 65536 Batches: 1\n> Memory Usage: 2112kB\n> Buffers: shared hit=26416\n> -> Parallel Bitmap Heap Scan on\n> scheduler_task_executions (cost=722.55..42881.33 rows=16028 width=4)\n> (actual time=27.290..61.563 rows=40675 loops=1)\n> Recheck Cond: (device_id = 97)\n> Heap Blocks: exact=26302\n> Buffers: shared hit=26416\n> -> Bitmap Index Scan on\n> scheduler_task_executions_device_id_idx (cost=0.00..712.93 rows=38467\n> width=0) (actual time=17.087..17.087 rows=40675 loops=1)\n> Index Cond: (device_id = 97)\n> Buffers: shared hit=114\n> -> Index Scan using\n> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> Index Cond: (operation_execution_id =\n> scheduler_operation_executions.id)\n> Filter: ((data IS NOT NULL) AND (data\n> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> Rows Removed by Filter: 0\n> Buffers: shared hit=209871\n> Planning Time: 2.327 ms\n> Execution Time: 618.935 ms\n>\n> On Tue, Aug 20, 2019 at 5:54 PM Luís Roberto Weck\n> <[email protected]> wrote:\n>> Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n>>> Hello,\n>>> I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n>>> 64-bit\" and I have a query that runs several times per user action\n>>> (9-10 times).\n>>> The query takes a long time to execute, specially at first, due to\n>>> cold caches I think, but the performance varies greatly during a run\n>>> of the application (while applying the said action by the user several\n>>> times).\n>>>\n>>> My tables are only getting bigger with time, not much DELETEs and even\n>>> less UPDATEs as far as 
I can tell.\n>>>\n>>> Problematic query:\n>>>\n>>> EXPLAIN (ANALYZE,BUFFERS)\n>>> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n>>> results.data FROM results\n>>> JOIN scheduler_operation_executions ON\n>>> scheduler_operation_executions.id = results.operation_execution_id\n>>> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n>>> scheduler_operation_executions.task_execution_id\n>>> WHERE scheduler_task_executions.device_id = 97\n>>> AND results.data <> '<NullData/>'\n>>> AND results.data IS NOT NULL\n>>> AND results.object_id = 1955\n>>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n>>> AND results.data_access_result = 'SUCCESS'\n>>> ORDER BY results.attribute_id, results.timestamp DESC\n>>> LIMIT 2 -- limit by the length of the attributes list\n>>>\n>>> In words: I want the latest (ORDER BY results.timestamp DESC) results\n>>> of a device (scheduler_task_executions.device_id = 97 - hence the\n>>> joins results -> scheduler_operation_executions ->\n>>> scheduler_task_executions)\n>>> for a given object and attributes with some additional constraints on\n>>> the data column. But I only want the latest attributes for which we\n>>> have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n>>>\n>>> First run: https://explain.depesz.com/s/qh4C\n>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n>>> time=44068.166..44086.970 rows=2 loops=1)\n>>> Buffers: shared hit=215928 read=85139\n>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n>>> time=44068.164..44069.301 rows=2 loops=1)\n>>> Buffers: shared hit=215928 read=85139\n>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n>>> (actual time=44068.161..44068.464 rows=2052 loops=1)\n>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n>>> Sort Method: quicksort Memory: 641kB\n>>> Buffers: shared hit=215928 read=85139\n>>> -> Gather (cost=62853.04..157098.57 rows=3162\n>>> width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=215928 read=85139\n>>> -> Nested Loop (cost=61853.04..155782.37\n>>> rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n>>> loops=3)\n>>> Buffers: shared hit=215928 read=85139\n>>> -> Parallel Hash Join\n>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n>>> time=23271.275..40018.451 rows=19756 loops=3)\n>>> Hash Cond:\n>>> (scheduler_operation_executions.task_execution_id =\n>>> scheduler_task_executions.id)\n>>> Buffers: shared hit=6057 read=85139\n>>> -> Parallel Seq Scan on\n>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n>>> width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n>>> Buffers: shared hit=2996 read=47120\n>>> -> Parallel Hash\n>>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n>>> time=23253.337..23253.337 rows=13558 loops=3)\n>>> Buckets: 65536 Batches: 1\n>>> Memory Usage: 2144kB\n>>> Buffers: shared hit=2977 read=38019\n>>> -> Parallel Seq Scan on\n>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n>>> (actual time=25.939..23222.174 rows=13558 loops=3)\n>>> Filter: (device_id = 97)\n>>> Rows Removed by Filter: 1308337\n>>> Buffers: shared hit=2977 read=38019\n>>> -> Index Scan using\n>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n>>> width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n>>> Index Cond: (operation_execution_id =\n>>> scheduler_operation_executions.id)\n>>> Filter: ((data IS NOT NULL) AND 
(data\n>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n>>> Rows Removed by Filter: 0\n>>> Buffers: shared hit=209871\n>>> Planning Time: 29.295 ms\n>>> Execution Time: 44087.365 ms\n>>>\n>>>\n>>> Second run: https://explain.depesz.com/s/uy9f\n>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n>>> time=789.363..810.440 rows=2 loops=1)\n>>> Buffers: shared hit=216312 read=84755\n>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n>>> time=789.361..789.535 rows=2 loops=1)\n>>> Buffers: shared hit=216312 read=84755\n>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n>>> (actual time=789.361..789.418 rows=2052 loops=1)\n>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n>>> Sort Method: quicksort Memory: 641kB\n>>> Buffers: shared hit=216312 read=84755\n>>> -> Gather (cost=62853.04..157098.57 rows=3162\n>>> width=54) (actual time=290.356..808.454 rows=4102 loops=1)\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=216312 read=84755\n>>> -> Nested Loop (cost=61853.04..155782.37\n>>> rows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n>>> Buffers: shared hit=216312 read=84755\n>>> -> Parallel Hash Join\n>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n>>> time=237.966..677.975 rows=19756 loops=3)\n>>> Hash Cond:\n>>> (scheduler_operation_executions.task_execution_id =\n>>> scheduler_task_executions.id)\n>>> Buffers: shared hit=6441 read=84755\n>>> -> Parallel Seq Scan on\n>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n>>> width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n>>> Buffers: shared hit=3188 read=46928\n>>> -> Parallel Hash\n>>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n>>> time=236.631..236.631 rows=13558 loops=3)\n>>> Buckets: 65536 Batches: 1\n>>> Memory Usage: 2144kB\n>>> Buffers: shared hit=3169 read=37827\n>>> -> Parallel Seq Scan on\n>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n>>> (actual time=0.132..232.758 rows=13558 loops=3)\n>>> Filter: (device_id = 97)\n>>> Rows Removed by Filter: 1308337\n>>> Buffers: shared hit=3169 read=37827\n>>> -> Index Scan using\n>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n>>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n>>> Index Cond: (operation_execution_id =\n>>> scheduler_operation_executions.id)\n>>> Filter: ((data IS NOT NULL) AND (data\n>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n>>> Rows Removed by Filter: 0\n>>> Buffers: shared hit=209871\n>>> Planning Time: 1.787 ms\n>>> Execution Time: 810.634 ms\n>>>\n>>> You can see that the second run takes less than one second to run...\n>>> which is 43 seconds better than the first try, just by re-running the\n>>> query.\n>>> Other runs take maybe 1s, 3s, still a long time.\n>>>\n>>> How can I improve it to be consistently fast (is it possible to get to\n>>> several milliseconds?)?\n>>> What I don't really understand is why the nested loop has 3 loops\n>>> (three joined tables)?\n>>> And why does the first index scan indicate ~60k loops? And does it\n>>> really work? It doesn't seem to filter out any rows.\n>>>\n>>> Should I add an index only on (attribute_id, object_id)? 
And maybe\n>>> data_access_result?\n>>> Does it make sens to add it on a text column (results.data)?\n>>>\n>>> My tables:\n>>> https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n>>>\n>>> As you can see from the gist the foreign keys are indexed. Other\n>>> indices were added to speed up other queries.\n>>> Other relevant information (my tables have 3+ millions of rows, not\n>>> very big I think?), additional info with regards to size also included\n>>> below.\n>>> This query has poor performance on two PCs (both running off of HDDs)\n>>> so I think it has more to do with my indices and query than Postgres\n>>> config & hardware, will post those if necessary.\n>>>\n>>>\n>>> Size info:\n>>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n>>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n>>> relname IN ('results', 'scheduler_operation_executions',\n>>> 'scheduler_task_executions');\n>>> -[ RECORD 1 ]--+-------------------------------\n>>> relname | results\n>>> relpages | 65922\n>>> reltuples | 3.17104e+06\n>>> relallvisible | 65922\n>>> relkind | r\n>>> relnatts | 9\n>>> relhassubclass | f\n>>> reloptions |\n>>> pg_table_size | 588791808\n>>> -[ RECORD 2 ]--+-------------------------------\n>>> relname | scheduler_operation_executions\n>>> relpages | 50116\n>>> reltuples | 5.95916e+06\n>>> relallvisible | 50116\n>>> relkind | r\n>>> relnatts | 8\n>>> relhassubclass | f\n>>> reloptions |\n>>> pg_table_size | 410697728\n>>> -[ RECORD 3 ]--+-------------------------------\n>>> relname | scheduler_task_executions\n>>> relpages | 40996\n>>> reltuples | 3.966e+06\n>>> relallvisible | 40996\n>>> relkind | r\n>>> relnatts | 12\n>>> relhassubclass | f\n>>> reloptions |\n>>> pg_table_size | 335970304\n>>>\n>>> Thanks for your time!\n>>>\n>>> --\n>>> Barbu Paul - Gheorghe\n>>>\n>> Can you create an index on scheduler_task_executions.device_id and run\n>> it again?\nCan you try this query, please? Although I'm not really sure it'll give \nyou the same results.\n\n SELECT DISTINCT ON (results.attribute_id)\n results.timestamp,\n results.data\n FROM results\n WHERE results.data <> '<NullData/>'\n AND results.data IS NOT NULL\n AND results.object_id = 1955\n AND results.attribute_id IN (4, 5) -- possibly a longer list here\n AND results.data_access_result = 'SUCCESS'\n AND EXISTS (SELECT 1\n FROM scheduler_operation_executions\n JOIN scheduler_task_executions ON \nscheduler_task_executions.id = \nscheduler_operation_executions.task_execution_id\n WHERE scheduler_operation_executions.id = \nresults.operation_execution_id\n AND scheduler_task_executions.device_id = 97)\n ORDER BY results.attribute_id, results.timestamp DESC\n LIMIT 2 -- limit by the length of the attributes list\n\n\n",
"msg_date": "Wed, 21 Aug 2019 08:30:16 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
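The plan above relies on an index named scheduler_task_executions_device_id_idx, but the thread never shows its DDL. The statement below is only a sketch of what was presumably created — the index name is taken from the plan, everything else (plain btree, single column) is an assumption:

    -- Assumed DDL for the index visible in the plan above (name from the plan, rest assumed).
    CREATE INDEX scheduler_task_executions_device_id_idx
        ON scheduler_task_executions (device_id);

With it, the Parallel Seq Scan on scheduler_task_executions (which had discarded ~1.3M rows per worker) becomes a Bitmap Index Scan on device_id = 97; the remaining time in the 618 ms plan sits in the seq scan of scheduler_operation_executions and the ~59k index probes into results.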
{
"msg_contents": "That query, if I add the ORDER BY and LIMIT, returns the same results.\n\nThe problem is the fact that it behaves the same way regarding its\nspeed as the original query with the index you suggested.\nSometimes it takes 800ms, sometimes it takes 6s to run, how the hell\ncan I get it to behave the same every time?\nAfter I added the index you suggested, it was fine for a while, next\nmorning the run time exploded back to several seconds per query... and\nit oscillates.\n\nOn Wed, Aug 21, 2019 at 2:25 PM Luís Roberto Weck\n<[email protected]> wrote:\n>\n> Em 21/08/2019 04:30, Barbu Paul - Gheorghe escreveu:\n> > I wonder how I missed that... probabily because of the \"WHERE\" clause\n> > in what I already had.\n> >\n> > I indexed by scheduler_task_executions.device_id and the new plan is\n> > as follows: https://explain.depesz.com/s/cQRq\n> >\n> > Can it be further improved?\n> >\n> > Limit (cost=138511.45..138519.36 rows=2 width=54) (actual\n> > time=598.703..618.524 rows=2 loops=1)\n> > Buffers: shared hit=242389 read=44098\n> > -> Unique (cost=138511.45..138527.26 rows=4 width=54) (actual\n> > time=598.701..598.878 rows=2 loops=1)\n> > Buffers: shared hit=242389 read=44098\n> > -> Sort (cost=138511.45..138519.36 rows=3162 width=54)\n> > (actual time=598.699..598.767 rows=2052 loops=1)\n> > Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> > Sort Method: quicksort Memory: 641kB\n> > Buffers: shared hit=242389 read=44098\n> > -> Gather (cost=44082.11..138327.64 rows=3162\n> > width=54) (actual time=117.548..616.456 rows=4102 loops=1)\n> > Workers Planned: 2\n> > Workers Launched: 2\n> > Buffers: shared hit=242389 read=44098\n> > -> Nested Loop (cost=43082.11..137011.44\n> > rows=1318 width=54) (actual time=47.436..525.664 rows=1367 loops=3)\n> > Buffers: shared hit=242389 read=44098\n> > -> Parallel Hash Join\n> > (cost=43081.68..124545.34 rows=24085 width=4) (actual\n> > time=33.099..469.958 rows=19756 loops=3)\n> > Hash Cond:\n> > (scheduler_operation_executions.task_execution_id =\n> > scheduler_task_executions.id)\n> > Buffers: shared hit=32518 read=44098\n> > -> Parallel Seq Scan on\n> > scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> > width=8) (actual time=8.493..245.190 rows=1986887 loops=3)\n> > Buffers: shared hit=6018 read=44098\n> > -> Parallel Hash\n> > (cost=42881.33..42881.33 rows=16028 width=4) (actual\n> > time=23.272..23.272 rows=13558 loops=3)\n> > Buckets: 65536 Batches: 1\n> > Memory Usage: 2112kB\n> > Buffers: shared hit=26416\n> > -> Parallel Bitmap Heap Scan on\n> > scheduler_task_executions (cost=722.55..42881.33 rows=16028 width=4)\n> > (actual time=27.290..61.563 rows=40675 loops=1)\n> > Recheck Cond: (device_id = 97)\n> > Heap Blocks: exact=26302\n> > Buffers: shared hit=26416\n> > -> Bitmap Index Scan on\n> > scheduler_task_executions_device_id_idx (cost=0.00..712.93 rows=38467\n> > width=0) (actual time=17.087..17.087 rows=40675 loops=1)\n> > Index Cond: (device_id = 97)\n> > Buffers: shared hit=114\n> > -> Index Scan using\n> > index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> > width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> > Index Cond: (operation_execution_id =\n> > scheduler_operation_executions.id)\n> > Filter: ((data IS NOT NULL) AND (data\n> > <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> > AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> > Rows Removed by Filter: 0\n> > Buffers: shared hit=209871\n> > Planning Time: 2.327 
ms\n> > Execution Time: 618.935 ms\n> >\n> > On Tue, Aug 20, 2019 at 5:54 PM Luís Roberto Weck\n> > <[email protected]> wrote:\n> >> Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n> >>> Hello,\n> >>> I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n> >>> 64-bit\" and I have a query that runs several times per user action\n> >>> (9-10 times).\n> >>> The query takes a long time to execute, specially at first, due to\n> >>> cold caches I think, but the performance varies greatly during a run\n> >>> of the application (while applying the said action by the user several\n> >>> times).\n> >>>\n> >>> My tables are only getting bigger with time, not much DELETEs and even\n> >>> less UPDATEs as far as I can tell.\n> >>>\n> >>> Problematic query:\n> >>>\n> >>> EXPLAIN (ANALYZE,BUFFERS)\n> >>> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n> >>> results.data FROM results\n> >>> JOIN scheduler_operation_executions ON\n> >>> scheduler_operation_executions.id = results.operation_execution_id\n> >>> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n> >>> scheduler_operation_executions.task_execution_id\n> >>> WHERE scheduler_task_executions.device_id = 97\n> >>> AND results.data <> '<NullData/>'\n> >>> AND results.data IS NOT NULL\n> >>> AND results.object_id = 1955\n> >>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> >>> AND results.data_access_result = 'SUCCESS'\n> >>> ORDER BY results.attribute_id, results.timestamp DESC\n> >>> LIMIT 2 -- limit by the length of the attributes list\n> >>>\n> >>> In words: I want the latest (ORDER BY results.timestamp DESC) results\n> >>> of a device (scheduler_task_executions.device_id = 97 - hence the\n> >>> joins results -> scheduler_operation_executions ->\n> >>> scheduler_task_executions)\n> >>> for a given object and attributes with some additional constraints on\n> >>> the data column. 
But I only want the latest attributes for which we\n> >>> have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n> >>>\n> >>> First run: https://explain.depesz.com/s/qh4C\n> >>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> >>> time=44068.166..44086.970 rows=2 loops=1)\n> >>> Buffers: shared hit=215928 read=85139\n> >>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> >>> time=44068.164..44069.301 rows=2 loops=1)\n> >>> Buffers: shared hit=215928 read=85139\n> >>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> >>> (actual time=44068.161..44068.464 rows=2052 loops=1)\n> >>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> >>> Sort Method: quicksort Memory: 641kB\n> >>> Buffers: shared hit=215928 read=85139\n> >>> -> Gather (cost=62853.04..157098.57 rows=3162\n> >>> width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n> >>> Workers Planned: 2\n> >>> Workers Launched: 2\n> >>> Buffers: shared hit=215928 read=85139\n> >>> -> Nested Loop (cost=61853.04..155782.37\n> >>> rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n> >>> loops=3)\n> >>> Buffers: shared hit=215928 read=85139\n> >>> -> Parallel Hash Join\n> >>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> >>> time=23271.275..40018.451 rows=19756 loops=3)\n> >>> Hash Cond:\n> >>> (scheduler_operation_executions.task_execution_id =\n> >>> scheduler_task_executions.id)\n> >>> Buffers: shared hit=6057 read=85139\n> >>> -> Parallel Seq Scan on\n> >>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> >>> width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n> >>> Buffers: shared hit=2996 read=47120\n> >>> -> Parallel Hash\n> >>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> >>> time=23253.337..23253.337 rows=13558 loops=3)\n> >>> Buckets: 65536 Batches: 1\n> >>> Memory Usage: 2144kB\n> >>> Buffers: shared hit=2977 read=38019\n> >>> -> Parallel Seq Scan on\n> >>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> >>> (actual time=25.939..23222.174 rows=13558 loops=3)\n> >>> Filter: (device_id = 97)\n> >>> Rows Removed by Filter: 1308337\n> >>> Buffers: shared hit=2977 read=38019\n> >>> -> Index Scan using\n> >>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> >>> width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n> >>> Index Cond: (operation_execution_id =\n> >>> scheduler_operation_executions.id)\n> >>> Filter: ((data IS NOT NULL) AND (data\n> >>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> >>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> >>> Rows Removed by Filter: 0\n> >>> Buffers: shared hit=209871\n> >>> Planning Time: 29.295 ms\n> >>> Execution Time: 44087.365 ms\n> >>>\n> >>>\n> >>> Second run: https://explain.depesz.com/s/uy9f\n> >>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> >>> time=789.363..810.440 rows=2 loops=1)\n> >>> Buffers: shared hit=216312 read=84755\n> >>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> >>> time=789.361..789.535 rows=2 loops=1)\n> >>> Buffers: shared hit=216312 read=84755\n> >>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> >>> (actual time=789.361..789.418 rows=2052 loops=1)\n> >>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> >>> Sort Method: quicksort Memory: 641kB\n> >>> Buffers: shared hit=216312 read=84755\n> >>> -> Gather (cost=62853.04..157098.57 rows=3162\n> >>> width=54) (actual 
time=290.356..808.454 rows=4102 loops=1)\n> >>> Workers Planned: 2\n> >>> Workers Launched: 2\n> >>> Buffers: shared hit=216312 read=84755\n> >>> -> Nested Loop (cost=61853.04..155782.37\n> >>> rows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n> >>> Buffers: shared hit=216312 read=84755\n> >>> -> Parallel Hash Join\n> >>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> >>> time=237.966..677.975 rows=19756 loops=3)\n> >>> Hash Cond:\n> >>> (scheduler_operation_executions.task_execution_id =\n> >>> scheduler_task_executions.id)\n> >>> Buffers: shared hit=6441 read=84755\n> >>> -> Parallel Seq Scan on\n> >>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> >>> width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n> >>> Buffers: shared hit=3188 read=46928\n> >>> -> Parallel Hash\n> >>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> >>> time=236.631..236.631 rows=13558 loops=3)\n> >>> Buckets: 65536 Batches: 1\n> >>> Memory Usage: 2144kB\n> >>> Buffers: shared hit=3169 read=37827\n> >>> -> Parallel Seq Scan on\n> >>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> >>> (actual time=0.132..232.758 rows=13558 loops=3)\n> >>> Filter: (device_id = 97)\n> >>> Rows Removed by Filter: 1308337\n> >>> Buffers: shared hit=3169 read=37827\n> >>> -> Index Scan using\n> >>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> >>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> >>> Index Cond: (operation_execution_id =\n> >>> scheduler_operation_executions.id)\n> >>> Filter: ((data IS NOT NULL) AND (data\n> >>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> >>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> >>> Rows Removed by Filter: 0\n> >>> Buffers: shared hit=209871\n> >>> Planning Time: 1.787 ms\n> >>> Execution Time: 810.634 ms\n> >>>\n> >>> You can see that the second run takes less than one second to run...\n> >>> which is 43 seconds better than the first try, just by re-running the\n> >>> query.\n> >>> Other runs take maybe 1s, 3s, still a long time.\n> >>>\n> >>> How can I improve it to be consistently fast (is it possible to get to\n> >>> several milliseconds?)?\n> >>> What I don't really understand is why the nested loop has 3 loops\n> >>> (three joined tables)?\n> >>> And why does the first index scan indicate ~60k loops? And does it\n> >>> really work? It doesn't seem to filter out any rows.\n> >>>\n> >>> Should I add an index only on (attribute_id, object_id)? And maybe\n> >>> data_access_result?\n> >>> Does it make sens to add it on a text column (results.data)?\n> >>>\n> >>> My tables:\n> >>> https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n> >>>\n> >>> As you can see from the gist the foreign keys are indexed. 
Other\n> >>> indices were added to speed up other queries.\n> >>> Other relevant information (my tables have 3+ millions of rows, not\n> >>> very big I think?), additional info with regards to size also included\n> >>> below.\n> >>> This query has poor performance on two PCs (both running off of HDDs)\n> >>> so I think it has more to do with my indices and query than Postgres\n> >>> config & hardware, will post those if necessary.\n> >>>\n> >>>\n> >>> Size info:\n> >>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n> >>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n> >>> relname IN ('results', 'scheduler_operation_executions',\n> >>> 'scheduler_task_executions');\n> >>> -[ RECORD 1 ]--+-------------------------------\n> >>> relname | results\n> >>> relpages | 65922\n> >>> reltuples | 3.17104e+06\n> >>> relallvisible | 65922\n> >>> relkind | r\n> >>> relnatts | 9\n> >>> relhassubclass | f\n> >>> reloptions |\n> >>> pg_table_size | 588791808\n> >>> -[ RECORD 2 ]--+-------------------------------\n> >>> relname | scheduler_operation_executions\n> >>> relpages | 50116\n> >>> reltuples | 5.95916e+06\n> >>> relallvisible | 50116\n> >>> relkind | r\n> >>> relnatts | 8\n> >>> relhassubclass | f\n> >>> reloptions |\n> >>> pg_table_size | 410697728\n> >>> -[ RECORD 3 ]--+-------------------------------\n> >>> relname | scheduler_task_executions\n> >>> relpages | 40996\n> >>> reltuples | 3.966e+06\n> >>> relallvisible | 40996\n> >>> relkind | r\n> >>> relnatts | 12\n> >>> relhassubclass | f\n> >>> reloptions |\n> >>> pg_table_size | 335970304\n> >>>\n> >>> Thanks for your time!\n> >>>\n> >>> --\n> >>> Barbu Paul - Gheorghe\n> >>>\n> >> Can you create an index on scheduler_task_executions.device_id and run\n> >> it again?\n> Can you try this query, please? Although I'm not really sure it'll give\n> you the same results.\n>\n> SELECT DISTINCT ON (results.attribute_id)\n> results.timestamp,\n> results.data\n> FROM results\n> WHERE results.data <> '<NullData/>'\n> AND results.data IS NOT NULL\n> AND results.object_id = 1955\n> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> AND results.data_access_result = 'SUCCESS'\n> AND EXISTS (SELECT 1\n> FROM scheduler_operation_executions\n> JOIN scheduler_task_executions ON\n> scheduler_task_executions.id =\n> scheduler_operation_executions.task_execution_id\n> WHERE scheduler_operation_executions.id =\n> results.operation_execution_id\n> AND scheduler_task_executions.device_id = 97)\n> ORDER BY results.attribute_id, results.timestamp DESC\n> LIMIT 2 -- limit by the length of the attributes list\n\n\n\n-- \n\nBarbu Paul - Gheorghe\n\n\n",
"msg_date": "Thu, 22 Aug 2019 14:51:50 +0300",
"msg_from": "Barbu Paul - Gheorghe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
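The question quoted above — whether to index results on (attribute_id, object_id) and maybe data_access_result — is still open at this point in the thread. Purely as a hypothetical sketch (not something either participant proposed; the index name and column order are assumptions), an index matching the query's filter and sort could look like:

    -- Hypothetical: covers the WHERE clause and the ORDER BY of the problematic query.
    -- The partial predicate keeps the index small by storing only rows the query can return.
    CREATE INDEX results_object_attr_ts_partial_idx
        ON results (object_id, attribute_id, "timestamp" DESC)
        WHERE data_access_result = 'SUCCESS'
          AND data IS NOT NULL
          AND data <> '<NullData/>';

Whether the planner would actually use it depends on the join: in every plan quoted so far, results is reached through index_operation_execution_id_asc and the attribute/object filter is applied afterwards, so an index like this only pays off if the planner can start from results and semi-join to the executions tables instead.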
{
"msg_contents": "Em 22/08/2019 08:51, Barbu Paul - Gheorghe escreveu:\n> That query, if I add the ORDER BY and LIMIT, returns the same results.\n>\n> The problem is the fact that it behaves the same way regarding its\n> speed as the original query with the index you suggested.\n> Sometimes it takes 800ms, sometimes it takes 6s to run, how the hell\n> can I get it to behave the same every time?\n> After I added the index you suggested, it was fine for a while, next\n> morning the run time exploded back to several seconds per query... and\n> it oscillates.\n>\n> On Wed, Aug 21, 2019 at 2:25 PM Luís Roberto Weck\n> <[email protected]> wrote:\n>> Em 21/08/2019 04:30, Barbu Paul - Gheorghe escreveu:\n>>> I wonder how I missed that... probabily because of the \"WHERE\" clause\n>>> in what I already had.\n>>>\n>>> I indexed by scheduler_task_executions.device_id and the new plan is\n>>> as follows: https://explain.depesz.com/s/cQRq\n>>>\n>>> Can it be further improved?\n>>>\n>>> Limit (cost=138511.45..138519.36 rows=2 width=54) (actual\n>>> time=598.703..618.524 rows=2 loops=1)\n>>> Buffers: shared hit=242389 read=44098\n>>> -> Unique (cost=138511.45..138527.26 rows=4 width=54) (actual\n>>> time=598.701..598.878 rows=2 loops=1)\n>>> Buffers: shared hit=242389 read=44098\n>>> -> Sort (cost=138511.45..138519.36 rows=3162 width=54)\n>>> (actual time=598.699..598.767 rows=2052 loops=1)\n>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n>>> Sort Method: quicksort Memory: 641kB\n>>> Buffers: shared hit=242389 read=44098\n>>> -> Gather (cost=44082.11..138327.64 rows=3162\n>>> width=54) (actual time=117.548..616.456 rows=4102 loops=1)\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=242389 read=44098\n>>> -> Nested Loop (cost=43082.11..137011.44\n>>> rows=1318 width=54) (actual time=47.436..525.664 rows=1367 loops=3)\n>>> Buffers: shared hit=242389 read=44098\n>>> -> Parallel Hash Join\n>>> (cost=43081.68..124545.34 rows=24085 width=4) (actual\n>>> time=33.099..469.958 rows=19756 loops=3)\n>>> Hash Cond:\n>>> (scheduler_operation_executions.task_execution_id =\n>>> scheduler_task_executions.id)\n>>> Buffers: shared hit=32518 read=44098\n>>> -> Parallel Seq Scan on\n>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n>>> width=8) (actual time=8.493..245.190 rows=1986887 loops=3)\n>>> Buffers: shared hit=6018 read=44098\n>>> -> Parallel Hash\n>>> (cost=42881.33..42881.33 rows=16028 width=4) (actual\n>>> time=23.272..23.272 rows=13558 loops=3)\n>>> Buckets: 65536 Batches: 1\n>>> Memory Usage: 2112kB\n>>> Buffers: shared hit=26416\n>>> -> Parallel Bitmap Heap Scan on\n>>> scheduler_task_executions (cost=722.55..42881.33 rows=16028 width=4)\n>>> (actual time=27.290..61.563 rows=40675 loops=1)\n>>> Recheck Cond: (device_id = 97)\n>>> Heap Blocks: exact=26302\n>>> Buffers: shared hit=26416\n>>> -> Bitmap Index Scan on\n>>> scheduler_task_executions_device_id_idx (cost=0.00..712.93 rows=38467\n>>> width=0) (actual time=17.087..17.087 rows=40675 loops=1)\n>>> Index Cond: (device_id = 97)\n>>> Buffers: shared hit=114\n>>> -> Index Scan using\n>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n>>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n>>> Index Cond: (operation_execution_id =\n>>> scheduler_operation_executions.id)\n>>> Filter: ((data IS NOT NULL) AND (data\n>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n>>> Rows Removed 
by Filter: 0\n>>> Buffers: shared hit=209871\n>>> Planning Time: 2.327 ms\n>>> Execution Time: 618.935 ms\n>>>\n>>> On Tue, Aug 20, 2019 at 5:54 PM Luís Roberto Weck\n>>> <[email protected]> wrote:\n>>>> Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n>>>>> Hello,\n>>>>> I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n>>>>> 64-bit\" and I have a query that runs several times per user action\n>>>>> (9-10 times).\n>>>>> The query takes a long time to execute, specially at first, due to\n>>>>> cold caches I think, but the performance varies greatly during a run\n>>>>> of the application (while applying the said action by the user several\n>>>>> times).\n>>>>>\n>>>>> My tables are only getting bigger with time, not much DELETEs and even\n>>>>> less UPDATEs as far as I can tell.\n>>>>>\n>>>>> Problematic query:\n>>>>>\n>>>>> EXPLAIN (ANALYZE,BUFFERS)\n>>>>> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n>>>>> results.data FROM results\n>>>>> JOIN scheduler_operation_executions ON\n>>>>> scheduler_operation_executions.id = results.operation_execution_id\n>>>>> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n>>>>> scheduler_operation_executions.task_execution_id\n>>>>> WHERE scheduler_task_executions.device_id = 97\n>>>>> AND results.data <> '<NullData/>'\n>>>>> AND results.data IS NOT NULL\n>>>>> AND results.object_id = 1955\n>>>>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n>>>>> AND results.data_access_result = 'SUCCESS'\n>>>>> ORDER BY results.attribute_id, results.timestamp DESC\n>>>>> LIMIT 2 -- limit by the length of the attributes list\n>>>>>\n>>>>> In words: I want the latest (ORDER BY results.timestamp DESC) results\n>>>>> of a device (scheduler_task_executions.device_id = 97 - hence the\n>>>>> joins results -> scheduler_operation_executions ->\n>>>>> scheduler_task_executions)\n>>>>> for a given object and attributes with some additional constraints on\n>>>>> the data column. 
But I only want the latest attributes for which we\n>>>>> have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n>>>>>\n>>>>> First run: https://explain.depesz.com/s/qh4C\n>>>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n>>>>> time=44068.166..44086.970 rows=2 loops=1)\n>>>>> Buffers: shared hit=215928 read=85139\n>>>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n>>>>> time=44068.164..44069.301 rows=2 loops=1)\n>>>>> Buffers: shared hit=215928 read=85139\n>>>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n>>>>> (actual time=44068.161..44068.464 rows=2052 loops=1)\n>>>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n>>>>> Sort Method: quicksort Memory: 641kB\n>>>>> Buffers: shared hit=215928 read=85139\n>>>>> -> Gather (cost=62853.04..157098.57 rows=3162\n>>>>> width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n>>>>> Workers Planned: 2\n>>>>> Workers Launched: 2\n>>>>> Buffers: shared hit=215928 read=85139\n>>>>> -> Nested Loop (cost=61853.04..155782.37\n>>>>> rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n>>>>> loops=3)\n>>>>> Buffers: shared hit=215928 read=85139\n>>>>> -> Parallel Hash Join\n>>>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n>>>>> time=23271.275..40018.451 rows=19756 loops=3)\n>>>>> Hash Cond:\n>>>>> (scheduler_operation_executions.task_execution_id =\n>>>>> scheduler_task_executions.id)\n>>>>> Buffers: shared hit=6057 read=85139\n>>>>> -> Parallel Seq Scan on\n>>>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n>>>>> width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n>>>>> Buffers: shared hit=2996 read=47120\n>>>>> -> Parallel Hash\n>>>>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n>>>>> time=23253.337..23253.337 rows=13558 loops=3)\n>>>>> Buckets: 65536 Batches: 1\n>>>>> Memory Usage: 2144kB\n>>>>> Buffers: shared hit=2977 read=38019\n>>>>> -> Parallel Seq Scan on\n>>>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n>>>>> (actual time=25.939..23222.174 rows=13558 loops=3)\n>>>>> Filter: (device_id = 97)\n>>>>> Rows Removed by Filter: 1308337\n>>>>> Buffers: shared hit=2977 read=38019\n>>>>> -> Index Scan using\n>>>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n>>>>> width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n>>>>> Index Cond: (operation_execution_id =\n>>>>> scheduler_operation_executions.id)\n>>>>> Filter: ((data IS NOT NULL) AND (data\n>>>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n>>>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n>>>>> Rows Removed by Filter: 0\n>>>>> Buffers: shared hit=209871\n>>>>> Planning Time: 29.295 ms\n>>>>> Execution Time: 44087.365 ms\n>>>>>\n>>>>>\n>>>>> Second run: https://explain.depesz.com/s/uy9f\n>>>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n>>>>> time=789.363..810.440 rows=2 loops=1)\n>>>>> Buffers: shared hit=216312 read=84755\n>>>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n>>>>> time=789.361..789.535 rows=2 loops=1)\n>>>>> Buffers: shared hit=216312 read=84755\n>>>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n>>>>> (actual time=789.361..789.418 rows=2052 loops=1)\n>>>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n>>>>> Sort Method: quicksort Memory: 641kB\n>>>>> Buffers: shared hit=216312 read=84755\n>>>>> -> Gather (cost=62853.04..157098.57 rows=3162\n>>>>> width=54) (actual 
time=290.356..808.454 rows=4102 loops=1)\n>>>>> Workers Planned: 2\n>>>>> Workers Launched: 2\n>>>>> Buffers: shared hit=216312 read=84755\n>>>>> -> Nested Loop (cost=61853.04..155782.37\n>>>>> rows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n>>>>> Buffers: shared hit=216312 read=84755\n>>>>> -> Parallel Hash Join\n>>>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n>>>>> time=237.966..677.975 rows=19756 loops=3)\n>>>>> Hash Cond:\n>>>>> (scheduler_operation_executions.task_execution_id =\n>>>>> scheduler_task_executions.id)\n>>>>> Buffers: shared hit=6441 read=84755\n>>>>> -> Parallel Seq Scan on\n>>>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n>>>>> width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n>>>>> Buffers: shared hit=3188 read=46928\n>>>>> -> Parallel Hash\n>>>>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n>>>>> time=236.631..236.631 rows=13558 loops=3)\n>>>>> Buckets: 65536 Batches: 1\n>>>>> Memory Usage: 2144kB\n>>>>> Buffers: shared hit=3169 read=37827\n>>>>> -> Parallel Seq Scan on\n>>>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n>>>>> (actual time=0.132..232.758 rows=13558 loops=3)\n>>>>> Filter: (device_id = 97)\n>>>>> Rows Removed by Filter: 1308337\n>>>>> Buffers: shared hit=3169 read=37827\n>>>>> -> Index Scan using\n>>>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n>>>>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n>>>>> Index Cond: (operation_execution_id =\n>>>>> scheduler_operation_executions.id)\n>>>>> Filter: ((data IS NOT NULL) AND (data\n>>>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n>>>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n>>>>> Rows Removed by Filter: 0\n>>>>> Buffers: shared hit=209871\n>>>>> Planning Time: 1.787 ms\n>>>>> Execution Time: 810.634 ms\n>>>>>\n>>>>> You can see that the second run takes less than one second to run...\n>>>>> which is 43 seconds better than the first try, just by re-running the\n>>>>> query.\n>>>>> Other runs take maybe 1s, 3s, still a long time.\n>>>>>\n>>>>> How can I improve it to be consistently fast (is it possible to get to\n>>>>> several milliseconds?)?\n>>>>> What I don't really understand is why the nested loop has 3 loops\n>>>>> (three joined tables)?\n>>>>> And why does the first index scan indicate ~60k loops? And does it\n>>>>> really work? It doesn't seem to filter out any rows.\n>>>>>\n>>>>> Should I add an index only on (attribute_id, object_id)? And maybe\n>>>>> data_access_result?\n>>>>> Does it make sens to add it on a text column (results.data)?\n>>>>>\n>>>>> My tables:\n>>>>> https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n>>>>>\n>>>>> As you can see from the gist the foreign keys are indexed. 
Other\n>>>>> indices were added to speed up other queries.\n>>>>> Other relevant information (my tables have 3+ millions of rows, not\n>>>>> very big I think?), additional info with regards to size also included\n>>>>> below.\n>>>>> This query has poor performance on two PCs (both running off of HDDs)\n>>>>> so I think it has more to do with my indices and query than Postgres\n>>>>> config & hardware, will post those if necessary.\n>>>>>\n>>>>>\n>>>>> Size info:\n>>>>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n>>>>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n>>>>> relname IN ('results', 'scheduler_operation_executions',\n>>>>> 'scheduler_task_executions');\n>>>>> -[ RECORD 1 ]--+-------------------------------\n>>>>> relname | results\n>>>>> relpages | 65922\n>>>>> reltuples | 3.17104e+06\n>>>>> relallvisible | 65922\n>>>>> relkind | r\n>>>>> relnatts | 9\n>>>>> relhassubclass | f\n>>>>> reloptions |\n>>>>> pg_table_size | 588791808\n>>>>> -[ RECORD 2 ]--+-------------------------------\n>>>>> relname | scheduler_operation_executions\n>>>>> relpages | 50116\n>>>>> reltuples | 5.95916e+06\n>>>>> relallvisible | 50116\n>>>>> relkind | r\n>>>>> relnatts | 8\n>>>>> relhassubclass | f\n>>>>> reloptions |\n>>>>> pg_table_size | 410697728\n>>>>> -[ RECORD 3 ]--+-------------------------------\n>>>>> relname | scheduler_task_executions\n>>>>> relpages | 40996\n>>>>> reltuples | 3.966e+06\n>>>>> relallvisible | 40996\n>>>>> relkind | r\n>>>>> relnatts | 12\n>>>>> relhassubclass | f\n>>>>> reloptions |\n>>>>> pg_table_size | 335970304\n>>>>>\n>>>>> Thanks for your time!\n>>>>>\n>>>>> --\n>>>>> Barbu Paul - Gheorghe\n>>>>>\n>>>> Can you create an index on scheduler_task_executions.device_id and run\n>>>> it again?\n>> Can you try this query, please? Although I'm not really sure it'll give\n>> you the same results.\n>>\n>> SELECT DISTINCT ON (results.attribute_id)\n>> results.timestamp,\n>> results.data\n>> FROM results\n>> WHERE results.data <> '<NullData/>'\n>> AND results.data IS NOT NULL\n>> AND results.object_id = 1955\n>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n>> AND results.data_access_result = 'SUCCESS'\n>> AND EXISTS (SELECT 1\n>> FROM scheduler_operation_executions\n>> JOIN scheduler_task_executions ON\n>> scheduler_task_executions.id =\n>> scheduler_operation_executions.task_execution_id\n>> WHERE scheduler_operation_executions.id =\n>> results.operation_execution_id\n>> AND scheduler_task_executions.device_id = 97)\n>> ORDER BY results.attribute_id, results.timestamp DESC\n>> LIMIT 2 -- limit by the length of the attributes list\nCan you provide the EXPLAIN ANALYZE plan for the query I sent you?\n\n\n",
"msg_date": "Thu, 22 Aug 2019 09:10:22 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
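For reference, the statement behind the plans in the next message appears to be the EXISTS variant quoted above with the ORDER BY added back; the plans show no Limit node, so the LIMIT seems to have been left off for these runs. Reassembled from the quoted text (not copied from a later message), roughly:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT ON (results.attribute_id)
           results.timestamp,
           results.data
      FROM results
     WHERE results.data <> '<NullData/>'
       AND results.data IS NOT NULL
       AND results.object_id = 1955
       AND results.attribute_id IN (4, 5)
       AND results.data_access_result = 'SUCCESS'
       AND EXISTS (SELECT 1
                     FROM scheduler_operation_executions
                     JOIN scheduler_task_executions
                       ON scheduler_task_executions.id = scheduler_operation_executions.task_execution_id
                    WHERE scheduler_operation_executions.id = results.operation_execution_id
                      AND scheduler_task_executions.device_id = 97)
     ORDER BY results.attribute_id, results.timestamp DESC;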
{
"msg_contents": "with ORDER BY so I get the correct results (163 seconds):\nhttps://explain.depesz.com/s/j3o1\n\nUnique (cost=164620.19..164650.19 rows=4 width=54) (actual\ntime=163953.091..163954.621 rows=2 loops=1)\n Buffers: shared hit=183080 read=103411\n -> Sort (cost=164620.19..164635.19 rows=5999 width=54) (actual\ntime=163953.081..163953.570 rows=4102 loops=1)\n Sort Key: results.attribute_id, results.\"timestamp\" DESC\n Sort Method: quicksort Memory: 641kB\n Buffers: shared hit=183080 read=103411\n -> Nested Loop (cost=132172.41..164243.74 rows=5999\nwidth=54) (actual time=3054.965..163928.686 rows=4102 loops=1)\n Buffers: shared hit=183074 read=103411\n -> HashAggregate (cost=132171.98..132779.88 rows=60790\nwidth=4) (actual time=2484.449..2581.582 rows=59269 loops=1)\n Group Key: scheduler_operation_executions.id\n Buffers: shared hit=87 read=76529\n -> Gather (cost=44474.33..132020.01 rows=60790\nwidth=4) (actual time=312.503..2463.254 rows=59269 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=87 read=76529\n -> Parallel Hash Join\n(cost=43474.33..124941.01 rows=25329 width=4) (actual\ntime=124.733..2279.986 rows=19756 loops=3)\n Hash Cond:\n(scheduler_operation_executions.task_execution_id =\nscheduler_task_executions.id)\n Buffers: shared hit=87 read=76529\n -> Parallel Seq Scan on\nscheduler_operation_executions (cost=0.00..74948.21 rows=2483221\nwidth=8) (actual time=0.126..1828.461 rows=1986887 loops=3)\n Buffers: shared hit=2 read=50114\n -> Parallel Hash\n(cost=43263.67..43263.67 rows=16853 width=4) (actual\ntime=123.631..123.631 rows=13558 loops=3)\n Buckets: 65536 Batches: 1\nMemory Usage: 2144kB\n Buffers: shared hit=1 read=26415\n -> Parallel Bitmap Heap Scan on\nscheduler_task_executions (cost=757.90..43263.67 rows=16853 width=4)\n(actual time=6.944..120.405 rows=13558 loops=3)\n Recheck Cond: (device_id = 97)\n Heap Blocks: exact=24124\n Buffers: shared hit=1 read=26415\n -> Bitmap Index Scan on\nscheduler_task_executions_device_id_idx (cost=0.00..747.79 rows=40448\nwidth=0) (actual time=13.378..13.378 rows=40675 loops=1)\n Index Cond: (device_id = 97)\n Buffers: shared read=114\n -> Index Scan using index_operation_execution_id_asc on\nresults (cost=0.43..0.51 rows=1 width=58) (actual time=2.720..2.720\nrows=0 loops=59269)\n Index Cond: (operation_execution_id =\nscheduler_operation_executions.id)\n Filter: ((data IS NOT NULL) AND (data <>\n'<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[])) AND\n(object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n Rows Removed by Filter: 0\n Buffers: shared hit=182987 read=26882\nPlanning Time: 349.908 ms\nExecution Time: 163962.314 ms\n\n\nWith ORDER BY (on the second run, 0.6 seconds):\nhttps://explain.depesz.com/s/QZ1Z\nUnique (cost=164620.19..164650.19 rows=4 width=54) (actual\ntime=621.057..621.527 rows=2 loops=1)\n Buffers: shared hit=236659 read=49826\n -> Sort (cost=164620.19..164635.19 rows=5999 width=54) (actual\ntime=621.056..621.188 rows=4102 loops=1)\n Sort Key: results.attribute_id, results.\"timestamp\" DESC\n Sort Method: quicksort Memory: 641kB\n Buffers: shared hit=236659 read=49826\n -> Nested Loop (cost=132172.41..164243.74 rows=5999\nwidth=54) (actual time=503.577..619.250 rows=4102 loops=1)\n Buffers: shared hit=236659 read=49826\n -> HashAggregate (cost=132171.98..132779.88 rows=60790\nwidth=4) (actual time=503.498..513.551 rows=59269 loops=1)\n Group Key: scheduler_operation_executions.id\n Buffers: shared hit=26790 read=49826\n -> Gather 
(cost=44474.33..132020.01 rows=60790\nwidth=4) (actual time=65.499..489.396 rows=59269 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=26790 read=49826\n -> Parallel Hash Join\n(cost=43474.33..124941.01 rows=25329 width=4) (actual\ntime=22.059..441.847 rows=19756 loops=3)\n Hash Cond:\n(scheduler_operation_executions.task_execution_id =\nscheduler_task_executions.id)\n Buffers: shared hit=26790 read=49826\n -> Parallel Seq Scan on\nscheduler_operation_executions (cost=0.00..74948.21 rows=2483221\nwidth=8) (actual time=0.083..229.120 rows=1986887 loops=3)\n Buffers: shared hit=290 read=49826\n -> Parallel Hash\n(cost=43263.67..43263.67 rows=16853 width=4) (actual\ntime=20.648..20.648 rows=13558 loops=3)\n Buckets: 65536 Batches: 1\nMemory Usage: 2144kB\n Buffers: shared hit=26416\n -> Parallel Bitmap Heap Scan on\nscheduler_task_executions (cost=757.90..43263.67 rows=16853 width=4)\n(actual time=12.833..26.689 rows=20338 loops=2)\n Recheck Cond: (device_id = 97)\n Heap Blocks: exact=26052\n Buffers: shared hit=26416\n -> Bitmap Index Scan on\nscheduler_task_executions_device_id_idx (cost=0.00..747.79 rows=40448\nwidth=0) (actual time=19.424..19.424 rows=40675 loops=1)\n Index Cond: (device_id = 97)\n Buffers: shared hit=114\n -> Index Scan using index_operation_execution_id_asc on\nresults (cost=0.43..0.51 rows=1 width=58) (actual time=0.002..0.002\nrows=0 loops=59269)\n Index Cond: (operation_execution_id =\nscheduler_operation_executions.id)\n Filter: ((data IS NOT NULL) AND (data <>\n'<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[])) AND\n(object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n Rows Removed by Filter: 0\n Buffers: shared hit=209869\nPlanning Time: 1.893 ms\nExecution Time: 627.590 ms\n\n\n\nWithout (exactly as you wrote it, 1.1s): https://explain.depesz.com/s/qKmj\nUnique (cost=164620.19..164650.19 rows=4 width=54) (actual\ntime=1103.230..1103.587 rows=2 loops=1)\n Buffers: shared hit=183077 read=103411\n -> Sort (cost=164620.19..164635.19 rows=5999 width=54) (actual\ntime=1103.230..1103.359 rows=4102 loops=1)\n Sort Key: results.attribute_id\n Sort Method: quicksort Memory: 641kB\n Buffers: shared hit=183077 read=103411\n -> Nested Loop (cost=132172.41..164243.74 rows=5999\nwidth=54) (actual time=605.314..1101.687 rows=4102 loops=1)\n Buffers: shared hit=183074 read=103411\n -> HashAggregate (cost=132171.98..132779.88 rows=60790\nwidth=4) (actual time=604.710..615.933 rows=59269 loops=1)\n Group Key: scheduler_operation_executions.id\n Buffers: shared hit=87 read=76529\n -> Gather (cost=44474.33..132020.01 rows=60790\nwidth=4) (actual time=173.528..590.757 rows=59269 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=87 read=76529\n -> Parallel Hash Join\n(cost=43474.33..124941.01 rows=25329 width=4) (actual\ntime=143.420..563.646 rows=19756 loops=3)\n Hash Cond:\n(scheduler_operation_executions.task_execution_id =\nscheduler_task_executions.id)\n Buffers: shared hit=87 read=76529\n -> Parallel Seq Scan on\nscheduler_operation_executions (cost=0.00..74948.21 rows=2483221\nwidth=8) (actual time=0.121..228.542 rows=1986887 loops=3)\n Buffers: shared hit=2 read=50114\n -> Parallel Hash\n(cost=43263.67..43263.67 rows=16853 width=4) (actual\ntime=142.853..142.853 rows=13558 loops=3)\n Buckets: 65536 Batches: 1\nMemory Usage: 2112kB\n Buffers: shared hit=1 read=26415\n -> Parallel Bitmap Heap Scan on\nscheduler_task_executions (cost=757.90..43263.67 rows=16853 width=4)\n(actual time=2.869..139.083 
rows=13558 loops=3)\n Recheck Cond: (device_id = 97)\n Heap Blocks: exact=10677\n Buffers: shared hit=1 read=26415\n -> Bitmap Index Scan on\nscheduler_task_executions_device_id_idx (cost=0.00..747.79 rows=40448\nwidth=0) (actual time=5.347..5.347 rows=40675 loops=1)\n Index Cond: (device_id = 97)\n Buffers: shared read=114\n -> Index Scan using index_operation_execution_id_asc on\nresults (cost=0.43..0.51 rows=1 width=58) (actual time=0.008..0.008\nrows=0 loops=59269)\n Index Cond: (operation_execution_id =\nscheduler_operation_executions.id)\n Filter: ((data IS NOT NULL) AND (data <>\n'<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[])) AND\n(object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n Rows Removed by Filter: 0\n Buffers: shared hit=182987 read=26882\nPlanning Time: 23.634 ms\nExecution Time: 1106.375 ms\n\nOn Thu, Aug 22, 2019 at 3:05 PM Luís Roberto Weck\n<[email protected]> wrote:\n>\n> Em 22/08/2019 08:51, Barbu Paul - Gheorghe escreveu:\n> > That query, if I add the ORDER BY and LIMIT, returns the same results.\n> >\n> > The problem is the fact that it behaves the same way regarding its\n> > speed as the original query with the index you suggested.\n> > Sometimes it takes 800ms, sometimes it takes 6s to run, how the hell\n> > can I get it to behave the same every time?\n> > After I added the index you suggested, it was fine for a while, next\n> > morning the run time exploded back to several seconds per query... and\n> > it oscillates.\n> >\n> > On Wed, Aug 21, 2019 at 2:25 PM Luís Roberto Weck\n> > <[email protected]> wrote:\n> >> Em 21/08/2019 04:30, Barbu Paul - Gheorghe escreveu:\n> >>> I wonder how I missed that... probabily because of the \"WHERE\" clause\n> >>> in what I already had.\n> >>>\n> >>> I indexed by scheduler_task_executions.device_id and the new plan is\n> >>> as follows: https://explain.depesz.com/s/cQRq\n> >>>\n> >>> Can it be further improved?\n> >>>\n> >>> Limit (cost=138511.45..138519.36 rows=2 width=54) (actual\n> >>> time=598.703..618.524 rows=2 loops=1)\n> >>> Buffers: shared hit=242389 read=44098\n> >>> -> Unique (cost=138511.45..138527.26 rows=4 width=54) (actual\n> >>> time=598.701..598.878 rows=2 loops=1)\n> >>> Buffers: shared hit=242389 read=44098\n> >>> -> Sort (cost=138511.45..138519.36 rows=3162 width=54)\n> >>> (actual time=598.699..598.767 rows=2052 loops=1)\n> >>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> >>> Sort Method: quicksort Memory: 641kB\n> >>> Buffers: shared hit=242389 read=44098\n> >>> -> Gather (cost=44082.11..138327.64 rows=3162\n> >>> width=54) (actual time=117.548..616.456 rows=4102 loops=1)\n> >>> Workers Planned: 2\n> >>> Workers Launched: 2\n> >>> Buffers: shared hit=242389 read=44098\n> >>> -> Nested Loop (cost=43082.11..137011.44\n> >>> rows=1318 width=54) (actual time=47.436..525.664 rows=1367 loops=3)\n> >>> Buffers: shared hit=242389 read=44098\n> >>> -> Parallel Hash Join\n> >>> (cost=43081.68..124545.34 rows=24085 width=4) (actual\n> >>> time=33.099..469.958 rows=19756 loops=3)\n> >>> Hash Cond:\n> >>> (scheduler_operation_executions.task_execution_id =\n> >>> scheduler_task_executions.id)\n> >>> Buffers: shared hit=32518 read=44098\n> >>> -> Parallel Seq Scan on\n> >>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> >>> width=8) (actual time=8.493..245.190 rows=1986887 loops=3)\n> >>> Buffers: shared hit=6018 read=44098\n> >>> -> Parallel Hash\n> >>> (cost=42881.33..42881.33 rows=16028 width=4) (actual\n> >>> time=23.272..23.272 rows=13558 
loops=3)\n> >>> Buckets: 65536 Batches: 1\n> >>> Memory Usage: 2112kB\n> >>> Buffers: shared hit=26416\n> >>> -> Parallel Bitmap Heap Scan on\n> >>> scheduler_task_executions (cost=722.55..42881.33 rows=16028 width=4)\n> >>> (actual time=27.290..61.563 rows=40675 loops=1)\n> >>> Recheck Cond: (device_id = 97)\n> >>> Heap Blocks: exact=26302\n> >>> Buffers: shared hit=26416\n> >>> -> Bitmap Index Scan on\n> >>> scheduler_task_executions_device_id_idx (cost=0.00..712.93 rows=38467\n> >>> width=0) (actual time=17.087..17.087 rows=40675 loops=1)\n> >>> Index Cond: (device_id = 97)\n> >>> Buffers: shared hit=114\n> >>> -> Index Scan using\n> >>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> >>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> >>> Index Cond: (operation_execution_id =\n> >>> scheduler_operation_executions.id)\n> >>> Filter: ((data IS NOT NULL) AND (data\n> >>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> >>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> >>> Rows Removed by Filter: 0\n> >>> Buffers: shared hit=209871\n> >>> Planning Time: 2.327 ms\n> >>> Execution Time: 618.935 ms\n> >>>\n> >>> On Tue, Aug 20, 2019 at 5:54 PM Luís Roberto Weck\n> >>> <[email protected]> wrote:\n> >>>> Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n> >>>>> Hello,\n> >>>>> I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n> >>>>> 64-bit\" and I have a query that runs several times per user action\n> >>>>> (9-10 times).\n> >>>>> The query takes a long time to execute, specially at first, due to\n> >>>>> cold caches I think, but the performance varies greatly during a run\n> >>>>> of the application (while applying the said action by the user several\n> >>>>> times).\n> >>>>>\n> >>>>> My tables are only getting bigger with time, not much DELETEs and even\n> >>>>> less UPDATEs as far as I can tell.\n> >>>>>\n> >>>>> Problematic query:\n> >>>>>\n> >>>>> EXPLAIN (ANALYZE,BUFFERS)\n> >>>>> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n> >>>>> results.data FROM results\n> >>>>> JOIN scheduler_operation_executions ON\n> >>>>> scheduler_operation_executions.id = results.operation_execution_id\n> >>>>> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n> >>>>> scheduler_operation_executions.task_execution_id\n> >>>>> WHERE scheduler_task_executions.device_id = 97\n> >>>>> AND results.data <> '<NullData/>'\n> >>>>> AND results.data IS NOT NULL\n> >>>>> AND results.object_id = 1955\n> >>>>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> >>>>> AND results.data_access_result = 'SUCCESS'\n> >>>>> ORDER BY results.attribute_id, results.timestamp DESC\n> >>>>> LIMIT 2 -- limit by the length of the attributes list\n> >>>>>\n> >>>>> In words: I want the latest (ORDER BY results.timestamp DESC) results\n> >>>>> of a device (scheduler_task_executions.device_id = 97 - hence the\n> >>>>> joins results -> scheduler_operation_executions ->\n> >>>>> scheduler_task_executions)\n> >>>>> for a given object and attributes with some additional constraints on\n> >>>>> the data column. 
But I only want the latest attributes for which we\n> >>>>> have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n> >>>>>\n> >>>>> First run: https://explain.depesz.com/s/qh4C\n> >>>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> >>>>> time=44068.166..44086.970 rows=2 loops=1)\n> >>>>> Buffers: shared hit=215928 read=85139\n> >>>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> >>>>> time=44068.164..44069.301 rows=2 loops=1)\n> >>>>> Buffers: shared hit=215928 read=85139\n> >>>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> >>>>> (actual time=44068.161..44068.464 rows=2052 loops=1)\n> >>>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> >>>>> Sort Method: quicksort Memory: 641kB\n> >>>>> Buffers: shared hit=215928 read=85139\n> >>>>> -> Gather (cost=62853.04..157098.57 rows=3162\n> >>>>> width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n> >>>>> Workers Planned: 2\n> >>>>> Workers Launched: 2\n> >>>>> Buffers: shared hit=215928 read=85139\n> >>>>> -> Nested Loop (cost=61853.04..155782.37\n> >>>>> rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n> >>>>> loops=3)\n> >>>>> Buffers: shared hit=215928 read=85139\n> >>>>> -> Parallel Hash Join\n> >>>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> >>>>> time=23271.275..40018.451 rows=19756 loops=3)\n> >>>>> Hash Cond:\n> >>>>> (scheduler_operation_executions.task_execution_id =\n> >>>>> scheduler_task_executions.id)\n> >>>>> Buffers: shared hit=6057 read=85139\n> >>>>> -> Parallel Seq Scan on\n> >>>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> >>>>> width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n> >>>>> Buffers: shared hit=2996 read=47120\n> >>>>> -> Parallel Hash\n> >>>>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> >>>>> time=23253.337..23253.337 rows=13558 loops=3)\n> >>>>> Buckets: 65536 Batches: 1\n> >>>>> Memory Usage: 2144kB\n> >>>>> Buffers: shared hit=2977 read=38019\n> >>>>> -> Parallel Seq Scan on\n> >>>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> >>>>> (actual time=25.939..23222.174 rows=13558 loops=3)\n> >>>>> Filter: (device_id = 97)\n> >>>>> Rows Removed by Filter: 1308337\n> >>>>> Buffers: shared hit=2977 read=38019\n> >>>>> -> Index Scan using\n> >>>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> >>>>> width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n> >>>>> Index Cond: (operation_execution_id =\n> >>>>> scheduler_operation_executions.id)\n> >>>>> Filter: ((data IS NOT NULL) AND (data\n> >>>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> >>>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> >>>>> Rows Removed by Filter: 0\n> >>>>> Buffers: shared hit=209871\n> >>>>> Planning Time: 29.295 ms\n> >>>>> Execution Time: 44087.365 ms\n> >>>>>\n> >>>>>\n> >>>>> Second run: https://explain.depesz.com/s/uy9f\n> >>>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> >>>>> time=789.363..810.440 rows=2 loops=1)\n> >>>>> Buffers: shared hit=216312 read=84755\n> >>>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> >>>>> time=789.361..789.535 rows=2 loops=1)\n> >>>>> Buffers: shared hit=216312 read=84755\n> >>>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> >>>>> (actual time=789.361..789.418 rows=2052 loops=1)\n> >>>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> >>>>> Sort Method: quicksort Memory: 
641kB\n> >>>>> Buffers: shared hit=216312 read=84755\n> >>>>> -> Gather (cost=62853.04..157098.57 rows=3162\n> >>>>> width=54) (actual time=290.356..808.454 rows=4102 loops=1)\n> >>>>> Workers Planned: 2\n> >>>>> Workers Launched: 2\n> >>>>> Buffers: shared hit=216312 read=84755\n> >>>>> -> Nested Loop (cost=61853.04..155782.37\n> >>>>> rows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n> >>>>> Buffers: shared hit=216312 read=84755\n> >>>>> -> Parallel Hash Join\n> >>>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> >>>>> time=237.966..677.975 rows=19756 loops=3)\n> >>>>> Hash Cond:\n> >>>>> (scheduler_operation_executions.task_execution_id =\n> >>>>> scheduler_task_executions.id)\n> >>>>> Buffers: shared hit=6441 read=84755\n> >>>>> -> Parallel Seq Scan on\n> >>>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> >>>>> width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n> >>>>> Buffers: shared hit=3188 read=46928\n> >>>>> -> Parallel Hash\n> >>>>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> >>>>> time=236.631..236.631 rows=13558 loops=3)\n> >>>>> Buckets: 65536 Batches: 1\n> >>>>> Memory Usage: 2144kB\n> >>>>> Buffers: shared hit=3169 read=37827\n> >>>>> -> Parallel Seq Scan on\n> >>>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> >>>>> (actual time=0.132..232.758 rows=13558 loops=3)\n> >>>>> Filter: (device_id = 97)\n> >>>>> Rows Removed by Filter: 1308337\n> >>>>> Buffers: shared hit=3169 read=37827\n> >>>>> -> Index Scan using\n> >>>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> >>>>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> >>>>> Index Cond: (operation_execution_id =\n> >>>>> scheduler_operation_executions.id)\n> >>>>> Filter: ((data IS NOT NULL) AND (data\n> >>>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> >>>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> >>>>> Rows Removed by Filter: 0\n> >>>>> Buffers: shared hit=209871\n> >>>>> Planning Time: 1.787 ms\n> >>>>> Execution Time: 810.634 ms\n> >>>>>\n> >>>>> You can see that the second run takes less than one second to run...\n> >>>>> which is 43 seconds better than the first try, just by re-running the\n> >>>>> query.\n> >>>>> Other runs take maybe 1s, 3s, still a long time.\n> >>>>>\n> >>>>> How can I improve it to be consistently fast (is it possible to get to\n> >>>>> several milliseconds?)?\n> >>>>> What I don't really understand is why the nested loop has 3 loops\n> >>>>> (three joined tables)?\n> >>>>> And why does the first index scan indicate ~60k loops? And does it\n> >>>>> really work? It doesn't seem to filter out any rows.\n> >>>>>\n> >>>>> Should I add an index only on (attribute_id, object_id)? And maybe\n> >>>>> data_access_result?\n> >>>>> Does it make sens to add it on a text column (results.data)?\n> >>>>>\n> >>>>> My tables:\n> >>>>> https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n> >>>>>\n> >>>>> As you can see from the gist the foreign keys are indexed. 
Other\n> >>>>> indices were added to speed up other queries.\n> >>>>> Other relevant information (my tables have 3+ millions of rows, not\n> >>>>> very big I think?), additional info with regards to size also included\n> >>>>> below.\n> >>>>> This query has poor performance on two PCs (both running off of HDDs)\n> >>>>> so I think it has more to do with my indices and query than Postgres\n> >>>>> config & hardware, will post those if necessary.\n> >>>>>\n> >>>>>\n> >>>>> Size info:\n> >>>>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n> >>>>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n> >>>>> relname IN ('results', 'scheduler_operation_executions',\n> >>>>> 'scheduler_task_executions');\n> >>>>> -[ RECORD 1 ]--+-------------------------------\n> >>>>> relname | results\n> >>>>> relpages | 65922\n> >>>>> reltuples | 3.17104e+06\n> >>>>> relallvisible | 65922\n> >>>>> relkind | r\n> >>>>> relnatts | 9\n> >>>>> relhassubclass | f\n> >>>>> reloptions |\n> >>>>> pg_table_size | 588791808\n> >>>>> -[ RECORD 2 ]--+-------------------------------\n> >>>>> relname | scheduler_operation_executions\n> >>>>> relpages | 50116\n> >>>>> reltuples | 5.95916e+06\n> >>>>> relallvisible | 50116\n> >>>>> relkind | r\n> >>>>> relnatts | 8\n> >>>>> relhassubclass | f\n> >>>>> reloptions |\n> >>>>> pg_table_size | 410697728\n> >>>>> -[ RECORD 3 ]--+-------------------------------\n> >>>>> relname | scheduler_task_executions\n> >>>>> relpages | 40996\n> >>>>> reltuples | 3.966e+06\n> >>>>> relallvisible | 40996\n> >>>>> relkind | r\n> >>>>> relnatts | 12\n> >>>>> relhassubclass | f\n> >>>>> reloptions |\n> >>>>> pg_table_size | 335970304\n> >>>>>\n> >>>>> Thanks for your time!\n> >>>>>\n> >>>>> --\n> >>>>> Barbu Paul - Gheorghe\n> >>>>>\n> >>>> Can you create an index on scheduler_task_executions.device_id and run\n> >>>> it again?\n> >> Can you try this query, please? Although I'm not really sure it'll give\n> >> you the same results.\n> >>\n> >> SELECT DISTINCT ON (results.attribute_id)\n> >> results.timestamp,\n> >> results.data\n> >> FROM results\n> >> WHERE results.data <> '<NullData/>'\n> >> AND results.data IS NOT NULL\n> >> AND results.object_id = 1955\n> >> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> >> AND results.data_access_result = 'SUCCESS'\n> >> AND EXISTS (SELECT 1\n> >> FROM scheduler_operation_executions\n> >> JOIN scheduler_task_executions ON\n> >> scheduler_task_executions.id =\n> >> scheduler_operation_executions.task_execution_id\n> >> WHERE scheduler_operation_executions.id =\n> >> results.operation_execution_id\n> >> AND scheduler_task_executions.device_id = 97)\n> >> ORDER BY results.attribute_id, results.timestamp DESC\n> >> LIMIT 2 -- limit by the length of the attributes list\n> Can you provide the EXPLAIN ANALYZE plan for the query I sent you?\n\n\n\n-- \n\nBarbu Paul - Gheorghe\n\n\n",
"msg_date": "Thu, 22 Aug 2019 15:22:01 +0300",
"msg_from": "Barbu Paul - Gheorghe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
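For readers following along, this is the index that appears as scheduler_task_executions_device_id_idx in the plans quoted above, together with the EXISTS rewrite from the same message wrapped in EXPLAIN. A sketch only; the CONCURRENTLY option is an assumption (any index build works), everything else is taken from the thread:

-- Index on the driving filter column; CONCURRENTLY avoids blocking writes while it builds.
CREATE INDEX CONCURRENTLY scheduler_task_executions_device_id_idx
    ON scheduler_task_executions (device_id);

-- Capture timing and buffer usage for the EXISTS variant.
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT ON (results.attribute_id) results.timestamp, results.data
FROM results
WHERE results.data <> '<NullData/>'
  AND results.data IS NOT NULL
  AND results.object_id = 1955
  AND results.attribute_id IN (4, 5)
  AND results.data_access_result = 'SUCCESS'
  AND EXISTS (SELECT 1
              FROM scheduler_operation_executions
              JOIN scheduler_task_executions
                ON scheduler_task_executions.id =
                   scheduler_operation_executions.task_execution_id
              WHERE scheduler_operation_executions.id = results.operation_execution_id
                AND scheduler_task_executions.device_id = 97)
ORDER BY results.attribute_id, results.timestamp DESC
LIMIT 2;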
{
"msg_contents": "If I restart the PostgreSQL server, then the performance is bad,\nseveral seconds to one or two hundred seconds.\nThis is reflected in the \"buffers read\" indicator, which is >0 when\nperformance is bad for the first \"Index Scan using\nindex_operation_execution_id_asc on\nresults\".\n\nProbably this explains the oscillations in running time as well?\nCache gets filled after the first run, hence the performance improves,\nthen as the system runs, the cache gets dirty and performance for this\nparticular query degrades again.\n\nOn Thu, Aug 22, 2019 at 3:22 PM Barbu Paul - Gheorghe\n<[email protected]> wrote:\n>\n> with ORDER BY so I get the correct results (163 seconds):\n> https://explain.depesz.com/s/j3o1\n>\n> Unique (cost=164620.19..164650.19 rows=4 width=54) (actual\n> time=163953.091..163954.621 rows=2 loops=1)\n> Buffers: shared hit=183080 read=103411\n> -> Sort (cost=164620.19..164635.19 rows=5999 width=54) (actual\n> time=163953.081..163953.570 rows=4102 loops=1)\n> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> Sort Method: quicksort Memory: 641kB\n> Buffers: shared hit=183080 read=103411\n> -> Nested Loop (cost=132172.41..164243.74 rows=5999\n> width=54) (actual time=3054.965..163928.686 rows=4102 loops=1)\n> Buffers: shared hit=183074 read=103411\n> -> HashAggregate (cost=132171.98..132779.88 rows=60790\n> width=4) (actual time=2484.449..2581.582 rows=59269 loops=1)\n> Group Key: scheduler_operation_executions.id\n> Buffers: shared hit=87 read=76529\n> -> Gather (cost=44474.33..132020.01 rows=60790\n> width=4) (actual time=312.503..2463.254 rows=59269 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=87 read=76529\n> -> Parallel Hash Join\n> (cost=43474.33..124941.01 rows=25329 width=4) (actual\n> time=124.733..2279.986 rows=19756 loops=3)\n> Hash Cond:\n> (scheduler_operation_executions.task_execution_id =\n> scheduler_task_executions.id)\n> Buffers: shared hit=87 read=76529\n> -> Parallel Seq Scan on\n> scheduler_operation_executions (cost=0.00..74948.21 rows=2483221\n> width=8) (actual time=0.126..1828.461 rows=1986887 loops=3)\n> Buffers: shared hit=2 read=50114\n> -> Parallel Hash\n> (cost=43263.67..43263.67 rows=16853 width=4) (actual\n> time=123.631..123.631 rows=13558 loops=3)\n> Buckets: 65536 Batches: 1\n> Memory Usage: 2144kB\n> Buffers: shared hit=1 read=26415\n> -> Parallel Bitmap Heap Scan on\n> scheduler_task_executions (cost=757.90..43263.67 rows=16853 width=4)\n> (actual time=6.944..120.405 rows=13558 loops=3)\n> Recheck Cond: (device_id = 97)\n> Heap Blocks: exact=24124\n> Buffers: shared hit=1 read=26415\n> -> Bitmap Index Scan on\n> scheduler_task_executions_device_id_idx (cost=0.00..747.79 rows=40448\n> width=0) (actual time=13.378..13.378 rows=40675 loops=1)\n> Index Cond: (device_id = 97)\n> Buffers: shared read=114\n> -> Index Scan using index_operation_execution_id_asc on\n> results (cost=0.43..0.51 rows=1 width=58) (actual time=2.720..2.720\n> rows=0 loops=59269)\n> Index Cond: (operation_execution_id =\n> scheduler_operation_executions.id)\n> Filter: ((data IS NOT NULL) AND (data <>\n> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[])) AND\n> (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> Rows Removed by Filter: 0\n> Buffers: shared hit=182987 read=26882\n> Planning Time: 349.908 ms\n> Execution Time: 163962.314 ms\n>\n>\n> With ORDER BY (on the second run, 0.6 seconds):\n> https://explain.depesz.com/s/QZ1Z\n> Unique (cost=164620.19..164650.19 rows=4 
width=54) (actual\n> time=621.057..621.527 rows=2 loops=1)\n> Buffers: shared hit=236659 read=49826\n> -> Sort (cost=164620.19..164635.19 rows=5999 width=54) (actual\n> time=621.056..621.188 rows=4102 loops=1)\n> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> Sort Method: quicksort Memory: 641kB\n> Buffers: shared hit=236659 read=49826\n> -> Nested Loop (cost=132172.41..164243.74 rows=5999\n> width=54) (actual time=503.577..619.250 rows=4102 loops=1)\n> Buffers: shared hit=236659 read=49826\n> -> HashAggregate (cost=132171.98..132779.88 rows=60790\n> width=4) (actual time=503.498..513.551 rows=59269 loops=1)\n> Group Key: scheduler_operation_executions.id\n> Buffers: shared hit=26790 read=49826\n> -> Gather (cost=44474.33..132020.01 rows=60790\n> width=4) (actual time=65.499..489.396 rows=59269 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=26790 read=49826\n> -> Parallel Hash Join\n> (cost=43474.33..124941.01 rows=25329 width=4) (actual\n> time=22.059..441.847 rows=19756 loops=3)\n> Hash Cond:\n> (scheduler_operation_executions.task_execution_id =\n> scheduler_task_executions.id)\n> Buffers: shared hit=26790 read=49826\n> -> Parallel Seq Scan on\n> scheduler_operation_executions (cost=0.00..74948.21 rows=2483221\n> width=8) (actual time=0.083..229.120 rows=1986887 loops=3)\n> Buffers: shared hit=290 read=49826\n> -> Parallel Hash\n> (cost=43263.67..43263.67 rows=16853 width=4) (actual\n> time=20.648..20.648 rows=13558 loops=3)\n> Buckets: 65536 Batches: 1\n> Memory Usage: 2144kB\n> Buffers: shared hit=26416\n> -> Parallel Bitmap Heap Scan on\n> scheduler_task_executions (cost=757.90..43263.67 rows=16853 width=4)\n> (actual time=12.833..26.689 rows=20338 loops=2)\n> Recheck Cond: (device_id = 97)\n> Heap Blocks: exact=26052\n> Buffers: shared hit=26416\n> -> Bitmap Index Scan on\n> scheduler_task_executions_device_id_idx (cost=0.00..747.79 rows=40448\n> width=0) (actual time=19.424..19.424 rows=40675 loops=1)\n> Index Cond: (device_id = 97)\n> Buffers: shared hit=114\n> -> Index Scan using index_operation_execution_id_asc on\n> results (cost=0.43..0.51 rows=1 width=58) (actual time=0.002..0.002\n> rows=0 loops=59269)\n> Index Cond: (operation_execution_id =\n> scheduler_operation_executions.id)\n> Filter: ((data IS NOT NULL) AND (data <>\n> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[])) AND\n> (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> Rows Removed by Filter: 0\n> Buffers: shared hit=209869\n> Planning Time: 1.893 ms\n> Execution Time: 627.590 ms\n>\n>\n>\n> Without (exactly as you wrote it, 1.1s): https://explain.depesz.com/s/qKmj\n> Unique (cost=164620.19..164650.19 rows=4 width=54) (actual\n> time=1103.230..1103.587 rows=2 loops=1)\n> Buffers: shared hit=183077 read=103411\n> -> Sort (cost=164620.19..164635.19 rows=5999 width=54) (actual\n> time=1103.230..1103.359 rows=4102 loops=1)\n> Sort Key: results.attribute_id\n> Sort Method: quicksort Memory: 641kB\n> Buffers: shared hit=183077 read=103411\n> -> Nested Loop (cost=132172.41..164243.74 rows=5999\n> width=54) (actual time=605.314..1101.687 rows=4102 loops=1)\n> Buffers: shared hit=183074 read=103411\n> -> HashAggregate (cost=132171.98..132779.88 rows=60790\n> width=4) (actual time=604.710..615.933 rows=59269 loops=1)\n> Group Key: scheduler_operation_executions.id\n> Buffers: shared hit=87 read=76529\n> -> Gather (cost=44474.33..132020.01 rows=60790\n> width=4) (actual time=173.528..590.757 rows=59269 loops=1)\n> Workers Planned: 2\n> Workers 
Launched: 2\n> Buffers: shared hit=87 read=76529\n> -> Parallel Hash Join\n> (cost=43474.33..124941.01 rows=25329 width=4) (actual\n> time=143.420..563.646 rows=19756 loops=3)\n> Hash Cond:\n> (scheduler_operation_executions.task_execution_id =\n> scheduler_task_executions.id)\n> Buffers: shared hit=87 read=76529\n> -> Parallel Seq Scan on\n> scheduler_operation_executions (cost=0.00..74948.21 rows=2483221\n> width=8) (actual time=0.121..228.542 rows=1986887 loops=3)\n> Buffers: shared hit=2 read=50114\n> -> Parallel Hash\n> (cost=43263.67..43263.67 rows=16853 width=4) (actual\n> time=142.853..142.853 rows=13558 loops=3)\n> Buckets: 65536 Batches: 1\n> Memory Usage: 2112kB\n> Buffers: shared hit=1 read=26415\n> -> Parallel Bitmap Heap Scan on\n> scheduler_task_executions (cost=757.90..43263.67 rows=16853 width=4)\n> (actual time=2.869..139.083 rows=13558 loops=3)\n> Recheck Cond: (device_id = 97)\n> Heap Blocks: exact=10677\n> Buffers: shared hit=1 read=26415\n> -> Bitmap Index Scan on\n> scheduler_task_executions_device_id_idx (cost=0.00..747.79 rows=40448\n> width=0) (actual time=5.347..5.347 rows=40675 loops=1)\n> Index Cond: (device_id = 97)\n> Buffers: shared read=114\n> -> Index Scan using index_operation_execution_id_asc on\n> results (cost=0.43..0.51 rows=1 width=58) (actual time=0.008..0.008\n> rows=0 loops=59269)\n> Index Cond: (operation_execution_id =\n> scheduler_operation_executions.id)\n> Filter: ((data IS NOT NULL) AND (data <>\n> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[])) AND\n> (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> Rows Removed by Filter: 0\n> Buffers: shared hit=182987 read=26882\n> Planning Time: 23.634 ms\n> Execution Time: 1106.375 ms\n>\n> On Thu, Aug 22, 2019 at 3:05 PM Luís Roberto Weck\n> <[email protected]> wrote:\n> >\n> > Em 22/08/2019 08:51, Barbu Paul - Gheorghe escreveu:\n> > > That query, if I add the ORDER BY and LIMIT, returns the same results.\n> > >\n> > > The problem is the fact that it behaves the same way regarding its\n> > > speed as the original query with the index you suggested.\n> > > Sometimes it takes 800ms, sometimes it takes 6s to run, how the hell\n> > > can I get it to behave the same every time?\n> > > After I added the index you suggested, it was fine for a while, next\n> > > morning the run time exploded back to several seconds per query... and\n> > > it oscillates.\n> > >\n> > > On Wed, Aug 21, 2019 at 2:25 PM Luís Roberto Weck\n> > > <[email protected]> wrote:\n> > >> Em 21/08/2019 04:30, Barbu Paul - Gheorghe escreveu:\n> > >>> I wonder how I missed that... 
probabily because of the \"WHERE\" clause\n> > >>> in what I already had.\n> > >>>\n> > >>> I indexed by scheduler_task_executions.device_id and the new plan is\n> > >>> as follows: https://explain.depesz.com/s/cQRq\n> > >>>\n> > >>> Can it be further improved?\n> > >>>\n> > >>> Limit (cost=138511.45..138519.36 rows=2 width=54) (actual\n> > >>> time=598.703..618.524 rows=2 loops=1)\n> > >>> Buffers: shared hit=242389 read=44098\n> > >>> -> Unique (cost=138511.45..138527.26 rows=4 width=54) (actual\n> > >>> time=598.701..598.878 rows=2 loops=1)\n> > >>> Buffers: shared hit=242389 read=44098\n> > >>> -> Sort (cost=138511.45..138519.36 rows=3162 width=54)\n> > >>> (actual time=598.699..598.767 rows=2052 loops=1)\n> > >>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> > >>> Sort Method: quicksort Memory: 641kB\n> > >>> Buffers: shared hit=242389 read=44098\n> > >>> -> Gather (cost=44082.11..138327.64 rows=3162\n> > >>> width=54) (actual time=117.548..616.456 rows=4102 loops=1)\n> > >>> Workers Planned: 2\n> > >>> Workers Launched: 2\n> > >>> Buffers: shared hit=242389 read=44098\n> > >>> -> Nested Loop (cost=43082.11..137011.44\n> > >>> rows=1318 width=54) (actual time=47.436..525.664 rows=1367 loops=3)\n> > >>> Buffers: shared hit=242389 read=44098\n> > >>> -> Parallel Hash Join\n> > >>> (cost=43081.68..124545.34 rows=24085 width=4) (actual\n> > >>> time=33.099..469.958 rows=19756 loops=3)\n> > >>> Hash Cond:\n> > >>> (scheduler_operation_executions.task_execution_id =\n> > >>> scheduler_task_executions.id)\n> > >>> Buffers: shared hit=32518 read=44098\n> > >>> -> Parallel Seq Scan on\n> > >>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> > >>> width=8) (actual time=8.493..245.190 rows=1986887 loops=3)\n> > >>> Buffers: shared hit=6018 read=44098\n> > >>> -> Parallel Hash\n> > >>> (cost=42881.33..42881.33 rows=16028 width=4) (actual\n> > >>> time=23.272..23.272 rows=13558 loops=3)\n> > >>> Buckets: 65536 Batches: 1\n> > >>> Memory Usage: 2112kB\n> > >>> Buffers: shared hit=26416\n> > >>> -> Parallel Bitmap Heap Scan on\n> > >>> scheduler_task_executions (cost=722.55..42881.33 rows=16028 width=4)\n> > >>> (actual time=27.290..61.563 rows=40675 loops=1)\n> > >>> Recheck Cond: (device_id = 97)\n> > >>> Heap Blocks: exact=26302\n> > >>> Buffers: shared hit=26416\n> > >>> -> Bitmap Index Scan on\n> > >>> scheduler_task_executions_device_id_idx (cost=0.00..712.93 rows=38467\n> > >>> width=0) (actual time=17.087..17.087 rows=40675 loops=1)\n> > >>> Index Cond: (device_id = 97)\n> > >>> Buffers: shared hit=114\n> > >>> -> Index Scan using\n> > >>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> > >>> width=58) (actual time=0.003..0.003 rows=0 loops=59269)\n> > >>> Index Cond: (operation_execution_id =\n> > >>> scheduler_operation_executions.id)\n> > >>> Filter: ((data IS NOT NULL) AND (data\n> > >>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> > >>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> > >>> Rows Removed by Filter: 0\n> > >>> Buffers: shared hit=209871\n> > >>> Planning Time: 2.327 ms\n> > >>> Execution Time: 618.935 ms\n> > >>>\n> > >>> On Tue, Aug 20, 2019 at 5:54 PM Luís Roberto Weck\n> > >>> <[email protected]> wrote:\n> > >>>> Em 20/08/2019 10:54, Barbu Paul - Gheorghe escreveu:\n> > >>>>> Hello,\n> > >>>>> I'm running \"PostgreSQL 11.2, compiled by Visual C++ build 1914,\n> > >>>>> 64-bit\" and I have a query that runs several times per user action\n> > >>>>> (9-10 times).\n> 
> >>>>> The query takes a long time to execute, specially at first, due to\n> > >>>>> cold caches I think, but the performance varies greatly during a run\n> > >>>>> of the application (while applying the said action by the user several\n> > >>>>> times).\n> > >>>>>\n> > >>>>> My tables are only getting bigger with time, not much DELETEs and even\n> > >>>>> less UPDATEs as far as I can tell.\n> > >>>>>\n> > >>>>> Problematic query:\n> > >>>>>\n> > >>>>> EXPLAIN (ANALYZE,BUFFERS)\n> > >>>>> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n> > >>>>> results.data FROM results\n> > >>>>> JOIN scheduler_operation_executions ON\n> > >>>>> scheduler_operation_executions.id = results.operation_execution_id\n> > >>>>> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n> > >>>>> scheduler_operation_executions.task_execution_id\n> > >>>>> WHERE scheduler_task_executions.device_id = 97\n> > >>>>> AND results.data <> '<NullData/>'\n> > >>>>> AND results.data IS NOT NULL\n> > >>>>> AND results.object_id = 1955\n> > >>>>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> > >>>>> AND results.data_access_result = 'SUCCESS'\n> > >>>>> ORDER BY results.attribute_id, results.timestamp DESC\n> > >>>>> LIMIT 2 -- limit by the length of the attributes list\n> > >>>>>\n> > >>>>> In words: I want the latest (ORDER BY results.timestamp DESC) results\n> > >>>>> of a device (scheduler_task_executions.device_id = 97 - hence the\n> > >>>>> joins results -> scheduler_operation_executions ->\n> > >>>>> scheduler_task_executions)\n> > >>>>> for a given object and attributes with some additional constraints on\n> > >>>>> the data column. But I only want the latest attributes for which we\n> > >>>>> have results, hence the DISTINCT ON (results.attribute_id) and LIMIT.\n> > >>>>>\n> > >>>>> First run: https://explain.depesz.com/s/qh4C\n> > >>>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> > >>>>> time=44068.166..44086.970 rows=2 loops=1)\n> > >>>>> Buffers: shared hit=215928 read=85139\n> > >>>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> > >>>>> time=44068.164..44069.301 rows=2 loops=1)\n> > >>>>> Buffers: shared hit=215928 read=85139\n> > >>>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> > >>>>> (actual time=44068.161..44068.464 rows=2052 loops=1)\n> > >>>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> > >>>>> Sort Method: quicksort Memory: 641kB\n> > >>>>> Buffers: shared hit=215928 read=85139\n> > >>>>> -> Gather (cost=62853.04..157098.57 rows=3162\n> > >>>>> width=54) (actual time=23518.745..44076.385 rows=4102 loops=1)\n> > >>>>> Workers Planned: 2\n> > >>>>> Workers Launched: 2\n> > >>>>> Buffers: shared hit=215928 read=85139\n> > >>>>> -> Nested Loop (cost=61853.04..155782.37\n> > >>>>> rows=1318 width=54) (actual time=23290.514..43832.223 rows=1367\n> > >>>>> loops=3)\n> > >>>>> Buffers: shared hit=215928 read=85139\n> > >>>>> -> Parallel Hash Join\n> > >>>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> > >>>>> time=23271.275..40018.451 rows=19756 loops=3)\n> > >>>>> Hash Cond:\n> > >>>>> (scheduler_operation_executions.task_execution_id =\n> > >>>>> scheduler_task_executions.id)\n> > >>>>> Buffers: shared hit=6057 read=85139\n> > >>>>> -> Parallel Seq Scan on\n> > >>>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> > >>>>> width=8) (actual time=7.575..15694.435 rows=1986887 loops=3)\n> > >>>>> Buffers: shared hit=2996 read=47120\n> > >>>>> -> Parallel Hash\n> > >>>>> 
(cost=61652.25..61652.25 rows=16029 width=4) (actual\n> > >>>>> time=23253.337..23253.337 rows=13558 loops=3)\n> > >>>>> Buckets: 65536 Batches: 1\n> > >>>>> Memory Usage: 2144kB\n> > >>>>> Buffers: shared hit=2977 read=38019\n> > >>>>> -> Parallel Seq Scan on\n> > >>>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> > >>>>> (actual time=25.939..23222.174 rows=13558 loops=3)\n> > >>>>> Filter: (device_id = 97)\n> > >>>>> Rows Removed by Filter: 1308337\n> > >>>>> Buffers: shared hit=2977 read=38019\n> > >>>>> -> Index Scan using\n> > >>>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> > >>>>> width=58) (actual time=0.191..0.191 rows=0 loops=59269)\n> > >>>>> Index Cond: (operation_execution_id =\n> > >>>>> scheduler_operation_executions.id)\n> > >>>>> Filter: ((data IS NOT NULL) AND (data\n> > >>>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> > >>>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> > >>>>> Rows Removed by Filter: 0\n> > >>>>> Buffers: shared hit=209871\n> > >>>>> Planning Time: 29.295 ms\n> > >>>>> Execution Time: 44087.365 ms\n> > >>>>>\n> > >>>>>\n> > >>>>> Second run: https://explain.depesz.com/s/uy9f\n> > >>>>> Limit (cost=157282.39..157290.29 rows=2 width=54) (actual\n> > >>>>> time=789.363..810.440 rows=2 loops=1)\n> > >>>>> Buffers: shared hit=216312 read=84755\n> > >>>>> -> Unique (cost=157282.39..157298.20 rows=4 width=54) (actual\n> > >>>>> time=789.361..789.535 rows=2 loops=1)\n> > >>>>> Buffers: shared hit=216312 read=84755\n> > >>>>> -> Sort (cost=157282.39..157290.29 rows=3162 width=54)\n> > >>>>> (actual time=789.361..789.418 rows=2052 loops=1)\n> > >>>>> Sort Key: results.attribute_id, results.\"timestamp\" DESC\n> > >>>>> Sort Method: quicksort Memory: 641kB\n> > >>>>> Buffers: shared hit=216312 read=84755\n> > >>>>> -> Gather (cost=62853.04..157098.57 rows=3162\n> > >>>>> width=54) (actual time=290.356..808.454 rows=4102 loops=1)\n> > >>>>> Workers Planned: 2\n> > >>>>> Workers Launched: 2\n> > >>>>> Buffers: shared hit=216312 read=84755\n> > >>>>> -> Nested Loop (cost=61853.04..155782.37\n> > >>>>> rows=1318 width=54) (actual time=238.313..735.472 rows=1367 loops=3)\n> > >>>>> Buffers: shared hit=216312 read=84755\n> > >>>>> -> Parallel Hash Join\n> > >>>>> (cost=61852.61..143316.27 rows=24085 width=4) (actual\n> > >>>>> time=237.966..677.975 rows=19756 loops=3)\n> > >>>>> Hash Cond:\n> > >>>>> (scheduler_operation_executions.task_execution_id =\n> > >>>>> scheduler_task_executions.id)\n> > >>>>> Buffers: shared hit=6441 read=84755\n> > >>>>> -> Parallel Seq Scan on\n> > >>>>> scheduler_operation_executions (cost=0.00..74945.82 rows=2482982\n> > >>>>> width=8) (actual time=0.117..234.279 rows=1986887 loops=3)\n> > >>>>> Buffers: shared hit=3188 read=46928\n> > >>>>> -> Parallel Hash\n> > >>>>> (cost=61652.25..61652.25 rows=16029 width=4) (actual\n> > >>>>> time=236.631..236.631 rows=13558 loops=3)\n> > >>>>> Buckets: 65536 Batches: 1\n> > >>>>> Memory Usage: 2144kB\n> > >>>>> Buffers: shared hit=3169 read=37827\n> > >>>>> -> Parallel Seq Scan on\n> > >>>>> scheduler_task_executions (cost=0.00..61652.25 rows=16029 width=4)\n> > >>>>> (actual time=0.132..232.758 rows=13558 loops=3)\n> > >>>>> Filter: (device_id = 97)\n> > >>>>> Rows Removed by Filter: 1308337\n> > >>>>> Buffers: shared hit=3169 read=37827\n> > >>>>> -> Index Scan using\n> > >>>>> index_operation_execution_id_asc on results (cost=0.43..0.51 rows=1\n> > >>>>> width=58) (actual time=0.003..0.003 
rows=0 loops=59269)\n> > >>>>> Index Cond: (operation_execution_id =\n> > >>>>> scheduler_operation_executions.id)\n> > >>>>> Filter: ((data IS NOT NULL) AND (data\n> > >>>>> <> '<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[]))\n> > >>>>> AND (object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n> > >>>>> Rows Removed by Filter: 0\n> > >>>>> Buffers: shared hit=209871\n> > >>>>> Planning Time: 1.787 ms\n> > >>>>> Execution Time: 810.634 ms\n> > >>>>>\n> > >>>>> You can see that the second run takes less than one second to run...\n> > >>>>> which is 43 seconds better than the first try, just by re-running the\n> > >>>>> query.\n> > >>>>> Other runs take maybe 1s, 3s, still a long time.\n> > >>>>>\n> > >>>>> How can I improve it to be consistently fast (is it possible to get to\n> > >>>>> several milliseconds?)?\n> > >>>>> What I don't really understand is why the nested loop has 3 loops\n> > >>>>> (three joined tables)?\n> > >>>>> And why does the first index scan indicate ~60k loops? And does it\n> > >>>>> really work? It doesn't seem to filter out any rows.\n> > >>>>>\n> > >>>>> Should I add an index only on (attribute_id, object_id)? And maybe\n> > >>>>> data_access_result?\n> > >>>>> Does it make sens to add it on a text column (results.data)?\n> > >>>>>\n> > >>>>> My tables:\n> > >>>>> https://gist.githubusercontent.com/paulbarbu/0d36271d710349d8fb6102d9a466bb54/raw/7a6946ba7c2adec5b87ed90f343f1aff37432d21/gistfile1.txt\n> > >>>>>\n> > >>>>> As you can see from the gist the foreign keys are indexed. Other\n> > >>>>> indices were added to speed up other queries.\n> > >>>>> Other relevant information (my tables have 3+ millions of rows, not\n> > >>>>> very big I think?), additional info with regards to size also included\n> > >>>>> below.\n> > >>>>> This query has poor performance on two PCs (both running off of HDDs)\n> > >>>>> so I think it has more to do with my indices and query than Postgres\n> > >>>>> config & hardware, will post those if necessary.\n> > >>>>>\n> > >>>>>\n> > >>>>> Size info:\n> > >>>>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n> > >>>>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n> > >>>>> relname IN ('results', 'scheduler_operation_executions',\n> > >>>>> 'scheduler_task_executions');\n> > >>>>> -[ RECORD 1 ]--+-------------------------------\n> > >>>>> relname | results\n> > >>>>> relpages | 65922\n> > >>>>> reltuples | 3.17104e+06\n> > >>>>> relallvisible | 65922\n> > >>>>> relkind | r\n> > >>>>> relnatts | 9\n> > >>>>> relhassubclass | f\n> > >>>>> reloptions |\n> > >>>>> pg_table_size | 588791808\n> > >>>>> -[ RECORD 2 ]--+-------------------------------\n> > >>>>> relname | scheduler_operation_executions\n> > >>>>> relpages | 50116\n> > >>>>> reltuples | 5.95916e+06\n> > >>>>> relallvisible | 50116\n> > >>>>> relkind | r\n> > >>>>> relnatts | 8\n> > >>>>> relhassubclass | f\n> > >>>>> reloptions |\n> > >>>>> pg_table_size | 410697728\n> > >>>>> -[ RECORD 3 ]--+-------------------------------\n> > >>>>> relname | scheduler_task_executions\n> > >>>>> relpages | 40996\n> > >>>>> reltuples | 3.966e+06\n> > >>>>> relallvisible | 40996\n> > >>>>> relkind | r\n> > >>>>> relnatts | 12\n> > >>>>> relhassubclass | f\n> > >>>>> reloptions |\n> > >>>>> pg_table_size | 335970304\n> > >>>>>\n> > >>>>> Thanks for your time!\n> > >>>>>\n> > >>>>> --\n> > >>>>> Barbu Paul - Gheorghe\n> > >>>>>\n> > >>>> Can you create an index on scheduler_task_executions.device_id and run\n> > >>>> it again?\n> 
> >> Can you try this query, please? Although I'm not really sure it'll give\n> > >> you the same results.\n> > >>\n> > >> SELECT DISTINCT ON (results.attribute_id)\n> > >> results.timestamp,\n> > >> results.data\n> > >> FROM results\n> > >> WHERE results.data <> '<NullData/>'\n> > >> AND results.data IS NOT NULL\n> > >> AND results.object_id = 1955\n> > >> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> > >> AND results.data_access_result = 'SUCCESS'\n> > >> AND EXISTS (SELECT 1\n> > >> FROM scheduler_operation_executions\n> > >> JOIN scheduler_task_executions ON\n> > >> scheduler_task_executions.id =\n> > >> scheduler_operation_executions.task_execution_id\n> > >> WHERE scheduler_operation_executions.id =\n> > >> results.operation_execution_id\n> > >> AND scheduler_task_executions.device_id = 97)\n> > >> ORDER BY results.attribute_id, results.timestamp DESC\n> > >> LIMIT 2 -- limit by the length of the attributes list\n> > Can you provide the EXPLAIN ANALYZE plan for the query I sent you?\n>\n>\n>\n> --\n>\n> Barbu Paul - Gheorghe\n\n\n\n-- \n\nBarbu Paul - Gheorghe\n\n\n",
"msg_date": "Thu, 22 Aug 2019 15:49:16 +0300",
"msg_from": "Barbu Paul - Gheorghe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
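The observation above ("buffers read" is greater than zero right after a restart) can be checked directly with the pg_buffercache extension, which shows how much of each relation currently sits in shared_buffers. This is a generic sketch, not something from the thread; the relation names are the ones discussed here:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count cached 8 kB pages per relation for the tables/indexes used by this query.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
  AND c.relname IN ('results',
                    'index_operation_execution_id_asc',
                    'scheduler_operation_executions',
                    'scheduler_task_executions')
GROUP BY c.relname
ORDER BY buffers DESC;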
{
"msg_contents": "Hello,\n\n1/ access scheduler_task_executions \n\tby index with device_id = 97\nseems ok\n\n2/ \nI don't understand why \njoining\nscheduler_task_executions.id=scheduler_operation_executions.task_execution_id\nis done using a parallel hash join \nwhen a nested loop would be better (regarding the number of rows involved)\n\nmaybe because index on scheduler_operation_executions.task_execution_id\n\n \"index_task_execution_id_desc\" btree (task_execution_id DESC NULLS LAST)\n\nis not usable or bloated or because of DESC NULLS LAST ?\n\n\n3/ join with results.operation_execution_id\n\tby index\nseems OK\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Thu, 22 Aug 2019 10:43:10 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
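The hash-join-versus-nested-loop question raised above is cheap to test by disabling hash joins for a single transaction and re-running the EXPLAIN, and the "bloated index" guess can be checked by looking at the index size. A sketch under those assumptions:

-- Size of the index legrand suspects; compare against expectations or a fresh rebuild.
SELECT pg_size_pretty(pg_relation_size('index_task_execution_id_desc'));

-- Steer the planner away from the hash join for one transaction only.
BEGIN;
SET LOCAL enable_hashjoin = off;
-- EXPLAIN (ANALYZE, BUFFERS) <the original query from the first message>;
ROLLBACK;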
{
"msg_contents": "On Tue, Aug 20, 2019 at 10:22 AM Barbu Paul - Gheorghe <\[email protected]> wrote:\n\n>\n> The query takes a long time to execute, specially at first, due to\n> cold caches I think, but the performance varies greatly during a run\n> of the application (while applying the said action by the user several\n> times).\n>\n\nYes, it certainly looks like it is due to cold caches. But you say it is\nslow at first, and then say it varies greatly during a run. Is being slow\nat first the only way it varies greatly, or is there large variation even\nbeyond that?\n\nYou can use pg_rewarm to overcome the cold cache issue when you first start\nup the server.\n\nEXPLAIN (ANALYZE,BUFFERS)\n> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n> results.data FROM results\n> JOIN scheduler_operation_executions ON\n> scheduler_operation_executions.id = results.operation_execution_id\n> JOIN scheduler_task_executions ON scheduler_task_executions.id =\n> scheduler_operation_executions.task_execution_id\n> WHERE scheduler_task_executions.device_id = 97\n> AND results.data <> '<NullData/>'\n> AND results.data IS NOT NULL\n> AND results.object_id = 1955\n> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> AND results.data_access_result = 'SUCCESS'\n> ORDER BY results.attribute_id, results.timestamp DESC\n> LIMIT 2 -- limit by the length of the attributes list\n>\n\nIf you query only on \"results\" with only the conditions that apply to\n\"results\", what is the expected number of rows, and what is the actual\nnumber of rows?\n\n ...\n\nHow can I improve it to be consistently fast (is it possible to get to\n> several milliseconds?)?\n>\n\nMaybe. Depends on the answer to my previous question.\n\n\n> What I don't really understand is why the nested loop has 3 loops\n> (three joined tables)?\n>\n\nEach parallel execution counts as a loop. There are 2 parallel workers,\nplus the leader also participates, making three.\n\n\n> And why does the first index scan indicate ~60k loops? And does it\n> really work? It doesn't seem to filter out any rows.\n>\n\nThe parallel hash join returns about 20,000 rows, but I think that that is\njust for one of the three parallel executions, making about 60,000 in\ntotal. I don't know why one of the nodes report combined execution and the\nother just a single worker. Parallel queries are hard to understand. When\nI want to optimize a query that does parallel execution, I just turn off\nparallelism (\"set max_parallel_workers_per_gather TO 0;\") at first to make\nis simpler to understand.\n\nApparently everything with device_id = 97 just happens to pass all the rest\nof your filters. If you need those filters to make sure you get the right\nanswer in all cases, then you need them. A lifeboat isn't useless just\nbecause your ship didn't happen to sink today.\n\n\n>\n> Should I add an index only on (attribute_id, object_id)? 
And maybe\n> data_access_result?\n> Does it make sens to add it on a text column (results.data)?\n>\n\nWhich parts of query you give are going to change from execution to\nexecution?\n\nAssuming the parts for object_id and attribute_id are variable and the rest\nare static, I think the optimal index would be \"create index on results\n(object_id, attribute_id) where data IS NOT NULL and data <> '<NullData/>'\nand data_access_result = 'SUCCESS'\"\n\nWhy does results.data have two different \"spellings\" for null data?\n\nHowever, if the number of rows from \"results\" that meet all your criteria\nare high, the index won't make much of a difference. The planner has a\nfundamental choice to make, should it seek things with device_id = 97, and\nthen check each of those to see if they satisfy your conditions on\n\"results\" fields conditions. Or, should it seek things that satisfy the\n\"results\" fields conditions, and then check each of those to see if they\nsatisfy device_id = 97. It is currently doing the first of those. Whether\nit should be doing the second, and whether creating the index will cause it\nto switch to using the second, are two (separate) questions which can't be\nanswered with the data given.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Sun, 25 Aug 2019 10:50:55 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
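Written out, the partial index and the parallelism toggle suggested in the message above look like the following; the index name is made up, while the columns and predicate are quoted from the message:

-- Partial index covering only rows that can ever satisfy the query's filters.
CREATE INDEX results_object_attr_success_idx
    ON results (object_id, attribute_id)
 WHERE data IS NOT NULL
   AND data <> '<NullData/>'
   AND data_access_result = 'SUCCESS';

-- Simplify plans while tuning by disabling parallel execution for this session.
SET max_parallel_workers_per_gather TO 0;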
{
"msg_contents": "On Sun, Aug 25, 2019 at 5:51 PM Jeff Janes <[email protected]> wrote:\n>\n> Yes, it certainly looks like it is due to cold caches. But you say it is slow at first, and then say it varies greatly during a run. Is being slow at first the only way it varies greatly, or is there large variation even beyond that?\n\nThere is a great variation in run times (hundreds of ms to several\nseconds) even beyond the start of the server.\nThe query runs several times with a different device_id, object_id and\nanother list of attribute_ids and it varies from one another.\n\n> You can use pg_rewarm to overcome the cold cache issue when you first start up the server.\n\nI cannot find anything related to pg_rewarm other than some dead ends\nfrom 2013 from which I gather it only works on Linux.\nAnyway, I have problems even beyond the start of the database, it's\njust easier to reproduce the problem at the start, otherwise I have to\nleave the application running for a while (to invalidate part of the\ncache I think).\n\n> If you query only on \"results\" with only the conditions that apply to \"results\", what is the expected number of rows, and what is the actual number of rows?\n\nExplain for the query on results only: https://explain.depesz.com/s/Csau\n\nEXPLAIN (ANALYZE,BUFFERS)\n SELECT DISTINCT ON (results.attribute_id) results.timestamp,\nresults.data FROM results\n WHERE\n results.data <> '<NullData/>'\n AND results.data IS NOT NULL\n AND results.object_id = 1955\n AND results.attribute_id IN (4, 5) -- possibly a longer list here\n AND results.data_access_result = 'SUCCESS'\n ORDER BY results.attribute_id, results.timestamp DESC\n LIMIT 2 -- limit by the length of the attributes list\n\nLimit (cost=166793.28..167335.52 rows=2 width=54) (actual\ntime=134783.510..134816.941 rows=2 loops=1)\n Buffers: shared hit=19086 read=46836, temp read=1522 written=3311\n -> Unique (cost=166793.28..168420.01 rows=6 width=54) (actual\ntime=134783.507..134816.850 rows=2 loops=1)\n Buffers: shared hit=19086 read=46836, temp read=1522 written=3311\n -> Sort (cost=166793.28..167606.64 rows=325346 width=54)\n(actual time=134783.505..134802.602 rows=205380 loops=1)\n Sort Key: attribute_id, \"timestamp\" DESC\n Sort Method: external merge Disk: 26456kB\n Buffers: shared hit=19086 read=46836, temp read=1522 written=3311\n -> Gather (cost=1000.00..125882.23 rows=325346\nwidth=54) (actual time=32.325..133815.793 rows=410749 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=19086 read=46836\n -> Parallel Seq Scan on results\n(cost=0.00..92347.63 rows=135561 width=54) (actual\ntime=18.496..133871.888 rows=136916 loops=3)\n Filter: ((data IS NOT NULL) AND (data <>\n'<NullData/>'::text) AND (attribute_id = ANY ('{4,5}'::integer[])) AND\n(object_id = 1955) AND (data_access_result = 'SUCCESS'::text))\n Rows Removed by Filter: 920123\n Buffers: shared hit=19086 read=46836\nPlanning Time: 5.071 ms\nExecution Time: 134874.687 ms\n\nAs far as I can see the estimates are close to the real returned rows\nin the \"Parallel Seq Scan on results\".\nThe numbers are similar (of course) if I turn off query parallelism.\nOr should I VACUUM ANALYZE again?\nI'm sure I ran it enough.\n\n>> How can I improve it to be consistently fast (is it possible to get to\n>> several milliseconds?)?\n>\n>\n> Maybe. Depends on the answer to my previous question.\n>\n>>\n>> What I don't really understand is why the nested loop has 3 loops\n>> (three joined tables)?\n>\n>\n> Each parallel execution counts as a loop. 
There are 2 parallel workers, plus the leader also participates, making three.\n>\n>>\n>> And why does the first index scan indicate ~60k loops? And does it\n>> really work? It doesn't seem to filter out any rows.\n>\n>\n> The parallel hash join returns about 20,000 rows, but I think that that is just for one of the three parallel executions, making about 60,000 in total. I don't know why one of the nodes report combined execution and the other just a single worker. Parallel queries are hard to understand. When I want to optimize a query that does parallel execution, I just turn off parallelism (\"set max_parallel_workers_per_gather TO 0;\") at first to make is simpler to understand.\n>\n> Apparently everything with device_id = 97 just happens to pass all the rest of your filters. If you need those filters to make sure you get the right answer in all cases, then you need them. A lifeboat isn't useless just because your ship didn't happen to sink today.\n\nThe part with the workers makes sense, thanks.\nFor the condition, I thought there is something more contrived going\non in the planner, but I failed to see it was that simple: there is no\nneed to remove anything since everything matches the condition.\n\n>> Should I add an index only on (attribute_id, object_id)? And maybe\n>> data_access_result?\n>> Does it make sens to add it on a text column (results.data)?\n>\n>\n> Which parts of query you give are going to change from execution to execution?\n>\n> Assuming the parts for object_id and attribute_id are variable and the rest are static, I think the optimal index would be \"create index on results (object_id, attribute_id) where data IS NOT NULL and data <> '<NullData/>' and data_access_result = 'SUCCESS'\"\n\nThat's right, object_id and attribute_id are variable, the device_id\nis variable as well.\nIf I create that index it is not chosen by the planner when executing\nthe full \"joined\" query, it is though if I run it only on \"results\"\nrelated conditions, without joining, which makes sense corroborated\nwith what you said below.\n\n> Why does results.data have two different \"spellings\" for null data?\n\nOne where the device couldn't be contacted (NULL), one where the\ndevice was contacted and for my (object_id, attribute_ids)\ncombinations actually returned null data ('<NullData/>').\n\n> However, if the number of rows from \"results\" that meet all your criteria are high, the index won't make much of a difference. The planner has a fundamental choice to make, should it seek things with device_id = 97, and then check each of those to see if they satisfy your conditions on \"results\" fields conditions. Or, should it seek things that satisfy the \"results\" fields conditions, and then check each of those to see if they satisfy device_id = 97. It is currently doing the first of those. Whether it should be doing the second, and whether creating the index will cause it to switch to using the second, are two (separate) questions which can't be answered with the data given.\n\nSo maybe I should de-normalize and place the device_id column into the\n\"results\" table and add it to the index in your suggestion above?\n\n> Cheers,\n>\n> Jeff\n\nThank you for the detailed response.\n\n-- \n\nBarbu Paul - Gheorghe\n\n\n",
"msg_date": "Mon, 26 Aug 2019 11:25:57 +0300",
"msg_from": "Barbu Paul - Gheorghe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
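One detail in the results-only plan above: the sort spills to disk ("Sort Method: external merge  Disk: 26456kB"), so work_mem is too small for the few hundred thousand rows being sorted, although most of the 134 s is the parallel seq scan itself. A quick sanity check, with 64MB chosen only as an example value comfortably above the spill size:

-- Per-session setting; re-run the EXPLAIN afterwards and check that
-- "Sort Method" switches from "external merge" to "quicksort".
SET work_mem = '64MB';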
{
"msg_contents": "On Mon, Aug 26, 2019 at 11:25:57AM +0300, Barbu Paul - Gheorghe wrote:\n> On Sun, Aug 25, 2019 at 5:51 PM Jeff Janes <[email protected]> wrote:\n> \n> > You can use pg_rewarm to overcome the cold cache issue when you first start up the server.\n> \n> I cannot find anything related to pg_rewarm other than some dead ends\n> from 2013 from which I gather it only works on Linux.\n\nIt's a current extension - spelled PREwarm, not rewarm, and definitely not\nreworm.\n\nhttps://www.postgresql.org/docs/current/pgprewarm.html\n\n\n",
"msg_date": "Mon, 26 Aug 2019 06:51:41 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
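For completeness, warming the cache with pg_prewarm (the extension linked above) after a restart looks roughly like this; which relations are worth preloading is an assumption based on the plans in this thread:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the hot table and indexes into shared_buffers after a restart.
SELECT pg_prewarm('results');
SELECT pg_prewarm('index_operation_execution_id_asc');
SELECT pg_prewarm('scheduler_task_executions_device_id_idx');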
{
"msg_contents": "On Mon, Aug 26, 2019 at 4:26 AM Barbu Paul - Gheorghe <\[email protected]> wrote:\n\n> On Sun, Aug 25, 2019 at 5:51 PM Jeff Janes <[email protected]> wrote:\n> >\n> > Yes, it certainly looks like it is due to cold caches. But you say it\n> is slow at first, and then say it varies greatly during a run. Is being\n> slow at first the only way it varies greatly, or is there large variation\n> even beyond that?\n>\n> There is a great variation in run times (hundreds of ms to several\n> seconds) even beyond the start of the server.\n> The query runs several times with a different device_id, object_id and\n> another list of attribute_ids and it varies from one another.\n>\n\nIf you run the exact same query (with the same parameters) once the cache\nis hot, is the performance than pretty consistent within a given\nparameterization? Or is still variable even within one parameterization.\n\nIf they are consistent, could you capture a fast parameterizaton and a slow\nparameterization and show then and the plans or them?\n\n\n> > You can use pg_rewarm to overcome the cold cache issue when you first\n> start up the server.\n>\n> I cannot find anything related to pg_rewarm other than some dead ends\n> from 2013 from which I gather it only works on Linux.\n> Anyway, I have problems even beyond the start of the database, it's\n> just easier to reproduce the problem at the start, otherwise I have to\n> leave the application running for a while (to invalidate part of the\n> cache I think).\n>\n\nSorry, should have been pg_prewarm, not pg_rewarm. Unfortunately, you\nprobably have two different problems. Reproducing it one way is unlikely\nto help you solve the other one.\n\n\n> > If you query only on \"results\" with only the conditions that apply to\n> \"results\", what is the expected number of rows, and what is the actual\n> number of rows?\n>\n> Explain for the query on results only: https://explain.depesz.com/s/Csau\n\n\n>\n> EXPLAIN (ANALYZE,BUFFERS)\n> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n> results.data FROM results\n> WHERE\n> results.data <> '<NullData/>'\n> AND results.data IS NOT NULL\n> AND results.object_id = 1955\n> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n> AND results.data_access_result = 'SUCCESS'\n> ORDER BY results.attribute_id, results.timestamp DESC\n> LIMIT 2 -- limit by the length of the attributes list\n>\n> Limit (cost=166793.28..167335.52 rows=2 width=54) (actual\n> time=134783.510..134816.941 rows=2 loops=1)\n> Buffers: shared hit=19086 read=46836, temp read=1522 written=3311\n> -> Unique (cost=166793.28..168420.01 rows=6 width=54) (actual\n> time=134783.507..134816.850 rows=2 loops=1)\n> Buffers: shared hit=19086 read=46836, temp read=1522 written=3311\n> -> Sort (cost=166793.28..167606.64 rows=325346 width=54)\n> (actual time=134783.505..134802.602 rows=205380 loops=1)\n> Sort Key: attribute_id, \"timestamp\" DESC\n>\n\nDo you have an index on (attribute_id, \"timestamp\" DESC)? That might\nreally help if it can step through the rows already sorted, filter out the\nones that need filtering out (building the partial index might help here),\nhit the other two tables for each of those rows using a nested loop, and\nstop after 2 rows which meet those conditions. 
The problem is if you have\nto step through an enormous number for rows before finding 2 of them with\ndevice_id=97.\n\n\n> So maybe I should de-normalize and place the device_id column into the\n> \"results\" table and add it to the index in your suggestion above?\n>\n\nYes, if nothing else works, that should.  How hard would it be to maintain\nthat column in the correct state?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 2 Sep 2019 17:57:17 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erratically behaving query needs optimization"
},
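Spelled out, the ordered index suggested in the message above would look something like this; the name is hypothetical, and folding in the earlier partial-index predicate is my assumption rather than something stated explicitly:

-- Lets the DISTINCT ON / ORDER BY walk rows already sorted per attribute,
-- stopping once enough matching rows have been found.
CREATE INDEX results_attr_ts_idx
    ON results (attribute_id, "timestamp" DESC)
 WHERE data IS NOT NULL
   AND data <> '<NullData/>'
   AND data_access_result = 'SUCCESS';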
{
"msg_contents": "On Tue, Sep 3, 2019 at 12:57 AM Jeff Janes <[email protected]> wrote:\n>\n> On Mon, Aug 26, 2019 at 4:26 AM Barbu Paul - Gheorghe <[email protected]> wrote:\n>>\n>> On Sun, Aug 25, 2019 at 5:51 PM Jeff Janes <[email protected]> wrote:\n>> >\n>> > Yes, it certainly looks like it is due to cold caches. But you say it is slow at first, and then say it varies greatly during a run. Is being slow at first the only way it varies greatly, or is there large variation even beyond that?\n>>\n>> There is a great variation in run times (hundreds of ms to several\n>> seconds) even beyond the start of the server.\n>> The query runs several times with a different device_id, object_id and\n>> another list of attribute_ids and it varies from one another.\n>\n>\n> If you run the exact same query (with the same parameters) once the cache is hot, is the performance than pretty consistent within a given parameterization? Or is still variable even within one parameterization.\n>\n> If they are consistent, could you capture a fast parameterizaton and a slow parameterization and show then and the plans or them?\n\nCannot test right now, but I think I had both cases.\nIn the same parametrization I had both fast and slow runs and of\ncourse it varied when changed parametrization.\n\n>>\n>> EXPLAIN (ANALYZE,BUFFERS)\n>> SELECT DISTINCT ON (results.attribute_id) results.timestamp,\n>> results.data FROM results\n>> WHERE\n>> results.data <> '<NullData/>'\n>> AND results.data IS NOT NULL\n>> AND results.object_id = 1955\n>> AND results.attribute_id IN (4, 5) -- possibly a longer list here\n>> AND results.data_access_result = 'SUCCESS'\n>> ORDER BY results.attribute_id, results.timestamp DESC\n>> LIMIT 2 -- limit by the length of the attributes list\n>>\n>> Limit (cost=166793.28..167335.52 rows=2 width=54) (actual\n>> time=134783.510..134816.941 rows=2 loops=1)\n>> Buffers: shared hit=19086 read=46836, temp read=1522 written=3311\n>> -> Unique (cost=166793.28..168420.01 rows=6 width=54) (actual\n>> time=134783.507..134816.850 rows=2 loops=1)\n>> Buffers: shared hit=19086 read=46836, temp read=1522 written=3311\n>> -> Sort (cost=166793.28..167606.64 rows=325346 width=54)\n>> (actual time=134783.505..134802.602 rows=205380 loops=1)\n>> Sort Key: attribute_id, \"timestamp\" DESC\n>\n>\n> Do you have an index on (attribute_id, \"timestamp\" DESC)? That might really help if it can step through the rows already sorted, filter out the ones that need filtering out (building the partial index might help here), hit the other two tables for each of those rows using a nested loop, and stop after 2 rows which meet those conditions. The problem is if you have to step through an enormous number for rows before finding 2 of them with device_id=97.\n\nI tried that index and it wasn't used, it still chose to do an\nin-memory quicksort of ~600 kB. I wonder why?\n\n>>\n>> So maybe I should de-normalize and place the device_id column into the\n>> \"results\" table and add it to the index in your suggestion above?\n>\n>\n> Yes, if nothing else works, that should. How hard would it be to maintain that column in the correct state?\n\nIn the end I used this solution. It works ... 
fine, still I see slow\nresponse times when the caches are cold, but afterwards things seem to\nbe fine (for now at least).\nI had this in mind for a while, but wasn't convinced it was \"good\ndesign\" since I had to denormalize the DB, but seeing the erratic\nbehaviour of the query, I finally gave up on using smart indices\ntrying to satisfy the planner.\n\nIt's also the first time I do this outside of a controlled learning\nenvironment so there could be things that I missed.\n\nThanks for the help, all of you!\n\n> Cheers,\n>\n> Jeff\n\n\n\n-- \n\nBarbu Paul - Gheorghe\n\n\n",
"msg_date": "Tue, 3 Sep 2019 15:41:33 +0300",
"msg_from": "Barbu Paul - Gheorghe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Erratically behaving query needs optimization"
}
] |
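To make the fix that closed this thread concrete, below is a minimal sketch of the denormalization Barbu describes: copying device_id onto the "results" table, keeping it in sync with a trigger, and indexing it together with (attribute_id, "timestamp" DESC). The "objects" mapping table, the column types, and the index name are assumptions for illustration only, since the real schema is not shown here.

```
-- Hedged sketch only: "objects" and all types/names are assumed, not the
-- poster's actual schema.
ALTER TABLE results ADD COLUMN device_id bigint;

-- Backfill the denormalized column once.
UPDATE results r
SET    device_id = o.device_id
FROM   objects o
WHERE  o.object_id = r.object_id;

-- Keep it in sync on future writes.
CREATE OR REPLACE FUNCTION results_set_device_id() RETURNS trigger AS $$
BEGIN
    SELECT o.device_id
      INTO NEW.device_id
      FROM objects o
     WHERE o.object_id = NEW.object_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER results_set_device_id
BEFORE INSERT OR UPDATE OF object_id ON results
FOR EACH ROW EXECUTE PROCEDURE results_set_device_id();

-- Partial index matching the filters of the DISTINCT ON query, so the scan
-- can walk rows already ordered per attribute and stop at the LIMIT.
CREATE INDEX results_device_attr_ts_idx
    ON results (device_id, attribute_id, "timestamp" DESC)
 WHERE data IS NOT NULL
   AND data <> '<NullData/>'
   AND data_access_result = 'SUCCESS';
```

With device_id stored locally, the ORDER BY attribute_id, "timestamp" DESC ... LIMIT pattern can in principle be satisfied from this index without a sort, which is the behaviour Jeff was aiming for.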
[
{
"msg_contents": "Hi all,\n\ntoday I debugged a query that was executing about 100x slower than expected, and was very surprised by what I found.\n\nI'm posting to this list to see if this might be an issue that should be fixed in PostgreSQL itself.\n\nBelow is a simplified version of the query in question:\n\nSET work_mem='64MB';\nEXPLAIN ANALYZE\nSELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b\nUNION\nSELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b;\n\nHashAggregate (cost=80020.01..100020.01 rows=2000000 width=8) (actual time=19.349..23.123 rows=1 loops=1)\n Group Key: a.a, b.b\n -> Append (cost=0.01..70020.01 rows=2000000 width=8) (actual time=0.022..0.030 rows=2 loops=1)\n -> Nested Loop (cost=0.01..20010.01 rows=1000000 width=8) (actual time=0.021..0.022 rows=1 loops=1)\n -> Function Scan on generate_series a (cost=0.00..10.00 rows=1000 width=4) (actual time=0.014..0.014 rows=1 loops=1)\n -> Function Scan on generate_series b (cost=0.00..10.00 rows=1000 width=4) (actual time=0.002..0.003 rows=1 loops=1)\n -> Nested Loop (cost=0.01..20010.01 rows=1000000 width=8) (actual time=0.006..0.007 rows=1 loops=1)\n -> Function Scan on generate_series a_1 (cost=0.00..10.00 rows=1000 width=4) (actual time=0.002..0.003 rows=1 loops=1)\n -> Function Scan on generate_series b_1 (cost=0.00..10.00 rows=1000 width=4) (actual time=0.002..0.002 rows=1 loops=1)\nPlanning Time: 0.101 ms\nExecution Time: 45.986 ms\n\nAs you can see, it takes over 45ms (!) to execute what appear to be a very simple query. How is this possible?\n\nBased on my debugging, I think the following is going on:\n\n1. The query overestimates the final output rows by a factor of 2 million. [1]\n2. The executor uses the bad estimate to allocate a huge hash table [2], and the increased work_mem [3] gives it enough rope to hang itself [4].\n3. Somehow EXPLAIN gets confused by this and only ends up tracking 23ms of the query execution instead of 45ms [5].\n\nI'm certainly a novice when it comes to PostgreSQL internals, but I'm wondering if this could be fixed by taking a more dynamic approach for allocating HashAggregate hash tables?\n\nAnyway, for my query using UNION ALL was acceptable, so I'm not too troubled by this. I was just really caught by surprise that bad estimates can not only cause bad query plans, but also cause good query plans to execute extremely slowly. I had never seen that before.\n\nBest Regards\nFelix\n\n[1] My actual query had bad estimates for other reasons (GIN Index), but that's another story. The query above was of course deliberately designed to have bad estimates.\n[2] nodeAgg.c: build_hash_table()\n[3] A lot of queries in my application benefit from increased work_mem.\n[4] execGrouping.c: nbuckets = Min(nbuckets, (long) ((work_mem * 1024L) / entrysize));\n[5] In my actual query it was even worse, only 6 out of 40ms of the total execution time were reported as being spent in the query nodes.\n\n",
"msg_date": "Tue, 20 Aug 2019 17:11:58 +0200",
"msg_from": "=?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extremely slow HashAggregate in simple UNION query"
},
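Felix mentions that UNION ALL was acceptable for his real query; a minimal sketch of that rewrite of his reproducer is shown below. The point is simply that UNION ALL skips the duplicate-eliminating HashAggregate, so the over-allocated hash table is never built. This only helps when duplicates do not actually need to be removed.

```
-- Same shape as the reproducer above, but with UNION ALL: no duplicate
-- elimination, therefore no HashAggregate and no oversized hash table.
SET work_mem = '64MB';
EXPLAIN ANALYZE
SELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b
UNION ALL
SELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b;
```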
{
"msg_contents": "I believe this would be relevant-\nhttps://www.cybertec-postgresql.com/en/optimizer-support-functions/\n\nIt seems there is hope down the road to improve those estimates.",
"msg_date": "Tue, 20 Aug 2019 10:19:05 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "út 20. 8. 2019 v 17:12 odesílatel Felix Geisendörfer <[email protected]>\nnapsal:\n\n> Hi all,\n>\n> today I debugged a query that was executing about 100x slower than\n> expected, and was very surprised by what I found.\n>\n> I'm posting to this list to see if this might be an issue that should be\n> fixed in PostgreSQL itself.\n>\n> Below is a simplified version of the query in question:\n>\n> SET work_mem='64MB';q\n> EXPLAIN ANALYZE\n> SELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b\n> UNION\n> SELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b;\n>\n> HashAggregate (cost=80020.01..100020.01 rows=2000000 width=8) (actual\n> time=19.349..23.123 rows=1 loops=1)\n> Group Key: a.a, b.b\n> -> Append (cost=0.01..70020.01 rows=2000000 width=8) (actual\n> time=0.022..0.030 rows=2 loops=1)\n> -> Nested Loop (cost=0.01..20010.01 rows=1000000 width=8)\n> (actual time=0.021..0.022 rows=1 loops=1)\n> -> Function Scan on generate_series a (cost=0.00..10.00\n> rows=1000 width=4) (actual time=0.014..0.014 rows=1 loops=1)\n> -> Function Scan on generate_series b (cost=0.00..10.00\n> rows=1000 width=4) (actual time=0.002..0.003 rows=1 loops=1)\n> -> Nested Loop (cost=0.01..20010.01 rows=1000000 width=8)\n> (actual time=0.006..0.007 rows=1 loops=1)\n> -> Function Scan on generate_series a_1 (cost=0.00..10.00\n> rows=1000 width=4) (actual time=0.002..0.003 rows=1 loops=1)\n> -> Function Scan on generate_series b_1 (cost=0.00..10.00\n> rows=1000 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n> Planning Time: 0.101 ms\n> Execution Time: 45.986 ms\n>\n> As you can see, it takes over 45ms (!) to execute what appear to be a very\n> simple query. How is this possible?\n>\n> Based on my debugging, I think the following is going on:\n>\n> 1. The query overestimates the final output rows by a factor of 2 million.\n> [1]\n> 2. The executor uses the bad estimate to allocate a huge hash table [2],\n> and the increased work_mem [3] gives it enough rope to hang itself [4].\n> 3. Somehow EXPLAIN gets confused by this and only ends up tracking 23ms of\n> the query execution instead of 45ms [5].\n>\n> I'm certainly a novice when it comes to PostgreSQL internals, but I'm\n> wondering if this could be fixed by taking a more dynamic approach for\n> allocating HashAggregate hash tables?\n>\n\nTable functions has not statistics - and default ROWS value is 1000, so it\nis reason why there is very ever estimating. This specific case can be\nsolved better in PostgreSQL 12, where some functions like generate_series\nhas support functions with better estimations.\n\nYou can get profile of this query with some profiler\n\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\nand you can see a reason why the query is slow.\n\nThe speed on PostgreSQL 12 of your example is good - about 1ms\n\nBut when I repeat your example, the speed was more terrible. On second hand\n- nobody can expect optimal plan when there is this crazy miss estimation.\nLooks so some wrong is inside some cleaning part.\n\nPavel\n\n\n\n\n> Anyway, for my query using UNION ALL was acceptable, so I'm not too\n> troubled by this. I was just really caught by surprise that bad estimates\n> can not only cause bad query plans, but also cause good query plans to\n> execute extremely slowly. I had never seen that before.\n>\n> Best Regards\n> Felix\n>\n> [1] My actual query had bad estimates for other reasons (GIN Index), but\n> that's another story. 
The query above was of course deliberately designed\n> to have bad estimates.\n> [2] nodeAgg.c: build_hash_table()\n> [3] A lot of queries in my application benefit from increased work_mem.\n> [4] execGrouping.c: nbuckets = Min(nbuckets, (long) ((work_mem * 1024L) /\n> entrysize));\n> [5] In my actual query it was even worse, only 6 out of 40ms of the total\n> execution time were reported as being spent in the query nodes.\n>\n>",
"msg_date": "Tue, 20 Aug 2019 18:57:03 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-20 17:11:58 +0200, Felix Geisend�rfer wrote:\n> today I debugged a query that was executing about 100x slower than expected, and was very surprised by what I found.\n>\n> I'm posting to this list to see if this might be an issue that should be fixed in PostgreSQL itself.\n>\n> Below is a simplified version of the query in question:\n>\n> SET work_mem='64MB';\n> EXPLAIN ANALYZE\n> SELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b\n> UNION\n> SELECT * FROM generate_series(1, 1) a, generate_series(1, 1) b;\n>\n> HashAggregate (cost=80020.01..100020.01 rows=2000000 width=8) (actual time=19.349..23.123 rows=1 loops=1)\n\nFWIW, that's not a mis-estimate I'm getting on master ;). Obviously\nthat doesn't actually address your concern...\n\n\n> 1. The query overestimates the final output rows by a factor of 2 million. [1]\n\nRight. There's not really that much we can do about that in\ngeneral. That'll always be possible. Although we can obviously improve\nthe estimates a good bit more.\n\n\n> I'm certainly a novice when it comes to PostgreSQL internals, but I'm\n> wondering if this could be fixed by taking a more dynamic approach for\n> allocating HashAggregate hash tables?\n\nUnder-sizing the hashtable just out of caution will have add overhead to\na lot more common cases. That requires copying data around during\ngrowth, which is far far from free. Or you can use hashtables that don't\nneed to copy, but they're also considerably slower in the more common\ncases.\n\n\n> 3. Somehow EXPLAIN gets confused by this and only ends up tracking 23ms of the query execution instead of 45ms [5].\n\nWell, there's plenty work that's not attributed to nodes. IIRC we don't\ntrack executor startup/shutdown overhead on a per-node basis. So I don't\nreally think this is necessarily something that suspicious. Which\nindeed seems to be what's happening here (this is with 11, to be able to\nhit the problem with your reproducer):\n\n+ 33.01% postgres postgres [.] tuplehash_iterate\n- 18.39% postgres libc-2.28.so [.] __memset_avx2_erms\n - 90.94% page_fault\n __memset_avx2_erms\n tuplehash_allocate\n tuplehash_create\n BuildTupleHashTableExt\n build_hash_table\n ExecInitAgg\n ExecInitNode\n InitPlan\n standard_ExecutorStart\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Aug 2019 10:32:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "Hi,\n\n> On 20. Aug 2019, at 19:32, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n> On 2019-08-20 17:11:58 +0200, Felix Geisendörfer wrote:\n>> \n>> HashAggregate (cost=80020.01..100020.01 rows=2000000 width=8) (actual time=19.349..23.123 rows=1 loops=1)\n> \n> FWIW, that's not a mis-estimate I'm getting on master ;). Obviously\n> that doesn't actually address your concern...\n\nI suppose this is thanks to the new optimizer support functions\nmentioned by Michael and Pavel?\n\nOf course I'm very excited about those improvements, but yeah, my \nreal query is mis-estimating things for totally different reasons not\ninvolving any SRFs.\n\n>> I'm certainly a novice when it comes to PostgreSQL internals, but I'm\n>> wondering if this could be fixed by taking a more dynamic approach for\n>> allocating HashAggregate hash tables?\n> \n> Under-sizing the hashtable just out of caution will have add overhead to\n> a lot more common cases. That requires copying data around during\n> growth, which is far far from free. Or you can use hashtables that don't\n> need to copy, but they're also considerably slower in the more common\n> cases.\n\nHow does PostgreSQL currently handle the case where the initial hash\ntable is under-sized due to the planner having underestimated things?\nAre the growth costs getting amortized by using an exponential growth\nfunction?\n\nAnyway, I can accept my situation to be an edge case that doesn't justify\nmaking things more complicated.\n\n>> 3. Somehow EXPLAIN gets confused by this and only ends up tracking 23ms of the query execution instead of 45ms [5].\n> \n> Well, there's plenty work that's not attributed to nodes. IIRC we don't\n> track executor startup/shutdown overhead on a per-node basis.\n\nI didn't know that, thanks for clarifying : ).\n\n",
"msg_date": "Tue, 20 Aug 2019 19:55:56 +0200",
"msg_from": "=?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-20 19:55:56 +0200, Felix Geisend�rfer wrote:\n> > On 20. Aug 2019, at 19:32, Andres Freund <[email protected]> wrote:\n> > FWIW, that's not a mis-estimate I'm getting on master ;). Obviously\n> > that doesn't actually address your concern...\n> \n> I suppose this is thanks to the new optimizer support functions\n> mentioned by Michael and Pavel?\n\nRight.\n\n\n> > Under-sizing the hashtable just out of caution will have add overhead to\n> > a lot more common cases. That requires copying data around during\n> > growth, which is far far from free. Or you can use hashtables that don't\n> > need to copy, but they're also considerably slower in the more common\n> > cases.\n> \n> How does PostgreSQL currently handle the case where the initial hash\n> table is under-sized due to the planner having underestimated things?\n> Are the growth costs getting amortized by using an exponential growth\n> function?\n\nYes. But that's still far from free.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Aug 2019 11:27:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "On Tue, Aug 20, 2019 at 11:12 AM Felix Geisendörfer <[email protected]>\nwrote:\n ...\n\n\n> [1] My actual query had bad estimates for other reasons (GIN Index), but\n> that's another story. The query above was of course deliberately designed\n> to have bad estimates.\n>\n\nAs noted elsewhere, v12 thwarts your attempts to deliberately design the\nbad estimates. You can still get them, you just have to work a bit harder\nat it:\n\nCREATE FUNCTION j (bigint, bigint) returns setof bigint as $$ select\ngenerate_series($1,$2) $$ rows 1000 language sql;\n\nEXPLAIN ANALYZE\nSELECT * FROM j(1, 1) a, j(1, 1) b\nUNION\nSELECT * FROM j(1, 1) a, j(1, 1) b;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=80021.00..100021.00 rows=2000000 width=16) (actual\ntime=11.332..13.241 rows=1 loops=1)\n Group Key: a.a, b.b\n -> Append (cost=0.50..70021.00 rows=2000000 width=16) (actual\ntime=0.118..0.163 rows=2 loops=1)\n -> Nested Loop (cost=0.50..20010.50 rows=1000000 width=16)\n(actual time=0.117..0.118 rows=1 loops=1)\n -> Function Scan on j a (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.087..0.088 rows=1 loops=1)\n -> Function Scan on j b (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.027..0.027 rows=1 loops=1)\n -> Nested Loop (cost=0.50..20010.50 rows=1000000 width=16)\n(actual time=0.044..0.044 rows=1 loops=1)\n -> Function Scan on j a_1 (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.022..0.022 rows=1 loops=1)\n -> Function Scan on j b_1 (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.020..0.021 rows=1 loops=1)\n Planning Time: 0.085 ms\n Execution Time: 69.277 ms\n(11 rows)\n\nBut the same advance in v12 which makes it harder to fool with your test\ncase also opens the possibility of fixing your real case.\n\nI've made an extension which has a function which always returns true, but\nlies about how often it is expected to return true. See the attachment.\nWith that, you can fine-tune the planner.\n\nCREATE EXTENSION pg_selectivities ;\n\nEXPLAIN ANALYZE\nSELECT * FROM j(1, 1) a, j(1, 1) b where pg_always(0.00001)\nUNION\nSELECT * FROM j(1, 1) a, j(1, 1) b where pg_always(0.00001);\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=45021.40..45021.60 rows=20 width=16) (actual\ntime=0.226..0.227 rows=1 loops=1)\n Group Key: a.a, b.b\n -> Append (cost=0.50..45021.30 rows=20 width=16) (actual\ntime=0.105..0.220 rows=2 loops=1)\n -> Nested Loop (cost=0.50..22510.50 rows=10 width=16) (actual\ntime=0.104..0.105 rows=1 loops=1)\n Join Filter: pg_always('1e-05'::double precision)\n -> Function Scan on j a (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.066..0.066 rows=1 loops=1)\n -> Function Scan on j b (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.035..0.035 rows=1 loops=1)\n -> Nested Loop (cost=0.50..22510.50 rows=10 width=16) (actual\ntime=0.112..0.113 rows=1 loops=1)\n Join Filter: pg_always('1e-05'::double precision)\n -> Function Scan on j a_1 (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.077..0.077 rows=1 loops=1)\n -> Function Scan on j b_1 (cost=0.25..10.25 rows=1000\nwidth=8) (actual time=0.034..0.034 rows=1 loops=1)\n Planning Time: 0.139 ms\n Execution Time: 0.281 ms\n\nCheers,\n\nJeff",
"msg_date": "Wed, 21 Aug 2019 14:26:33 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "čt 22. 8. 2019 v 3:11 odesílatel Jeff Janes <[email protected]> napsal:\n\n> On Tue, Aug 20, 2019 at 11:12 AM Felix Geisendörfer <[email protected]>\n> wrote:\n> ...\n>\n>\n>> [1] My actual query had bad estimates for other reasons (GIN Index), but\n>> that's another story. The query above was of course deliberately designed\n>> to have bad estimates.\n>>\n>\n> As noted elsewhere, v12 thwarts your attempts to deliberately design the\n> bad estimates. You can still get them, you just have to work a bit harder\n> at it:\n>\n> CREATE FUNCTION j (bigint, bigint) returns setof bigint as $$ select\n> generate_series($1,$2) $$ rows 1000 language sql;\n>\n> EXPLAIN ANALYZE\n> SELECT * FROM j(1, 1) a, j(1, 1) b\n> UNION\n> SELECT * FROM j(1, 1) a, j(1, 1) b;\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=80021.00..100021.00 rows=2000000 width=16) (actual\n> time=11.332..13.241 rows=1 loops=1)\n> Group Key: a.a, b.b\n> -> Append (cost=0.50..70021.00 rows=2000000 width=16) (actual\n> time=0.118..0.163 rows=2 loops=1)\n> -> Nested Loop (cost=0.50..20010.50 rows=1000000 width=16)\n> (actual time=0.117..0.118 rows=1 loops=1)\n> -> Function Scan on j a (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.087..0.088 rows=1 loops=1)\n> -> Function Scan on j b (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.027..0.027 rows=1 loops=1)\n> -> Nested Loop (cost=0.50..20010.50 rows=1000000 width=16)\n> (actual time=0.044..0.044 rows=1 loops=1)\n> -> Function Scan on j a_1 (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.022..0.022 rows=1 loops=1)\n> -> Function Scan on j b_1 (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.020..0.021 rows=1 loops=1)\n> Planning Time: 0.085 ms\n> Execution Time: 69.277 ms\n> (11 rows)\n>\n> But the same advance in v12 which makes it harder to fool with your test\n> case also opens the possibility of fixing your real case.\n>\n\nI think so much more interesting should be long time after query processing\n- last row was processed in 13ms, but Execution Time was 69ms .. so some\ncleaning is 56ms - that is pretty long.\n\n\n> I've made an extension which has a function which always returns true, but\n> lies about how often it is expected to return true. 
See the attachment.\n> With that, you can fine-tune the planner.\n>\n> CREATE EXTENSION pg_selectivities ;\n>\n> EXPLAIN ANALYZE\n> SELECT * FROM j(1, 1) a, j(1, 1) b where pg_always(0.00001)\n> UNION\n> SELECT * FROM j(1, 1) a, j(1, 1) b where pg_always(0.00001);\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=45021.40..45021.60 rows=20 width=16) (actual\n> time=0.226..0.227 rows=1 loops=1)\n> Group Key: a.a, b.b\n> -> Append (cost=0.50..45021.30 rows=20 width=16) (actual\n> time=0.105..0.220 rows=2 loops=1)\n> -> Nested Loop (cost=0.50..22510.50 rows=10 width=16) (actual\n> time=0.104..0.105 rows=1 loops=1)\n> Join Filter: pg_always('1e-05'::double precision)\n> -> Function Scan on j a (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.066..0.066 rows=1 loops=1)\n> -> Function Scan on j b (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.035..0.035 rows=1 loops=1)\n> -> Nested Loop (cost=0.50..22510.50 rows=10 width=16) (actual\n> time=0.112..0.113 rows=1 loops=1)\n> Join Filter: pg_always('1e-05'::double precision)\n> -> Function Scan on j a_1 (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.077..0.077 rows=1 loops=1)\n> -> Function Scan on j b_1 (cost=0.25..10.25 rows=1000\n> width=8) (actual time=0.034..0.034 rows=1 loops=1)\n> Planning Time: 0.139 ms\n> Execution Time: 0.281 ms\n>\n> Cheers,\n>\n> Jeff\n>\n>\n\nčt 22. 8. 2019 v 3:11 odesílatel Jeff Janes <[email protected]> napsal:On Tue, Aug 20, 2019 at 11:12 AM Felix Geisendörfer <[email protected]> wrote: ... \n[1] My actual query had bad estimates for other reasons (GIN Index), but that's another story. The query above was of course deliberately designed to have bad estimates.As noted elsewhere, v12 thwarts your attempts to deliberately design the bad estimates. You can still get them, you just have to work a bit harder at it:CREATE FUNCTION j (bigint, bigint) returns setof bigint as $$ select generate_series($1,$2) $$ rows 1000 language sql;EXPLAIN ANALYZESELECT * FROM j(1, 1) a, j(1, 1) b UNIONSELECT * FROM j(1, 1) a, j(1, 1) b; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=80021.00..100021.00 rows=2000000 width=16) (actual time=11.332..13.241 rows=1 loops=1) Group Key: a.a, b.b -> Append (cost=0.50..70021.00 rows=2000000 width=16) (actual time=0.118..0.163 rows=2 loops=1) -> Nested Loop (cost=0.50..20010.50 rows=1000000 width=16) (actual time=0.117..0.118 rows=1 loops=1) -> Function Scan on j a (cost=0.25..10.25 rows=1000 width=8) (actual time=0.087..0.088 rows=1 loops=1) -> Function Scan on j b (cost=0.25..10.25 rows=1000 width=8) (actual time=0.027..0.027 rows=1 loops=1) -> Nested Loop (cost=0.50..20010.50 rows=1000000 width=16) (actual time=0.044..0.044 rows=1 loops=1) -> Function Scan on j a_1 (cost=0.25..10.25 rows=1000 width=8) (actual time=0.022..0.022 rows=1 loops=1) -> Function Scan on j b_1 (cost=0.25..10.25 rows=1000 width=8) (actual time=0.020..0.021 rows=1 loops=1) Planning Time: 0.085 ms Execution Time: 69.277 ms(11 rows)But the same advance in v12 which makes it harder to fool with your test case also opens the possibility of fixing your real case.I think so much more interesting should be long time after query processing - last row was processed in 13ms, but Execution Time was 69ms .. so some cleaning is 56ms - that is pretty long. 
I've made an extension which has a function which always returns true, but lies about how often it is expected to return true. See the attachment. With that, you can fine-tune the planner.CREATE EXTENSION pg_selectivities ;EXPLAIN ANALYZESELECT * FROM j(1, 1) a, j(1, 1) b where pg_always(0.00001)UNIONSELECT * FROM j(1, 1) a, j(1, 1) b where pg_always(0.00001); QUERY PLAN -------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=45021.40..45021.60 rows=20 width=16) (actual time=0.226..0.227 rows=1 loops=1) Group Key: a.a, b.b -> Append (cost=0.50..45021.30 rows=20 width=16) (actual time=0.105..0.220 rows=2 loops=1) -> Nested Loop (cost=0.50..22510.50 rows=10 width=16) (actual time=0.104..0.105 rows=1 loops=1) Join Filter: pg_always('1e-05'::double precision) -> Function Scan on j a (cost=0.25..10.25 rows=1000 width=8) (actual time=0.066..0.066 rows=1 loops=1) -> Function Scan on j b (cost=0.25..10.25 rows=1000 width=8) (actual time=0.035..0.035 rows=1 loops=1) -> Nested Loop (cost=0.50..22510.50 rows=10 width=16) (actual time=0.112..0.113 rows=1 loops=1) Join Filter: pg_always('1e-05'::double precision) -> Function Scan on j a_1 (cost=0.25..10.25 rows=1000 width=8) (actual time=0.077..0.077 rows=1 loops=1) -> Function Scan on j b_1 (cost=0.25..10.25 rows=1000 width=8) (actual time=0.034..0.034 rows=1 loops=1) Planning Time: 0.139 ms Execution Time: 0.281 msCheers,Jeff",
"msg_date": "Thu, 22 Aug 2019 07:08:23 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "\n\n> On 21. Aug 2019, at 20:26, Jeff Janes <[email protected]> wrote:\n> \n> As noted elsewhere, v12 thwarts your attempts to deliberately design the bad estimates. You can still get them, you just have to work a bit harder at it:\n> \n> CREATE FUNCTION j (bigint, bigint) returns setof bigint as $$ select generate_series($1,$2) $$ rows 1000 language sql;\n\nYeah, that's awesome! I didn't know about this until I ran into this issue, I'll definitely be using it for future estimation problems that are difficult to fix otherwise!\n\n> I've made an extension which has a function which always returns true, but lies about how often it is expected to return true. See the attachment. With that, you can fine-tune the planner.\n> \n> CREATE EXTENSION pg_selectivities ;\n\nVery cool and useful : )!\n\nI think in most cases I'll be okay with declaring a function with a static ROWS estimate, but I'll consider your extension if I need more flexibility in the future!\n\nThanks\nFelix\n\n",
"msg_date": "Thu, 22 Aug 2019 11:06:37 +0200",
"msg_from": "=?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
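Felix mentions declaring a static ROWS estimate on the function; a small sketch of what that looks like follows (the function here is purely illustrative, not his real one).

```
-- A set-returning SQL function defaults to a planner estimate of 1000 rows.
-- The ROWS clause replaces that default with a static estimate.
CREATE FUNCTION one_value() RETURNS SETOF integer
    LANGUAGE sql
    ROWS 1          -- planner estimate only, not a limit on the result
AS $$ SELECT 1 $$;

-- The estimate of an existing function can also be changed in place:
ALTER FUNCTION one_value() ROWS 1;
```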
{
"msg_contents": "On Thu, Aug 22, 2019 at 1:09 AM Pavel Stehule <[email protected]>\nwrote:\n\n> čt 22. 8. 2019 v 3:11 odesílatel Jeff Janes <[email protected]> napsal:\n>\n>> ...\n\n\n> But the same advance in v12 which makes it harder to fool with your test\n>> case also opens the possibility of fixing your real case.\n>>\n>\n> I think so much more interesting should be long time after query\n> processing - last row was processed in 13ms, but Execution Time was 69ms ..\n> so some cleaning is 56ms - that is pretty long.\n>\n\nMost of the time is not after the clock stops, but before the stepwise\nANALYZE clock starts. If you just do an EXPLAIN rather than EXPLAIN\nANALYZE, that is also slow. The giant hash table is created during the\nplanning step (or somewhere around there--I notice that EXPLAIN ANALYZE\noutput doesn't count it in what it labels as the planning step--but it is\nsome step that EXPLAIN without ANALYZE does execute, which to me makes it a\nplanning step).\n\nFor me, \"perf top\" shows kernel's __do_page_fault as the top\nfunction. tuplehash_iterate does show up at 20% (which I think is\noverattributed, considering how little the speedup is when dropping\nANALYZE), but everything else just looks like kernel memory management code.\n\nCheers,\n\nJeff",
"msg_date": "Sat, 24 Aug 2019 15:16:10 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> Most of the time is not after the clock stops, but before the stepwise\n> ANALYZE clock starts. If you just do an EXPLAIN rather than EXPLAIN\n> ANALYZE, that is also slow. The giant hash table is created during the\n> planning step (or somewhere around there\n\nIt's in executor startup, I believe.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Aug 2019 15:41:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
},
{
"msg_contents": "Hi,\n\nOn August 24, 2019 12:41:03 PM PDT, Tom Lane <[email protected]> wrote:\n>Jeff Janes <[email protected]> writes:\n>> Most of the time is not after the clock stops, but before the\n>stepwise\n>> ANALYZE clock starts. If you just do an EXPLAIN rather than EXPLAIN\n>> ANALYZE, that is also slow. The giant hash table is created during\n>the\n>> planning step (or somewhere around there\n>\n>It's in executor startup, I believe.\n\nI'm sure personally interested in doing so, but it'd not be hard to measure the executor startup time separately. And display it either on a per node basis, or as a total number.\n\nQuite unconvinced this thread is a convincing reason to do so (or really do anything). But for some other workloads executor startup is a very large fraction of the total time, without massive misestimations. Optimizing that could be easier with that information available.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 24 Aug 2019 12:54:03 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow HashAggregate in simple UNION query"
}
] |
[
{
"msg_contents": "Hello!\n\nAny help would be greatly appreciated.\nI need to run these simple queries on a table with millions of rows:\n\n```\nSELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n123;\n```\n\n```\nSELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n123 AND \"subscriptions\".\"trashed_at\" IS NULL;\n```\n\nThe count result for both queries, for project 123, is about 5M.\n\nI have an index in place on `project_id`, and also another index on\n`(project_id, trashed_at)`:\n\n```\n\"index_subscriptions_on_project_id_and_created_at\" btree (project_id,\ncreated_at DESC)\n\"index_subscriptions_on_project_id_and_trashed_at\" btree (project_id,\ntrashed_at DESC)\n```\n\nThe problem is that both queries are extremely slow and take about 17s each.\n\nThese are the results of `EXPLAIN ANALIZE`:\n\n\n```\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2068127.29..2068127.30 rows=1 width=0) (actual\ntime=17342.420..17342.420 rows=1 loops=1)\n -> Bitmap Heap Scan on subscriptions (cost=199573.94..2055635.23\nrows=4996823 width=0) (actual time=1666.409..16855.610 rows=4994254 loops=1)\n Recheck Cond: (project_id = 123)\n Rows Removed by Index Recheck: 23746378\n Heap Blocks: exact=131205 lossy=1480411\n -> Bitmap Index Scan on\nindex_subscriptions_on_project_id_and_trashed_at (cost=0.00..198324.74\nrows=4996823 width=0) (actual time=1582.717..1582.717 rows=4994877 loops=1)\n Index Cond: (project_id = 123)\n Planning time: 0.090 ms\n Execution time: 17344.182 ms\n(9 rows)\n```\n\n\n```\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2047881.69..2047881.70 rows=1 width=0) (actual\ntime=17557.218..17557.218 rows=1 loops=1)\n -> Bitmap Heap Scan on subscriptions (cost=187953.70..2036810.19\nrows=4428599 width=0) (actual time=1644.966..17078.378 rows=4994130 loops=1)\n Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n Rows Removed by Index Recheck: 23746273\n Heap Blocks: exact=131144 lossy=1480409\n -> Bitmap Index Scan on\nindex_subscriptions_on_project_id_and_trashed_at (cost=0.00..186846.55\nrows=4428599 width=0) (actual time=1566.163..1566.163 rows=4994749 loops=1)\n Index Cond: ((project_id = 123) AND (trashed_at IS NULL))\n Planning time: 0.084 ms\n Execution time: 17558.522 ms\n(9 rows)\n```\n\nWhat is the problem?\nWhat can I do to improve the performance (i.e. 
count in a few seconds)?\n\nI have also tried to increase work_mem from 16MB to 128MB without any\nimprovement.\nEven an approximate count would be enough.\nPostgresql v9.5",
"msg_date": "Thu, 22 Aug 2019 14:44:15 +0200",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extremely slow count (simple query, with index)"
},
{
"msg_contents": "On Thu, Aug 22, 2019 at 02:44:15PM +0200, Marco Colli wrote:\n> SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" = 123;\n> SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" = 123 AND \"subscriptions\".\"trashed_at\" IS NULL;\n\n> -> Bitmap Heap Scan on subscriptions (cost=199573.94..2055635.23 rows=4996823 width=0) (actual time=1666.409..16855.610 rows=4994254 loops=1)\n> Recheck Cond: (project_id = 123)\n> Rows Removed by Index Recheck: 23746378\n> Heap Blocks: exact=131205 lossy=1480411\n> -> Bitmap Index Scan on index_subscriptions_on_project_id_and_trashed_at (cost=0.00..198324.74 rows=4996823 width=0) (actual time=1582.717..1582.717 rows=4994877 loops=1)\n\n> -> Bitmap Heap Scan on subscriptions (cost=187953.70..2036810.19 rows=4428599 width=0) (actual time=1644.966..17078.378 rows=4994130 loops=1)\n> Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n> Rows Removed by Index Recheck: 23746273\n> Heap Blocks: exact=131144 lossy=1480409\n> -> Bitmap Index Scan on index_subscriptions_on_project_id_and_trashed_at (cost=0.00..186846.55 rows=4428599 width=0) (actual time=1566.163..1566.163 rows=4994749 loops=1)\n\nYou can see it used the same index in both cases, and the index scan was\nreasonably fast (compared to your goal), but the heap component was slow.\n\nI suggest to run VACUUM FREEZE on the table, to try to encourage index only\nscan. If that works, you should condider setting aggressive autovacuum\nparameter, at least for the table:\nALTER TABLE subscriptions SET (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005);\n-- And possibly lower value of autovacuum_freeze_max_age\n\nOr, running manual vacuum possibly during quiet hours (possibly setting\nvacuum_freeze_table_age to encourage aggressive vacuum).\n\n> Even an approximate count would be enough.\n\nYou can SELECT reltuples FROM pg_class WHERE oid='subscriptions'::oid, but its\naccuracy depends on frequency of vacuum (and if a large delete/insert happened\nsince the most recent vacuum/analyze).\n\nJustin\n\n\n",
"msg_date": "Thu, 22 Aug 2019 08:19:10 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow count (simple query, with index)"
},
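A consolidated sketch of what Justin suggests (a one-off aggressive vacuum plus per-table autovacuum settings) is below. The exact scale factors are examples only, and whether the index-only scan actually kicks in still depends on how much of the table the visibility map can mark all-visible.

```
-- One-off: freeze and analyze so the visibility map gets set and an
-- index-only scan over (project_id, trashed_at) becomes possible.
VACUUM (FREEZE, ANALYZE, VERBOSE) subscriptions;

-- Ongoing: make autovacuum visit this large table far more often than the
-- default 20% change threshold would allow.
ALTER TABLE subscriptions
    SET (autovacuum_vacuum_scale_factor  = 0.005,
         autovacuum_analyze_scale_factor = 0.005);
```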
{
"msg_contents": "Dear ,\n\nCreate the below indexes and try it !!!\n\ncreate index ind_ subscriptions_ project_id on\n\"subscriptions\"(\"project_id\")\nWhere \"project_id\"= 1\n\ncreate index ind_ subscriptions_ trashed_at on \"subscriptions\"(\"\ntrashed_at \")\nWhere \"trashed_at\" is null\n\n\n\nOn Thu, Aug 22, 2019 at 6:36 PM Marco Colli <[email protected]> wrote:\n\n> Hello!\n>\n> Any help would be greatly appreciated.\n> I need to run these simple queries on a table with millions of rows:\n>\n> ```\n> SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n> 123;\n> ```\n>\n> ```\n> SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n> 123 AND \"subscriptions\".\"trashed_at\" IS NULL;\n> ```\n>\n> The count result for both queries, for project 123, is about 5M.\n>\n> I have an index in place on `project_id`, and also another index on\n> `(project_id, trashed_at)`:\n>\n> ```\n> \"index_subscriptions_on_project_id_and_created_at\" btree (project_id,\n> created_at DESC)\n> \"index_subscriptions_on_project_id_and_trashed_at\" btree (project_id,\n> trashed_at DESC)\n> ```\n>\n> The problem is that both queries are extremely slow and take about 17s\n> each.\n>\n> These are the results of `EXPLAIN ANALIZE`:\n>\n>\n> ```\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=2068127.29..2068127.30 rows=1 width=0) (actual\n> time=17342.420..17342.420 rows=1 loops=1)\n> -> Bitmap Heap Scan on subscriptions (cost=199573.94..2055635.23\n> rows=4996823 width=0) (actual time=1666.409..16855.610 rows=4994254 loops=1)\n> Recheck Cond: (project_id = 123)\n> Rows Removed by Index Recheck: 23746378\n> Heap Blocks: exact=131205 lossy=1480411\n> -> Bitmap Index Scan on\n> index_subscriptions_on_project_id_and_trashed_at (cost=0.00..198324.74\n> rows=4996823 width=0) (actual time=1582.717..1582.717 rows=4994877 loops=1)\n> Index Cond: (project_id = 123)\n> Planning time: 0.090 ms\n> Execution time: 17344.182 ms\n> (9 rows)\n> ```\n>\n>\n> ```\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=2047881.69..2047881.70 rows=1 width=0) (actual\n> time=17557.218..17557.218 rows=1 loops=1)\n> -> Bitmap Heap Scan on subscriptions (cost=187953.70..2036810.19\n> rows=4428599 width=0) (actual time=1644.966..17078.378 rows=4994130 loops=1)\n> Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n> Rows Removed by Index Recheck: 23746273\n> Heap Blocks: exact=131144 lossy=1480409\n> -> Bitmap Index Scan on\n> index_subscriptions_on_project_id_and_trashed_at (cost=0.00..186846.55\n> rows=4428599 width=0) (actual time=1566.163..1566.163 rows=4994749 loops=1)\n> Index Cond: ((project_id = 123) AND (trashed_at IS NULL))\n> Planning time: 0.084 ms\n> Execution time: 17558.522 ms\n> (9 rows)\n> ```\n>\n> What is the problem?\n> What can I do to improve the performance (i.e. 
count in a few seconds)?\n>\n> I have also tried to increase work_mem from 16MB to 128MB without any\n> improvement.\n> Even an approximate count would be enough.\n> Postgresql v9.5\n>\n>\n\n-- \n*Regards,*\n*Ravikumar S,*\n*Ph: 8106741263*",
"msg_date": "Thu, 22 Aug 2019 18:55:35 +0530",
"msg_from": "Ravikumar Reddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow count (simple query, with index)"
},
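A cleaned-up sketch of the partial-index idea above (the index name is illustrative, and note the query in the thread filters on project_id = 123, not 1):

```
-- Partial index covering the "not trashed" count for a given project.
CREATE INDEX index_subscriptions_on_project_id_not_trashed
    ON subscriptions (project_id)
    WHERE trashed_at IS NULL;
```

Whether this wins over the existing (project_id, trashed_at) index still hinges on the heap-visibility issue discussed in this thread, since COUNT(*) needs either heap fetches or an index-only scan.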
{
"msg_contents": "Hi Marco,\n\nSince you said approximates would be good enough, there are two ways to \ndo that. Query pg_class.reltuples or pg_stat_user_tables.n_live_tup. \nPersonally, I prefer the pg_stat_user tables since it is more current \nthan pg_class table, unless you run ANALYZE on your target table before \nquerying pg_class table. Then of course you get results in a few \nmilliseconds since you do not incur the tablescan cost of selecting \ndirectly from the target table.\n\nRegards,\nMichael Vitale\n\n\nMarco Colli wrote on 8/22/2019 8:44 AM:\n> Hello!\n>\n> Any help would be greatly appreciated.\n> I need to run these simple queries on a table with millions of rows:\n>\n> ```\n> SELECT COUNT(*) FROM \"subscriptions\" WHERE \n> \"subscriptions\".\"project_id\" = 123;\n> ```\n>\n> ```\n> SELECT COUNT(*) FROM \"subscriptions\" WHERE \n> \"subscriptions\".\"project_id\" = 123 AND \"subscriptions\".\"trashed_at\" IS \n> NULL;\n> ```\n>\n> The count result for both queries, for project 123, is about 5M.\n>\n> I have an index in place on `project_id`, and also another index on \n> `(project_id, trashed_at)`:\n>\n> ```\n> \"index_subscriptions_on_project_id_and_created_at\" btree (project_id, \n> created_at DESC)\n> \"index_subscriptions_on_project_id_and_trashed_at\" btree (project_id, \n> trashed_at DESC)\n> ```\n>\n> The problem is that both queries are extremely slow and take about 17s \n> each.\n>\n> These are the results of `EXPLAIN ANALIZE`:\n>\n>\n> ```\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=2068127.29..2068127.30 rows=1 width=0) (actual \n> time=17342.420..17342.420 rows=1 loops=1)\n> -> Bitmap Heap Scan on subscriptions (cost=199573.94..2055635.23 \n> rows=4996823 width=0) (actual time=1666.409..16855.610 rows=4994254 \n> loops=1)\n> Recheck Cond: (project_id = 123)\n> Rows Removed by Index Recheck: 23746378\n> Heap Blocks: exact=131205 lossy=1480411\n> -> Bitmap Index Scan on \n> index_subscriptions_on_project_id_and_trashed_at \n> (cost=0.00..198324.74 rows=4996823 width=0) (actual \n> time=1582.717..1582.717 rows=4994877 loops=1)\n> Index Cond: (project_id = 123)\n> Planning time: 0.090 ms\n> Execution time: 17344.182 ms\n> (9 rows)\n> ```\n>\n>\n> ```\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=2047881.69..2047881.70 rows=1 width=0) (actual \n> time=17557.218..17557.218 rows=1 loops=1)\n> -> Bitmap Heap Scan on subscriptions (cost=187953.70..2036810.19 \n> rows=4428599 width=0) (actual time=1644.966..17078.378 rows=4994130 \n> loops=1)\n> Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n> Rows Removed by Index Recheck: 23746273\n> Heap Blocks: exact=131144 lossy=1480409\n> -> Bitmap Index Scan on \n> index_subscriptions_on_project_id_and_trashed_at \n> (cost=0.00..186846.55 rows=4428599 width=0) (actual \n> time=1566.163..1566.163 rows=4994749 loops=1)\n> Index Cond: ((project_id = 123) AND (trashed_at IS NULL))\n> Planning time: 0.084 ms\n> Execution time: 17558.522 ms\n> (9 rows)\n> ```\n>\n> What is the problem?\n> What can I do to improve the performance (i.e. 
count in a few seconds)?\n>\n> I have also tried to increase work_mem from 16MB to 128MB without any \n> improvement.\n> Even an approximate count would be enough.\n> Postgresql v9.5\n>",
"msg_date": "Thu, 22 Aug 2019 09:25:36 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow count (simple query, with index)"
},
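A minimal sketch of the two approximate-count lookups suggested above, assuming the "subscriptions" table from this thread; both read only catalog/statistics data, so they return in milliseconds, and their accuracy depends on how recently VACUUM/ANALYZE or the stats collector refreshed them:

```
-- Planner's estimate, refreshed by VACUUM / ANALYZE
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE oid = 'subscriptions'::regclass;

-- Statistics collector's live-tuple counter, usually more current
SELECT n_live_tup
FROM pg_stat_user_tables
WHERE relname = 'subscriptions';
```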
{
"msg_contents": ">\n> You can SELECT reltuples FROM pg_class WHERE oid='subscriptions'::oid, but\n> its\n> accuracy depends on frequency of vacuum (and if a large delete/insert\n> happened\n> since the most recent vacuum/analyze).\n>\n\nThis only seems helpful to find approx. count for the entire table, without\nconsidering the WHERE condition.\n\nMarco,\nAs Justin pointed out, you have most of your time in the bitmap heap scan.\nAre you running SSDs? I wonder about tuning effective_io_concurrency to\nmake more use of them.\n\n\"Currently, this setting only affects bitmap heap scans.\"\nhttps://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR\n\nAlso, how many million rows is this table in total? Have you considered\npartitioning?\n\nYou can SELECT reltuples FROM pg_class WHERE oid='subscriptions'::oid, but its\naccuracy depends on frequency of vacuum (and if a large delete/insert happened\nsince the most recent vacuum/analyze).This only seems helpful to find approx. count for the entire table, without considering the WHERE condition.Marco,As Justin pointed out, you have most of your time in the bitmap heap scan. Are you running SSDs? I wonder about tuning effective_io_concurrency to make more use of them.\"Currently, this setting only affects bitmap heap scans.\"https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIORAlso, how many million rows is this table in total? Have you considered partitioning?",
"msg_date": "Thu, 22 Aug 2019 11:37:04 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow count (simple query, with index)"
},
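A sketch of the effective_io_concurrency experiment hinted at above; the value 200 is just a commonly used starting point for SSD-backed storage and is an assumption here, not something stated in the thread:

```
SHOW effective_io_concurrency;        -- default is 1
SET effective_io_concurrency = 200;   -- per-session; only bitmap heap scans are affected
-- To persist it instead:
--   ALTER SYSTEM SET effective_io_concurrency = 200;
--   SELECT pg_reload_conf();
```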
{
"msg_contents": "I have completely solved (from 17s to 1s) by running this command:\nvacuum analyze subscriptions;\n\nNow I run the autovacuum more frequently using these settings in\npostgresql.conf:\nautovacuum_vacuum_scale_factor = 0.01\nautovacuum_analyze_scale_factor = 0.01\n\nThanks to everyone - and in particular to Justin Pryzby for pointing me in\nthe right direction.\n\nOn Thu, Aug 22, 2019 at 7:37 PM Michael Lewis <[email protected]> wrote:\n\n> You can SELECT reltuples FROM pg_class WHERE oid='subscriptions'::oid, but\n>> its\n>> accuracy depends on frequency of vacuum (and if a large delete/insert\n>> happened\n>> since the most recent vacuum/analyze).\n>>\n>\n> This only seems helpful to find approx. count for the entire table,\n> without considering the WHERE condition.\n>\n> Marco,\n> As Justin pointed out, you have most of your time in the bitmap heap scan.\n> Are you running SSDs? I wonder about tuning effective_io_concurrency to\n> make more use of them.\n>\n> \"Currently, this setting only affects bitmap heap scans.\"\n>\n> https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR\n>\n> Also, how many million rows is this table in total? Have you considered\n> partitioning?\n>\n\nI have completely solved (from 17s to 1s) by running this command:vacuum analyze subscriptions;Now I run the autovacuum more frequently using these settings in postgresql.conf:autovacuum_vacuum_scale_factor = 0.01autovacuum_analyze_scale_factor = 0.01Thanks to everyone - and in particular to Justin Pryzby for pointing me in the right direction.On Thu, Aug 22, 2019 at 7:37 PM Michael Lewis <[email protected]> wrote:You can SELECT reltuples FROM pg_class WHERE oid='subscriptions'::oid, but its\naccuracy depends on frequency of vacuum (and if a large delete/insert happened\nsince the most recent vacuum/analyze).This only seems helpful to find approx. count for the entire table, without considering the WHERE condition.Marco,As Justin pointed out, you have most of your time in the bitmap heap scan. Are you running SSDs? I wonder about tuning effective_io_concurrency to make more use of them.\"Currently, this setting only affects bitmap heap scans.\"https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIORAlso, how many million rows is this table in total? Have you considered partitioning?",
"msg_date": "Thu, 22 Aug 2019 19:54:57 +0200",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow count (simple query, with index)"
},
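The same idea can be scoped to just the one large table instead of changing postgresql.conf globally, along the lines of Justin's earlier ALTER TABLE suggestion; the exact values below are illustrative:

```
ALTER TABLE subscriptions SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.01
);
```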
{
"msg_contents": "On Thu, Aug 22, 2019 at 07:54:57PM +0200, Marco Colli wrote:\n> I have completely solved (from 17s to 1s) by running this command:\n> vacuum analyze subscriptions;\n\nThanks for following though.\n\nOn Thu, Aug 22, 2019 at 08:19:10AM -0500, Justin Pryzby wrote:\n> You can see it used the same index in both cases, and the index scan was\n> reasonably fast (compared to your goal), but the heap component was slow.\n> \n> I suggest to run VACUUM FREEZE on the table, to try to encourage index only\n> scan. If that works, you should condider setting aggressive autovacuum\n\nI should've used a better word, since aggressive means something specific.\nPerhaps just: \"parameter to encourage more frequent autovacuums\".\n\n> parameter, at least for the table:\n> ALTER TABLE subscriptions SET (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005);\n> -- And possibly lower value of autovacuum_freeze_max_age\n> \n> Or, running manual vacuum possibly during quiet hours (possibly setting\n> vacuum_freeze_table_age to encourage aggressive vacuum).\n\nI think my reference to autovacuum_freeze_max_age and vacuum_freeze_table_age\nwere incorrect; what's important is \"relallvisible\" and not \"relfrozenxid\".\nAnd xid wraparound isn't at issue here.\n\n> > Even an approximate count would be enough.\n> \n> You can SELECT reltuples FROM pg_class WHERE oid='subscriptions'::oid, but its\n\nShould be: oid='subscriptions'::regclass\n\n> accuracy depends on frequency of vacuum (and if a large delete/insert happened\n> since the most recent vacuum/analyze).\n\nJustin\n\n\n",
"msg_date": "Thu, 22 Aug 2019 13:04:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow count (simple query, with index)"
}
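Putting Justin's corrections together, a hedged example of the catalog lookup with the ::regclass cast plus the visibility-map coverage (relallvisible) that determines how effective index-only scans can be:

```
SELECT reltuples::bigint AS approx_rows,
       relpages,
       relallvisible      -- pages marked all-visible in the visibility map
FROM pg_class
WHERE oid = 'subscriptions'::regclass;
```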
] |
[
{
"msg_contents": "Hi Folks,\n\n\nI am having trouble setting up replication with Postgres 11.3.\npg_basebackup is taking an unusually long time for an small Postgres\ndatabase. Anything wrong in my configuration or anything I could do to\nspeed up pg_basebackup?\n\n\nI recently upgraded form Postgres 9.2.1. Using a similar postgres\nconfiguration, apart from some updates to config for Postgres 11.3. I am\nusing pg_basebackup to replicate from the master. I am using secure ssh\ntunnel for the replication between master and slave, I.e. there is a ssh\ntunnel that forwards data from the localhost on port 5433 on the slave to\nthe master server’s port 5432.\n\n\npg_basebackup is taking about 30 seconds.\n\nc12-array2-c1:/# du ./path/to/database\n\n249864 ./nimble/var/private/config/versions/group/sodb\n\n\npg_basebackup -D $PGSQL_BASEBKUP_PATH -U $DBUSER -c fast -l $backup_name -h\nlocalhost -p 5433 --wal-method=stream -Pv -s 10\n\n\npostgresql.conf:\n\n….\n\nmax_connections = 100 # (change requires restart)\n\n# Note: Increasing max_connections costs ~400 bytes of shared memory per\n\n# connection slot, plus lock space (see max_locks_per_transaction).\n\n#superuser_reserved_connections = 3 # (change requires restart)\n\nunix_socket_directories = '/var/run/postgresql' # (change requires\nrestart)\n\n…..\n\n# - Security and Authentication -\n\n#authentication_timeout = 1min # 1s-600s\n\n#ssl = off # (change requires restart)\n\n#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL\nciphers\n\n # (change requires restart)\n\n#ssl_renegotiation_limit = 512MB # amount of data between\nrenegotiations\n\n#ssl_cert_file = 'server.crt' # (change requires restart)\n\n#ssl_key_file = 'server.key' # (change requires restart)\n\n#ssl_ca_file = '' # (change requires restart)\n\n#ssl_crl_file = '' # (change requires restart)\n\n#password_encryption = on\n\n#db_user_namespace = off\n\n# Kerberos and GSSAPI\n\n#krb_server_keyfile = ''\n\n#krb_srvname = 'postgres' # (Kerberos only)\n\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n\n# see \"man 7 tcp\" for details\n\n#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n\n # 0 selects the system default\n\n#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n\n # 0 selects the system default\n\n#tcp_keepalives_count = 0 # TCP_KEEPCNT;\n\n # 0 selects the system default\n\n…..\n\nshared_buffers = 32MB # 32 or 300MB based on model\n\n # (change requires restart)\n\n#temp_buffers = 8MB # min 800kB\n\n….\n\nwork_mem = 10MB # min 64kB\n\n#maintenance_work_mem = 16MB # min 1MB\n\n…..\n\nwal_level = replica # minimal, archive, or hot_standby\n\n # (change requires restart)\n\n#fsync = on # turns forced synchronization on\nor off\n\n#synchronous_commit = on # synchronization level;\n\n # off, local, remote_write, or on\n\nwal_sync_method = open_sync # the default is the first option\n\n # supported by the operating system:\n\n # open_datasync\n\n # fdatasync (default on Linux)\n\n # fsync\n\n # fsync_writethrough\n\n # open_sync\n\n#full_page_writes = on # recover from partial page writes\n\n#wal_buffers = -1 # min 32kB, -1 sets based on\nshared_buffers\n\n # (change requires restart)\n\n#wal_writer_delay = 200ms # 1-10000 milliseconds\n\n#commit_delay = 0 # range 0-100000, in microseconds\n\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB\neach\n\ncheckpoint_timeout = 1min # range 30s-1h\n\n#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 
-\n1.0\n\n#checkpoint_warning = 30s # 0 disables\n\n# - Archiving -\n\narchive_mode = on # allows archiving to be done\n\n # (change requires restart)\n\narchive_command = '/bin/true' # command to use to archive a logfile segment\n\n # placeholders: %p = path of file to archive\n\n # %f = file name only\n\n # e.g. 'test ! -f /mnt/server/archivedir/%f\n&& cp %p /mnt/server/archivedir/%f'\n\n#archive_timeout = 0 # force a logfile segment switch after this\n\n # number of seconds; 0 disables\n\n…..\n\nmax_wal_senders = 10 # max number of walsender processes\n\n # (change requires restart)\n\nwal_keep_segments = 10 # in logfile segments, 16MB each; 0 disables\n\nwal_sender_timeout = 10s # in milliseconds; 0 disables\n\n# - Master Server -\n\n# These settings are ignored on a standby server.\n\nsynchronous_standby_names = '' # standby servers that provide sync rep\n\n # comma-separated list of application_name\n\n # from standby(s); '*' = all\n\n#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is\ndelayed\n\n# - Standby Servers -\n\n# These settings are ignored on a master server.\n\nhot_standby = on # \"on\" allows queries during recovery\n\n # (change requires restart)\n\n#max_standby_archive_delay = 30s # max delay before canceling queries\n\n # when reading WAL from archive;\n\n # -1 allows indefinite delay\n\n#max_standby_streaming_delay = 30s # max delay before canceling queries\n\n # when reading streaming WAL;\n\n # -1 allows indefinite delay\n\nwal_receiver_status_interval = 3s # send replies at least this often\n\n # 0 disables\n\n#hot_standby_feedback = off # send info from standby to prevent\n\n # query conflicts\n\n\nHi Folks,\n\nI am having trouble setting up replication with Postgres 11.3. pg_basebackup is taking an unusually long time for an small Postgres database. Anything wrong in my configuration or anything I could do to speed up pg_basebackup?\n\nI recently upgraded form Postgres 9.2.1. Using a similar postgres configuration, apart from some updates to config for Postgres 11.3. I am using pg_basebackup to replicate from the master. I am using secure ssh tunnel for the replication between master and slave, I.e. there is a ssh tunnel that forwards data from the localhost on port 5433 on the slave to the master server’s port 5432. 
\n\npg_basebackup is taking about 30 seconds.\nc12-array2-c1:/# du ./path/to/database\n249864 ./nimble/var/private/config/versions/group/sodb\n\npg_basebackup -D $PGSQL_BASEBKUP_PATH -U $DBUSER -c fast -l $backup_name -h localhost -p 5433 --wal-method=stream -Pv -s 10\n\npostgresql.conf:\n….\nmax_connections = 100 # (change requires restart)\n# Note: Increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction).\n#superuser_reserved_connections = 3 # (change requires restart)\nunix_socket_directories = '/var/run/postgresql' # (change requires restart)\n…..\n# - Security and Authentication -\n#authentication_timeout = 1min # 1s-600s\n#ssl = off # (change requires restart)\n#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers\n # (change requires restart)\n#ssl_renegotiation_limit = 512MB # amount of data between renegotiations\n#ssl_cert_file = 'server.crt' # (change requires restart)\n#ssl_key_file = 'server.key' # (change requires restart)\n#ssl_ca_file = '' # (change requires restart)\n#ssl_crl_file = '' # (change requires restart)\n#password_encryption = on\n#db_user_namespace = off\n# Kerberos and GSSAPI\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres' # (Kerberos only)\n#krb_caseins_users = off\n# - TCP Keepalives -\n# see \"man 7 tcp\" for details\n#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n # 0 selects the system default\n#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n # 0 selects the system default\n#tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # 0 selects the system default\n…..\nshared_buffers = 32MB # 32 or 300MB based on model\n # (change requires restart)\n#temp_buffers = 8MB # min 800kB\n….\nwork_mem = 10MB # min 64kB\n#maintenance_work_mem = 16MB # min 1MB\n…..\nwal_level = replica # minimal, archive, or hot_standby\n # (change requires restart)\n#fsync = on # turns forced synchronization on or off\n#synchronous_commit = on # synchronization level;\n # off, local, remote_write, or on\nwal_sync_method = open_sync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync (default on Linux)\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\n#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers\n # (change requires restart)\n#wal_writer_delay = 200ms # 1-10000 milliseconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n# - Checkpoints -\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 1min # range 30s-1h\n#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\n#checkpoint_warning = 30s # 0 disables\n# - Archiving -\narchive_mode = on # allows archiving to be done\n # (change requires restart)\narchive_command = '/bin/true' # command to use to archive a logfile segment\n # placeholders: %p = path of file to archive\n # %f = file name only\n # e.g. 'test ! 
-f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'\n#archive_timeout = 0 # force a logfile segment switch after this\n # number of seconds; 0 disables\n…..\nmax_wal_senders = 10 # max number of walsender processes\n # (change requires restart)\nwal_keep_segments = 10 # in logfile segments, 16MB each; 0 disables\nwal_sender_timeout = 10s # in milliseconds; 0 disables\n# - Master Server -\n# These settings are ignored on a standby server.\nsynchronous_standby_names = '' # standby servers that provide sync rep\n # comma-separated list of application_name\n # from standby(s); '*' = all\n#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed\n# - Standby Servers -\n# These settings are ignored on a master server.\nhot_standby = on # \"on\" allows queries during recovery\n # (change requires restart)\n#max_standby_archive_delay = 30s # max delay before canceling queries\n # when reading WAL from archive;\n # -1 allows indefinite delay\n#max_standby_streaming_delay = 30s # max delay before canceling queries\n # when reading streaming WAL;\n # -1 allows indefinite delay\nwal_receiver_status_interval = 3s # send replies at least this often\n # 0 disables\n#hot_standby_feedback = off # send info from standby to prevent\n # query conflicts",
"msg_date": "Fri, 23 Aug 2019 20:24:47 -0700",
"msg_from": "andy andy <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_basebackup is taking an unusually long time with Postgres 11.3"
}
] |
[
{
"msg_contents": "Hi All,\n\nWith PostgreSQL 10 and 11, the planner doesn't use the lists of most \ncommon values to determine the selectivity of \"=\" for Nested Loop as it \ndoes for a normal inner join in eqjoinsel_inner(). Incorrect choice of a \nnested loops join strategy causes poor query performance.\nTo demonstrate it one can make the following test case:\n\n create table t(f integer not null,g integer not null);\n create table u(f integer not null,g integer not null);\n create sequence s cache 1000;\n insert into t select 0,s from (select nextval('s') as s) as d;\n insert into t select 0,s from (select nextval('s') as s) as d;\n insert into t select 0,s from (select nextval('s') as s from t,t t1,t \nt2) as d;\n insert into t select 0,s from (select nextval('s') as s from t,t t1,t \nt2,t t3) as d;\n insert into t(f,g) select g,f from t;\n insert into u select * from t;\n create index t_f on t(f);\n vacuum analyze;\n\nThe columns f and g of both tables t and u have a skewed distribution: \n10010 values of 0 and 10010 unique values starting from 1.\nLet's see query plan for the join of t and u:\n\n explain analyze select * from t,u where t.f=u.f and t.g=u.g;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.29..7629.95 rows=25055030 width=16) (actual \ntime=0.042..22540.629 rows=20020 loops=1)\n -> Seq Scan on u (cost=0.00..289.20 rows=20020 width=8) (actual \ntime=0.011..3.025 rows=20020 loops=1)\n -> Index Scan using t_f on t (cost=0.29..0.36 rows=1 width=8) \n(actual time=0.565..1.125 rows=1 loops=20020)\n Index Cond: (f = u.f)\n Filter: (u.g = g)\n Rows Removed by Filter: 5004\n Planning Time: 0.394 ms\n Execution Time: 22542.639 ms\n\nAfter dropping the index\n drop index t_f;\nwe obtain much better query plan (without Nested Loop):\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Merge Join (cost=3439.09..442052.26 rows=25055030 width=16) (actual \ntime=15.708..32.735 rows=20020 loops=1)\n Merge Cond: ((t.f = u.f) AND (t.g = u.g))\n -> Sort (cost=1719.54..1769.59 rows=20020 width=8) (actual \ntime=8.189..10.189 rows=20020 loops=1)\n Sort Key: t.f, t.g\n Sort Method: quicksort Memory: 1707kB\n -> Seq Scan on t (cost=0.00..289.20 rows=20020 width=8) \n(actual time=0.012..2.958 rows=20020 loops=1)\n -> Sort (cost=1719.54..1769.59 rows=20020 width=8) (actual \ntime=7.510..9.459 rows=20020 loops=1)\n Sort Key: u.f, u.g\n Sort Method: quicksort Memory: 1707kB\n -> Seq Scan on u (cost=0.00..289.20 rows=20020 width=8) \n(actual time=0.008..2.748 rows=20020 loops=1)\n Planning Time: 0.239 ms\n Execution Time: 34.585 ms\n\nUsing MCV lists in var_eq_non_const() would prevent from choosing Nested \nLoop in such cases.\n\nRegards,\nOleg Kharin\n\n\n\n",
"msg_date": "Tue, 3 Sep 2019 21:47:42 +0400",
"msg_from": "Oleg Kharin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incorrect choice of Nested Loop for a skewed distribution"
},
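For anyone reproducing the test case, the most-common-value statistics the report refers to can be inspected directly; a small sketch against the tables created above:

```
SELECT tablename, attname, n_distinct,
       most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename IN ('t', 'u') AND attname = 'f';
```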
{
"msg_contents": "On Tue, Sep 03, 2019 at 09:47:42PM +0400, Oleg Kharin wrote:\n> Hi All,\n> \n> With PostgreSQL 10 and 11, the planner doesn't use the lists of most common\n> values to determine the selectivity of \"=\" for Nested Loop as it does for a\n> normal inner join in eqjoinsel_inner(). Incorrect choice of a nested loops\n> join strategy causes poor query performance.\n> To demonstrate it one can make the following test case:\n...\n> The columns f and g of both tables t and u have a skewed distribution: 10010\n> values of 0 and 10010 unique values starting from 1.\n> Let's see query plan for the join of t and u:\n> \n> � explain analyze select * from t,u where t.f=u.f and t.g=u.g;\n> �Nested Loop� (cost=0.29..7629.95 rows=25055030 width=16) (actual time=0.042..22540.629 rows=20020 loops=1)\n> �� ->� Seq Scan on u� (cost=0.00..289.20 rows=20020 width=8) (actual time=0.011..3.025 rows=20020 loops=1)\n> �� ->� Index Scan using t_f on t� (cost=0.29..0.36 rows=1 width=8) (actual time=0.565..1.125 rows=1 loops=20020)\n> �������� Index Cond: (f = u.f)\n> �������� Filter: (u.g = g)\n> �������� Rows Removed by Filter: 5004\n> �Planning Time: 0.394 ms\n> �Execution Time: 22542.639 ms\n> \n> After dropping the index\n> � drop index t_f;\n> we obtain much better query plan (without Nested Loop):\n> �Merge Join� (cost=3439.09..442052.26 rows=25055030 width=16) (actual time=15.708..32.735 rows=20020 loops=1)\n> �� Merge Cond: ((t.f = u.f) AND (t.g = u.g))\n> �� ->� Sort� (cost=1719.54..1769.59 rows=20020 width=8) (actual time=8.189..10.189 rows=20020 loops=1)\n> �������� Sort Key: t.f, t.g\n> �������� Sort Method: quicksort� Memory: 1707kB\n> �������� ->� Seq Scan on t� (cost=0.00..289.20 rows=20020 width=8) (actual time=0.012..2.958 rows=20020 loops=1)\n> �� ->� Sort� (cost=1719.54..1769.59 rows=20020 width=8) (actual time=7.510..9.459 rows=20020 loops=1)\n> �������� Sort Key: u.f, u.g\n> �������� Sort Method: quicksort� Memory: 1707kB\n> �������� ->� Seq Scan on u� (cost=0.00..289.20 rows=20020 width=8) (actual time=0.008..2.748 rows=20020 loops=1)\n> �Planning Time: 0.239 ms\n> �Execution Time: 34.585 ms\n\n> �Nested Loop� (cost=0.29.. 7629.95 rows=25055030 width=16) (actual...\n> �Merge Join � (cost=3439.09..442052.26 rows=25055030 width=16) (actual...\n\nWhen you dropped the index, you necessarily refused the plan involving index scan.\nSo it did merge join instead (which it thinks of as expensive because it has to\nsort both sides).\n\nAs you saw, the rowcount estimate of the join result is badly off. But that's\ntrue of both plans.\n\nChoice of join type is affected by the size of its *inputs*, but doesn't and\nshouldn't have any effect on the result rowcount (or its) estimate. 
The\nrowcount *output* of the join would only affect any *following* plan nodes (of\nwhich there are none in this case).\n\nYou suggested using the MCV list, but I don't think that's possible, since the\nnested loop is evaluating its \"inner\" table multiple times, in a \"loop\":\n\n> �� ->� Index Scan using t_f on t� (cost=0.29..0.36 rows=1 width=8) (actual time=0.565..1.125 rows=1 loops=20020)\n\nHypothetically, one could plan the query 20k times, for each value of u.f and\nu.g, but that's tantamount to actually executing the query, which is why it\nuses (I gather) var_eq_non_const.\n\nIf you enable BUFFERS, you can see:\n\npostgres=# explain (analyze on,buffers) select * from t,u WHERE t.g=u.g AND t.f=u.f ;\n Nested Loop (cost=0.29..7629.95 rows=25055030 width=16) (actual time=0.031..22634.482 rows=20020 loops=1)\n Buffers: shared hit=770913\n -> Seq Scan on t (cost=0.00..289.20 rows=20020 width=8) (actual time=0.011..22.883 rows=20020 loops=1)\n Buffers: shared hit=89\n -> Index Scan using u_f_idx on u (cost=0.29..0.36 rows=1 width=8) (actual time=0.596..1.125 rows=1 loops=20020)\n ...\n Buffers: shared hit=770824\n\nvs.\n\npostgres=# SET enable_nestloop=off;\npostgres=# explain (analyze on,buffers) select * from t,u WHERE t.g=u.g AND t.f=u.f ;\n Merge Join (cost=3439.09..442052.26 rows=25055030 width=16) (actual time=74.262..187.454 rows=20020 loops=1)\n Merge Cond: ((t.g = u.g) AND (t.f = u.f))\n Buffers: shared hit=178\n ...\n\nSo the nest loop plan is hitting 770k buffer pages (5GB) while looping 20k\ntimes, rather than 178 buffers when each page is read once (to sort).\n\nPerhaps you could get a good plan by playing with these, but it's unclear why\nthat's necessary.\n\n effective_cache_size | 1280\n cpu_index_tuple_cost | 0.005\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n random_page_cost | 4\n seq_page_cost | 1\n\nAlso, since PG96, FKs can improve join estimates:\n\npostgres=# CREATE UNIQUE INDEX ON u(f,g);\npostgres=# CREATE UNIQUE INDEX ON t(f,g);\npostgres=# ALTER TABLE t ADD CONSTRAINT fk_f FOREIGN KEY (f,g) REFERENCES u(f,g);\npostgres=# ALTER TABLE u ADD CONSTRAINT fk_u FOREIGN KEY (f,g) REFERENCES t(f,g);\npostgres=# explain analyze select * from t,u WHERE t.g=u.g AND t.f=u.f ;\n Hash Join (cost=589.50..999.14 rows=20020 width=16) (actual time=29.054..69.296 rows=20020 loops=1)\n Hash Cond: ((t.g = u.g) AND (t.f = u.f))\n -> Seq Scan on t (cost=0.00..289.20 rows=20020 width=8) (actual time=0.016..11.331 rows=20020 loops=1)\n -> Hash (cost=289.20..289.20 rows=20020 width=8) (actual time=28.980..28.981 rows=20020 loops=1)\n Buckets: 32768 Batches: 1 Memory Usage: 1039kB\n -> Seq Scan on u (cost=0.00..289.20 rows=20020 width=8) (actual time=0.010..12.730 rows=20020 loops=1)\n\nJustin\n\n\n",
"msg_date": "Wed, 4 Sep 2019 19:18:33 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect choice of Nested Loop for a skewed distribution"
},
{
"msg_contents": "Thank you Justin!\n\nOn Wed, 04 Sep 2019 17:18:36 -0700 (PDT), Justin Pryzby wrote:\n\n> When you dropped the index, you necessarily refused the plan involving \n> index scan.\n> So it did merge join instead (which it thinks of as expensive because it has to\n> sort both sides).\n>\n> As you saw, the rowcount estimate of the join result is badly off. But that's\n> true of both plans.\n>\n> Choice of join type is affected by the size of its *inputs*, but doesn't and\n> shouldn't have any effect on the result rowcount (or its) estimate. The\n> rowcount *output* of the join would only affect any *following* plan nodes (of\n> which there are none in this case).\n>\n> You suggested using the MCV list, but I don't think that's possible, since the\n> nested loop is evaluating its \"inner\" table multiple times, in a \"loop\":\n>\n>> -> Index Scan using t_f on t (cost=0.29..0.36 rows=1 width=8) (actual time=0.565..1.125 rows=1 loops=20020)\n> Hypothetically, one could plan the query 20k times, for each value of u.f and\n> u.g, but that's tantamount to actually executing the query, which is why it\n> uses (I gather) var_eq_non_const.\n\nIn fact the planner has information about the outer loop relation when \nit is estimating the number of inner loop rows for a nested-loop join. \nIt ignores this information and considers the outer/inner loop relations \nas independent. So it uses for the rowcount estimate the number of \ndistinct values only, not MCV lists of both tables.\n\nFor the test case above, the planner estimates the selectivity of the \nclause \"t.f=u.f\" as 1/10011. It hopes to scan 1/10011*20020=2 rows in a \ninner loop for each row of the outer loop. In fact it scans 10010 rows \n10010 times and 1 row 10010 times. The average number of rows scanned in \na inner loop is (10010*10010+10010)/20020=5005. That is badly off from 2 \nrows expected. The planner should use 5005/20020 as a more accurate \nvalue for the effective selectivity of \"t.f=u.f\".\n\nI tried to make improvements to the function var_eq_non_const() so that \nit calculates the effective selectivity using MCV lists if possible. The \npatch for PostgreSQL 11.5 is attached to the letter.\n\nThe patched PostgreSQL chooses an optimal plan for the test case. As a \nresult the query execution time is reduced by more than 600 times.\n\nNow if we force the planner to choose the nested-loop join\n\n set enable_mergejoin=off;\n\n set enable_hashjoin=off;\n\n explain analyze select * from t,u where t.f=u.f and t.g=u.g;\n\n Nested Loop (cost=0.29..2261681.75 rows=25055030 width=16) (actual \ntime=0.048..22519.232 rows=20020 loops=1)\n -> Seq Scan on u (cost=0.00..289.20 rows=20020 width=8) (actual \ntime=0.012..2.970 rows=20020 loops=1)\n -> Index Scan using t_f on t (cost=0.29..100.44 rows=1252 width=8) \n(actual time=0.565..1.124 rows=1 loops=20020)\n Index Cond: (f = u.f)\n Filter: (u.g = g)\n Rows Removed by Filter: 5004\n Planning Time: 0.188 ms\n Execution Time: 22521.248 ms\n\nwe will see that the cost of the index scan is increased from 0.36 to \n100.44, which is much more realistic and reflects 2 to 5005 rows ratio.\n\nRegards,\n\nOleg",
"msg_date": "Fri, 6 Sep 2019 09:02:43 +0400",
"msg_from": "Oleg Kharin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect choice of Nested Loop for a skewed distribution"
}
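The selectivity arithmetic from the message above, written out as a quick check; the second expression uses integer division, which matches the rounded 5005 in the text:

```
SELECT round(20020.0 / 10011, 1)     AS planner_rows_per_inner_loop,    -- ~2.0
       (10010*10010 + 10010) / 20020 AS actual_avg_rows_per_inner_loop; -- 5005
```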
] |
[
{
"msg_contents": "Hi,\n\nI have tried doing some research for quite a while on the performance\nimplications of the built-in upsert (INSERT ... ON CONFLICT UPDATE...) when\na lot of upserts are made. The scale is something like 1 million\nrecords/hour, that is split up in groups of around 300 records each.\n\nThe table is fairly simple, just a few int/bool fields and a date field.\nThe total size of the table is somewhere between 50-60 million records.\nEssentially, all the rows are supposed to stay up-to-date within a certain\ncycle (that is currently about 2 days). Sometimes a lot of information\nchanges, sometimes very little/no information changes (e.g. 300 records get\nupserted but they are identical to the existing records).\n\nSo far, one hypothesis is that this project seems to be suffering from the\nlarge amount of writes that happen constantly since even if the upsert\nresults in no inserts/updates, the \"failed\" inserts from the upsert will\nstill get written somewhere (according to our knowledge). Therefore, the\nidea is to utilize old-fashioned upserts (writeable CTEs) and do more\ngranular operations that can make sure to only insert data that doesn't\nalready exist, and only update data that has actually changed. Naturally,\nhowever, this will put more read-load on the DB and increase query\ncomplexity.\n\nThe question is, are we right in our assumptions that the built-in upsert\nis useless from a performance perspective (e.g. it's only good for much\nsmaller tasks) or are we wrong? I read a bit about HOT updates and\nautovacuum tuning, but nothing that references something more similar to\nthis question.\n\nWorth mentioning is that this DB (PostgreSQL 10.9) is running on Heroku so\nwe are not able to tune it to our needs. We are planning to move at some\npoint, depending on how important it ends up being. Finally, it is also\nworth mentioning that this DB also has one follower (i.e. read replica).\n\nWould really appreciate some good insight on this! Thanks beforehand.\n\nBest,\nFredrik\n\nHi,I have tried doing some research for quite a while on the performance implications of the built-in upsert (INSERT ... ON CONFLICT UPDATE...) when a lot of upserts are made. The scale is something like 1 million records/hour, that is split up in groups of around 300 records each.The table is fairly simple, just a few int/bool fields and a date field. The total size of the table is somewhere between 50-60 million records. Essentially, all the rows are supposed to stay up-to-date within a certain cycle (that is currently about 2 days). Sometimes a lot of information changes, sometimes very little/no information changes (e.g. 300 records get upserted but they are identical to the existing records).So far, one hypothesis is that this project seems to be suffering from the large amount of writes that happen constantly since even if the upsert results in no inserts/updates, the \"failed\" inserts from the upsert will still get written somewhere (according to our knowledge). Therefore, the idea is to utilize old-fashioned upserts (writeable CTEs) and do more granular operations that can make sure to only insert data that doesn't already exist, and only update data that has actually changed. Naturally, however, this will put more read-load on the DB and increase query complexity.The question is, are we right in our assumptions that the built-in upsert is useless from a performance perspective (e.g. it's only good for much smaller tasks) or are we wrong? 
I read a bit about HOT updates and autovacuum tuning, but nothing that references something more similar to this question.Worth mentioning is that this DB (PostgreSQL 10.9) is running on Heroku so we are not able to tune it to our needs. We are planning to move at some point, depending on how important it ends up being. Finally, it is also worth mentioning that this DB also has one follower (i.e. read replica).Would really appreciate some good insight on this! Thanks beforehand.Best,Fredrik",
"msg_date": "Wed, 4 Sep 2019 13:29:45 -0400",
"msg_from": "Fredrik Blomqvist <[email protected]>",
"msg_from_op": true,
"msg_subject": "Upsert performance considerations (~1 mil/hour)"
},
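For context, a minimal sketch of the batched upsert pattern being described; the table and columns are hypothetical, since the post does not show its schema, and a real batch would carry around 300 value rows:

```
CREATE TABLE IF NOT EXISTS readings (
    id         int     PRIMARY KEY,
    flag       boolean NOT NULL,
    updated_on date    NOT NULL
);

-- One multi-valued statement per batch
INSERT INTO readings (id, flag, updated_on)
VALUES
    (1, true,  '2019-09-04'),
    (2, false, '2019-09-04')
ON CONFLICT (id) DO UPDATE
SET flag       = EXCLUDED.flag,
    updated_on = EXCLUDED.updated_on;
```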
{
"msg_contents": "On Wed, Sep 4, 2019 at 1:30 PM Fredrik Blomqvist <\[email protected]> wrote:\n\n> Hi,\n>\n> I have tried doing some research for quite a while on the performance\n> implications of the built-in upsert (INSERT ... ON CONFLICT UPDATE...) when\n> a lot of upserts are made. The scale is something like 1 million\n> records/hour, that is split up in groups of around 300 records each.\n>\n\nHow is that done? 300 single-valued insert statements, grouped into on\ntransaction? one 300-valued insert statement?\n\n\n> So far, one hypothesis is that this project seems to be suffering from the\n> large amount of writes that happen constantly since even if the upsert\n> results in no inserts/updates, the \"failed\" inserts from the upsert will\n> still get written somewhere (according to our knowledge).\n>\n\nYou can suppress redundant updates with a trigger, as described\nhttps://www.postgresql.org/docs/current/functions-trigger.html. This works\neven for updates that are the result of insert..on conflict..update. There\nis still some writing, as each tuple does get locked, but it is much less\n(at least from a WAL perspective). You can also put a WHERE clause on\nthe DO UPDATE so it only updates is a field has changed, but you have to\nlist each field connected with OR.\n\n\n> Therefore, the idea is to utilize old-fashioned upserts (writeable CTEs)\n> and do more granular operations that can make sure to only insert data that\n> doesn't already exist, and only update data that has actually changed.\n> Naturally, however, this will put more read-load on the DB and increase\n> query complexity.\n>\n\nIt shouldn't put a meaningful additional read load on the database, as the\nON CONFLICT code still needs to do the read as well. Yes, it makes the\ncode slightly more complex.\n\nCheers,\n\nJeff\n\n>\n\nOn Wed, Sep 4, 2019 at 1:30 PM Fredrik Blomqvist <[email protected]> wrote:Hi,I have tried doing some research for quite a while on the performance implications of the built-in upsert (INSERT ... ON CONFLICT UPDATE...) when a lot of upserts are made. The scale is something like 1 million records/hour, that is split up in groups of around 300 records each.How is that done? 300 single-valued insert statements, grouped into on transaction? one 300-valued insert statement? So far, one hypothesis is that this project seems to be suffering from the large amount of writes that happen constantly since even if the upsert results in no inserts/updates, the \"failed\" inserts from the upsert will still get written somewhere (according to our knowledge).You can suppress redundant updates with a trigger, as described https://www.postgresql.org/docs/current/functions-trigger.html. This works even for updates that are the result of insert..on conflict..update. There is still some writing, as each tuple does get locked, but it is much less (at least from a WAL perspective). You can also put a WHERE clause on the DO UPDATE so it only updates is a field has changed, but you have to list each field connected with OR. Therefore, the idea is to utilize old-fashioned upserts (writeable CTEs) and do more granular operations that can make sure to only insert data that doesn't already exist, and only update data that has actually changed. Naturally, however, this will put more read-load on the DB and increase query complexity.It shouldn't put a meaningful additional read load on the database, as the ON CONFLICT code still needs to do the read as well. Yes, it makes the code slightly more complex.Cheers,Jeff",
"msg_date": "Wed, 4 Sep 2019 15:58:05 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Upsert performance considerations (~1 mil/hour)"
},
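A hedged sketch of the two suggestions above, reusing the hypothetical "readings" table from the previous example; the WHERE variant uses IS DISTINCT FROM rather than plain inequalities so that NULLs compare sanely:

```
-- 1. Skip updates that would not change the row at all
CREATE TRIGGER readings_suppress_redundant
    BEFORE UPDATE ON readings
    FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();

-- 2. Or make DO UPDATE fire only when some field actually changed
INSERT INTO readings (id, flag, updated_on)
VALUES (1, true, '2019-09-05')
ON CONFLICT (id) DO UPDATE
SET flag       = EXCLUDED.flag,
    updated_on = EXCLUDED.updated_on
WHERE readings.flag       IS DISTINCT FROM EXCLUDED.flag
   OR readings.updated_on IS DISTINCT FROM EXCLUDED.updated_on;
```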
{
"msg_contents": "Thanks for the response Jeff!\n\nOn Wed, Sep 4, 2019 at 3:58 PM Jeff Janes <[email protected]> wrote:\n\n> On Wed, Sep 4, 2019 at 1:30 PM Fredrik Blomqvist <\n> [email protected]> wrote:\n>\n>> Hi,\n>>\n>> I have tried doing some research for quite a while on the performance\n>> implications of the built-in upsert (INSERT ... ON CONFLICT UPDATE...) when\n>> a lot of upserts are made. The scale is something like 1 million\n>> records/hour, that is split up in groups of around 300 records each.\n>>\n>\n> How is that done? 300 single-valued insert statements, grouped into on\n> transaction? one 300-valued insert statement?\n>\n\nIt's done using one 300-valued insert.\n\n\n>\n>\n>> So far, one hypothesis is that this project seems to be suffering from\n>> the large amount of writes that happen constantly since even if the upsert\n>> results in no inserts/updates, the \"failed\" inserts from the upsert will\n>> still get written somewhere (according to our knowledge).\n>>\n>\n> You can suppress redundant updates with a trigger, as described\n> https://www.postgresql.org/docs/current/functions-trigger.html. This\n> works even for updates that are the result of insert..on conflict..update.\n> There is still some writing, as each tuple does get locked, but it is much\n> less (at least from a WAL perspective). You can also put a WHERE clause\n> on the DO UPDATE so it only updates is a field has changed, but you have to\n> list each field connected with OR.\n>\n\nDidn't know about the trigger method, handy. We were planning on utilizing\nthe WHERE clause to prevent unnecessary updates, so I suppose that will\nmake the situation slightly better. However, we are still left with the\nunnecessary insert, right? If all 300 values already exist and are up to\ndate, there will be a failed insert that will have to be vacuumed, right?\nWhich in turn means that we'd probably need to tune the auto vacuuming to a\nmore aggressive setting if we want to use this kind of upsert.\n\n\n> Therefore, the idea is to utilize old-fashioned upserts (writeable CTEs)\n>> and do more granular operations that can make sure to only insert data that\n>> doesn't already exist, and only update data that has actually changed.\n>> Naturally, however, this will put more read-load on the DB and increase\n>> query complexity.\n>>\n>\n> It shouldn't put a meaningful additional read load on the database, as the\n> ON CONFLICT code still needs to do the read as well. Yes, it makes the\n> code slightly more complex.\n>\n\nRight, okay. Based on what I have told you so far, would you recommend\ngoing with the old-fashioned upsert or the built-in one? Or is there some\nother key information that could swing that decision?\n\nBest,\nFredrik\n\n>\n\nThanks for the response Jeff!On Wed, Sep 4, 2019 at 3:58 PM Jeff Janes <[email protected]> wrote:On Wed, Sep 4, 2019 at 1:30 PM Fredrik Blomqvist <[email protected]> wrote:Hi,I have tried doing some research for quite a while on the performance implications of the built-in upsert (INSERT ... ON CONFLICT UPDATE...) when a lot of upserts are made. The scale is something like 1 million records/hour, that is split up in groups of around 300 records each.How is that done? 300 single-valued insert statements, grouped into on transaction? one 300-valued insert statement?It's done using one 300-valued insert. 
So far, one hypothesis is that this project seems to be suffering from the large amount of writes that happen constantly since even if the upsert results in no inserts/updates, the \"failed\" inserts from the upsert will still get written somewhere (according to our knowledge).You can suppress redundant updates with a trigger, as described https://www.postgresql.org/docs/current/functions-trigger.html. This works even for updates that are the result of insert..on conflict..update. There is still some writing, as each tuple does get locked, but it is much less (at least from a WAL perspective). You can also put a WHERE clause on the DO UPDATE so it only updates is a field has changed, but you have to list each field connected with OR.Didn't know about the trigger method, handy. We were planning on utilizing the WHERE clause to prevent unnecessary updates, so I suppose that will make the situation slightly better. However, we are still left with the unnecessary insert, right? If all 300 values already exist and are up to date, there will be a failed insert that will have to be vacuumed, right? Which in turn means that we'd probably need to tune the auto vacuuming to a more aggressive setting if we want to use this kind of upsert. Therefore, the idea is to utilize old-fashioned upserts (writeable CTEs) and do more granular operations that can make sure to only insert data that doesn't already exist, and only update data that has actually changed. Naturally, however, this will put more read-load on the DB and increase query complexity.It shouldn't put a meaningful additional read load on the database, as the ON CONFLICT code still needs to do the read as well. Yes, it makes the code slightly more complex.Right, okay. Based on what I have told you so far, would you recommend going with the old-fashioned upsert or the built-in one? Or is there some other key information that could swing that decision?Best,Fredrik",
"msg_date": "Wed, 4 Sep 2019 17:18:27 -0400",
"msg_from": "Fredrik Blomqvist <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Upsert performance considerations (~1 mil/hour)"
}
] |
[
{
"msg_contents": "We have a query that takes 1min to execute in postgres 10.6 and the same\nexecutes in 4 sec in Oracle database. The query is doing 'select distinct'.\nIf I add a 'group by' clause, performance in postgres improves\nsignificantly and fetches results in 2 sec (better than oracle). But\nunfortunately, we cannot modify the query. Could you please suggest a way\nto improve performance in Postgres without modifying the query.\n\n*Original condition: time taken 1min*\n\nSort Method: external merge Disk: 90656kB\n\n\n\n*After removing distinct from query: time taken 2sec*\n\nSort Method: top-N heapsort Memory: 201kB\n\n\n\n*After increasing work_mem to 180MB; it takes 20sec*\n\nSort Method: quicksort Memory: 172409kB\n\n\n\nSELECT * FROM pg_stat_statements ORDER BY total_time DESC limit 1;\n\n-[ RECORD 1\n]-------+-----------------------------------------------------------------------------------------------------------------------------------------\n\nuserid | 174862\n\ndbid | 174861\n\nqueryid | 1469376470\n\nquery | <query is too long. It selects around 300 columns>\n\ncalls | 1\n\ntotal_time | 59469.972661\n\nmin_time | 59469.972661\n\nmax_time | 59469.972661\n\nmean_time | 59469.972661\n\nstddev_time | 0\n\nrows | 25\n\nshared_blks_hit | 27436\n\nshared_blks_read | 2542\n\nshared_blks_dirtied | 0\n\nshared_blks_written | 0\n\nlocal_blks_hit | 0\n\nlocal_blks_read | 0\n\nlocal_blks_dirtied | 0\n\nlocal_blks_written | 0\n\ntemp_blks_read | 257\n\ntemp_blks_written | 11333\n\nblk_read_time | 0\n\nblk_write_time | 0\n\nWe have a query that takes 1min to execute in postgres 10.6 and the same executes in 4 sec in Oracle database. The query is doing 'select distinct'. If I add a 'group by' clause, performance in postgres improves significantly and fetches results in 2 sec (better than oracle). But unfortunately, we cannot modify the query. Could you please suggest a way to improve performance in Postgres without modifying the query. Original condition: time taken 1min\nSort Method: external merge Disk: 90656kB\n \nAfter removing distinct from query: time taken 2sec\nSort Method: top-N heapsort Memory: 201kB\n \nAfter increasing work_mem to 180MB; it takes 20sec\nSort Method: quicksort Memory: 172409kB\n \nSELECT\n* FROM pg_stat_statements ORDER BY total_time DESC limit 1;-[\nRECORD 1\n]-------+-----------------------------------------------------------------------------------------------------------------------------------------userid \n| 174862dbid \n| 174861queryid \n| 1469376470query \n| <query is too long. It selects around 300 columns>calls \n| 1total_time \n| 59469.972661min_time \n| 59469.972661max_time \n| 59469.972661mean_time \n| 59469.972661stddev_time \n| 0rows \n| 25shared_blks_hit \n| 27436shared_blks_read |\n2542shared_blks_dirtied\n| 0shared_blks_written\n| 0local_blks_hit \n| 0local_blks_read \n| 0local_blks_dirtied \n| 0local_blks_written \n| 0temp_blks_read \n| 257temp_blks_written | 11333blk_read_time \n| 0\nblk_write_time \n| 0",
"msg_date": "Mon, 9 Sep 2019 14:00:01 +0530",
"msg_from": "yash mehta <[email protected]>",
"msg_from_op": true,
"msg_subject": "select distinct runs slow on pg 10.6"
},
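The real query cannot be shared in runnable form, so the following is only a self-contained toy (the temp table is made up) for comparing the DISTINCT and GROUP BY plans and for watching the sort method change with work_mem, as described above:

```
CREATE TEMP TABLE demo AS
SELECT g % 1000 AS a, md5(g::text) AS b
FROM generate_series(1, 200000) AS g;
ANALYZE demo;

SET work_mem = '4MB';
EXPLAIN ANALYZE SELECT DISTINCT a, b FROM demo ORDER BY a LIMIT 25;

SET work_mem = '180MB';
EXPLAIN ANALYZE SELECT a, b FROM demo GROUP BY a, b ORDER BY a LIMIT 25;
```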
{
"msg_contents": "In addition to below mail, we have used btree indexes for primary key\ncolumns. Below is the query:\n\n select distinct shipmentre0_.FIN_ID as FIN1_53_0_,\nworkflowst10_.FIN_ID as FIN1_57_1_,\ncarriers3_.FIN_ID as FIN1_40_2_,\nshipmentro1_.FIN_ID as FIN1_33_3_,\nshipmentme11_.FIN_ID as FIN1_5_4_,\nworkflowst9_.FIN_ID as FIN1_57_5_,\nworkflowst8_.FIN_ID as FIN1_57_6_,\nworkflowst7_.FIN_ID as FIN1_57_7_,\nconsignees5_.FIN_ID as FIN1_81_8_,\nconsignees6_.FIN_ID as FIN1_81_9_,\nshipmentty4_.FIN_ID as FIN1_8_10_,\nshipmentsc2_.FIN_ID as FIN1_78_11_,\nshipmentre0_.MOD_ID as MOD2_53_0_,\nshipmentre0_.SHIPMENT_METHOD_ID as SHIPMENT3_53_0_,\nshipmentre0_.SHIPPER_ID as SHIPPER4_53_0_,\nshipmentre0_.CONSIGNEES_ID as CONSIGNEES5_53_0_,\nshipmentre0_.SHIPMENT_BASIS_ID as SHIPMENT6_53_0_,\nshipmentre0_.SHIPMENT_TYPE_ID as SHIPMENT7_53_0_,\nshipmentre0_.SHIPMENT_ARRANGEMENT_ID as SHIPMENT8_53_0_,\nshipmentre0_.SHIPMENT_DATE as SHIPMENT9_53_0_,\nshipmentre0_.SHIPMENT_CURRENCY_ID as SHIPMENT10_53_0_,\nshipmentre0_.CARRIER_CREW_EXTN_ID as CARRIER11_53_0_,\nshipmentre0_.END_TIME as END12_53_0_,\nshipmentre0_.SHIPMENT_VALUE_USD as SHIPMENT13_53_0_,\nshipmentre0_.SHIPMENT_VALUE_BASE as SHIPMENT14_53_0_,\nshipmentre0_.INSURANCE_VALUE_USD as INSURANCE15_53_0_,\nshipmentre0_.INSURANCE_VALUE_BASE as INSURANCE16_53_0_,\nshipmentre0_.REMARKS as REMARKS53_0_,\nshipmentre0_.DELETION_REMARKS as DELETION18_53_0_,\nshipmentre0_.SHIPMENT_STATUS_ID as SHIPMENT19_53_0_,\nshipmentre0_.VAULT_STATUS_ID as VAULT20_53_0_,\nshipmentre0_.SHIPMENT_CHARGE_STATUS as SHIPMENT21_53_0_,\nshipmentre0_.SHIPMENT_DOCUMENT_STATUS as SHIPMENT22_53_0_,\nshipmentre0_.INSURANCE_PROVIDER as INSURANCE23_53_0_,\nshipmentre0_.SHIPMENT_PROVIDER as SHIPMENT24_53_0_,\nshipmentre0_.SECURITY_PROVIDER_ID as SECURITY25_53_0_,\nshipmentre0_.CONSIGNEE_CONTACT_NAME as CONSIGNEE26_53_0_,\nshipmentre0_.SIGNAL as SIGNAL53_0_,\nshipmentre0_.CHARGEABLE_WT as CHARGEABLE28_53_0_,\nshipmentre0_.NO_OF_PIECES as NO29_53_0_,\nshipmentre0_.REGIONS_ID as REGIONS30_53_0_,\nshipmentre0_.IS_DELETED as IS31_53_0_,\nshipmentre0_.CREATED as CREATED53_0_,\nshipmentre0_.CREATED_BY as CREATED33_53_0_,\nshipmentre0_.LAST_UPDATED as LAST34_53_0_,\nshipmentre0_.LAST_UPDATED_BY as LAST35_53_0_,\nshipmentre0_.LAST_CHECKED_BY as LAST36_53_0_,\nshipmentre0_.LAST_MAKED as LAST37_53_0_,\nshipmentre0_.MAKER_CHECKER_STATUS as MAKER38_53_0_,\nshipmentre0_.SHADOW_ID as SHADOW39_53_0_,\n--(select now()) as formula48_0_,\nworkflowst10_.WORKFLOW_MODULE as WORKFLOW2_57_1_,\nworkflowst10_.NAME as NAME57_1_,\nworkflowst10_.DEAL_DISPLAY_MODULE as DEAL4_57_1_,\nworkflowst10_.WORKFLOW_LEVEL as WORKFLOW5_57_1_,\nworkflowst10_.IS_DEAL_EDITABLE as IS6_57_1_,\nworkflowst10_.GEN_CONFO as GEN7_57_1_,\nworkflowst10_.GEN_DEAL_TICKET as GEN8_57_1_,\nworkflowst10_.GEN_SETTLEMENTS as GEN9_57_1_,\nworkflowst10_.VAULT_START as VAULT10_57_1_,\nworkflowst10_.UPDATE_MAIN_INV as UPDATE11_57_1_,\nworkflowst10_.UPDATE_OTHER_INV as UPDATE12_57_1_,\nworkflowst10_.RELEASE_SHIPMENT as RELEASE13_57_1_,\nworkflowst10_.IS_DEAL_SPLITTABLE as IS14_57_1_,\nworkflowst10_.SEND_EMAIL as SEND15_57_1_,\nworkflowst10_.IS_DELETED as IS16_57_1_,\nworkflowst10_.CREATED as CREATED57_1_,\nworkflowst10_.CREATED_BY as CREATED18_57_1_,\nworkflowst10_.LAST_UPDATED as LAST19_57_1_,\nworkflowst10_.LAST_UPDATED_BY as LAST20_57_1_,\nworkflowst10_.LAST_CHECKED_BY as LAST21_57_1_,\nworkflowst10_.LAST_MAKED as LAST22_57_1_,\nworkflowst10_.MOD_ID as MOD23_57_1_,\nworkflowst10_.MAKER_CHECKER_STATUS as 
MAKER24_57_1_,\nworkflowst10_.SHADOW_ID as SHADOW25_57_1_,\n--(select now()) as formula52_1_,\ncarriers3_.MOD_ID as MOD2_40_2_,\ncarriers3_.CITIES_ID as CITIES3_40_2_,\ncarriers3_.CODE as CODE40_2_,\ncarriers3_.NAME as NAME40_2_,\ncarriers3_.CARRIER_TYPES as CARRIER6_40_2_,\ncarriers3_.NAME_IN_FL as NAME7_40_2_,\ncarriers3_.IATA_CODE as IATA8_40_2_,\ncarriers3_.KC_CODE as KC9_40_2_,\ncarriers3_.AIRLINE_ACCT as AIRLINE10_40_2_,\ncarriers3_.ADDRESS1 as ADDRESS11_40_2_,\ncarriers3_.ADDRESS2 as ADDRESS12_40_2_,\ncarriers3_.ADDRESS3 as ADDRESS13_40_2_,\ncarriers3_.ADDRESS4 as ADDRESS14_40_2_,\ncarriers3_.TERMINAL as TERMINAL40_2_,\ncarriers3_.AIRLINE_AGENT as AIRLINE16_40_2_,\ncarriers3_.ACCOUNTINGINFO as ACCOUNT17_40_2_,\ncarriers3_.IMPORT_DEPT as IMPORT18_40_2_,\ncarriers3_.IMPORT_AFTER_OFFICE_HOUR as IMPORT19_40_2_,\ncarriers3_.IMPORT_CONTACT as IMPORT20_40_2_,\ncarriers3_.IMPORT_FAX as IMPORT21_40_2_,\ncarriers3_.IMPORT_EMAIL as IMPORT22_40_2_,\ncarriers3_.EXPORT_DEPTT as EXPORT23_40_2_,\ncarriers3_.EXPORT_AFTER_OFFICE_HOUR as EXPORT24_40_2_,\ncarriers3_.EXPORT_CONTACT as EXPORT25_40_2_,\ncarriers3_.EXPORT_FAX as EXPORT26_40_2_,\ncarriers3_.IMPORT_CONTACT_NO as IMPORT27_40_2_,\ncarriers3_.EXPORT_CONTACT_NO as EXPORT28_40_2_,\ncarriers3_.EXPORT_EMAIL as EXPORT29_40_2_,\ncarriers3_.AWB_ISSUED_BY as AWB30_40_2_,\ncarriers3_.IS_DELETED as IS31_40_2_,\ncarriers3_.CREATED as CREATED40_2_,\ncarriers3_.CREATED_BY as CREATED33_40_2_,\ncarriers3_.LAST_UPDATED as LAST34_40_2_,\ncarriers3_.LAST_UPDATED_BY as LAST35_40_2_,\ncarriers3_.LAST_CHECKED_BY as LAST36_40_2_,\ncarriers3_.LAST_MAKED as LAST37_40_2_,\ncarriers3_.MAKER_CHECKER_STATUS as MAKER38_40_2_,\ncarriers3_.SHADOW_ID as SHADOW39_40_2_,\n--(select now()) as formula36_2_,\nshipmentro1_.MOD_ID as MOD2_33_3_,\nshipmentro1_.REGION_ID as REGION3_33_3_,\nshipmentro1_.SHIPMENT_SCHEDULE_ID as SHIPMENT4_33_3_,\nshipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5_33_3_,\nshipmentro1_.AIRWAY_BILL_NO as AIRWAY6_33_3_,\nshipmentro1_.SHIPMENT_DATE as SHIPMENT7_33_3_,\nshipmentro1_.ARRIVAL_DATE as ARRIVAL8_33_3_,\nshipmentro1_.LEG_NO as LEG9_33_3_,\nshipmentro1_.NO_OF_PCS as NO10_33_3_,\nshipmentro1_.CHARGEABLE_WEIGHT as CHARGEABLE11_33_3_,\nshipmentro1_.CARRIER_CREW_EXTN_ID as CARRIER12_33_3_,\nshipmentro1_.IS_DELETED as IS13_33_3_,\nshipmentro1_.CREATED as CREATED33_3_,\nshipmentro1_.CREATED_BY as CREATED15_33_3_,\nshipmentro1_.LAST_UPDATED as LAST16_33_3_,\nshipmentro1_.LAST_UPDATED_BY as LAST17_33_3_,\nshipmentro1_.LAST_CHECKED_BY as LAST18_33_3_,\nshipmentro1_.LAST_MAKED as LAST19_33_3_,\nshipmentro1_.MAKER_CHECKER_STATUS as MAKER20_33_3_,\nshipmentro1_.SHADOW_ID as SHADOW21_33_3_,\n--(select now()) as formula29_3_,\nshipmentme11_.MOD_ID as MOD2_5_4_,\nshipmentme11_.CODE as CODE5_4_,\nshipmentme11_.NAME as NAME5_4_,\nshipmentme11_.SHIPMENT_METHOD_TYPE as SHIPMENT5_5_4_,\nshipmentme11_.IS_DELETED as IS6_5_4_,\nshipmentme11_.CREATED as CREATED5_4_,\nshipmentme11_.CREATED_BY as CREATED8_5_4_,\nshipmentme11_.LAST_UPDATED as LAST9_5_4_,\nshipmentme11_.LAST_UPDATED_BY as LAST10_5_4_,\nshipmentme11_.LAST_CHECKED_BY as LAST11_5_4_,\nshipmentme11_.LAST_MAKED as LAST12_5_4_,\nshipmentme11_.MAKER_CHECKER_STATUS as MAKER13_5_4_,\nshipmentme11_.SHADOW_ID as SHADOW14_5_4_,\n--(select now()) as formula4_4_,\nworkflowst9_.WORKFLOW_MODULE as WORKFLOW2_57_5_,\nworkflowst9_.NAME as NAME57_5_,\nworkflowst9_.DEAL_DISPLAY_MODULE as DEAL4_57_5_,\nworkflowst9_.WORKFLOW_LEVEL as WORKFLOW5_57_5_,\nworkflowst9_.IS_DEAL_EDITABLE as IS6_57_5_,\nworkflowst9_.GEN_CONFO as 
GEN7_57_5_,\nworkflowst9_.GEN_DEAL_TICKET as GEN8_57_5_,\nworkflowst9_.GEN_SETTLEMENTS as GEN9_57_5_,\nworkflowst9_.VAULT_START as VAULT10_57_5_,\nworkflowst9_.UPDATE_MAIN_INV as UPDATE11_57_5_,\nworkflowst9_.UPDATE_OTHER_INV as UPDATE12_57_5_,\nworkflowst9_.RELEASE_SHIPMENT as RELEASE13_57_5_,\nworkflowst9_.IS_DEAL_SPLITTABLE as IS14_57_5_,\nworkflowst9_.SEND_EMAIL as SEND15_57_5_,\nworkflowst9_.IS_DELETED as IS16_57_5_,\nworkflowst9_.CREATED as CREATED57_5_,\nworkflowst9_.CREATED_BY as CREATED18_57_5_,\nworkflowst9_.LAST_UPDATED as LAST19_57_5_,\nworkflowst9_.LAST_UPDATED_BY as LAST20_57_5_,\nworkflowst9_.LAST_CHECKED_BY as LAST21_57_5_,\nworkflowst9_.LAST_MAKED as LAST22_57_5_,\nworkflowst9_.MOD_ID as MOD23_57_5_,\nworkflowst9_.MAKER_CHECKER_STATUS as MAKER24_57_5_,\nworkflowst9_.SHADOW_ID as SHADOW25_57_5_,\n--(select now()) as formula52_5_,\nworkflowst8_.WORKFLOW_MODULE as WORKFLOW2_57_6_,\nworkflowst8_.NAME as NAME57_6_,\nworkflowst8_.DEAL_DISPLAY_MODULE as DEAL4_57_6_,\nworkflowst8_.WORKFLOW_LEVEL as WORKFLOW5_57_6_,\nworkflowst8_.IS_DEAL_EDITABLE as IS6_57_6_,\nworkflowst8_.GEN_CONFO as GEN7_57_6_,\nworkflowst8_.GEN_DEAL_TICKET as GEN8_57_6_,\nworkflowst8_.GEN_SETTLEMENTS as GEN9_57_6_,\nworkflowst8_.VAULT_START as VAULT10_57_6_,\nworkflowst8_.UPDATE_MAIN_INV as UPDATE11_57_6_,\nworkflowst8_.UPDATE_OTHER_INV as UPDATE12_57_6_,\nworkflowst8_.RELEASE_SHIPMENT as RELEASE13_57_6_,\nworkflowst8_.IS_DEAL_SPLITTABLE as IS14_57_6_,\nworkflowst8_.SEND_EMAIL as SEND15_57_6_,\nworkflowst8_.IS_DELETED as IS16_57_6_,\nworkflowst8_.CREATED as CREATED57_6_,\nworkflowst8_.CREATED_BY as CREATED18_57_6_,\nworkflowst8_.LAST_UPDATED as LAST19_57_6_,\nworkflowst8_.LAST_UPDATED_BY as LAST20_57_6_,\nworkflowst8_.LAST_CHECKED_BY as LAST21_57_6_,\nworkflowst8_.LAST_MAKED as LAST22_57_6_,\nworkflowst8_.MOD_ID as MOD23_57_6_,\nworkflowst8_.MAKER_CHECKER_STATUS as MAKER24_57_6_,\nworkflowst8_.SHADOW_ID as SHADOW25_57_6_,\n--(select now()) as formula52_6_,\nworkflowst7_.WORKFLOW_MODULE as WORKFLOW2_57_7_,\nworkflowst7_.NAME as NAME57_7_,\nworkflowst7_.DEAL_DISPLAY_MODULE as DEAL4_57_7_,\nworkflowst7_.WORKFLOW_LEVEL as WORKFLOW5_57_7_,\nworkflowst7_.IS_DEAL_EDITABLE as IS6_57_7_,\nworkflowst7_.GEN_CONFO as GEN7_57_7_,\nworkflowst7_.GEN_DEAL_TICKET as GEN8_57_7_,\nworkflowst7_.GEN_SETTLEMENTS as GEN9_57_7_,\nworkflowst7_.VAULT_START as VAULT10_57_7_,\nworkflowst7_.UPDATE_MAIN_INV as UPDATE11_57_7_,\nworkflowst7_.UPDATE_OTHER_INV as UPDATE12_57_7_,\nworkflowst7_.RELEASE_SHIPMENT as RELEASE13_57_7_,\nworkflowst7_.IS_DEAL_SPLITTABLE as IS14_57_7_,\nworkflowst7_.SEND_EMAIL as SEND15_57_7_,\nworkflowst7_.IS_DELETED as IS16_57_7_,\nworkflowst7_.CREATED as CREATED57_7_,\nworkflowst7_.CREATED_BY as CREATED18_57_7_,\nworkflowst7_.LAST_UPDATED as LAST19_57_7_,\nworkflowst7_.LAST_UPDATED_BY as LAST20_57_7_,\nworkflowst7_.LAST_CHECKED_BY as LAST21_57_7_,\nworkflowst7_.LAST_MAKED as LAST22_57_7_,\nworkflowst7_.MOD_ID as MOD23_57_7_,\nworkflowst7_.MAKER_CHECKER_STATUS as MAKER24_57_7_,\nworkflowst7_.SHADOW_ID as SHADOW25_57_7_,\n--(select now()) as formula52_7_,\nconsignees5_.MOD_ID as MOD2_81_8_,\nconsignees5_.COUNTRIES_ID as COUNTRIES3_81_8_,\nconsignees5_.CITIES_ID as CITIES4_81_8_,\nconsignees5_.REGIONS_ID as REGIONS5_81_8_,\nconsignees5_.SHORT_NAME as SHORT6_81_8_,\nconsignees5_.IS_COUNTERPARTY as IS7_81_8_,\nconsignees5_.NAME as NAME81_8_,\nconsignees5_.AIRPORTS_ID as AIRPORTS9_81_8_,\nconsignees5_.ADDRESS1 as ADDRESS10_81_8_,\nconsignees5_.ADDRESS2 as ADDRESS11_81_8_,\nconsignees5_.ADDRESS3 as 
ADDRESS12_81_8_,\nconsignees5_.ADDRESS4 as ADDRESS13_81_8_,\nconsignees5_.AWB_SPECIAL_CLAUSE as AWB14_81_8_,\nconsignees5_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_8_,\nconsignees5_.AGENT_ADDRESS1 as AGENT16_81_8_,\nconsignees5_.AGENT_ADDRESS2 as AGENT17_81_8_,\nconsignees5_.POSTAL_CODE as POSTAL18_81_8_,\nconsignees5_.IS_DELETED as IS19_81_8_,\nconsignees5_.CREATED as CREATED81_8_,\nconsignees5_.CREATED_BY as CREATED21_81_8_,\nconsignees5_.LAST_UPDATED as LAST22_81_8_,\nconsignees5_.LAST_UPDATED_BY as LAST23_81_8_,\nconsignees5_.LAST_CHECKED_BY as LAST24_81_8_,\nconsignees5_.LAST_MAKED as LAST25_81_8_,\nconsignees5_.MAKER_CHECKER_STATUS as MAKER26_81_8_,\nconsignees5_.SHADOW_ID as SHADOW27_81_8_,\n--(select now()) as formula74_8_,\nconsignees6_.MOD_ID as MOD2_81_9_,\nconsignees6_.COUNTRIES_ID as COUNTRIES3_81_9_,\nconsignees6_.CITIES_ID as CITIES4_81_9_,\nconsignees6_.REGIONS_ID as REGIONS5_81_9_,\nconsignees6_.SHORT_NAME as SHORT6_81_9_,\nconsignees6_.IS_COUNTERPARTY as IS7_81_9_,\nconsignees6_.NAME as NAME81_9_,\nconsignees6_.AIRPORTS_ID as AIRPORTS9_81_9_,\nconsignees6_.ADDRESS1 as ADDRESS10_81_9_,\nconsignees6_.ADDRESS2 as ADDRESS11_81_9_,\nconsignees6_.ADDRESS3 as ADDRESS12_81_9_,\nconsignees6_.ADDRESS4 as ADDRESS13_81_9_,\nconsignees6_.AWB_SPECIAL_CLAUSE as AWB14_81_9_,\nconsignees6_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_9_,\nconsignees6_.AGENT_ADDRESS1 as AGENT16_81_9_,\nconsignees6_.AGENT_ADDRESS2 as AGENT17_81_9_,\nconsignees6_.POSTAL_CODE as POSTAL18_81_9_,\nconsignees6_.IS_DELETED as IS19_81_9_,\nconsignees6_.CREATED as CREATED81_9_,\nconsignees6_.CREATED_BY as CREATED21_81_9_,\nconsignees6_.LAST_UPDATED as LAST22_81_9_,\nconsignees6_.LAST_UPDATED_BY as LAST23_81_9_,\nconsignees6_.LAST_CHECKED_BY as LAST24_81_9_,\nconsignees6_.LAST_MAKED as LAST25_81_9_,\nconsignees6_.MAKER_CHECKER_STATUS as MAKER26_81_9_,\nconsignees6_.SHADOW_ID as SHADOW27_81_9_,\n--(select now()) as formula74_9_,\nshipmentty4_.MOD_ID as MOD2_8_10_,\nshipmentty4_.CODE as CODE8_10_,\nshipmentty4_.NAME as NAME8_10_,\nshipmentty4_.REGIONS_ID as REGIONS5_8_10_,\nshipmentty4_.IS_DELETED as IS6_8_10_,\nshipmentty4_.CREATED as CREATED8_10_,\nshipmentty4_.CREATED_BY as CREATED8_8_10_,\nshipmentty4_.LAST_UPDATED as LAST9_8_10_,\nshipmentty4_.LAST_UPDATED_BY as LAST10_8_10_,\nshipmentty4_.LAST_CHECKED_BY as LAST11_8_10_,\nshipmentty4_.LAST_MAKED as LAST12_8_10_,\nshipmentty4_.MAKER_CHECKER_STATUS as MAKER13_8_10_,\nshipmentty4_.SHADOW_ID as SHADOW14_8_10_,\n--(select now()) as formula6_10_,\nshipmentsc2_.MOD_ID as MOD2_78_11_,\nshipmentsc2_.CARRIER_ID as CARRIER3_78_11_,\nshipmentsc2_.ORIGIN_AIRPORTS_ID as ORIGIN4_78_11_,\nshipmentsc2_.DEST_AIRPORTS_ID as DEST5_78_11_,\nshipmentsc2_.SCHEDULE as SCHEDULE78_11_,\nshipmentsc2_.ARRIVAL_DATE as ARRIVAL7_78_11_,\nshipmentsc2_.EST_TIME_DEPARTURE as EST8_78_11_,\nshipmentsc2_.EST_TIME_ARRIVAL as EST9_78_11_,\nshipmentsc2_.ROUTE_LEG_SEQ_NO as ROUTE10_78_11_,\nshipmentsc2_.CUTOFF_HOURS_BEFORE_DEPARTURE as CUTOFF11_78_11_,\nshipmentsc2_.AVAILABLE_IN_A_WEEK as AVAILABLE12_78_11_,\nshipmentsc2_.REMARKS as REMARKS78_11_,\nshipmentsc2_.STATUS as STATUS78_11_,\nshipmentsc2_.REGION_ID as REGION15_78_11_,\nshipmentsc2_.IS_DELETED as IS16_78_11_,\nshipmentsc2_.CREATED as CREATED78_11_,\nshipmentsc2_.CREATED_BY as CREATED18_78_11_,\nshipmentsc2_.LAST_UPDATED as LAST19_78_11_,\nshipmentsc2_.LAST_UPDATED_BY as LAST20_78_11_,\nshipmentsc2_.LAST_CHECKED_BY as LAST21_78_11_,\nshipmentsc2_.LAST_MAKED as LAST22_78_11_,\nshipmentsc2_.MAKER_CHECKER_STATUS as 
MAKER23_78_11_,\nshipmentsc2_.SHADOW_ID as SHADOW24_78_11_,\n--(select now()) as formula71_11_,\nshipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5___,\nshipmentro1_.FIN_ID as FIN1___\nfrom TBLS_SHIPMENT_RECORDS shipmentre0_\ninner join TBLS_SHIPMENT_RECORD_ROUTING shipmentro1_ on shipmentre0_.FIN_ID\n= shipmentro1_.SHIPMENT_RECORD_ID\ninner join TBLS_SHIPMENT_SCHEDULES shipmentsc2_ on\nshipmentro1_.SHIPMENT_SCHEDULE_ID = shipmentsc2_.FIN_ID\ninner join TBLS_CARRIERS carriers3_ on shipmentsc2_.CARRIER_ID =\ncarriers3_.FIN_ID\ninner join TBLS_SHIPMENT_TYPES shipmentty4_ on\nshipmentre0_.SHIPMENT_TYPE_ID = shipmentty4_.FIN_ID\ninner join TBLS_CONSIGNEES consignees5_ on shipmentre0_.SHIPPER_ID =\nconsignees5_.FIN_ID\ninner join TBLS_CONSIGNEES consignees6_ on shipmentre0_.CONSIGNEES_ID =\nconsignees6_.FIN_ID\ninner join TBLS_WORKFLOW_STATES workflowst7_ on\nshipmentre0_.SHIPMENT_STATUS_ID = workflowst7_.FIN_ID\ninner join TBLS_WORKFLOW_STATES workflowst8_ on\nshipmentre0_.SHIPMENT_CHARGE_STATUS = workflowst8_.FIN_ID\ninner join TBLS_WORKFLOW_STATES workflowst9_ on\nshipmentre0_.SHIPMENT_DOCUMENT_STATUS = workflowst9_.FIN_ID\ninner join TBLS_WORKFLOW_STATES workflowst10_ on\nshipmentre0_.VAULT_STATUS_ID = workflowst10_.FIN_ID\nleft outer join TBLS_SHIPMENT_METHODS shipmentme11_ on\nshipmentre0_.SHIPMENT_METHOD_ID = shipmentme11_.FIN_ID\nleft outer join TBLS_BANK_NOTES_DEALS_LEGS deallegs12_ on\nshipmentre0_.FIN_ID = deallegs12_.SHIPMENT_RECORDS_ID\nwhere (shipmentro1_.LEG_NO = (select min(shipmentro13_.LEG_NO)\n from TBLS_SHIPMENT_RECORD_ROUTING shipmentro13_\n where shipmentre0_.FIN_ID = shipmentro13_.SHIPMENT_RECORD_ID\nand ((shipmentro13_.IS_DELETED = 'N'))))\n and (shipmentre0_.IS_DELETED = 'N')\n and (TO_CHAR(shipmentro1_.ARRIVAL_DATE, 'YYYY-MM-DD') <= '2019-08-29')\norder by shipmentre0_.SHIPMENT_DATE\nlimit 25\n;\n\n\nOn Mon, Sep 9, 2019 at 2:00 PM yash mehta <[email protected]> wrote:\n\n> We have a query that takes 1min to execute in postgres 10.6 and the same\n> executes in 4 sec in Oracle database. The query is doing 'select distinct'.\n> If I add a 'group by' clause, performance in postgres improves\n> significantly and fetches results in 2 sec (better than oracle). But\n> unfortunately, we cannot modify the query. Could you please suggest a way\n> to improve performance in Postgres without modifying the query.\n>\n> *Original condition: time taken 1min*\n>\n> Sort Method: external merge Disk: 90656kB\n>\n>\n>\n> *After removing distinct from query: time taken 2sec*\n>\n> Sort Method: top-N heapsort Memory: 201kB\n>\n>\n>\n> *After increasing work_mem to 180MB; it takes 20sec*\n>\n> Sort Method: quicksort Memory: 172409kB\n>\n>\n>\n> SELECT * FROM pg_stat_statements ORDER BY total_time DESC limit 1;\n>\n> -[ RECORD 1\n> ]-------+-----------------------------------------------------------------------------------------------------------------------------------------\n>\n> userid | 174862\n>\n> dbid | 174861\n>\n> queryid | 1469376470\n>\n> query | <query is too long. 
It selects around 300 columns>\n>\n> calls | 1\n>\n> total_time | 59469.972661\n>\n> min_time | 59469.972661\n>\n> max_time | 59469.972661\n>\n> mean_time | 59469.972661\n>\n> stddev_time | 0\n>\n> rows | 25\n>\n> shared_blks_hit | 27436\n>\n> shared_blks_read | 2542\n>\n> shared_blks_dirtied | 0\n>\n> shared_blks_written | 0\n>\n> local_blks_hit | 0\n>\n> local_blks_read | 0\n>\n> local_blks_dirtied | 0\n>\n> local_blks_written | 0\n>\n> temp_blks_read | 257\n>\n> temp_blks_written | 11333\n>\n> blk_read_time | 0\n>\n> blk_write_time | 0\n>\n",
"msg_date": "Mon, 9 Sep 2019 14:08:03 +0530",
"msg_from": "yash mehta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
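The Sort Method lines quoted in the message above (external merge Disk: 90656kB at the default work_mem, quicksort in memory at 180MB) are the usual sign of a sort spilling to disk. What follows is a minimal, self-contained sketch of that mechanism, not the thread's schema: the temp table, row counts and work_mem values are made up for illustration, and work_mem can be raised per session without touching the application-generated SQL.

-- Toy data only; large enough that a small work_mem forces the DISTINCT sort to disk.
CREATE TEMP TABLE wide_rows AS
SELECT g AS id, md5(g::text) AS c1, md5((g * 2)::text) AS c2
FROM generate_series(1, 500000) g;

SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT id, c1, c2 FROM wide_rows ORDER BY id LIMIT 25;
-- the Sort node will most likely report "Sort Method: external merge Disk: ..."

SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT id, c1, c2 FROM wide_rows ORDER BY id LIMIT 25;
-- the same sort should now stay in memory ("Sort Method: quicksort Memory: ...")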
{
"msg_contents": "On Mon, Sep 9, 2019 at 10:38 AM yash mehta <[email protected]> wrote:\n\n> In addition to below mail, we have used btree indexes for primary key\n> columns. Below is the query:\n>\n> select distinct shipmentre0_.FIN_ID as FIN1_53_0_,\n> workflowst10_.FIN_ID as FIN1_57_1_,\n> carriers3_.FIN_ID as FIN1_40_2_,\n> shipmentro1_.FIN_ID as FIN1_33_3_,\n> shipmentme11_.FIN_ID as FIN1_5_4_,\n> workflowst9_.FIN_ID as FIN1_57_5_,\n> workflowst8_.FIN_ID as FIN1_57_6_,\n> workflowst7_.FIN_ID as FIN1_57_7_,\n> consignees5_.FIN_ID as FIN1_81_8_,\n> consignees6_.FIN_ID as FIN1_81_9_,\n> shipmentty4_.FIN_ID as FIN1_8_10_,\n> shipmentsc2_.FIN_ID as FIN1_78_11_,\n> shipmentre0_.MOD_ID as MOD2_53_0_,\n> shipmentre0_.SHIPMENT_METHOD_ID as SHIPMENT3_53_0_,\n> shipmentre0_.SHIPPER_ID as SHIPPER4_53_0_,\n> shipmentre0_.CONSIGNEES_ID as CONSIGNEES5_53_0_,\n> shipmentre0_.SHIPMENT_BASIS_ID as SHIPMENT6_53_0_,\n> shipmentre0_.SHIPMENT_TYPE_ID as SHIPMENT7_53_0_,\n> shipmentre0_.SHIPMENT_ARRANGEMENT_ID as SHIPMENT8_53_0_,\n> shipmentre0_.SHIPMENT_DATE as SHIPMENT9_53_0_,\n> shipmentre0_.SHIPMENT_CURRENCY_ID as SHIPMENT10_53_0_,\n> shipmentre0_.CARRIER_CREW_EXTN_ID as CARRIER11_53_0_,\n> shipmentre0_.END_TIME as END12_53_0_,\n> shipmentre0_.SHIPMENT_VALUE_USD as SHIPMENT13_53_0_,\n> shipmentre0_.SHIPMENT_VALUE_BASE as SHIPMENT14_53_0_,\n> shipmentre0_.INSURANCE_VALUE_USD as INSURANCE15_53_0_,\n> shipmentre0_.INSURANCE_VALUE_BASE as INSURANCE16_53_0_,\n> shipmentre0_.REMARKS as REMARKS53_0_,\n> shipmentre0_.DELETION_REMARKS as DELETION18_53_0_,\n> shipmentre0_.SHIPMENT_STATUS_ID as SHIPMENT19_53_0_,\n> shipmentre0_.VAULT_STATUS_ID as VAULT20_53_0_,\n> shipmentre0_.SHIPMENT_CHARGE_STATUS as SHIPMENT21_53_0_,\n> shipmentre0_.SHIPMENT_DOCUMENT_STATUS as SHIPMENT22_53_0_,\n> shipmentre0_.INSURANCE_PROVIDER as INSURANCE23_53_0_,\n> shipmentre0_.SHIPMENT_PROVIDER as SHIPMENT24_53_0_,\n> shipmentre0_.SECURITY_PROVIDER_ID as SECURITY25_53_0_,\n> shipmentre0_.CONSIGNEE_CONTACT_NAME as CONSIGNEE26_53_0_,\n> shipmentre0_.SIGNAL as SIGNAL53_0_,\n> shipmentre0_.CHARGEABLE_WT as CHARGEABLE28_53_0_,\n> shipmentre0_.NO_OF_PIECES as NO29_53_0_,\n> shipmentre0_.REGIONS_ID as REGIONS30_53_0_,\n> shipmentre0_.IS_DELETED as IS31_53_0_,\n> shipmentre0_.CREATED as CREATED53_0_,\n> shipmentre0_.CREATED_BY as CREATED33_53_0_,\n> shipmentre0_.LAST_UPDATED as LAST34_53_0_,\n> shipmentre0_.LAST_UPDATED_BY as LAST35_53_0_,\n> shipmentre0_.LAST_CHECKED_BY as LAST36_53_0_,\n> shipmentre0_.LAST_MAKED as LAST37_53_0_,\n> shipmentre0_.MAKER_CHECKER_STATUS as MAKER38_53_0_,\n> shipmentre0_.SHADOW_ID as SHADOW39_53_0_,\n> --(select now()) as formula48_0_,\n> workflowst10_.WORKFLOW_MODULE as WORKFLOW2_57_1_,\n> workflowst10_.NAME as NAME57_1_,\n> workflowst10_.DEAL_DISPLAY_MODULE as DEAL4_57_1_,\n> workflowst10_.WORKFLOW_LEVEL as WORKFLOW5_57_1_,\n> workflowst10_.IS_DEAL_EDITABLE as IS6_57_1_,\n> workflowst10_.GEN_CONFO as GEN7_57_1_,\n> workflowst10_.GEN_DEAL_TICKET as GEN8_57_1_,\n> workflowst10_.GEN_SETTLEMENTS as GEN9_57_1_,\n> workflowst10_.VAULT_START as VAULT10_57_1_,\n> workflowst10_.UPDATE_MAIN_INV as UPDATE11_57_1_,\n> workflowst10_.UPDATE_OTHER_INV as UPDATE12_57_1_,\n> workflowst10_.RELEASE_SHIPMENT as RELEASE13_57_1_,\n> workflowst10_.IS_DEAL_SPLITTABLE as IS14_57_1_,\n> workflowst10_.SEND_EMAIL as SEND15_57_1_,\n> workflowst10_.IS_DELETED as IS16_57_1_,\n> workflowst10_.CREATED as CREATED57_1_,\n> workflowst10_.CREATED_BY as CREATED18_57_1_,\n> workflowst10_.LAST_UPDATED as LAST19_57_1_,\n> workflowst10_.LAST_UPDATED_BY as 
LAST20_57_1_,\n> workflowst10_.LAST_CHECKED_BY as LAST21_57_1_,\n> workflowst10_.LAST_MAKED as LAST22_57_1_,\n> workflowst10_.MOD_ID as MOD23_57_1_,\n> workflowst10_.MAKER_CHECKER_STATUS as MAKER24_57_1_,\n> workflowst10_.SHADOW_ID as SHADOW25_57_1_,\n> --(select now()) as formula52_1_,\n> carriers3_.MOD_ID as MOD2_40_2_,\n> carriers3_.CITIES_ID as CITIES3_40_2_,\n> carriers3_.CODE as CODE40_2_,\n> carriers3_.NAME as NAME40_2_,\n> carriers3_.CARRIER_TYPES as CARRIER6_40_2_,\n> carriers3_.NAME_IN_FL as NAME7_40_2_,\n> carriers3_.IATA_CODE as IATA8_40_2_,\n> carriers3_.KC_CODE as KC9_40_2_,\n> carriers3_.AIRLINE_ACCT as AIRLINE10_40_2_,\n> carriers3_.ADDRESS1 as ADDRESS11_40_2_,\n> carriers3_.ADDRESS2 as ADDRESS12_40_2_,\n> carriers3_.ADDRESS3 as ADDRESS13_40_2_,\n> carriers3_.ADDRESS4 as ADDRESS14_40_2_,\n> carriers3_.TERMINAL as TERMINAL40_2_,\n> carriers3_.AIRLINE_AGENT as AIRLINE16_40_2_,\n> carriers3_.ACCOUNTINGINFO as ACCOUNT17_40_2_,\n> carriers3_.IMPORT_DEPT as IMPORT18_40_2_,\n> carriers3_.IMPORT_AFTER_OFFICE_HOUR as IMPORT19_40_2_,\n> carriers3_.IMPORT_CONTACT as IMPORT20_40_2_,\n> carriers3_.IMPORT_FAX as IMPORT21_40_2_,\n> carriers3_.IMPORT_EMAIL as IMPORT22_40_2_,\n> carriers3_.EXPORT_DEPTT as EXPORT23_40_2_,\n> carriers3_.EXPORT_AFTER_OFFICE_HOUR as EXPORT24_40_2_,\n> carriers3_.EXPORT_CONTACT as EXPORT25_40_2_,\n> carriers3_.EXPORT_FAX as EXPORT26_40_2_,\n> carriers3_.IMPORT_CONTACT_NO as IMPORT27_40_2_,\n> carriers3_.EXPORT_CONTACT_NO as EXPORT28_40_2_,\n> carriers3_.EXPORT_EMAIL as EXPORT29_40_2_,\n> carriers3_.AWB_ISSUED_BY as AWB30_40_2_,\n> carriers3_.IS_DELETED as IS31_40_2_,\n> carriers3_.CREATED as CREATED40_2_,\n> carriers3_.CREATED_BY as CREATED33_40_2_,\n> carriers3_.LAST_UPDATED as LAST34_40_2_,\n> carriers3_.LAST_UPDATED_BY as LAST35_40_2_,\n> carriers3_.LAST_CHECKED_BY as LAST36_40_2_,\n> carriers3_.LAST_MAKED as LAST37_40_2_,\n> carriers3_.MAKER_CHECKER_STATUS as MAKER38_40_2_,\n> carriers3_.SHADOW_ID as SHADOW39_40_2_,\n> --(select now()) as formula36_2_,\n> shipmentro1_.MOD_ID as MOD2_33_3_,\n> shipmentro1_.REGION_ID as REGION3_33_3_,\n> shipmentro1_.SHIPMENT_SCHEDULE_ID as SHIPMENT4_33_3_,\n> shipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5_33_3_,\n> shipmentro1_.AIRWAY_BILL_NO as AIRWAY6_33_3_,\n> shipmentro1_.SHIPMENT_DATE as SHIPMENT7_33_3_,\n> shipmentro1_.ARRIVAL_DATE as ARRIVAL8_33_3_,\n> shipmentro1_.LEG_NO as LEG9_33_3_,\n> shipmentro1_.NO_OF_PCS as NO10_33_3_,\n> shipmentro1_.CHARGEABLE_WEIGHT as CHARGEABLE11_33_3_,\n> shipmentro1_.CARRIER_CREW_EXTN_ID as CARRIER12_33_3_,\n> shipmentro1_.IS_DELETED as IS13_33_3_,\n> shipmentro1_.CREATED as CREATED33_3_,\n> shipmentro1_.CREATED_BY as CREATED15_33_3_,\n> shipmentro1_.LAST_UPDATED as LAST16_33_3_,\n> shipmentro1_.LAST_UPDATED_BY as LAST17_33_3_,\n> shipmentro1_.LAST_CHECKED_BY as LAST18_33_3_,\n> shipmentro1_.LAST_MAKED as LAST19_33_3_,\n> shipmentro1_.MAKER_CHECKER_STATUS as MAKER20_33_3_,\n> shipmentro1_.SHADOW_ID as SHADOW21_33_3_,\n> --(select now()) as formula29_3_,\n> shipmentme11_.MOD_ID as MOD2_5_4_,\n> shipmentme11_.CODE as CODE5_4_,\n> shipmentme11_.NAME as NAME5_4_,\n> shipmentme11_.SHIPMENT_METHOD_TYPE as SHIPMENT5_5_4_,\n> shipmentme11_.IS_DELETED as IS6_5_4_,\n> shipmentme11_.CREATED as CREATED5_4_,\n> shipmentme11_.CREATED_BY as CREATED8_5_4_,\n> shipmentme11_.LAST_UPDATED as LAST9_5_4_,\n> shipmentme11_.LAST_UPDATED_BY as LAST10_5_4_,\n> shipmentme11_.LAST_CHECKED_BY as LAST11_5_4_,\n> shipmentme11_.LAST_MAKED as LAST12_5_4_,\n> shipmentme11_.MAKER_CHECKER_STATUS as MAKER13_5_4_,\n> 
shipmentme11_.SHADOW_ID as SHADOW14_5_4_,\n> --(select now()) as formula4_4_,\n> workflowst9_.WORKFLOW_MODULE as WORKFLOW2_57_5_,\n> workflowst9_.NAME as NAME57_5_,\n> workflowst9_.DEAL_DISPLAY_MODULE as DEAL4_57_5_,\n> workflowst9_.WORKFLOW_LEVEL as WORKFLOW5_57_5_,\n> workflowst9_.IS_DEAL_EDITABLE as IS6_57_5_,\n> workflowst9_.GEN_CONFO as GEN7_57_5_,\n> workflowst9_.GEN_DEAL_TICKET as GEN8_57_5_,\n> workflowst9_.GEN_SETTLEMENTS as GEN9_57_5_,\n> workflowst9_.VAULT_START as VAULT10_57_5_,\n> workflowst9_.UPDATE_MAIN_INV as UPDATE11_57_5_,\n> workflowst9_.UPDATE_OTHER_INV as UPDATE12_57_5_,\n> workflowst9_.RELEASE_SHIPMENT as RELEASE13_57_5_,\n> workflowst9_.IS_DEAL_SPLITTABLE as IS14_57_5_,\n> workflowst9_.SEND_EMAIL as SEND15_57_5_,\n> workflowst9_.IS_DELETED as IS16_57_5_,\n> workflowst9_.CREATED as CREATED57_5_,\n> workflowst9_.CREATED_BY as CREATED18_57_5_,\n> workflowst9_.LAST_UPDATED as LAST19_57_5_,\n> workflowst9_.LAST_UPDATED_BY as LAST20_57_5_,\n> workflowst9_.LAST_CHECKED_BY as LAST21_57_5_,\n> workflowst9_.LAST_MAKED as LAST22_57_5_,\n> workflowst9_.MOD_ID as MOD23_57_5_,\n> workflowst9_.MAKER_CHECKER_STATUS as MAKER24_57_5_,\n> workflowst9_.SHADOW_ID as SHADOW25_57_5_,\n> --(select now()) as formula52_5_,\n> workflowst8_.WORKFLOW_MODULE as WORKFLOW2_57_6_,\n> workflowst8_.NAME as NAME57_6_,\n> workflowst8_.DEAL_DISPLAY_MODULE as DEAL4_57_6_,\n> workflowst8_.WORKFLOW_LEVEL as WORKFLOW5_57_6_,\n> workflowst8_.IS_DEAL_EDITABLE as IS6_57_6_,\n> workflowst8_.GEN_CONFO as GEN7_57_6_,\n> workflowst8_.GEN_DEAL_TICKET as GEN8_57_6_,\n> workflowst8_.GEN_SETTLEMENTS as GEN9_57_6_,\n> workflowst8_.VAULT_START as VAULT10_57_6_,\n> workflowst8_.UPDATE_MAIN_INV as UPDATE11_57_6_,\n> workflowst8_.UPDATE_OTHER_INV as UPDATE12_57_6_,\n> workflowst8_.RELEASE_SHIPMENT as RELEASE13_57_6_,\n> workflowst8_.IS_DEAL_SPLITTABLE as IS14_57_6_,\n> workflowst8_.SEND_EMAIL as SEND15_57_6_,\n> workflowst8_.IS_DELETED as IS16_57_6_,\n> workflowst8_.CREATED as CREATED57_6_,\n> workflowst8_.CREATED_BY as CREATED18_57_6_,\n> workflowst8_.LAST_UPDATED as LAST19_57_6_,\n> workflowst8_.LAST_UPDATED_BY as LAST20_57_6_,\n> workflowst8_.LAST_CHECKED_BY as LAST21_57_6_,\n> workflowst8_.LAST_MAKED as LAST22_57_6_,\n> workflowst8_.MOD_ID as MOD23_57_6_,\n> workflowst8_.MAKER_CHECKER_STATUS as MAKER24_57_6_,\n> workflowst8_.SHADOW_ID as SHADOW25_57_6_,\n> --(select now()) as formula52_6_,\n> workflowst7_.WORKFLOW_MODULE as WORKFLOW2_57_7_,\n> workflowst7_.NAME as NAME57_7_,\n> workflowst7_.DEAL_DISPLAY_MODULE as DEAL4_57_7_,\n> workflowst7_.WORKFLOW_LEVEL as WORKFLOW5_57_7_,\n> workflowst7_.IS_DEAL_EDITABLE as IS6_57_7_,\n> workflowst7_.GEN_CONFO as GEN7_57_7_,\n> workflowst7_.GEN_DEAL_TICKET as GEN8_57_7_,\n> workflowst7_.GEN_SETTLEMENTS as GEN9_57_7_,\n> workflowst7_.VAULT_START as VAULT10_57_7_,\n> workflowst7_.UPDATE_MAIN_INV as UPDATE11_57_7_,\n> workflowst7_.UPDATE_OTHER_INV as UPDATE12_57_7_,\n> workflowst7_.RELEASE_SHIPMENT as RELEASE13_57_7_,\n> workflowst7_.IS_DEAL_SPLITTABLE as IS14_57_7_,\n> workflowst7_.SEND_EMAIL as SEND15_57_7_,\n> workflowst7_.IS_DELETED as IS16_57_7_,\n> workflowst7_.CREATED as CREATED57_7_,\n> workflowst7_.CREATED_BY as CREATED18_57_7_,\n> workflowst7_.LAST_UPDATED as LAST19_57_7_,\n> workflowst7_.LAST_UPDATED_BY as LAST20_57_7_,\n> workflowst7_.LAST_CHECKED_BY as LAST21_57_7_,\n> workflowst7_.LAST_MAKED as LAST22_57_7_,\n> workflowst7_.MOD_ID as MOD23_57_7_,\n> workflowst7_.MAKER_CHECKER_STATUS as MAKER24_57_7_,\n> workflowst7_.SHADOW_ID as SHADOW25_57_7_,\n> --(select now()) as 
formula52_7_,\n> consignees5_.MOD_ID as MOD2_81_8_,\n> consignees5_.COUNTRIES_ID as COUNTRIES3_81_8_,\n> consignees5_.CITIES_ID as CITIES4_81_8_,\n> consignees5_.REGIONS_ID as REGIONS5_81_8_,\n> consignees5_.SHORT_NAME as SHORT6_81_8_,\n> consignees5_.IS_COUNTERPARTY as IS7_81_8_,\n> consignees5_.NAME as NAME81_8_,\n> consignees5_.AIRPORTS_ID as AIRPORTS9_81_8_,\n> consignees5_.ADDRESS1 as ADDRESS10_81_8_,\n> consignees5_.ADDRESS2 as ADDRESS11_81_8_,\n> consignees5_.ADDRESS3 as ADDRESS12_81_8_,\n> consignees5_.ADDRESS4 as ADDRESS13_81_8_,\n> consignees5_.AWB_SPECIAL_CLAUSE as AWB14_81_8_,\n> consignees5_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_8_,\n> consignees5_.AGENT_ADDRESS1 as AGENT16_81_8_,\n> consignees5_.AGENT_ADDRESS2 as AGENT17_81_8_,\n> consignees5_.POSTAL_CODE as POSTAL18_81_8_,\n> consignees5_.IS_DELETED as IS19_81_8_,\n> consignees5_.CREATED as CREATED81_8_,\n> consignees5_.CREATED_BY as CREATED21_81_8_,\n> consignees5_.LAST_UPDATED as LAST22_81_8_,\n> consignees5_.LAST_UPDATED_BY as LAST23_81_8_,\n> consignees5_.LAST_CHECKED_BY as LAST24_81_8_,\n> consignees5_.LAST_MAKED as LAST25_81_8_,\n> consignees5_.MAKER_CHECKER_STATUS as MAKER26_81_8_,\n> consignees5_.SHADOW_ID as SHADOW27_81_8_,\n> --(select now()) as formula74_8_,\n> consignees6_.MOD_ID as MOD2_81_9_,\n> consignees6_.COUNTRIES_ID as COUNTRIES3_81_9_,\n> consignees6_.CITIES_ID as CITIES4_81_9_,\n> consignees6_.REGIONS_ID as REGIONS5_81_9_,\n> consignees6_.SHORT_NAME as SHORT6_81_9_,\n> consignees6_.IS_COUNTERPARTY as IS7_81_9_,\n> consignees6_.NAME as NAME81_9_,\n> consignees6_.AIRPORTS_ID as AIRPORTS9_81_9_,\n> consignees6_.ADDRESS1 as ADDRESS10_81_9_,\n> consignees6_.ADDRESS2 as ADDRESS11_81_9_,\n> consignees6_.ADDRESS3 as ADDRESS12_81_9_,\n> consignees6_.ADDRESS4 as ADDRESS13_81_9_,\n> consignees6_.AWB_SPECIAL_CLAUSE as AWB14_81_9_,\n> consignees6_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_9_,\n> consignees6_.AGENT_ADDRESS1 as AGENT16_81_9_,\n> consignees6_.AGENT_ADDRESS2 as AGENT17_81_9_,\n> consignees6_.POSTAL_CODE as POSTAL18_81_9_,\n> consignees6_.IS_DELETED as IS19_81_9_,\n> consignees6_.CREATED as CREATED81_9_,\n> consignees6_.CREATED_BY as CREATED21_81_9_,\n> consignees6_.LAST_UPDATED as LAST22_81_9_,\n> consignees6_.LAST_UPDATED_BY as LAST23_81_9_,\n> consignees6_.LAST_CHECKED_BY as LAST24_81_9_,\n> consignees6_.LAST_MAKED as LAST25_81_9_,\n> consignees6_.MAKER_CHECKER_STATUS as MAKER26_81_9_,\n> consignees6_.SHADOW_ID as SHADOW27_81_9_,\n> --(select now()) as formula74_9_,\n> shipmentty4_.MOD_ID as MOD2_8_10_,\n> shipmentty4_.CODE as CODE8_10_,\n> shipmentty4_.NAME as NAME8_10_,\n> shipmentty4_.REGIONS_ID as REGIONS5_8_10_,\n> shipmentty4_.IS_DELETED as IS6_8_10_,\n> shipmentty4_.CREATED as CREATED8_10_,\n> shipmentty4_.CREATED_BY as CREATED8_8_10_,\n> shipmentty4_.LAST_UPDATED as LAST9_8_10_,\n> shipmentty4_.LAST_UPDATED_BY as LAST10_8_10_,\n> shipmentty4_.LAST_CHECKED_BY as LAST11_8_10_,\n> shipmentty4_.LAST_MAKED as LAST12_8_10_,\n> shipmentty4_.MAKER_CHECKER_STATUS as MAKER13_8_10_,\n> shipmentty4_.SHADOW_ID as SHADOW14_8_10_,\n> --(select now()) as formula6_10_,\n> shipmentsc2_.MOD_ID as MOD2_78_11_,\n> shipmentsc2_.CARRIER_ID as CARRIER3_78_11_,\n> shipmentsc2_.ORIGIN_AIRPORTS_ID as ORIGIN4_78_11_,\n> shipmentsc2_.DEST_AIRPORTS_ID as DEST5_78_11_,\n> shipmentsc2_.SCHEDULE as SCHEDULE78_11_,\n> shipmentsc2_.ARRIVAL_DATE as ARRIVAL7_78_11_,\n> shipmentsc2_.EST_TIME_DEPARTURE as EST8_78_11_,\n> shipmentsc2_.EST_TIME_ARRIVAL as EST9_78_11_,\n> shipmentsc2_.ROUTE_LEG_SEQ_NO as ROUTE10_78_11_,\n> 
shipmentsc2_.CUTOFF_HOURS_BEFORE_DEPARTURE as CUTOFF11_78_11_,\n> shipmentsc2_.AVAILABLE_IN_A_WEEK as AVAILABLE12_78_11_,\n> shipmentsc2_.REMARKS as REMARKS78_11_,\n> shipmentsc2_.STATUS as STATUS78_11_,\n> shipmentsc2_.REGION_ID as REGION15_78_11_,\n> shipmentsc2_.IS_DELETED as IS16_78_11_,\n> shipmentsc2_.CREATED as CREATED78_11_,\n> shipmentsc2_.CREATED_BY as CREATED18_78_11_,\n> shipmentsc2_.LAST_UPDATED as LAST19_78_11_,\n> shipmentsc2_.LAST_UPDATED_BY as LAST20_78_11_,\n> shipmentsc2_.LAST_CHECKED_BY as LAST21_78_11_,\n> shipmentsc2_.LAST_MAKED as LAST22_78_11_,\n> shipmentsc2_.MAKER_CHECKER_STATUS as MAKER23_78_11_,\n> shipmentsc2_.SHADOW_ID as SHADOW24_78_11_,\n> --(select now()) as formula71_11_,\n> shipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5___,\n> shipmentro1_.FIN_ID as FIN1___\n> from TBLS_SHIPMENT_RECORDS shipmentre0_\n> inner join TBLS_SHIPMENT_RECORD_ROUTING shipmentro1_ on\n> shipmentre0_.FIN_ID = shipmentro1_.SHIPMENT_RECORD_ID\n> inner join TBLS_SHIPMENT_SCHEDULES shipmentsc2_ on\n> shipmentro1_.SHIPMENT_SCHEDULE_ID = shipmentsc2_.FIN_ID\n> inner join TBLS_CARRIERS carriers3_ on shipmentsc2_.CARRIER_ID =\n> carriers3_.FIN_ID\n> inner join TBLS_SHIPMENT_TYPES shipmentty4_ on\n> shipmentre0_.SHIPMENT_TYPE_ID = shipmentty4_.FIN_ID\n> inner join TBLS_CONSIGNEES consignees5_ on shipmentre0_.SHIPPER_ID =\n> consignees5_.FIN_ID\n> inner join TBLS_CONSIGNEES consignees6_ on shipmentre0_.CONSIGNEES_ID =\n> consignees6_.FIN_ID\n> inner join TBLS_WORKFLOW_STATES workflowst7_ on\n> shipmentre0_.SHIPMENT_STATUS_ID = workflowst7_.FIN_ID\n> inner join TBLS_WORKFLOW_STATES workflowst8_ on\n> shipmentre0_.SHIPMENT_CHARGE_STATUS = workflowst8_.FIN_ID\n> inner join TBLS_WORKFLOW_STATES workflowst9_ on\n> shipmentre0_.SHIPMENT_DOCUMENT_STATUS = workflowst9_.FIN_ID\n> inner join TBLS_WORKFLOW_STATES workflowst10_ on\n> shipmentre0_.VAULT_STATUS_ID = workflowst10_.FIN_ID\n> left outer join TBLS_SHIPMENT_METHODS shipmentme11_ on\n> shipmentre0_.SHIPMENT_METHOD_ID = shipmentme11_.FIN_ID\n> left outer join TBLS_BANK_NOTES_DEALS_LEGS deallegs12_ on\n> shipmentre0_.FIN_ID = deallegs12_.SHIPMENT_RECORDS_ID\n> where (shipmentro1_.LEG_NO = (select min(shipmentro13_.LEG_NO)\n> from TBLS_SHIPMENT_RECORD_ROUTING shipmentro13_\n> where shipmentre0_.FIN_ID = shipmentro13_.SHIPMENT_RECORD_ID\n> and ((shipmentro13_.IS_DELETED = 'N'))))\n> and (shipmentre0_.IS_DELETED = 'N')\n> and (TO_CHAR(shipmentro1_.ARRIVAL_DATE, 'YYYY-MM-DD') <= '2019-08-29')\n> order by shipmentre0_.SHIPMENT_DATE\n> limit 25\n> ;\n>\n>\n> On Mon, Sep 9, 2019 at 2:00 PM yash mehta <[email protected]> wrote:\n>\n>> We have a query that takes 1min to execute in postgres 10.6 and the same\n>> executes in 4 sec in Oracle database. The query is doing 'select distinct'.\n>> If I add a 'group by' clause, performance in postgres improves\n>> significantly and fetches results in 2 sec (better than oracle). But\n>> unfortunately, we cannot modify the query. 
Could you please suggest a way\n>> to improve performance in Postgres without modifying the query.\n>>\n>> *Original condition: time taken 1min*\n>>\n>> Sort Method: external merge Disk: 90656kB\n>>\n>>\n>>\n>> *After removing distinct from query: time taken 2sec*\n>>\n>> Sort Method: top-N heapsort Memory: 201kB\n>>\n>>\n>>\n>> *After increasing work_mem to 180MB; it takes 20sec*\n>>\n>> Sort Method: quicksort Memory: 172409kB\n>>\n>>\n>>\n>> SELECT * FROM pg_stat_statements ORDER BY total_time DESC limit 1;\n>>\n>> -[ RECORD 1\n>> ]-------+-----------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> userid | 174862\n>>\n>> dbid | 174861\n>>\n>> queryid | 1469376470\n>>\n>> query | <query is too long. It selects around 300 columns>\n>>\n>> calls | 1\n>>\n>> total_time | 59469.972661\n>>\n>> min_time | 59469.972661\n>>\n>> max_time | 59469.972661\n>>\n>> mean_time | 59469.972661\n>>\n>> stddev_time | 0\n>>\n>> rows | 25\n>>\n>> shared_blks_hit | 27436\n>>\n>> shared_blks_read | 2542\n>>\n>> shared_blks_dirtied | 0\n>>\n>> shared_blks_written | 0\n>>\n>> local_blks_hit | 0\n>>\n>> local_blks_read | 0\n>>\n>> local_blks_dirtied | 0\n>>\n>> local_blks_written | 0\n>>\n>> temp_blks_read | 257\n>>\n>> temp_blks_written | 11333\n>>\n>> blk_read_time | 0\n>>\n>> blk_write_time | 0\n>>\n>\nIMO, an explain analyze of the query would be useful in order for people to\nhelp you.\n\ne.g. https://explain.depesz.com\n\nRegards,\nFlo\n\nOn Mon, Sep 9, 2019 at 10:38 AM yash mehta <[email protected]> wrote:In addition to below mail, we have used btree indexes for primary key columns. Below is the query: select distinct shipmentre0_.FIN_ID as FIN1_53_0_,\t\t\t\t\tworkflowst10_.FIN_ID as FIN1_57_1_,\t\t\t\t\tcarriers3_.FIN_ID as FIN1_40_2_,\t\t\t\t\tshipmentro1_.FIN_ID as FIN1_33_3_,\t\t\t\t\tshipmentme11_.FIN_ID as FIN1_5_4_,\t\t\t\t\tworkflowst9_.FIN_ID as FIN1_57_5_,\t\t\t\t\tworkflowst8_.FIN_ID as FIN1_57_6_,\t\t\t\t\tworkflowst7_.FIN_ID as FIN1_57_7_,\t\t\t\t\tconsignees5_.FIN_ID as FIN1_81_8_,\t\t\t\t\tconsignees6_.FIN_ID as FIN1_81_9_,\t\t\t\t\tshipmentty4_.FIN_ID as FIN1_8_10_,\t\t\t\t\tshipmentsc2_.FIN_ID as FIN1_78_11_,\t\t\t\t\tshipmentre0_.MOD_ID as MOD2_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_METHOD_ID as SHIPMENT3_53_0_,\t\t\t\t\tshipmentre0_.SHIPPER_ID as SHIPPER4_53_0_,\t\t\t\t\tshipmentre0_.CONSIGNEES_ID as CONSIGNEES5_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_BASIS_ID as SHIPMENT6_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_TYPE_ID as SHIPMENT7_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_ARRANGEMENT_ID as SHIPMENT8_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_DATE as SHIPMENT9_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_CURRENCY_ID as SHIPMENT10_53_0_,\t\t\t\t\tshipmentre0_.CARRIER_CREW_EXTN_ID as CARRIER11_53_0_,\t\t\t\t\tshipmentre0_.END_TIME as END12_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_VALUE_USD as SHIPMENT13_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_VALUE_BASE as SHIPMENT14_53_0_,\t\t\t\t\tshipmentre0_.INSURANCE_VALUE_USD as INSURANCE15_53_0_,\t\t\t\t\tshipmentre0_.INSURANCE_VALUE_BASE as INSURANCE16_53_0_,\t\t\t\t\tshipmentre0_.REMARKS as REMARKS53_0_,\t\t\t\t\tshipmentre0_.DELETION_REMARKS as DELETION18_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_STATUS_ID as SHIPMENT19_53_0_,\t\t\t\t\tshipmentre0_.VAULT_STATUS_ID as VAULT20_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_CHARGE_STATUS as SHIPMENT21_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_DOCUMENT_STATUS as SHIPMENT22_53_0_,\t\t\t\t\tshipmentre0_.INSURANCE_PROVIDER as 
INSURANCE23_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_PROVIDER as SHIPMENT24_53_0_,\t\t\t\t\tshipmentre0_.SECURITY_PROVIDER_ID as SECURITY25_53_0_,\t\t\t\t\tshipmentre0_.CONSIGNEE_CONTACT_NAME as CONSIGNEE26_53_0_,\t\t\t\t\tshipmentre0_.SIGNAL as SIGNAL53_0_,\t\t\t\t\tshipmentre0_.CHARGEABLE_WT as CHARGEABLE28_53_0_,\t\t\t\t\tshipmentre0_.NO_OF_PIECES as NO29_53_0_,\t\t\t\t\tshipmentre0_.REGIONS_ID as REGIONS30_53_0_,\t\t\t\t\tshipmentre0_.IS_DELETED as IS31_53_0_,\t\t\t\t\tshipmentre0_.CREATED as CREATED53_0_,\t\t\t\t\tshipmentre0_.CREATED_BY as CREATED33_53_0_,\t\t\t\t\tshipmentre0_.LAST_UPDATED as LAST34_53_0_,\t\t\t\t\tshipmentre0_.LAST_UPDATED_BY as LAST35_53_0_,\t\t\t\t\tshipmentre0_.LAST_CHECKED_BY as LAST36_53_0_,\t\t\t\t\tshipmentre0_.LAST_MAKED as LAST37_53_0_,\t\t\t\t\tshipmentre0_.MAKER_CHECKER_STATUS as MAKER38_53_0_,\t\t\t\t\tshipmentre0_.SHADOW_ID as SHADOW39_53_0_,\t\t\t\t\t--(select now()) as formula48_0_,\t\t\t\t\tworkflowst10_.WORKFLOW_MODULE as WORKFLOW2_57_1_,\t\t\t\t\tworkflowst10_.NAME as NAME57_1_,\t\t\t\t\tworkflowst10_.DEAL_DISPLAY_MODULE as DEAL4_57_1_,\t\t\t\t\tworkflowst10_.WORKFLOW_LEVEL as WORKFLOW5_57_1_,\t\t\t\t\tworkflowst10_.IS_DEAL_EDITABLE as IS6_57_1_,\t\t\t\t\tworkflowst10_.GEN_CONFO as GEN7_57_1_,\t\t\t\t\tworkflowst10_.GEN_DEAL_TICKET as GEN8_57_1_,\t\t\t\t\tworkflowst10_.GEN_SETTLEMENTS as GEN9_57_1_,\t\t\t\t\tworkflowst10_.VAULT_START as VAULT10_57_1_,\t\t\t\t\tworkflowst10_.UPDATE_MAIN_INV as UPDATE11_57_1_,\t\t\t\t\tworkflowst10_.UPDATE_OTHER_INV as UPDATE12_57_1_,\t\t\t\t\tworkflowst10_.RELEASE_SHIPMENT as RELEASE13_57_1_,\t\t\t\t\tworkflowst10_.IS_DEAL_SPLITTABLE as IS14_57_1_,\t\t\t\t\tworkflowst10_.SEND_EMAIL as SEND15_57_1_,\t\t\t\t\tworkflowst10_.IS_DELETED as IS16_57_1_,\t\t\t\t\tworkflowst10_.CREATED as CREATED57_1_,\t\t\t\t\tworkflowst10_.CREATED_BY as CREATED18_57_1_,\t\t\t\t\tworkflowst10_.LAST_UPDATED as LAST19_57_1_,\t\t\t\t\tworkflowst10_.LAST_UPDATED_BY as LAST20_57_1_,\t\t\t\t\tworkflowst10_.LAST_CHECKED_BY as LAST21_57_1_,\t\t\t\t\tworkflowst10_.LAST_MAKED as LAST22_57_1_,\t\t\t\t\tworkflowst10_.MOD_ID as MOD23_57_1_,\t\t\t\t\tworkflowst10_.MAKER_CHECKER_STATUS as MAKER24_57_1_,\t\t\t\t\tworkflowst10_.SHADOW_ID as SHADOW25_57_1_,\t\t\t\t\t--(select now()) as formula52_1_,\t\t\t\t\tcarriers3_.MOD_ID as MOD2_40_2_,\t\t\t\t\tcarriers3_.CITIES_ID as CITIES3_40_2_,\t\t\t\t\tcarriers3_.CODE as CODE40_2_,\t\t\t\t\tcarriers3_.NAME as NAME40_2_,\t\t\t\t\tcarriers3_.CARRIER_TYPES as CARRIER6_40_2_,\t\t\t\t\tcarriers3_.NAME_IN_FL as NAME7_40_2_,\t\t\t\t\tcarriers3_.IATA_CODE as IATA8_40_2_,\t\t\t\t\tcarriers3_.KC_CODE as KC9_40_2_,\t\t\t\t\tcarriers3_.AIRLINE_ACCT as AIRLINE10_40_2_,\t\t\t\t\tcarriers3_.ADDRESS1 as ADDRESS11_40_2_,\t\t\t\t\tcarriers3_.ADDRESS2 as ADDRESS12_40_2_,\t\t\t\t\tcarriers3_.ADDRESS3 as ADDRESS13_40_2_,\t\t\t\t\tcarriers3_.ADDRESS4 as ADDRESS14_40_2_,\t\t\t\t\tcarriers3_.TERMINAL as TERMINAL40_2_,\t\t\t\t\tcarriers3_.AIRLINE_AGENT as AIRLINE16_40_2_,\t\t\t\t\tcarriers3_.ACCOUNTINGINFO as ACCOUNT17_40_2_,\t\t\t\t\tcarriers3_.IMPORT_DEPT as IMPORT18_40_2_,\t\t\t\t\tcarriers3_.IMPORT_AFTER_OFFICE_HOUR as IMPORT19_40_2_,\t\t\t\t\tcarriers3_.IMPORT_CONTACT as IMPORT20_40_2_,\t\t\t\t\tcarriers3_.IMPORT_FAX as IMPORT21_40_2_,\t\t\t\t\tcarriers3_.IMPORT_EMAIL as IMPORT22_40_2_,\t\t\t\t\tcarriers3_.EXPORT_DEPTT as EXPORT23_40_2_,\t\t\t\t\tcarriers3_.EXPORT_AFTER_OFFICE_HOUR as EXPORT24_40_2_,\t\t\t\t\tcarriers3_.EXPORT_CONTACT as EXPORT25_40_2_,\t\t\t\t\tcarriers3_.EXPORT_FAX as 
EXPORT26_40_2_,\t\t\t\t\tcarriers3_.IMPORT_CONTACT_NO as IMPORT27_40_2_,\t\t\t\t\tcarriers3_.EXPORT_CONTACT_NO as EXPORT28_40_2_,\t\t\t\t\tcarriers3_.EXPORT_EMAIL as EXPORT29_40_2_,\t\t\t\t\tcarriers3_.AWB_ISSUED_BY as AWB30_40_2_,\t\t\t\t\tcarriers3_.IS_DELETED as IS31_40_2_,\t\t\t\t\tcarriers3_.CREATED as CREATED40_2_,\t\t\t\t\tcarriers3_.CREATED_BY as CREATED33_40_2_,\t\t\t\t\tcarriers3_.LAST_UPDATED as LAST34_40_2_,\t\t\t\t\tcarriers3_.LAST_UPDATED_BY as LAST35_40_2_,\t\t\t\t\tcarriers3_.LAST_CHECKED_BY as LAST36_40_2_,\t\t\t\t\tcarriers3_.LAST_MAKED as LAST37_40_2_,\t\t\t\t\tcarriers3_.MAKER_CHECKER_STATUS as MAKER38_40_2_,\t\t\t\t\tcarriers3_.SHADOW_ID as SHADOW39_40_2_,\t\t\t\t\t--(select now()) as formula36_2_,\t\t\t\t\tshipmentro1_.MOD_ID as MOD2_33_3_,\t\t\t\t\tshipmentro1_.REGION_ID as REGION3_33_3_,\t\t\t\t\tshipmentro1_.SHIPMENT_SCHEDULE_ID as SHIPMENT4_33_3_,\t\t\t\t\tshipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5_33_3_,\t\t\t\t\tshipmentro1_.AIRWAY_BILL_NO as AIRWAY6_33_3_,\t\t\t\t\tshipmentro1_.SHIPMENT_DATE as SHIPMENT7_33_3_,\t\t\t\t\tshipmentro1_.ARRIVAL_DATE as ARRIVAL8_33_3_,\t\t\t\t\tshipmentro1_.LEG_NO as LEG9_33_3_,\t\t\t\t\tshipmentro1_.NO_OF_PCS as NO10_33_3_,\t\t\t\t\tshipmentro1_.CHARGEABLE_WEIGHT as CHARGEABLE11_33_3_,\t\t\t\t\tshipmentro1_.CARRIER_CREW_EXTN_ID as CARRIER12_33_3_,\t\t\t\t\tshipmentro1_.IS_DELETED as IS13_33_3_,\t\t\t\t\tshipmentro1_.CREATED as CREATED33_3_,\t\t\t\t\tshipmentro1_.CREATED_BY as CREATED15_33_3_,\t\t\t\t\tshipmentro1_.LAST_UPDATED as LAST16_33_3_,\t\t\t\t\tshipmentro1_.LAST_UPDATED_BY as LAST17_33_3_,\t\t\t\t\tshipmentro1_.LAST_CHECKED_BY as LAST18_33_3_,\t\t\t\t\tshipmentro1_.LAST_MAKED as LAST19_33_3_,\t\t\t\t\tshipmentro1_.MAKER_CHECKER_STATUS as MAKER20_33_3_,\t\t\t\t\tshipmentro1_.SHADOW_ID as SHADOW21_33_3_,\t\t\t\t\t--(select now()) as formula29_3_,\t\t\t\t\tshipmentme11_.MOD_ID as MOD2_5_4_,\t\t\t\t\tshipmentme11_.CODE as CODE5_4_,\t\t\t\t\tshipmentme11_.NAME as NAME5_4_,\t\t\t\t\tshipmentme11_.SHIPMENT_METHOD_TYPE as SHIPMENT5_5_4_,\t\t\t\t\tshipmentme11_.IS_DELETED as IS6_5_4_,\t\t\t\t\tshipmentme11_.CREATED as CREATED5_4_,\t\t\t\t\tshipmentme11_.CREATED_BY as CREATED8_5_4_,\t\t\t\t\tshipmentme11_.LAST_UPDATED as LAST9_5_4_,\t\t\t\t\tshipmentme11_.LAST_UPDATED_BY as LAST10_5_4_,\t\t\t\t\tshipmentme11_.LAST_CHECKED_BY as LAST11_5_4_,\t\t\t\t\tshipmentme11_.LAST_MAKED as LAST12_5_4_,\t\t\t\t\tshipmentme11_.MAKER_CHECKER_STATUS as MAKER13_5_4_,\t\t\t\t\tshipmentme11_.SHADOW_ID as SHADOW14_5_4_,\t\t\t\t\t--(select now()) as formula4_4_,\t\t\t\t\tworkflowst9_.WORKFLOW_MODULE as WORKFLOW2_57_5_,\t\t\t\t\tworkflowst9_.NAME as NAME57_5_,\t\t\t\t\tworkflowst9_.DEAL_DISPLAY_MODULE as DEAL4_57_5_,\t\t\t\t\tworkflowst9_.WORKFLOW_LEVEL as WORKFLOW5_57_5_,\t\t\t\t\tworkflowst9_.IS_DEAL_EDITABLE as IS6_57_5_,\t\t\t\t\tworkflowst9_.GEN_CONFO as GEN7_57_5_,\t\t\t\t\tworkflowst9_.GEN_DEAL_TICKET as GEN8_57_5_,\t\t\t\t\tworkflowst9_.GEN_SETTLEMENTS as GEN9_57_5_,\t\t\t\t\tworkflowst9_.VAULT_START as VAULT10_57_5_,\t\t\t\t\tworkflowst9_.UPDATE_MAIN_INV as UPDATE11_57_5_,\t\t\t\t\tworkflowst9_.UPDATE_OTHER_INV as UPDATE12_57_5_,\t\t\t\t\tworkflowst9_.RELEASE_SHIPMENT as RELEASE13_57_5_,\t\t\t\t\tworkflowst9_.IS_DEAL_SPLITTABLE as IS14_57_5_,\t\t\t\t\tworkflowst9_.SEND_EMAIL as SEND15_57_5_,\t\t\t\t\tworkflowst9_.IS_DELETED as IS16_57_5_,\t\t\t\t\tworkflowst9_.CREATED as CREATED57_5_,\t\t\t\t\tworkflowst9_.CREATED_BY as CREATED18_57_5_,\t\t\t\t\tworkflowst9_.LAST_UPDATED as LAST19_57_5_,\t\t\t\t\tworkflowst9_.LAST_UPDATED_BY as 
LAST20_57_5_,\t\t\t\t\tworkflowst9_.LAST_CHECKED_BY as LAST21_57_5_,\t\t\t\t\tworkflowst9_.LAST_MAKED as LAST22_57_5_,\t\t\t\t\tworkflowst9_.MOD_ID as MOD23_57_5_,\t\t\t\t\tworkflowst9_.MAKER_CHECKER_STATUS as MAKER24_57_5_,\t\t\t\t\tworkflowst9_.SHADOW_ID as SHADOW25_57_5_,\t\t\t\t\t--(select now()) as formula52_5_,\t\t\t\t\tworkflowst8_.WORKFLOW_MODULE as WORKFLOW2_57_6_,\t\t\t\t\tworkflowst8_.NAME as NAME57_6_,\t\t\t\t\tworkflowst8_.DEAL_DISPLAY_MODULE as DEAL4_57_6_,\t\t\t\t\tworkflowst8_.WORKFLOW_LEVEL as WORKFLOW5_57_6_,\t\t\t\t\tworkflowst8_.IS_DEAL_EDITABLE as IS6_57_6_,\t\t\t\t\tworkflowst8_.GEN_CONFO as GEN7_57_6_,\t\t\t\t\tworkflowst8_.GEN_DEAL_TICKET as GEN8_57_6_,\t\t\t\t\tworkflowst8_.GEN_SETTLEMENTS as GEN9_57_6_,\t\t\t\t\tworkflowst8_.VAULT_START as VAULT10_57_6_,\t\t\t\t\tworkflowst8_.UPDATE_MAIN_INV as UPDATE11_57_6_,\t\t\t\t\tworkflowst8_.UPDATE_OTHER_INV as UPDATE12_57_6_,\t\t\t\t\tworkflowst8_.RELEASE_SHIPMENT as RELEASE13_57_6_,\t\t\t\t\tworkflowst8_.IS_DEAL_SPLITTABLE as IS14_57_6_,\t\t\t\t\tworkflowst8_.SEND_EMAIL as SEND15_57_6_,\t\t\t\t\tworkflowst8_.IS_DELETED as IS16_57_6_,\t\t\t\t\tworkflowst8_.CREATED as CREATED57_6_,\t\t\t\t\tworkflowst8_.CREATED_BY as CREATED18_57_6_,\t\t\t\t\tworkflowst8_.LAST_UPDATED as LAST19_57_6_,\t\t\t\t\tworkflowst8_.LAST_UPDATED_BY as LAST20_57_6_,\t\t\t\t\tworkflowst8_.LAST_CHECKED_BY as LAST21_57_6_,\t\t\t\t\tworkflowst8_.LAST_MAKED as LAST22_57_6_,\t\t\t\t\tworkflowst8_.MOD_ID as MOD23_57_6_,\t\t\t\t\tworkflowst8_.MAKER_CHECKER_STATUS as MAKER24_57_6_,\t\t\t\t\tworkflowst8_.SHADOW_ID as SHADOW25_57_6_,\t\t\t\t\t--(select now()) as formula52_6_,\t\t\t\t\tworkflowst7_.WORKFLOW_MODULE as WORKFLOW2_57_7_,\t\t\t\t\tworkflowst7_.NAME as NAME57_7_,\t\t\t\t\tworkflowst7_.DEAL_DISPLAY_MODULE as DEAL4_57_7_,\t\t\t\t\tworkflowst7_.WORKFLOW_LEVEL as WORKFLOW5_57_7_,\t\t\t\t\tworkflowst7_.IS_DEAL_EDITABLE as IS6_57_7_,\t\t\t\t\tworkflowst7_.GEN_CONFO as GEN7_57_7_,\t\t\t\t\tworkflowst7_.GEN_DEAL_TICKET as GEN8_57_7_,\t\t\t\t\tworkflowst7_.GEN_SETTLEMENTS as GEN9_57_7_,\t\t\t\t\tworkflowst7_.VAULT_START as VAULT10_57_7_,\t\t\t\t\tworkflowst7_.UPDATE_MAIN_INV as UPDATE11_57_7_,\t\t\t\t\tworkflowst7_.UPDATE_OTHER_INV as UPDATE12_57_7_,\t\t\t\t\tworkflowst7_.RELEASE_SHIPMENT as RELEASE13_57_7_,\t\t\t\t\tworkflowst7_.IS_DEAL_SPLITTABLE as IS14_57_7_,\t\t\t\t\tworkflowst7_.SEND_EMAIL as SEND15_57_7_,\t\t\t\t\tworkflowst7_.IS_DELETED as IS16_57_7_,\t\t\t\t\tworkflowst7_.CREATED as CREATED57_7_,\t\t\t\t\tworkflowst7_.CREATED_BY as CREATED18_57_7_,\t\t\t\t\tworkflowst7_.LAST_UPDATED as LAST19_57_7_,\t\t\t\t\tworkflowst7_.LAST_UPDATED_BY as LAST20_57_7_,\t\t\t\t\tworkflowst7_.LAST_CHECKED_BY as LAST21_57_7_,\t\t\t\t\tworkflowst7_.LAST_MAKED as LAST22_57_7_,\t\t\t\t\tworkflowst7_.MOD_ID as MOD23_57_7_,\t\t\t\t\tworkflowst7_.MAKER_CHECKER_STATUS as MAKER24_57_7_,\t\t\t\t\tworkflowst7_.SHADOW_ID as SHADOW25_57_7_,\t\t\t\t\t--(select now()) as formula52_7_,\t\t\t\t\tconsignees5_.MOD_ID as MOD2_81_8_,\t\t\t\t\tconsignees5_.COUNTRIES_ID as COUNTRIES3_81_8_,\t\t\t\t\tconsignees5_.CITIES_ID as CITIES4_81_8_,\t\t\t\t\tconsignees5_.REGIONS_ID as REGIONS5_81_8_,\t\t\t\t\tconsignees5_.SHORT_NAME as SHORT6_81_8_,\t\t\t\t\tconsignees5_.IS_COUNTERPARTY as IS7_81_8_,\t\t\t\t\tconsignees5_.NAME as NAME81_8_,\t\t\t\t\tconsignees5_.AIRPORTS_ID as AIRPORTS9_81_8_,\t\t\t\t\tconsignees5_.ADDRESS1 as ADDRESS10_81_8_,\t\t\t\t\tconsignees5_.ADDRESS2 as ADDRESS11_81_8_,\t\t\t\t\tconsignees5_.ADDRESS3 as ADDRESS12_81_8_,\t\t\t\t\tconsignees5_.ADDRESS4 as 
ADDRESS13_81_8_,\t\t\t\t\tconsignees5_.AWB_SPECIAL_CLAUSE as AWB14_81_8_,\t\t\t\t\tconsignees5_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_8_,\t\t\t\t\tconsignees5_.AGENT_ADDRESS1 as AGENT16_81_8_,\t\t\t\t\tconsignees5_.AGENT_ADDRESS2 as AGENT17_81_8_,\t\t\t\t\tconsignees5_.POSTAL_CODE as POSTAL18_81_8_,\t\t\t\t\tconsignees5_.IS_DELETED as IS19_81_8_,\t\t\t\t\tconsignees5_.CREATED as CREATED81_8_,\t\t\t\t\tconsignees5_.CREATED_BY as CREATED21_81_8_,\t\t\t\t\tconsignees5_.LAST_UPDATED as LAST22_81_8_,\t\t\t\t\tconsignees5_.LAST_UPDATED_BY as LAST23_81_8_,\t\t\t\t\tconsignees5_.LAST_CHECKED_BY as LAST24_81_8_,\t\t\t\t\tconsignees5_.LAST_MAKED as LAST25_81_8_,\t\t\t\t\tconsignees5_.MAKER_CHECKER_STATUS as MAKER26_81_8_,\t\t\t\t\tconsignees5_.SHADOW_ID as SHADOW27_81_8_,\t\t\t\t\t--(select now()) as formula74_8_,\t\t\t\t\tconsignees6_.MOD_ID as MOD2_81_9_,\t\t\t\t\tconsignees6_.COUNTRIES_ID as COUNTRIES3_81_9_,\t\t\t\t\tconsignees6_.CITIES_ID as CITIES4_81_9_,\t\t\t\t\tconsignees6_.REGIONS_ID as REGIONS5_81_9_,\t\t\t\t\tconsignees6_.SHORT_NAME as SHORT6_81_9_,\t\t\t\t\tconsignees6_.IS_COUNTERPARTY as IS7_81_9_,\t\t\t\t\tconsignees6_.NAME as NAME81_9_,\t\t\t\t\tconsignees6_.AIRPORTS_ID as AIRPORTS9_81_9_,\t\t\t\t\tconsignees6_.ADDRESS1 as ADDRESS10_81_9_,\t\t\t\t\tconsignees6_.ADDRESS2 as ADDRESS11_81_9_,\t\t\t\t\tconsignees6_.ADDRESS3 as ADDRESS12_81_9_,\t\t\t\t\tconsignees6_.ADDRESS4 as ADDRESS13_81_9_,\t\t\t\t\tconsignees6_.AWB_SPECIAL_CLAUSE as AWB14_81_9_,\t\t\t\t\tconsignees6_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_9_,\t\t\t\t\tconsignees6_.AGENT_ADDRESS1 as AGENT16_81_9_,\t\t\t\t\tconsignees6_.AGENT_ADDRESS2 as AGENT17_81_9_,\t\t\t\t\tconsignees6_.POSTAL_CODE as POSTAL18_81_9_,\t\t\t\t\tconsignees6_.IS_DELETED as IS19_81_9_,\t\t\t\t\tconsignees6_.CREATED as CREATED81_9_,\t\t\t\t\tconsignees6_.CREATED_BY as CREATED21_81_9_,\t\t\t\t\tconsignees6_.LAST_UPDATED as LAST22_81_9_,\t\t\t\t\tconsignees6_.LAST_UPDATED_BY as LAST23_81_9_,\t\t\t\t\tconsignees6_.LAST_CHECKED_BY as LAST24_81_9_,\t\t\t\t\tconsignees6_.LAST_MAKED as LAST25_81_9_,\t\t\t\t\tconsignees6_.MAKER_CHECKER_STATUS as MAKER26_81_9_,\t\t\t\t\tconsignees6_.SHADOW_ID as SHADOW27_81_9_,\t\t\t\t\t--(select now()) as formula74_9_,\t\t\t\t\tshipmentty4_.MOD_ID as MOD2_8_10_,\t\t\t\t\tshipmentty4_.CODE as CODE8_10_,\t\t\t\t\tshipmentty4_.NAME as NAME8_10_,\t\t\t\t\tshipmentty4_.REGIONS_ID as REGIONS5_8_10_,\t\t\t\t\tshipmentty4_.IS_DELETED as IS6_8_10_,\t\t\t\t\tshipmentty4_.CREATED as CREATED8_10_,\t\t\t\t\tshipmentty4_.CREATED_BY as CREATED8_8_10_,\t\t\t\t\tshipmentty4_.LAST_UPDATED as LAST9_8_10_,\t\t\t\t\tshipmentty4_.LAST_UPDATED_BY as LAST10_8_10_,\t\t\t\t\tshipmentty4_.LAST_CHECKED_BY as LAST11_8_10_,\t\t\t\t\tshipmentty4_.LAST_MAKED as LAST12_8_10_,\t\t\t\t\tshipmentty4_.MAKER_CHECKER_STATUS as MAKER13_8_10_,\t\t\t\t\tshipmentty4_.SHADOW_ID as SHADOW14_8_10_,\t\t\t\t\t--(select now()) as formula6_10_,\t\t\t\t\tshipmentsc2_.MOD_ID as MOD2_78_11_,\t\t\t\t\tshipmentsc2_.CARRIER_ID as CARRIER3_78_11_,\t\t\t\t\tshipmentsc2_.ORIGIN_AIRPORTS_ID as ORIGIN4_78_11_,\t\t\t\t\tshipmentsc2_.DEST_AIRPORTS_ID as DEST5_78_11_,\t\t\t\t\tshipmentsc2_.SCHEDULE as SCHEDULE78_11_,\t\t\t\t\tshipmentsc2_.ARRIVAL_DATE as ARRIVAL7_78_11_,\t\t\t\t\tshipmentsc2_.EST_TIME_DEPARTURE as EST8_78_11_,\t\t\t\t\tshipmentsc2_.EST_TIME_ARRIVAL as EST9_78_11_,\t\t\t\t\tshipmentsc2_.ROUTE_LEG_SEQ_NO as ROUTE10_78_11_,\t\t\t\t\tshipmentsc2_.CUTOFF_HOURS_BEFORE_DEPARTURE as CUTOFF11_78_11_,\t\t\t\t\tshipmentsc2_.AVAILABLE_IN_A_WEEK as 
AVAILABLE12_78_11_,\t\t\t\t\tshipmentsc2_.REMARKS as REMARKS78_11_,\t\t\t\t\tshipmentsc2_.STATUS as STATUS78_11_,\t\t\t\t\tshipmentsc2_.REGION_ID as REGION15_78_11_,\t\t\t\t\tshipmentsc2_.IS_DELETED as IS16_78_11_,\t\t\t\t\tshipmentsc2_.CREATED as CREATED78_11_,\t\t\t\t\tshipmentsc2_.CREATED_BY as CREATED18_78_11_,\t\t\t\t\tshipmentsc2_.LAST_UPDATED as LAST19_78_11_,\t\t\t\t\tshipmentsc2_.LAST_UPDATED_BY as LAST20_78_11_,\t\t\t\t\tshipmentsc2_.LAST_CHECKED_BY as LAST21_78_11_,\t\t\t\t\tshipmentsc2_.LAST_MAKED as LAST22_78_11_,\t\t\t\t\tshipmentsc2_.MAKER_CHECKER_STATUS as MAKER23_78_11_,\t\t\t\t\tshipmentsc2_.SHADOW_ID as SHADOW24_78_11_,\t\t\t\t\t--(select now()) as formula71_11_,\t\t\t\t\tshipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5___,\t\t\t\t\tshipmentro1_.FIN_ID as FIN1___\tfrom TBLS_SHIPMENT_RECORDS shipmentre0_\t\t\t inner join TBLS_SHIPMENT_RECORD_ROUTING shipmentro1_ on shipmentre0_.FIN_ID = shipmentro1_.SHIPMENT_RECORD_ID\t\t\t inner join TBLS_SHIPMENT_SCHEDULES shipmentsc2_ on shipmentro1_.SHIPMENT_SCHEDULE_ID = shipmentsc2_.FIN_ID\t\t\t inner join TBLS_CARRIERS carriers3_ on shipmentsc2_.CARRIER_ID = carriers3_.FIN_ID\t\t\t inner join TBLS_SHIPMENT_TYPES shipmentty4_ on shipmentre0_.SHIPMENT_TYPE_ID = shipmentty4_.FIN_ID\t\t\t inner join TBLS_CONSIGNEES consignees5_ on shipmentre0_.SHIPPER_ID = consignees5_.FIN_ID\t\t\t inner join TBLS_CONSIGNEES consignees6_ on shipmentre0_.CONSIGNEES_ID = consignees6_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst7_ on shipmentre0_.SHIPMENT_STATUS_ID = workflowst7_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst8_ on shipmentre0_.SHIPMENT_CHARGE_STATUS = workflowst8_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst9_ on shipmentre0_.SHIPMENT_DOCUMENT_STATUS = workflowst9_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst10_ on shipmentre0_.VAULT_STATUS_ID = workflowst10_.FIN_ID\t\t\t left outer join TBLS_SHIPMENT_METHODS shipmentme11_ on shipmentre0_.SHIPMENT_METHOD_ID = shipmentme11_.FIN_ID\t\t\t left outer join TBLS_BANK_NOTES_DEALS_LEGS deallegs12_ on shipmentre0_.FIN_ID = deallegs12_.SHIPMENT_RECORDS_ID\twhere (shipmentro1_.LEG_NO = (select min(shipmentro13_.LEG_NO)\t\t\t\t\t\t\t\t from TBLS_SHIPMENT_RECORD_ROUTING shipmentro13_\t\t\t\t\t\t\t\t where shipmentre0_.FIN_ID = shipmentro13_.SHIPMENT_RECORD_ID\t\t\t\t\t\t\t\t\tand ((shipmentro13_.IS_DELETED = 'N'))))\t and (shipmentre0_.IS_DELETED = 'N')\t and (TO_CHAR(shipmentro1_.ARRIVAL_DATE, 'YYYY-MM-DD') <= '2019-08-29')\torder by shipmentre0_.SHIPMENT_DATE \tlimit 25\t;On Mon, Sep 9, 2019 at 2:00 PM yash mehta <[email protected]> wrote:We have a query that takes 1min to execute in postgres 10.6 and the same executes in 4 sec in Oracle database. The query is doing 'select distinct'. If I add a 'group by' clause, performance in postgres improves significantly and fetches results in 2 sec (better than oracle). But unfortunately, we cannot modify the query. Could you please suggest a way to improve performance in Postgres without modifying the query. 
Original condition: time taken 1min\nSort Method: external merge Disk: 90656kB\n\nAfter removing distinct from query: time taken 2sec\nSort Method: top-N heapsort Memory: 201kB\n\nAfter increasing work_mem to 180MB; it takes 20sec\nSort Method: quicksort Memory: 172409kB\n\nSELECT * FROM pg_stat_statements ORDER BY total_time DESC limit 1;\n-[ RECORD 1 ]-------+-----------------------------------------------------------------------------------------------------------------------------------------\nuserid              | 174862\ndbid                | 174861\nqueryid             | 1469376470\nquery               | <query is too long. It selects around 300 columns>\ncalls               | 1\ntotal_time          | 59469.972661\nmin_time            | 59469.972661\nmax_time            | 59469.972661\nmean_time           | 59469.972661\nstddev_time         | 0\nrows                | 25\nshared_blks_hit     | 27436\nshared_blks_read    | 2542\nshared_blks_dirtied | 0\nshared_blks_written | 0\nlocal_blks_hit      | 0\nlocal_blks_read     | 0\nlocal_blks_dirtied  | 0\nlocal_blks_written  | 0\ntemp_blks_read      | 257\ntemp_blks_written   | 11333\nblk_read_time       | 0\nblk_write_time      | 0\n\nIMO, an explain analyze of the query would be useful in order for people to help you.\n\ne.g. https://explain.depesz.com\n\nRegards,\nFlo",
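A minimal sketch, not taken from the thread, of one way to capture the text-format plan being asked for above so that it can be pasted into https://explain.depesz.com. The output file name and the stand-in query are assumptions for illustration only; in practice the full Hibernate-generated SELECT DISTINCT ... LIMIT 25 statement from this thread would go in their place.

    -- psql: redirect query output to a file so the plan can be uploaded (path is illustrative)
    \o /tmp/plan.txt
    -- BUFFERS adds shared/temp block counts to the plan; the query below is a stand-in only
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT relkind, relname
    FROM pg_class
    ORDER BY relname
    LIMIT 25;
    -- stop redirecting; /tmp/plan.txt now holds the plan text
    \o

The resulting plan text is what explain.depesz.com expects, including the "Sort Method" lines and the per-node buffer usage.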
"msg_date": "Mon, 9 Sep 2019 12:00:14 +0200",
"msg_from": "Flo Rance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
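The pattern discussed in this exchange, a SELECT DISTINCT over very wide rows whose sort spills to disk at a small work_mem but stays in memory once work_mem is raised, can be reproduced on a toy scale. This is only a sketch under stated assumptions: the wide_rows temp table, the row count and the work_mem values are invented, and enable_hashagg is switched off solely to force the sort-based DISTINCT that the plan later in this thread shows; none of this is from the original posts.

    -- Throwaway table with deliberately wide rows (the thread's plan reports width=6994).
    CREATE TEMP TABLE wide_rows AS
    SELECT g AS id,
           g % 500           AS grp,
           repeat('x', 1000) AS payload
    FROM generate_series(1, 50000) AS g;
    ANALYZE wide_rows;

    -- Mirror the sort-based DISTINCT from the thread; on a table this small the planner
    -- might otherwise just hash the 500 groups.
    SET enable_hashagg = off;

    -- Small work_mem: the sort over roughly 50 MB of wide rows is expected to spill,
    -- i.e. "Sort Method: external merge  Disk: ...".
    SET work_mem = '4MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT grp, payload FROM wide_rows ORDER BY grp, payload LIMIT 25;

    -- Generous work_mem: the same sort should stay in memory ("Sort Method: quicksort"),
    -- which matches the 180MB experiment reported above.
    SET work_mem = '256MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT grp, payload FROM wide_rows ORDER BY grp, payload LIMIT 25;

    RESET enable_hashagg;
    RESET work_mem;

SET only changes the current session, so this kind of experiment does not require editing postgresql.conf or restarting the server; RESET (or simply closing the session) restores the defaults.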
{
"msg_contents": "Hi Flo,\n\nPFB the explain plan:\n\n\n\n\"Limit (cost=5925.59..5944.03 rows=25 width=6994) (actual\ntime=57997.219..58002.451 rows=25 loops=1)\"\n\n\" -> Unique (cost=5925.59..5969.10 rows=59 width=6994) (actual\ntime=57997.218..58002.416 rows=25 loops=1)\"\n\n\" -> Sort (cost=5925.59..5925.74 rows=59 width=6994) (actual\ntime=57997.214..57997.537 rows=550 loops=1)\"\n\n\" Sort Key: shipmentre0_.shipment_date, shipmentre0_.fin_id,\nworkflowst10_.fin_id, carriers3_.fin_id, shipmentro1_.fin_id,\nshipmentme11_.fin_id, workflowst9_.fin_id, workflowst8_.fin_id,\nworkflowst7_.fin_id, consignees5_.fin_id, consignees6_.fin_id,\nshipmentty4_.fin_id, shipmentsc2_.fin_id, shipmentre0_.mod_id,\nshipmentre0_.shipment_method_id, shipmentre0_.shipment_basis_id,\nshipmentre0_.shipment_arrangement_id, shipmentre0_.shipment_currency_id,\nshipmentre0_.carrier_crew_extn_id, shipmentre0_.end_time,\nshipmentre0_.shipment_value_usd, shipmentre0_.shipment_value_base,\nshipmentre0_.insurance_value_usd, shipmentre0_.insurance_value_base,\nshipmentre0_.remarks, shipmentre0_.deletion_remarks,\nshipmentre0_.insurance_provider, shipmentre0_.shipment_provider,\nshipmentre0_.security_provider_id, shipmentre0_.consignee_contact_name,\nshipmentre0_.signal, shipmentre0_.chargeable_wt, shipmentre0_.no_of_pieces,\nshipmentre0_.regions_id, shipmentre0_.created, shipmentre0_.created_by,\nshipmentre0_.last_updated, shipmentre0_.last_updated_by,\nshipmentre0_.last_checked_by, shipmentre0_.last_maked,\nshipmentre0_.maker_checker_status, shipmentre0_.shadow_id,\nworkflowst10_.workflow_module, workflowst10_.name,\nworkflowst10_.deal_display_module, workflowst10_.workflow_level,\nworkflowst10_.is_deal_editable, workflowst10_.gen_confo,\nworkflowst10_.gen_deal_ticket, workflowst10_.gen_settlements,\nworkflowst10_.vault_start, workflowst10_.update_main_inv,\nworkflowst10_.update_other_inv, workflowst10_.release_shipment,\nworkflowst10_.is_deal_splittable, workflowst10_.send_email,\nworkflowst10_.is_deleted, workflowst10_.created, workflowst10_.created_by,\nworkflowst10_.last_updated, workflowst10_.last_updated_by,\nworkflowst10_.last_checked_by, workflowst10_.last_maked,\nworkflowst10_.mod_id, workflowst10_.maker_checker_status,\nworkflowst10_.shadow_id, carriers3_.mod_id, carriers3_.cities_id,\ncarriers3_.code, carriers3_.name, carriers3_.carrier_types,\ncarriers3_.name_in_fl, carriers3_.iata_code, carriers3_.kc_code,\ncarriers3_.airline_acct, carriers3_.address1, carriers3_.address2,\ncarriers3_.address3, carriers3_.address4, carriers3_.terminal,\ncarriers3_.airline_agent, carriers3_.accountinginfo,\ncarriers3_.import_dept, carriers3_.import_after_office_hour,\ncarriers3_.import_contact, carriers3_.import_fax, carriers3_.import_email,\ncarriers3_.export_deptt, carriers3_.export_after_office_hour,\ncarriers3_.export_contact, carriers3_.export_fax,\ncarriers3_.import_contact_no, carriers3_.export_contact_no,\ncarriers3_.export_email, carriers3_.awb_issued_by, carriers3_.is_deleted,\ncarriers3_.created, carriers3_.created_by, carriers3_.last_updated,\ncarriers3_.last_updated_by, carriers3_.last_checked_by,\ncarriers3_.last_maked, carriers3_.maker_checker_status,\ncarriers3_.shadow_id, shipmentro1_.mod_id, shipmentro1_.region_id,\nshipmentro1_.airway_bill_no, shipmentro1_.shipment_date,\nshipmentro1_.arrival_date, shipmentro1_.leg_no, shipmentro1_.no_of_pcs,\nshipmentro1_.chargeable_weight, shipmentro1_.carrier_crew_extn_id,\nshipmentro1_.is_deleted, shipmentro1_.created, 
shipmentro1_.created_by,\nshipmentro1_.last_updated, shipmentro1_.last_updated_by,\nshipmentro1_.last_checked_by, shipmentro1_.last_maked,\nshipmentro1_.maker_checker_status, shipmentro1_.shadow_id,\nshipmentme11_.mod_id, shipmentme11_.code, shipmentme11_.name,\nshipmentme11_.shipment_method_type, shipmentme11_.is_deleted,\nshipmentme11_.created, shipmentme11_.created_by,\nshipmentme11_.last_updated, shipmentme11_.last_updated_by,\nshipmentme11_.last_checked_by, shipmentme11_.last_maked,\nshipmentme11_.maker_checker_status, shipmentme11_.shadow_id,\nworkflowst9_.workflow_module, workflowst9_.name,\nworkflowst9_.deal_display_module, workflowst9_.workflow_level,\nworkflowst9_.is_deal_editable, workflowst9_.gen_confo,\nworkflowst9_.gen_deal_ticket, workflowst9_.gen_settlements,\nworkflowst9_.vault_start, workflowst9_.update_main_inv,\nworkflowst9_.update_other_inv, workflowst9_.release_shipment,\nworkflowst9_.is_deal_splittable, workflowst9_.send_email,\nworkflowst9_.is_deleted, workflowst9_.created, workflowst9_.created_by,\nworkflowst9_.last_updated, workflowst9_.last_updated_by,\nworkflowst9_.last_checked_by, workflowst9_.last_maked, workflowst9_.mod_id,\nworkflowst9_.maker_checker_status, workflowst9_.shadow_id,\nworkflowst8_.workflow_module, workflowst8_.name,\nworkflowst8_.deal_display_module, workflowst8_.workflow_level,\nworkflowst8_.is_deal_editable, workflowst8_.gen_confo,\nworkflowst8_.gen_deal_ticket, workflowst8_.gen_settlements,\nworkflowst8_.vault_start, workflowst8_.update_main_inv,\nworkflowst8_.update_other_inv, workflowst8_.release_shipment,\nworkflowst8_.is_deal_splittable, workflowst8_.send_email,\nworkflowst8_.is_deleted, workflowst8_.created, workflowst8_.created_by,\nworkflowst8_.last_updated, workflowst8_.last_updated_by,\nworkflowst8_.last_checked_by, workflowst8_.last_maked, workflowst8_.mod_id,\nworkflowst8_.maker_checker_status, workflowst8_.shadow_id,\nworkflowst7_.workflow_module, workflowst7_.name,\nworkflowst7_.deal_display_module, workflowst7_.workflow_level,\nworkflowst7_.is_deal_editable, workflowst7_.gen_confo,\nworkflowst7_.gen_deal_ticket, workflowst7_.gen_settlements,\nworkflowst7_.vault_start, workflowst7_.update_main_inv,\nworkflowst7_.update_other_inv, workflowst7_.release_shipment,\nworkflowst7_.is_deal_splittable, workflowst7_.send_email,\nworkflowst7_.is_deleted, workflowst7_.created, workflowst7_.created_by,\nworkflowst7_.last_updated, workflowst7_.last_updated_by,\nworkflowst7_.last_checked_by, workflowst7_.last_maked, workflowst7_.mod_id,\nworkflowst7_.maker_checker_status, workflowst7_.shadow_id,\nconsignees5_.mod_id, consignees5_.countries_id, consignees5_.cities_id,\nconsignees5_.regions_id, consignees5_.short_name,\nconsignees5_.is_counterparty, consignees5_.name, consignees5_.airports_id,\nconsignees5_.address1, consignees5_.address2, consignees5_.address3,\nconsignees5_.address4, consignees5_.awb_special_clause,\nconsignees5_.issuing_carrier_agent_name, consignees5_.agent_address1,\nconsignees5_.agent_address2, consignees5_.postal_code,\nconsignees5_.is_deleted, consignees5_.created, consignees5_.created_by,\nconsignees5_.last_updated, consignees5_.last_updated_by,\nconsignees5_.last_checked_by, consignees5_.last_maked,\nconsignees5_.maker_checker_status, consignees5_.shadow_id,\nconsignees6_.mod_id, consignees6_.countries_id, consignees6_.cities_id,\nconsignees6_.regions_id, consignees6_.short_name,\nconsignees6_.is_counterparty, consignees6_.name, consignees6_.airports_id,\nconsignees6_.address1, consignees6_.address2, 
consignees6_.address3,\nconsignees6_.address4, consignees6_.awb_special_clause,\nconsignees6_.issuing_carrier_agent_name, consignees6_.agent_address1,\nconsignees6_.agent_address2, consignees6_.postal_code,\nconsignees6_.is_deleted, consignees6_.created, consignees6_.created_by,\nconsignees6_.last_updated, consignees6_.last_updated_by,\nconsignees6_.last_checked_by, consignees6_.last_maked,\nconsignees6_.maker_checker_status, consignees6_.shadow_id,\nshipmentty4_.mod_id, shipmentty4_.code, shipmentty4_.name,\nshipmentty4_.regions_id, shipmentty4_.is_deleted, shipmentty4_.created,\nshipmentty4_.created_by, shipmentty4_.last_updated,\nshipmentty4_.last_updated_by, shipmentty4_.last_checked_by,\nshipmentty4_.last_maked, shipmentty4_.maker_checker_status,\nshipmentty4_.shadow_id, shipmentsc2_.mod_id,\nshipmentsc2_.origin_airports_id, shipmentsc2_.dest_airports_id,\nshipmentsc2_.schedule, shipmentsc2_.arrival_date,\nshipmentsc2_.est_time_departure, shipmentsc2_.est_time_arrival,\nshipmentsc2_.route_leg_seq_no, shipmentsc2_.cutoff_hours_before_departure,\nshipmentsc2_.available_in_a_week, shipmentsc2_.remarks,\nshipmentsc2_.status, shipmentsc2_.region_id, shipmentsc2_.is_deleted,\nshipmentsc2_.created, shipmentsc2_.created_by, shipmentsc2_.last_updated,\nshipmentsc2_.last_updated_by, shipmentsc2_.last_checked_by,\nshipmentsc2_.last_maked, shipmentsc2_.maker_checker_status,\nshipmentsc2_.shadow_id\"\n\n\" Sort Method: external merge Disk: 90656kB\"\n\n\" -> Hash Right Join (cost=388.61..5923.86 rows=59\nwidth=6994) (actual time=143.405..372.903 rows=42759 loops=1)\"\n\n\" Hash Cond: ((deallegs12_.shipment_records_id)::text =\n(shipmentre0_.fin_id)::text)\"\n\n\" -> Seq Scan on tbls_bank_notes_deals_legs\ndeallegs12_ (cost=0.00..5337.57 rows=52557 width=16) (actual\ntime=0.005..26.702 rows=52557 loops=1)\"\n\n\" -> Hash (cost=388.58..388.58 rows=2 width=6960)\n(actual time=143.371..143.371 rows=1442 loops=1)\"\n\n\" Buckets: 2048 (originally 1024) Batches: 1\n(originally 1) Memory Usage: 3107kB\"\n\n\" -> Nested Loop Left Join (cost=106.73..388.58\nrows=2 width=6960) (actual time=55.316..134.874 rows=1442 loops=1)\"\n\n\" Join Filter:\n((shipmentre0_.shipment_method_id)::text = (shipmentme11_.fin_id)::text)\"\n\n\" Rows Removed by Join Filter: 2350\"\n\n\" -> Nested Loop (cost=106.73..387.37\nrows=2 width=6721) (actual time=55.300..130.529 rows=1442 loops=1)\"\n\n\" -> Nested Loop\n(cost=106.59..387.03 rows=2 width=6582) (actual time=55.282..124.351\nrows=1442 loops=1)\"\n\n\" -> Nested Loop\n(cost=106.45..386.69 rows=2 width=6443) (actual time=55.267..118.047\nrows=1442 loops=1)\"\n\n\" -> Nested Loop\n(cost=106.31..386.36 rows=2 width=6304) (actual time=55.250..111.408\nrows=1442 loops=1)\"\n\n\" -> Nested Loop\n(cost=106.17..386.02 rows=2 width=6165) (actual time=55.228..105.002\nrows=1442 loops=1)\"\n\n\" Join Filter:\n((shipmentre0_.consignees_id)::text = (consignees6_.fin_id)::text)\"\n\n\" Rows Removed\nby Join Filter: 40376\"\n\n\" -> Seq Scan\non tbls_consignees consignees6_ (cost=0.00..1.29 rows=29 width=1060)\n(actual time=0.012..0.021 rows=29 loops=1)\"\n\n\" ->\nMaterialize (cost=106.17..383.86 rows=2 width=5105) (actual\ntime=1.904..3.142 rows=1442 loops=29)\"\n\n\" ->\nNested Loop (cost=106.17..383.85 rows=2 width=5105) (actual\ntime=55.203..78.206 rows=1442 loops=1)\"\n\n\"\nJoin Filter: ((shipmentre0_.shipper_id)::text =\n(consignees5_.fin_id)::text)\"\n\n\"\nRows Removed by Join Filter: 40376\"\n\n\"\n-> Seq Scan on tbls_consignees consignees5_ (cost=0.00..1.29 
rows=29\nwidth=1060) (actual time=0.003..0.013 rows=29 loops=1)\"\n\n\"\n-> Materialize (cost=106.17..381.70 rows=2 width=4045) (actual\ntime=0.524..2.244 rows=1442 loops=29)\"\n\n\"\n ->\nNested Loop (cost=106.17..381.69 rows=2 width=4045) (actual\ntime=15.195..53.051 rows=1442 loops=1)\"\n\n\"\n Join Filter: ((shipmentre0_.shipment_type_id)::text =\n(shipmentty4_.fin_id)::text)\"\n\n\"\nRows Removed by Join Filter: 7210\"\n\n\"\n -> Seq Scan on\ntbls_shipment_types shipmentty4_ (cost=0.00..1.06 rows=6 width=95) (actual\ntime=0.002..0.005 rows=6 loops=1)\"\n\n\"\n -> Materialize (cost=106.17..380.45 rows=2 width=3950) (actual\ntime=2.478..8.157 rows=1442 loops=6)\"\n\n\"\n-> Nested Loop (cost=106.17..380.44 rows=2 width=3950) (actual\ntime=14.856..43.625 rows=1442 loops=1)\"\n\n\"\n-> Nested Loop (cost=106.03..379.95 rows=2 width=1696) (actual\ntime=14.824..38.885 rows=1442 loops=1)\"\n\n\"\n-> Hash Join (cost=105.76..379.20 rows=2 width=1371) (actual\ntime=14.807..32.459 rows=1442 loops=1)\"\n\n\"\n Hash Cond:\n(((shipmentro1_.shipment_record_id)::text = (shipmentre0_.fin_id)::text)\nAND (shipmentro1_.leg_no = (SubPlan 1)))\"\n\n\"\n -> Seq Scan on\ntbls_shipment_record_routing shipmentro1_ (cost=0.00..69.80 rows=484\nwidth=444) (actual time=0.017..2.534 rows=1452 loops=1)\"\n\n\"\n Filter:\n(to_char(arrival_date, 'YYYY-MM-DD'::text) <= '2019-08-29'::text)\"\n\n\"\nRows Removed by Filter: 1\"\n\n\"\n-> Hash (cost=84.11..84.11 rows=1443 width=927) (actual\ntime=14.762..14.763 rows=1443 loops=1)\"\n\n\"\n\n Buckets:\n2048 Batches: 1 Memory Usage: 497kB\"\n\n\"\n-> Seq Scan on tbls_shipment_records shipmentre0_ (cost=0.00..84.11\nrows=1443 width=927) (actual time=0.005..1.039 rows=1443 loops=1)\"\n\n\"\nFilter: ((is_deleted)::text = 'N'::text)\"\n\n\"\nRows Removed by Filter: 6\"\n\n\"\n SubPlan 1\"\n\n\"\n-> Aggregate (cost=8.30..8.31 rows=1 width=8) (actual time=0.008..0.008\nrows=1 loops=2885)\"\n\n\"\n-> Index Scan using xbls_shipment_record_rout001 on\ntbls_shipment_record_routing shipmentro13_ (cost=0.28..8.30 rows=1\nwidth=8) (actual time=0.006..0.007 rows=1 loops=2885)\"\n\n\"\nIndex Cond: ((shipmentre0_.fin_id)::text = (shipment_record_id)::text)\"\n\n\"\nFilter: ((is_deleted)::text = 'N'::text)\"\n\n\"\n Rows Removed by Filter: 0\"\n\n\"\n-> Index Scan using pk_bls_shipment_schedules on tbls_shipment_schedules\nshipmentsc2_ (cost=0.27..0.38 rows=1 width=325) (actual time=0.003..0.003\nrows=1 loops=1442)\"\n\n\"\nIndex Cond: ((fin_id)::text = (shipmentro1_.shipment_schedule_id)::text)\"\n\n\"\n-> Index Scan using pk_bls_carriers on tbls_carriers carriers3_\n(cost=0.14..0.24 rows=1 width=2254) (actual time=0.002..0.002 rows=1\nloops=1442)\"\n\n\"\nIndex Cond: ((fin_id)::text = (shipmentsc2_.carrier_id)::text)\"\n\n\" -> Index Scan\nusing pk_bls_workflow_states on tbls_workflow_states workflowst7_\n(cost=0.14..0.17 rows=1 width=139) (actual time=0.003..0.003 rows=1\nloops=1442)\"\n\n\" Index Cond:\n((fin_id)::text = (shipmentre0_.shipment_status_id)::text)\"\n\n\" -> Index Scan using\npk_bls_workflow_states on tbls_workflow_states workflowst8_\n(cost=0.14..0.17 rows=1 width=139) (actual time=0.003..0.003 rows=1\nloops=1442)\"\n\n\" Index Cond:\n((fin_id)::text = (shipmentre0_.shipment_charge_status)::text)\"\n\n\" -> Index Scan using\npk_bls_workflow_states on tbls_workflow_states workflowst9_\n(cost=0.14..0.17 rows=1 width=139) (actual time=0.002..0.002 rows=1\nloops=1442)\"\n\n\" Index Cond:\n((fin_id)::text = (shipmentre0_.shipment_document_status)::text)\"\n\n\" -> Index Scan 
using\npk_bls_workflow_states on tbls_workflow_states workflowst10_\n(cost=0.14..0.17 rows=1 width=139) (actual time=0.002..0.002 rows=1\nloops=1442)\"\n\n\" Index Cond: ((fin_id)::text =\n(shipmentre0_.vault_status_id)::text)\"\n\n\" -> Materialize (cost=0.00..1.07 rows=5\nwidth=239) (actual time=0.000..0.001 rows=3 loops=1442)\"\n\n\" -> Seq Scan on\ntbls_shipment_methods shipmentme11_ (cost=0.00..1.05 rows=5 width=239)\n(actual time=0.006..0.010 rows=5 loops=1)\"\n\n\"Planning time: 368.495 ms\"\n\n\"Execution time: 58018.486 ms\"\n\nOn Mon, Sep 9, 2019 at 3:30 PM Flo Rance <[email protected]> wrote:\n\n>\n>\n> On Mon, Sep 9, 2019 at 10:38 AM yash mehta <[email protected]> wrote:\n>\n>> In addition to below mail, we have used btree indexes for primary key\n>> columns. Below is the query:\n>>\n>> select distinct shipmentre0_.FIN_ID as\n>> FIN1_53_0_,\n>> workflowst10_.FIN_ID as FIN1_57_1_,\n>> carriers3_.FIN_ID as FIN1_40_2_,\n>> shipmentro1_.FIN_ID as FIN1_33_3_,\n>> shipmentme11_.FIN_ID as FIN1_5_4_,\n>> workflowst9_.FIN_ID as FIN1_57_5_,\n>> workflowst8_.FIN_ID as FIN1_57_6_,\n>> workflowst7_.FIN_ID as FIN1_57_7_,\n>> consignees5_.FIN_ID as FIN1_81_8_,\n>> consignees6_.FIN_ID as FIN1_81_9_,\n>> shipmentty4_.FIN_ID as FIN1_8_10_,\n>> shipmentsc2_.FIN_ID as FIN1_78_11_,\n>> shipmentre0_.MOD_ID as MOD2_53_0_,\n>> shipmentre0_.SHIPMENT_METHOD_ID as SHIPMENT3_53_0_,\n>> shipmentre0_.SHIPPER_ID as SHIPPER4_53_0_,\n>> shipmentre0_.CONSIGNEES_ID as CONSIGNEES5_53_0_,\n>> shipmentre0_.SHIPMENT_BASIS_ID as SHIPMENT6_53_0_,\n>> shipmentre0_.SHIPMENT_TYPE_ID as SHIPMENT7_53_0_,\n>> shipmentre0_.SHIPMENT_ARRANGEMENT_ID as SHIPMENT8_53_0_,\n>> shipmentre0_.SHIPMENT_DATE as SHIPMENT9_53_0_,\n>> shipmentre0_.SHIPMENT_CURRENCY_ID as SHIPMENT10_53_0_,\n>> shipmentre0_.CARRIER_CREW_EXTN_ID as CARRIER11_53_0_,\n>> shipmentre0_.END_TIME as END12_53_0_,\n>> shipmentre0_.SHIPMENT_VALUE_USD as SHIPMENT13_53_0_,\n>> shipmentre0_.SHIPMENT_VALUE_BASE as SHIPMENT14_53_0_,\n>> shipmentre0_.INSURANCE_VALUE_USD as INSURANCE15_53_0_,\n>> shipmentre0_.INSURANCE_VALUE_BASE as INSURANCE16_53_0_,\n>> shipmentre0_.REMARKS as REMARKS53_0_,\n>> shipmentre0_.DELETION_REMARKS as DELETION18_53_0_,\n>> shipmentre0_.SHIPMENT_STATUS_ID as SHIPMENT19_53_0_,\n>> shipmentre0_.VAULT_STATUS_ID as VAULT20_53_0_,\n>> shipmentre0_.SHIPMENT_CHARGE_STATUS as SHIPMENT21_53_0_,\n>> shipmentre0_.SHIPMENT_DOCUMENT_STATUS as SHIPMENT22_53_0_,\n>> shipmentre0_.INSURANCE_PROVIDER as INSURANCE23_53_0_,\n>> shipmentre0_.SHIPMENT_PROVIDER as SHIPMENT24_53_0_,\n>> shipmentre0_.SECURITY_PROVIDER_ID as SECURITY25_53_0_,\n>> shipmentre0_.CONSIGNEE_CONTACT_NAME as CONSIGNEE26_53_0_,\n>> shipmentre0_.SIGNAL as SIGNAL53_0_,\n>> shipmentre0_.CHARGEABLE_WT as CHARGEABLE28_53_0_,\n>> shipmentre0_.NO_OF_PIECES as NO29_53_0_,\n>> shipmentre0_.REGIONS_ID as REGIONS30_53_0_,\n>> shipmentre0_.IS_DELETED as IS31_53_0_,\n>> shipmentre0_.CREATED as CREATED53_0_,\n>> shipmentre0_.CREATED_BY as CREATED33_53_0_,\n>> shipmentre0_.LAST_UPDATED as LAST34_53_0_,\n>> shipmentre0_.LAST_UPDATED_BY as LAST35_53_0_,\n>> shipmentre0_.LAST_CHECKED_BY as LAST36_53_0_,\n>> shipmentre0_.LAST_MAKED as LAST37_53_0_,\n>> shipmentre0_.MAKER_CHECKER_STATUS as MAKER38_53_0_,\n>> shipmentre0_.SHADOW_ID as SHADOW39_53_0_,\n>> --(select now()) as formula48_0_,\n>> workflowst10_.WORKFLOW_MODULE as WORKFLOW2_57_1_,\n>> workflowst10_.NAME as NAME57_1_,\n>> workflowst10_.DEAL_DISPLAY_MODULE as DEAL4_57_1_,\n>> workflowst10_.WORKFLOW_LEVEL as WORKFLOW5_57_1_,\n>> workflowst10_.IS_DEAL_EDITABLE as 
IS6_57_1_,\n>> workflowst10_.GEN_CONFO as GEN7_57_1_,\n>> workflowst10_.GEN_DEAL_TICKET as GEN8_57_1_,\n>> workflowst10_.GEN_SETTLEMENTS as GEN9_57_1_,\n>> workflowst10_.VAULT_START as VAULT10_57_1_,\n>> workflowst10_.UPDATE_MAIN_INV as UPDATE11_57_1_,\n>> workflowst10_.UPDATE_OTHER_INV as UPDATE12_57_1_,\n>> workflowst10_.RELEASE_SHIPMENT as RELEASE13_57_1_,\n>> workflowst10_.IS_DEAL_SPLITTABLE as IS14_57_1_,\n>> workflowst10_.SEND_EMAIL as SEND15_57_1_,\n>> workflowst10_.IS_DELETED as IS16_57_1_,\n>> workflowst10_.CREATED as CREATED57_1_,\n>> workflowst10_.CREATED_BY as CREATED18_57_1_,\n>> workflowst10_.LAST_UPDATED as LAST19_57_1_,\n>> workflowst10_.LAST_UPDATED_BY as LAST20_57_1_,\n>> workflowst10_.LAST_CHECKED_BY as LAST21_57_1_,\n>> workflowst10_.LAST_MAKED as LAST22_57_1_,\n>> workflowst10_.MOD_ID as MOD23_57_1_,\n>> workflowst10_.MAKER_CHECKER_STATUS as MAKER24_57_1_,\n>> workflowst10_.SHADOW_ID as SHADOW25_57_1_,\n>> --(select now()) as formula52_1_,\n>> carriers3_.MOD_ID as MOD2_40_2_,\n>> carriers3_.CITIES_ID as CITIES3_40_2_,\n>> carriers3_.CODE as CODE40_2_,\n>> carriers3_.NAME as NAME40_2_,\n>> carriers3_.CARRIER_TYPES as CARRIER6_40_2_,\n>> carriers3_.NAME_IN_FL as NAME7_40_2_,\n>> carriers3_.IATA_CODE as IATA8_40_2_,\n>> carriers3_.KC_CODE as KC9_40_2_,\n>> carriers3_.AIRLINE_ACCT as AIRLINE10_40_2_,\n>> carriers3_.ADDRESS1 as ADDRESS11_40_2_,\n>> carriers3_.ADDRESS2 as ADDRESS12_40_2_,\n>> carriers3_.ADDRESS3 as ADDRESS13_40_2_,\n>> carriers3_.ADDRESS4 as ADDRESS14_40_2_,\n>> carriers3_.TERMINAL as TERMINAL40_2_,\n>> carriers3_.AIRLINE_AGENT as AIRLINE16_40_2_,\n>> carriers3_.ACCOUNTINGINFO as ACCOUNT17_40_2_,\n>> carriers3_.IMPORT_DEPT as IMPORT18_40_2_,\n>> carriers3_.IMPORT_AFTER_OFFICE_HOUR as IMPORT19_40_2_,\n>> carriers3_.IMPORT_CONTACT as IMPORT20_40_2_,\n>> carriers3_.IMPORT_FAX as IMPORT21_40_2_,\n>> carriers3_.IMPORT_EMAIL as IMPORT22_40_2_,\n>> carriers3_.EXPORT_DEPTT as EXPORT23_40_2_,\n>> carriers3_.EXPORT_AFTER_OFFICE_HOUR as EXPORT24_40_2_,\n>> carriers3_.EXPORT_CONTACT as EXPORT25_40_2_,\n>> carriers3_.EXPORT_FAX as EXPORT26_40_2_,\n>> carriers3_.IMPORT_CONTACT_NO as IMPORT27_40_2_,\n>> carriers3_.EXPORT_CONTACT_NO as EXPORT28_40_2_,\n>> carriers3_.EXPORT_EMAIL as EXPORT29_40_2_,\n>> carriers3_.AWB_ISSUED_BY as AWB30_40_2_,\n>> carriers3_.IS_DELETED as IS31_40_2_,\n>> carriers3_.CREATED as CREATED40_2_,\n>> carriers3_.CREATED_BY as CREATED33_40_2_,\n>> carriers3_.LAST_UPDATED as LAST34_40_2_,\n>> carriers3_.LAST_UPDATED_BY as LAST35_40_2_,\n>> carriers3_.LAST_CHECKED_BY as LAST36_40_2_,\n>> carriers3_.LAST_MAKED as LAST37_40_2_,\n>> carriers3_.MAKER_CHECKER_STATUS as MAKER38_40_2_,\n>> carriers3_.SHADOW_ID as SHADOW39_40_2_,\n>> --(select now()) as formula36_2_,\n>> shipmentro1_.MOD_ID as MOD2_33_3_,\n>> shipmentro1_.REGION_ID as REGION3_33_3_,\n>> shipmentro1_.SHIPMENT_SCHEDULE_ID as SHIPMENT4_33_3_,\n>> shipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5_33_3_,\n>> shipmentro1_.AIRWAY_BILL_NO as AIRWAY6_33_3_,\n>> shipmentro1_.SHIPMENT_DATE as SHIPMENT7_33_3_,\n>> shipmentro1_.ARRIVAL_DATE as ARRIVAL8_33_3_,\n>> shipmentro1_.LEG_NO as LEG9_33_3_,\n>> shipmentro1_.NO_OF_PCS as NO10_33_3_,\n>> shipmentro1_.CHARGEABLE_WEIGHT as CHARGEABLE11_33_3_,\n>> shipmentro1_.CARRIER_CREW_EXTN_ID as CARRIER12_33_3_,\n>> shipmentro1_.IS_DELETED as IS13_33_3_,\n>> shipmentro1_.CREATED as CREATED33_3_,\n>> shipmentro1_.CREATED_BY as CREATED15_33_3_,\n>> shipmentro1_.LAST_UPDATED as LAST16_33_3_,\n>> shipmentro1_.LAST_UPDATED_BY as LAST17_33_3_,\n>> shipmentro1_.LAST_CHECKED_BY 
as LAST18_33_3_,\n>> shipmentro1_.LAST_MAKED as LAST19_33_3_,\n>> shipmentro1_.MAKER_CHECKER_STATUS as MAKER20_33_3_,\n>> shipmentro1_.SHADOW_ID as SHADOW21_33_3_,\n>> --(select now()) as formula29_3_,\n>> shipmentme11_.MOD_ID as MOD2_5_4_,\n>> shipmentme11_.CODE as CODE5_4_,\n>> shipmentme11_.NAME as NAME5_4_,\n>> shipmentme11_.SHIPMENT_METHOD_TYPE as SHIPMENT5_5_4_,\n>> shipmentme11_.IS_DELETED as IS6_5_4_,\n>> shipmentme11_.CREATED as CREATED5_4_,\n>> shipmentme11_.CREATED_BY as CREATED8_5_4_,\n>> shipmentme11_.LAST_UPDATED as LAST9_5_4_,\n>> shipmentme11_.LAST_UPDATED_BY as LAST10_5_4_,\n>> shipmentme11_.LAST_CHECKED_BY as LAST11_5_4_,\n>> shipmentme11_.LAST_MAKED as LAST12_5_4_,\n>> shipmentme11_.MAKER_CHECKER_STATUS as MAKER13_5_4_,\n>> shipmentme11_.SHADOW_ID as SHADOW14_5_4_,\n>> --(select now()) as formula4_4_,\n>> workflowst9_.WORKFLOW_MODULE as WORKFLOW2_57_5_,\n>> workflowst9_.NAME as NAME57_5_,\n>> workflowst9_.DEAL_DISPLAY_MODULE as DEAL4_57_5_,\n>> workflowst9_.WORKFLOW_LEVEL as WORKFLOW5_57_5_,\n>> workflowst9_.IS_DEAL_EDITABLE as IS6_57_5_,\n>> workflowst9_.GEN_CONFO as GEN7_57_5_,\n>> workflowst9_.GEN_DEAL_TICKET as GEN8_57_5_,\n>> workflowst9_.GEN_SETTLEMENTS as GEN9_57_5_,\n>> workflowst9_.VAULT_START as VAULT10_57_5_,\n>> workflowst9_.UPDATE_MAIN_INV as UPDATE11_57_5_,\n>> workflowst9_.UPDATE_OTHER_INV as UPDATE12_57_5_,\n>> workflowst9_.RELEASE_SHIPMENT as RELEASE13_57_5_,\n>> workflowst9_.IS_DEAL_SPLITTABLE as IS14_57_5_,\n>> workflowst9_.SEND_EMAIL as SEND15_57_5_,\n>> workflowst9_.IS_DELETED as IS16_57_5_,\n>> workflowst9_.CREATED as CREATED57_5_,\n>> workflowst9_.CREATED_BY as CREATED18_57_5_,\n>> workflowst9_.LAST_UPDATED as LAST19_57_5_,\n>> workflowst9_.LAST_UPDATED_BY as LAST20_57_5_,\n>> workflowst9_.LAST_CHECKED_BY as LAST21_57_5_,\n>> workflowst9_.LAST_MAKED as LAST22_57_5_,\n>> workflowst9_.MOD_ID as MOD23_57_5_,\n>> workflowst9_.MAKER_CHECKER_STATUS as MAKER24_57_5_,\n>> workflowst9_.SHADOW_ID as SHADOW25_57_5_,\n>> --(select now()) as formula52_5_,\n>> workflowst8_.WORKFLOW_MODULE as WORKFLOW2_57_6_,\n>> workflowst8_.NAME as NAME57_6_,\n>> workflowst8_.DEAL_DISPLAY_MODULE as DEAL4_57_6_,\n>> workflowst8_.WORKFLOW_LEVEL as WORKFLOW5_57_6_,\n>> workflowst8_.IS_DEAL_EDITABLE as IS6_57_6_,\n>> workflowst8_.GEN_CONFO as GEN7_57_6_,\n>> workflowst8_.GEN_DEAL_TICKET as GEN8_57_6_,\n>> workflowst8_.GEN_SETTLEMENTS as GEN9_57_6_,\n>> workflowst8_.VAULT_START as VAULT10_57_6_,\n>> workflowst8_.UPDATE_MAIN_INV as UPDATE11_57_6_,\n>> workflowst8_.UPDATE_OTHER_INV as UPDATE12_57_6_,\n>> workflowst8_.RELEASE_SHIPMENT as RELEASE13_57_6_,\n>> workflowst8_.IS_DEAL_SPLITTABLE as IS14_57_6_,\n>> workflowst8_.SEND_EMAIL as SEND15_57_6_,\n>> workflowst8_.IS_DELETED as IS16_57_6_,\n>> workflowst8_.CREATED as CREATED57_6_,\n>> workflowst8_.CREATED_BY as CREATED18_57_6_,\n>> workflowst8_.LAST_UPDATED as LAST19_57_6_,\n>> workflowst8_.LAST_UPDATED_BY as LAST20_57_6_,\n>> workflowst8_.LAST_CHECKED_BY as LAST21_57_6_,\n>> workflowst8_.LAST_MAKED as LAST22_57_6_,\n>> workflowst8_.MOD_ID as MOD23_57_6_,\n>> workflowst8_.MAKER_CHECKER_STATUS as MAKER24_57_6_,\n>> workflowst8_.SHADOW_ID as SHADOW25_57_6_,\n>> --(select now()) as formula52_6_,\n>> workflowst7_.WORKFLOW_MODULE as WORKFLOW2_57_7_,\n>> workflowst7_.NAME as NAME57_7_,\n>> workflowst7_.DEAL_DISPLAY_MODULE as DEAL4_57_7_,\n>> workflowst7_.WORKFLOW_LEVEL as WORKFLOW5_57_7_,\n>> workflowst7_.IS_DEAL_EDITABLE as IS6_57_7_,\n>> workflowst7_.GEN_CONFO as GEN7_57_7_,\n>> workflowst7_.GEN_DEAL_TICKET as GEN8_57_7_,\n>> 
workflowst7_.GEN_SETTLEMENTS as GEN9_57_7_,\n>> workflowst7_.VAULT_START as VAULT10_57_7_,\n>> workflowst7_.UPDATE_MAIN_INV as UPDATE11_57_7_,\n>> workflowst7_.UPDATE_OTHER_INV as UPDATE12_57_7_,\n>> workflowst7_.RELEASE_SHIPMENT as RELEASE13_57_7_,\n>> workflowst7_.IS_DEAL_SPLITTABLE as IS14_57_7_,\n>> workflowst7_.SEND_EMAIL as SEND15_57_7_,\n>> workflowst7_.IS_DELETED as IS16_57_7_,\n>> workflowst7_.CREATED as CREATED57_7_,\n>> workflowst7_.CREATED_BY as CREATED18_57_7_,\n>> workflowst7_.LAST_UPDATED as LAST19_57_7_,\n>> workflowst7_.LAST_UPDATED_BY as LAST20_57_7_,\n>> workflowst7_.LAST_CHECKED_BY as LAST21_57_7_,\n>> workflowst7_.LAST_MAKED as LAST22_57_7_,\n>> workflowst7_.MOD_ID as MOD23_57_7_,\n>> workflowst7_.MAKER_CHECKER_STATUS as MAKER24_57_7_,\n>> workflowst7_.SHADOW_ID as SHADOW25_57_7_,\n>> --(select now()) as formula52_7_,\n>> consignees5_.MOD_ID as MOD2_81_8_,\n>> consignees5_.COUNTRIES_ID as COUNTRIES3_81_8_,\n>> consignees5_.CITIES_ID as CITIES4_81_8_,\n>> consignees5_.REGIONS_ID as REGIONS5_81_8_,\n>> consignees5_.SHORT_NAME as SHORT6_81_8_,\n>> consignees5_.IS_COUNTERPARTY as IS7_81_8_,\n>> consignees5_.NAME as NAME81_8_,\n>> consignees5_.AIRPORTS_ID as AIRPORTS9_81_8_,\n>> consignees5_.ADDRESS1 as ADDRESS10_81_8_,\n>> consignees5_.ADDRESS2 as ADDRESS11_81_8_,\n>> consignees5_.ADDRESS3 as ADDRESS12_81_8_,\n>> consignees5_.ADDRESS4 as ADDRESS13_81_8_,\n>> consignees5_.AWB_SPECIAL_CLAUSE as AWB14_81_8_,\n>> consignees5_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_8_,\n>> consignees5_.AGENT_ADDRESS1 as AGENT16_81_8_,\n>> consignees5_.AGENT_ADDRESS2 as AGENT17_81_8_,\n>> consignees5_.POSTAL_CODE as POSTAL18_81_8_,\n>> consignees5_.IS_DELETED as IS19_81_8_,\n>> consignees5_.CREATED as CREATED81_8_,\n>> consignees5_.CREATED_BY as CREATED21_81_8_,\n>> consignees5_.LAST_UPDATED as LAST22_81_8_,\n>> consignees5_.LAST_UPDATED_BY as LAST23_81_8_,\n>> consignees5_.LAST_CHECKED_BY as LAST24_81_8_,\n>> consignees5_.LAST_MAKED as LAST25_81_8_,\n>> consignees5_.MAKER_CHECKER_STATUS as MAKER26_81_8_,\n>> consignees5_.SHADOW_ID as SHADOW27_81_8_,\n>> --(select now()) as formula74_8_,\n>> consignees6_.MOD_ID as MOD2_81_9_,\n>> consignees6_.COUNTRIES_ID as COUNTRIES3_81_9_,\n>> consignees6_.CITIES_ID as CITIES4_81_9_,\n>> consignees6_.REGIONS_ID as REGIONS5_81_9_,\n>> consignees6_.SHORT_NAME as SHORT6_81_9_,\n>> consignees6_.IS_COUNTERPARTY as IS7_81_9_,\n>> consignees6_.NAME as NAME81_9_,\n>> consignees6_.AIRPORTS_ID as AIRPORTS9_81_9_,\n>> consignees6_.ADDRESS1 as ADDRESS10_81_9_,\n>> consignees6_.ADDRESS2 as ADDRESS11_81_9_,\n>> consignees6_.ADDRESS3 as ADDRESS12_81_9_,\n>> consignees6_.ADDRESS4 as ADDRESS13_81_9_,\n>> consignees6_.AWB_SPECIAL_CLAUSE as AWB14_81_9_,\n>> consignees6_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_9_,\n>> consignees6_.AGENT_ADDRESS1 as AGENT16_81_9_,\n>> consignees6_.AGENT_ADDRESS2 as AGENT17_81_9_,\n>> consignees6_.POSTAL_CODE as POSTAL18_81_9_,\n>> consignees6_.IS_DELETED as IS19_81_9_,\n>> consignees6_.CREATED as CREATED81_9_,\n>> consignees6_.CREATED_BY as CREATED21_81_9_,\n>> consignees6_.LAST_UPDATED as LAST22_81_9_,\n>> consignees6_.LAST_UPDATED_BY as LAST23_81_9_,\n>> consignees6_.LAST_CHECKED_BY as LAST24_81_9_,\n>> consignees6_.LAST_MAKED as LAST25_81_9_,\n>> consignees6_.MAKER_CHECKER_STATUS as MAKER26_81_9_,\n>> consignees6_.SHADOW_ID as SHADOW27_81_9_,\n>> --(select now()) as formula74_9_,\n>> shipmentty4_.MOD_ID as MOD2_8_10_,\n>> shipmentty4_.CODE as CODE8_10_,\n>> shipmentty4_.NAME as NAME8_10_,\n>> shipmentty4_.REGIONS_ID as 
REGIONS5_8_10_,\n>> shipmentty4_.IS_DELETED as IS6_8_10_,\n>> shipmentty4_.CREATED as CREATED8_10_,\n>> shipmentty4_.CREATED_BY as CREATED8_8_10_,\n>> shipmentty4_.LAST_UPDATED as LAST9_8_10_,\n>> shipmentty4_.LAST_UPDATED_BY as LAST10_8_10_,\n>> shipmentty4_.LAST_CHECKED_BY as LAST11_8_10_,\n>> shipmentty4_.LAST_MAKED as LAST12_8_10_,\n>> shipmentty4_.MAKER_CHECKER_STATUS as MAKER13_8_10_,\n>> shipmentty4_.SHADOW_ID as SHADOW14_8_10_,\n>> --(select now()) as formula6_10_,\n>> shipmentsc2_.MOD_ID as MOD2_78_11_,\n>> shipmentsc2_.CARRIER_ID as CARRIER3_78_11_,\n>> shipmentsc2_.ORIGIN_AIRPORTS_ID as ORIGIN4_78_11_,\n>> shipmentsc2_.DEST_AIRPORTS_ID as DEST5_78_11_,\n>> shipmentsc2_.SCHEDULE as SCHEDULE78_11_,\n>> shipmentsc2_.ARRIVAL_DATE as ARRIVAL7_78_11_,\n>> shipmentsc2_.EST_TIME_DEPARTURE as EST8_78_11_,\n>> shipmentsc2_.EST_TIME_ARRIVAL as EST9_78_11_,\n>> shipmentsc2_.ROUTE_LEG_SEQ_NO as ROUTE10_78_11_,\n>> shipmentsc2_.CUTOFF_HOURS_BEFORE_DEPARTURE as CUTOFF11_78_11_,\n>> shipmentsc2_.AVAILABLE_IN_A_WEEK as AVAILABLE12_78_11_,\n>> shipmentsc2_.REMARKS as REMARKS78_11_,\n>> shipmentsc2_.STATUS as STATUS78_11_,\n>> shipmentsc2_.REGION_ID as REGION15_78_11_,\n>> shipmentsc2_.IS_DELETED as IS16_78_11_,\n>> shipmentsc2_.CREATED as CREATED78_11_,\n>> shipmentsc2_.CREATED_BY as CREATED18_78_11_,\n>> shipmentsc2_.LAST_UPDATED as LAST19_78_11_,\n>> shipmentsc2_.LAST_UPDATED_BY as LAST20_78_11_,\n>> shipmentsc2_.LAST_CHECKED_BY as LAST21_78_11_,\n>> shipmentsc2_.LAST_MAKED as LAST22_78_11_,\n>> shipmentsc2_.MAKER_CHECKER_STATUS as MAKER23_78_11_,\n>> shipmentsc2_.SHADOW_ID as SHADOW24_78_11_,\n>> --(select now()) as formula71_11_,\n>> shipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5___,\n>> shipmentro1_.FIN_ID as FIN1___\n>> from TBLS_SHIPMENT_RECORDS shipmentre0_\n>> inner join TBLS_SHIPMENT_RECORD_ROUTING shipmentro1_ on\n>> shipmentre0_.FIN_ID = shipmentro1_.SHIPMENT_RECORD_ID\n>> inner join TBLS_SHIPMENT_SCHEDULES shipmentsc2_ on\n>> shipmentro1_.SHIPMENT_SCHEDULE_ID = shipmentsc2_.FIN_ID\n>> inner join TBLS_CARRIERS carriers3_ on shipmentsc2_.CARRIER_ID =\n>> carriers3_.FIN_ID\n>> inner join TBLS_SHIPMENT_TYPES shipmentty4_ on\n>> shipmentre0_.SHIPMENT_TYPE_ID = shipmentty4_.FIN_ID\n>> inner join TBLS_CONSIGNEES consignees5_ on shipmentre0_.SHIPPER_ID =\n>> consignees5_.FIN_ID\n>> inner join TBLS_CONSIGNEES consignees6_ on shipmentre0_.CONSIGNEES_ID =\n>> consignees6_.FIN_ID\n>> inner join TBLS_WORKFLOW_STATES workflowst7_ on\n>> shipmentre0_.SHIPMENT_STATUS_ID = workflowst7_.FIN_ID\n>> inner join TBLS_WORKFLOW_STATES workflowst8_ on\n>> shipmentre0_.SHIPMENT_CHARGE_STATUS = workflowst8_.FIN_ID\n>> inner join TBLS_WORKFLOW_STATES workflowst9_ on\n>> shipmentre0_.SHIPMENT_DOCUMENT_STATUS = workflowst9_.FIN_ID\n>> inner join TBLS_WORKFLOW_STATES workflowst10_ on\n>> shipmentre0_.VAULT_STATUS_ID = workflowst10_.FIN_ID\n>> left outer join TBLS_SHIPMENT_METHODS shipmentme11_ on\n>> shipmentre0_.SHIPMENT_METHOD_ID = shipmentme11_.FIN_ID\n>> left outer join TBLS_BANK_NOTES_DEALS_LEGS deallegs12_ on\n>> shipmentre0_.FIN_ID = deallegs12_.SHIPMENT_RECORDS_ID\n>> where (shipmentro1_.LEG_NO = (select min(shipmentro13_.LEG_NO)\n>> from TBLS_SHIPMENT_RECORD_ROUTING shipmentro13_\n>> where shipmentre0_.FIN_ID = shipmentro13_.SHIPMENT_RECORD_ID\n>> and ((shipmentro13_.IS_DELETED = 'N'))))\n>> and (shipmentre0_.IS_DELETED = 'N')\n>> and (TO_CHAR(shipmentro1_.ARRIVAL_DATE, 'YYYY-MM-DD') <= '2019-08-29')\n>> order by shipmentre0_.SHIPMENT_DATE\n>> limit 25\n>> ;\n>>\n>>\n>> On Mon, Sep 9, 2019 at 2:00 PM yash 
mehta <[email protected]> wrote:\n>>\n>>> We have a query that takes 1min to execute in postgres 10.6 and the same\n>>> executes in 4 sec in Oracle database. The query is doing 'select distinct'.\n>>> If I add a 'group by' clause, performance in postgres improves\n>>> significantly and fetches results in 2 sec (better than oracle). But\n>>> unfortunately, we cannot modify the query. Could you please suggest a way\n>>> to improve performance in Postgres without modifying the query.\n>>>\n>>> *Original condition: time taken 1min*\n>>>\n>>> Sort Method: external merge Disk: 90656kB\n>>>\n>>>\n>>>\n>>> *After removing distinct from query: time taken 2sec*\n>>>\n>>> Sort Method: top-N heapsort Memory: 201kB\n>>>\n>>>\n>>>\n>>> *After increasing work_mem to 180MB; it takes 20sec*\n>>>\n>>> Sort Method: quicksort Memory: 172409kB\n>>>\n>>>\n>>>\n>>> SELECT * FROM pg_stat_statements ORDER BY total_time DESC limit 1;\n>>>\n>>> -[ RECORD 1\n>>> ]-------+-----------------------------------------------------------------------------------------------------------------------------------------\n>>>\n>>> userid | 174862\n>>>\n>>> dbid | 174861\n>>>\n>>> queryid | 1469376470\n>>>\n>>> query | <query is too long. It selects around 300 columns>\n>>>\n>>> calls | 1\n>>>\n>>> total_time | 59469.972661\n>>>\n>>> min_time | 59469.972661\n>>>\n>>> max_time | 59469.972661\n>>>\n>>> mean_time | 59469.972661\n>>>\n>>> stddev_time | 0\n>>>\n>>> rows | 25\n>>>\n>>> shared_blks_hit | 27436\n>>>\n>>> shared_blks_read | 2542\n>>>\n>>> shared_blks_dirtied | 0\n>>>\n>>> shared_blks_written | 0\n>>>\n>>> local_blks_hit | 0\n>>>\n>>> local_blks_read | 0\n>>>\n>>> local_blks_dirtied | 0\n>>>\n>>> local_blks_written | 0\n>>>\n>>> temp_blks_read | 257\n>>>\n>>> temp_blks_written | 11333\n>>>\n>>> blk_read_time | 0\n>>>\n>>> blk_write_time | 0\n>>>\n>>\n> IMO, an explain analyze of the query would be useful in order for people\n> to help you.\n>\n> e.g. 
https://explain.depesz.com\n>\n> Regards,\n> Flo\n>\n\nHi Flo,PFB the explain plan: \n\"Limit (cost=5925.59..5944.03 rows=25 width=6994)\r\n(actual time=57997.219..58002.451 rows=25 loops=1)\"\n\" -> Unique (cost=5925.59..5969.10\r\nrows=59 width=6994) (actual time=57997.218..58002.416 rows=25 loops=1)\"\n\" -> \r\nSort (cost=5925.59..5925.74 rows=59 width=6994) (actual\r\ntime=57997.214..57997.537 rows=550 loops=1)\"\n\" \r\nSort Key: shipmentre0_.shipment_date, shipmentre0_.fin_id,\r\nworkflowst10_.fin_id, carriers3_.fin_id, shipmentro1_.fin_id,\r\nshipmentme11_.fin_id, workflowst9_.fin_id, workflowst8_.fin_id,\r\nworkflowst7_.fin_id, consignees5_.fin_id, consignees6_.fin_id,\r\nshipmentty4_.fin_id, shipmentsc2_.fin_id, shipmentre0_.mod_id,\r\nshipmentre0_.shipment_method_id, shipmentre0_.shipment_basis_id,\r\nshipmentre0_.shipment_arrangement_id, shipmentre0_.shipment_currency_id,\r\nshipmentre0_.carrier_crew_extn_id, shipmentre0_.end_time,\r\nshipmentre0_.shipment_value_usd, shipmentre0_.shipment_value_base,\r\nshipmentre0_.insurance_value_usd, shipmentre0_.insurance_value_base,\r\nshipmentre0_.remarks, shipmentre0_.deletion_remarks,\r\nshipmentre0_.insurance_provider, shipmentre0_.shipment_provider,\r\nshipmentre0_.security_provider_id, shipmentre0_.consignee_contact_name,\r\nshipmentre0_.signal, shipmentre0_.chargeable_wt, shipmentre0_.no_of_pieces,\r\nshipmentre0_.regions_id, shipmentre0_.created, shipmentre0_.created_by,\r\nshipmentre0_.last_updated, shipmentre0_.last_updated_by,\r\nshipmentre0_.last_checked_by, shipmentre0_.last_maked,\r\nshipmentre0_.maker_checker_status, shipmentre0_.shadow_id,\r\nworkflowst10_.workflow_module, workflowst10_.name,\r\nworkflowst10_.deal_display_module, workflowst10_.workflow_level,\r\nworkflowst10_.is_deal_editable, workflowst10_.gen_confo,\r\nworkflowst10_.gen_deal_ticket, workflowst10_.gen_settlements,\r\nworkflowst10_.vault_start, workflowst10_.update_main_inv, workflowst10_.update_other_inv,\r\nworkflowst10_.release_shipment, workflowst10_.is_deal_splittable,\r\nworkflowst10_.send_email, workflowst10_.is_deleted, workflowst10_.created,\r\nworkflowst10_.created_by, workflowst10_.last_updated,\r\nworkflowst10_.last_updated_by, workflowst10_.last_checked_by,\r\nworkflowst10_.last_maked, workflowst10_.mod_id,\r\nworkflowst10_.maker_checker_status, workflowst10_.shadow_id, carriers3_.mod_id,\r\ncarriers3_.cities_id, carriers3_.code, carriers3_.name,\r\ncarriers3_.carrier_types, carriers3_.name_in_fl, carriers3_.iata_code,\r\ncarriers3_.kc_code, carriers3_.airline_acct, carriers3_.address1,\r\ncarriers3_.address2, carriers3_.address3, carriers3_.address4,\r\ncarriers3_.terminal, carriers3_.airline_agent, carriers3_.accountinginfo,\r\ncarriers3_.import_dept, carriers3_.import_after_office_hour,\r\ncarriers3_.import_contact, carriers3_.import_fax, carriers3_.import_email,\r\ncarriers3_.export_deptt, carriers3_.export_after_office_hour,\r\ncarriers3_.export_contact, carriers3_.export_fax, carriers3_.import_contact_no,\r\ncarriers3_.export_contact_no, carriers3_.export_email,\r\ncarriers3_.awb_issued_by, carriers3_.is_deleted, carriers3_.created,\r\ncarriers3_.created_by, carriers3_.last_updated, carriers3_.last_updated_by,\r\ncarriers3_.last_checked_by, carriers3_.last_maked, carriers3_.maker_checker_status,\r\ncarriers3_.shadow_id, shipmentro1_.mod_id, shipmentro1_.region_id,\r\nshipmentro1_.airway_bill_no, shipmentro1_.shipment_date,\r\nshipmentro1_.arrival_date, shipmentro1_.leg_no, shipmentro1_.no_of_pcs,\r\nshipmentro1_.chargeable_weight, 
shipmentro1_.carrier_crew_extn_id,\r\nshipmentro1_.is_deleted, shipmentro1_.created, shipmentro1_.created_by,\r\nshipmentro1_.last_updated, shipmentro1_.last_updated_by,\r\nshipmentro1_.last_checked_by, shipmentro1_.last_maked,\r\nshipmentro1_.maker_checker_status, shipmentro1_.shadow_id,\r\nshipmentme11_.mod_id, shipmentme11_.code, shipmentme11_.name,\r\nshipmentme11_.shipment_method_type, shipmentme11_.is_deleted,\r\nshipmentme11_.created, shipmentme11_.created_by, shipmentme11_.last_updated,\r\nshipmentme11_.last_updated_by, shipmentme11_.last_checked_by,\r\nshipmentme11_.last_maked, shipmentme11_.maker_checker_status,\r\nshipmentme11_.shadow_id, workflowst9_.workflow_module, workflowst9_.name,\r\nworkflowst9_.deal_display_module, workflowst9_.workflow_level,\r\nworkflowst9_.is_deal_editable, workflowst9_.gen_confo,\r\nworkflowst9_.gen_deal_ticket, workflowst9_.gen_settlements,\r\nworkflowst9_.vault_start, workflowst9_.update_main_inv,\r\nworkflowst9_.update_other_inv, workflowst9_.release_shipment,\r\nworkflowst9_.is_deal_splittable, workflowst9_.send_email,\r\nworkflowst9_.is_deleted, workflowst9_.created, workflowst9_.created_by,\r\nworkflowst9_.last_updated, workflowst9_.last_updated_by,\r\nworkflowst9_.last_checked_by, workflowst9_.last_maked, workflowst9_.mod_id,\r\nworkflowst9_.maker_checker_status, workflowst9_.shadow_id,\r\nworkflowst8_.workflow_module, workflowst8_.name,\r\nworkflowst8_.deal_display_module, workflowst8_.workflow_level,\r\nworkflowst8_.is_deal_editable, workflowst8_.gen_confo,\r\nworkflowst8_.gen_deal_ticket, workflowst8_.gen_settlements, workflowst8_.vault_start,\r\nworkflowst8_.update_main_inv, workflowst8_.update_other_inv,\r\nworkflowst8_.release_shipment, workflowst8_.is_deal_splittable,\r\nworkflowst8_.send_email, workflowst8_.is_deleted, workflowst8_.created,\r\nworkflowst8_.created_by, workflowst8_.last_updated,\r\nworkflowst8_.last_updated_by, workflowst8_.last_checked_by,\r\nworkflowst8_.last_maked, workflowst8_.mod_id,\r\nworkflowst8_.maker_checker_status, workflowst8_.shadow_id,\r\nworkflowst7_.workflow_module, workflowst7_.name,\r\nworkflowst7_.deal_display_module, workflowst7_.workflow_level,\r\nworkflowst7_.is_deal_editable, workflowst7_.gen_confo,\r\nworkflowst7_.gen_deal_ticket, workflowst7_.gen_settlements,\r\nworkflowst7_.vault_start, workflowst7_.update_main_inv,\r\nworkflowst7_.update_other_inv, workflowst7_.release_shipment,\r\nworkflowst7_.is_deal_splittable, workflowst7_.send_email,\r\nworkflowst7_.is_deleted, workflowst7_.created, workflowst7_.created_by,\r\nworkflowst7_.last_updated, workflowst7_.last_updated_by,\r\nworkflowst7_.last_checked_by, workflowst7_.last_maked, workflowst7_.mod_id,\r\nworkflowst7_.maker_checker_status, workflowst7_.shadow_id, consignees5_.mod_id,\r\nconsignees5_.countries_id, consignees5_.cities_id, consignees5_.regions_id,\r\nconsignees5_.short_name, consignees5_.is_counterparty, consignees5_.name,\r\nconsignees5_.airports_id, consignees5_.address1, consignees5_.address2,\r\nconsignees5_.address3, consignees5_.address4, consignees5_.awb_special_clause,\r\nconsignees5_.issuing_carrier_agent_name, consignees5_.agent_address1,\r\nconsignees5_.agent_address2, consignees5_.postal_code, consignees5_.is_deleted,\r\nconsignees5_.created, consignees5_.created_by, consignees5_.last_updated,\r\nconsignees5_.last_updated_by, consignees5_.last_checked_by,\r\nconsignees5_.last_maked, consignees5_.maker_checker_status,\r\nconsignees5_.shadow_id, consignees6_.mod_id, consignees6_.countries_id,\r\nconsignees6_.cities_id, 
consignees6_.regions_id, consignees6_.short_name,\r\nconsignees6_.is_counterparty, consignees6_.name, consignees6_.airports_id,\r\nconsignees6_.address1, consignees6_.address2, consignees6_.address3,\r\nconsignees6_.address4, consignees6_.awb_special_clause,\r\nconsignees6_.issuing_carrier_agent_name, consignees6_.agent_address1,\r\nconsignees6_.agent_address2, consignees6_.postal_code, consignees6_.is_deleted,\r\nconsignees6_.created, consignees6_.created_by, consignees6_.last_updated,\r\nconsignees6_.last_updated_by, consignees6_.last_checked_by,\r\nconsignees6_.last_maked, consignees6_.maker_checker_status,\r\nconsignees6_.shadow_id, shipmentty4_.mod_id, shipmentty4_.code,\r\nshipmentty4_.name, shipmentty4_.regions_id, shipmentty4_.is_deleted,\r\nshipmentty4_.created, shipmentty4_.created_by, shipmentty4_.last_updated,\r\nshipmentty4_.last_updated_by, shipmentty4_.last_checked_by,\r\nshipmentty4_.last_maked, shipmentty4_.maker_checker_status,\r\nshipmentty4_.shadow_id, shipmentsc2_.mod_id, shipmentsc2_.origin_airports_id,\r\nshipmentsc2_.dest_airports_id, shipmentsc2_.schedule,\r\nshipmentsc2_.arrival_date, shipmentsc2_.est_time_departure,\r\nshipmentsc2_.est_time_arrival, shipmentsc2_.route_leg_seq_no,\r\nshipmentsc2_.cutoff_hours_before_departure, shipmentsc2_.available_in_a_week,\r\nshipmentsc2_.remarks, shipmentsc2_.status, shipmentsc2_.region_id,\r\nshipmentsc2_.is_deleted, shipmentsc2_.created, shipmentsc2_.created_by,\r\nshipmentsc2_.last_updated, shipmentsc2_.last_updated_by, shipmentsc2_.last_checked_by,\r\nshipmentsc2_.last_maked, shipmentsc2_.maker_checker_status,\r\nshipmentsc2_.shadow_id\"\n\" \r\nSort Method: external merge Disk: 90656kB\"\n\" \r\n-> Hash Right Join (cost=388.61..5923.86 rows=59 width=6994)\r\n(actual time=143.405..372.903 rows=42759 loops=1)\"\n\" \r\nHash Cond: ((deallegs12_.shipment_records_id)::text =\r\n(shipmentre0_.fin_id)::text)\"\n\" \r\n-> Seq Scan on tbls_bank_notes_deals_legs deallegs12_ \r\n(cost=0.00..5337.57 rows=52557 width=16) (actual time=0.005..26.702 rows=52557\r\nloops=1)\"\n\" \r\n-> Hash (cost=388.58..388.58 rows=2 width=6960) (actual\r\ntime=143.371..143.371 rows=1442 loops=1)\"\n\" \r\nBuckets: 2048 (originally 1024) Batches: 1 (originally 1) Memory\r\nUsage: 3107kB\"\n\" \r\n-> Nested Loop Left Join (cost=106.73..388.58 rows=2 width=6960)\r\n(actual time=55.316..134.874 rows=1442 loops=1)\"\n\" \r\nJoin Filter: ((shipmentre0_.shipment_method_id)::text = (shipmentme11_.fin_id)::text)\"\n\" \r\nRows Removed by Join Filter: 2350\"\n\" \r\n-> Nested Loop (cost=106.73..387.37 rows=2 width=6721) (actual\r\ntime=55.300..130.529 rows=1442 loops=1)\"\n\" \r\n -> Nested Loop \r\n(cost=106.59..387.03 rows=2 width=6582) (actual time=55.282..124.351 rows=1442\r\nloops=1)\"\n\" \r\n-> Nested Loop (cost=106.45..386.69 rows=2 width=6443) (actual\r\ntime=55.267..118.047 rows=1442 loops=1)\"\n\" \r\n-> Nested Loop (cost=106.31..386.36 rows=2 width=6304) (actual\r\ntime=55.250..111.408 rows=1442 loops=1)\"\n\" \r\n-> Nested Loop (cost=106.17..386.02 rows=2 width=6165) (actual\r\ntime=55.228..105.002 rows=1442 loops=1)\"\n\" \r\nJoin Filter: ((shipmentre0_.consignees_id)::text =\r\n(consignees6_.fin_id)::text)\"\n\" \r\nRows Removed by Join Filter: 40376\"\n\" \r\n-> Seq Scan on tbls_consignees consignees6_ (cost=0.00..1.29\r\nrows=29 width=1060) (actual time=0.012..0.021 rows=29 loops=1)\"\n\" \r\n-> Materialize (cost=106.17..383.86 rows=2 width=5105) (actual\r\ntime=1.904..3.142 rows=1442 loops=29)\"\n\" \r\n -> \r\nNested Loop (cost=106.17..383.85 
rows=2 width=5105) (actual\r\ntime=55.203..78.206 rows=1442 loops=1)\"\n\" \r\nJoin Filter: ((shipmentre0_.shipper_id)::text =\r\n(consignees5_.fin_id)::text)\"\n\" \r\nRows Removed by Join Filter: 40376\"\n\" \r\n-> Seq Scan on tbls_consignees consignees5_ (cost=0.00..1.29\r\nrows=29 width=1060) (actual time=0.003..0.013 rows=29 loops=1)\"\n\" \r\n-> Materialize (cost=106.17..381.70 rows=2 width=4045) (actual\r\ntime=0.524..2.244 rows=1442 loops=29)\"\n\" \r\n -> \r\nNested Loop (cost=106.17..381.69 rows=2 width=4045) (actual\r\ntime=15.195..53.051 rows=1442 loops=1)\"\n\" \r\n Join\r\nFilter: ((shipmentre0_.shipment_type_id)::text =\r\n(shipmentty4_.fin_id)::text)\"\n\" \r\nRows Removed by Join Filter: 7210\"\n\" \r\n -> \r\nSeq Scan on tbls_shipment_types shipmentty4_ (cost=0.00..1.06 rows=6\r\nwidth=95) (actual time=0.002..0.005 rows=6 loops=1)\"\n\" \r\n -> Materialize \r\n(cost=106.17..380.45 rows=2 width=3950) (actual time=2.478..8.157 rows=1442\r\nloops=6)\"\n\" \r\n-> Nested Loop (cost=106.17..380.44 rows=2 width=3950) (actual\r\ntime=14.856..43.625 rows=1442 loops=1)\"\n\" \r\n-> Nested Loop (cost=106.03..379.95 rows=2 width=1696) (actual\r\ntime=14.824..38.885 rows=1442 loops=1)\"\n\" \r\n-> Hash Join (cost=105.76..379.20 rows=2 width=1371) (actual\r\ntime=14.807..32.459 rows=1442 loops=1)\"\n\" \r\n Hash\r\nCond: (((shipmentro1_.shipment_record_id)::text = (shipmentre0_.fin_id)::text)\r\nAND (shipmentro1_.leg_no = (SubPlan 1)))\"\n\" \r\n -> \r\nSeq Scan on tbls_shipment_record_routing shipmentro1_ (cost=0.00..69.80\r\nrows=484 width=444) (actual time=0.017..2.534 rows=1452 loops=1)\"\n\" \r\n Filter:\r\n(to_char(arrival_date, 'YYYY-MM-DD'::text) <= '2019-08-29'::text)\"\n\" \r\nRows Removed by Filter: 1\"\n\" \r\n-> Hash (cost=84.11..84.11 rows=1443 width=927) (actual\r\ntime=14.762..14.763 rows=1443 loops=1)\"\n\" \r\n Buckets:\r\n2048 Batches: 1 Memory Usage: 497kB\"\n\" \r\n-> Seq Scan on tbls_shipment_records shipmentre0_ (cost=0.00..84.11\r\nrows=1443 width=927) (actual time=0.005..1.039 rows=1443 loops=1)\"\n\" \r\nFilter: ((is_deleted)::text = 'N'::text)\"\n\" \r\nRows Removed by Filter: 6\"\n\" \r\n SubPlan\r\n1\"\n\" \r\n-> Aggregate (cost=8.30..8.31 rows=1 width=8) (actual\r\ntime=0.008..0.008 rows=1 loops=2885)\"\n\" \r\n-> Index Scan using xbls_shipment_record_rout001 on\r\ntbls_shipment_record_routing shipmentro13_ (cost=0.28..8.30 rows=1\r\nwidth=8) (actual time=0.006..0.007 rows=1 loops=2885)\"\n\" \r\nIndex Cond: ((shipmentre0_.fin_id)::text = (shipment_record_id)::text)\"\n\" \r\nFilter: ((is_deleted)::text = 'N'::text)\"\n\" \r\n Rows\r\nRemoved by Filter: 0\"\n\" \r\n-> Index Scan using pk_bls_shipment_schedules on tbls_shipment_schedules\r\nshipmentsc2_ (cost=0.27..0.38 rows=1 width=325) (actual time=0.003..0.003\r\nrows=1 loops=1442)\"\n\" \r\nIndex Cond: ((fin_id)::text = (shipmentro1_.shipment_schedule_id)::text)\"\n\" \r\n-> Index Scan using pk_bls_carriers on tbls_carriers carriers3_ \r\n(cost=0.14..0.24 rows=1 width=2254) (actual time=0.002..0.002 rows=1\r\nloops=1442)\"\n\" \r\nIndex Cond: ((fin_id)::text = (shipmentsc2_.carrier_id)::text)\"\n\" \r\n-> Index Scan using pk_bls_workflow_states on tbls_workflow_states\r\nworkflowst7_ (cost=0.14..0.17 rows=1 width=139) (actual time=0.003..0.003\r\nrows=1 loops=1442)\"\n\" \r\nIndex Cond: ((fin_id)::text = (shipmentre0_.shipment_status_id)::text)\"\n\" \r\n-> Index Scan using pk_bls_workflow_states on tbls_workflow_states\r\nworkflowst8_ (cost=0.14..0.17 rows=1 width=139) (actual 
time=0.003..0.003\r\nrows=1 loops=1442)\"\n\" \r\n Index\r\nCond: ((fin_id)::text = (shipmentre0_.shipment_charge_status)::text)\"\n\" \r\n-> Index Scan using pk_bls_workflow_states on tbls_workflow_states\r\nworkflowst9_ (cost=0.14..0.17 rows=1 width=139) (actual time=0.002..0.002\r\nrows=1 loops=1442)\"\n\" \r\nIndex Cond: ((fin_id)::text =\r\n(shipmentre0_.shipment_document_status)::text)\"\n\" \r\n-> Index Scan using pk_bls_workflow_states on tbls_workflow_states\r\nworkflowst10_ (cost=0.14..0.17 rows=1 width=139) (actual\r\ntime=0.002..0.002 rows=1 loops=1442)\"\n\" \r\nIndex Cond: ((fin_id)::text = (shipmentre0_.vault_status_id)::text)\"\n\" \r\n -> \r\nMaterialize (cost=0.00..1.07 rows=5 width=239) (actual time=0.000..0.001\r\nrows=3 loops=1442)\"\n\" \r\n-> Seq Scan on tbls_shipment_methods shipmentme11_ \r\n(cost=0.00..1.05 rows=5 width=239) (actual time=0.006..0.010 rows=5\r\nloops=1)\"\n\"Planning time: 368.495 ms\"\n\"Execution time: 58018.486 ms\"On Mon, Sep 9, 2019 at 3:30 PM Flo Rance <[email protected]> wrote:On Mon, Sep 9, 2019 at 10:38 AM yash mehta <[email protected]> wrote:In addition to below mail, we have used btree indexes for primary key columns. Below is the query: select distinct shipmentre0_.FIN_ID as FIN1_53_0_,\t\t\t\t\tworkflowst10_.FIN_ID as FIN1_57_1_,\t\t\t\t\tcarriers3_.FIN_ID as FIN1_40_2_,\t\t\t\t\tshipmentro1_.FIN_ID as FIN1_33_3_,\t\t\t\t\tshipmentme11_.FIN_ID as FIN1_5_4_,\t\t\t\t\tworkflowst9_.FIN_ID as FIN1_57_5_,\t\t\t\t\tworkflowst8_.FIN_ID as FIN1_57_6_,\t\t\t\t\tworkflowst7_.FIN_ID as FIN1_57_7_,\t\t\t\t\tconsignees5_.FIN_ID as FIN1_81_8_,\t\t\t\t\tconsignees6_.FIN_ID as FIN1_81_9_,\t\t\t\t\tshipmentty4_.FIN_ID as FIN1_8_10_,\t\t\t\t\tshipmentsc2_.FIN_ID as FIN1_78_11_,\t\t\t\t\tshipmentre0_.MOD_ID as MOD2_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_METHOD_ID as SHIPMENT3_53_0_,\t\t\t\t\tshipmentre0_.SHIPPER_ID as SHIPPER4_53_0_,\t\t\t\t\tshipmentre0_.CONSIGNEES_ID as CONSIGNEES5_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_BASIS_ID as SHIPMENT6_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_TYPE_ID as SHIPMENT7_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_ARRANGEMENT_ID as SHIPMENT8_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_DATE as SHIPMENT9_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_CURRENCY_ID as SHIPMENT10_53_0_,\t\t\t\t\tshipmentre0_.CARRIER_CREW_EXTN_ID as CARRIER11_53_0_,\t\t\t\t\tshipmentre0_.END_TIME as END12_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_VALUE_USD as SHIPMENT13_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_VALUE_BASE as SHIPMENT14_53_0_,\t\t\t\t\tshipmentre0_.INSURANCE_VALUE_USD as INSURANCE15_53_0_,\t\t\t\t\tshipmentre0_.INSURANCE_VALUE_BASE as INSURANCE16_53_0_,\t\t\t\t\tshipmentre0_.REMARKS as REMARKS53_0_,\t\t\t\t\tshipmentre0_.DELETION_REMARKS as DELETION18_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_STATUS_ID as SHIPMENT19_53_0_,\t\t\t\t\tshipmentre0_.VAULT_STATUS_ID as VAULT20_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_CHARGE_STATUS as SHIPMENT21_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_DOCUMENT_STATUS as SHIPMENT22_53_0_,\t\t\t\t\tshipmentre0_.INSURANCE_PROVIDER as INSURANCE23_53_0_,\t\t\t\t\tshipmentre0_.SHIPMENT_PROVIDER as SHIPMENT24_53_0_,\t\t\t\t\tshipmentre0_.SECURITY_PROVIDER_ID as SECURITY25_53_0_,\t\t\t\t\tshipmentre0_.CONSIGNEE_CONTACT_NAME as CONSIGNEE26_53_0_,\t\t\t\t\tshipmentre0_.SIGNAL as SIGNAL53_0_,\t\t\t\t\tshipmentre0_.CHARGEABLE_WT as CHARGEABLE28_53_0_,\t\t\t\t\tshipmentre0_.NO_OF_PIECES as NO29_53_0_,\t\t\t\t\tshipmentre0_.REGIONS_ID as REGIONS30_53_0_,\t\t\t\t\tshipmentre0_.IS_DELETED as IS31_53_0_,\t\t\t\t\tshipmentre0_.CREATED as 
CREATED53_0_,\t\t\t\t\tshipmentre0_.CREATED_BY as CREATED33_53_0_,\t\t\t\t\tshipmentre0_.LAST_UPDATED as LAST34_53_0_,\t\t\t\t\tshipmentre0_.LAST_UPDATED_BY as LAST35_53_0_,\t\t\t\t\tshipmentre0_.LAST_CHECKED_BY as LAST36_53_0_,\t\t\t\t\tshipmentre0_.LAST_MAKED as LAST37_53_0_,\t\t\t\t\tshipmentre0_.MAKER_CHECKER_STATUS as MAKER38_53_0_,\t\t\t\t\tshipmentre0_.SHADOW_ID as SHADOW39_53_0_,\t\t\t\t\t--(select now()) as formula48_0_,\t\t\t\t\tworkflowst10_.WORKFLOW_MODULE as WORKFLOW2_57_1_,\t\t\t\t\tworkflowst10_.NAME as NAME57_1_,\t\t\t\t\tworkflowst10_.DEAL_DISPLAY_MODULE as DEAL4_57_1_,\t\t\t\t\tworkflowst10_.WORKFLOW_LEVEL as WORKFLOW5_57_1_,\t\t\t\t\tworkflowst10_.IS_DEAL_EDITABLE as IS6_57_1_,\t\t\t\t\tworkflowst10_.GEN_CONFO as GEN7_57_1_,\t\t\t\t\tworkflowst10_.GEN_DEAL_TICKET as GEN8_57_1_,\t\t\t\t\tworkflowst10_.GEN_SETTLEMENTS as GEN9_57_1_,\t\t\t\t\tworkflowst10_.VAULT_START as VAULT10_57_1_,\t\t\t\t\tworkflowst10_.UPDATE_MAIN_INV as UPDATE11_57_1_,\t\t\t\t\tworkflowst10_.UPDATE_OTHER_INV as UPDATE12_57_1_,\t\t\t\t\tworkflowst10_.RELEASE_SHIPMENT as RELEASE13_57_1_,\t\t\t\t\tworkflowst10_.IS_DEAL_SPLITTABLE as IS14_57_1_,\t\t\t\t\tworkflowst10_.SEND_EMAIL as SEND15_57_1_,\t\t\t\t\tworkflowst10_.IS_DELETED as IS16_57_1_,\t\t\t\t\tworkflowst10_.CREATED as CREATED57_1_,\t\t\t\t\tworkflowst10_.CREATED_BY as CREATED18_57_1_,\t\t\t\t\tworkflowst10_.LAST_UPDATED as LAST19_57_1_,\t\t\t\t\tworkflowst10_.LAST_UPDATED_BY as LAST20_57_1_,\t\t\t\t\tworkflowst10_.LAST_CHECKED_BY as LAST21_57_1_,\t\t\t\t\tworkflowst10_.LAST_MAKED as LAST22_57_1_,\t\t\t\t\tworkflowst10_.MOD_ID as MOD23_57_1_,\t\t\t\t\tworkflowst10_.MAKER_CHECKER_STATUS as MAKER24_57_1_,\t\t\t\t\tworkflowst10_.SHADOW_ID as SHADOW25_57_1_,\t\t\t\t\t--(select now()) as formula52_1_,\t\t\t\t\tcarriers3_.MOD_ID as MOD2_40_2_,\t\t\t\t\tcarriers3_.CITIES_ID as CITIES3_40_2_,\t\t\t\t\tcarriers3_.CODE as CODE40_2_,\t\t\t\t\tcarriers3_.NAME as NAME40_2_,\t\t\t\t\tcarriers3_.CARRIER_TYPES as CARRIER6_40_2_,\t\t\t\t\tcarriers3_.NAME_IN_FL as NAME7_40_2_,\t\t\t\t\tcarriers3_.IATA_CODE as IATA8_40_2_,\t\t\t\t\tcarriers3_.KC_CODE as KC9_40_2_,\t\t\t\t\tcarriers3_.AIRLINE_ACCT as AIRLINE10_40_2_,\t\t\t\t\tcarriers3_.ADDRESS1 as ADDRESS11_40_2_,\t\t\t\t\tcarriers3_.ADDRESS2 as ADDRESS12_40_2_,\t\t\t\t\tcarriers3_.ADDRESS3 as ADDRESS13_40_2_,\t\t\t\t\tcarriers3_.ADDRESS4 as ADDRESS14_40_2_,\t\t\t\t\tcarriers3_.TERMINAL as TERMINAL40_2_,\t\t\t\t\tcarriers3_.AIRLINE_AGENT as AIRLINE16_40_2_,\t\t\t\t\tcarriers3_.ACCOUNTINGINFO as ACCOUNT17_40_2_,\t\t\t\t\tcarriers3_.IMPORT_DEPT as IMPORT18_40_2_,\t\t\t\t\tcarriers3_.IMPORT_AFTER_OFFICE_HOUR as IMPORT19_40_2_,\t\t\t\t\tcarriers3_.IMPORT_CONTACT as IMPORT20_40_2_,\t\t\t\t\tcarriers3_.IMPORT_FAX as IMPORT21_40_2_,\t\t\t\t\tcarriers3_.IMPORT_EMAIL as IMPORT22_40_2_,\t\t\t\t\tcarriers3_.EXPORT_DEPTT as EXPORT23_40_2_,\t\t\t\t\tcarriers3_.EXPORT_AFTER_OFFICE_HOUR as EXPORT24_40_2_,\t\t\t\t\tcarriers3_.EXPORT_CONTACT as EXPORT25_40_2_,\t\t\t\t\tcarriers3_.EXPORT_FAX as EXPORT26_40_2_,\t\t\t\t\tcarriers3_.IMPORT_CONTACT_NO as IMPORT27_40_2_,\t\t\t\t\tcarriers3_.EXPORT_CONTACT_NO as EXPORT28_40_2_,\t\t\t\t\tcarriers3_.EXPORT_EMAIL as EXPORT29_40_2_,\t\t\t\t\tcarriers3_.AWB_ISSUED_BY as AWB30_40_2_,\t\t\t\t\tcarriers3_.IS_DELETED as IS31_40_2_,\t\t\t\t\tcarriers3_.CREATED as CREATED40_2_,\t\t\t\t\tcarriers3_.CREATED_BY as CREATED33_40_2_,\t\t\t\t\tcarriers3_.LAST_UPDATED as LAST34_40_2_,\t\t\t\t\tcarriers3_.LAST_UPDATED_BY as LAST35_40_2_,\t\t\t\t\tcarriers3_.LAST_CHECKED_BY as 
LAST36_40_2_,\t\t\t\t\tcarriers3_.LAST_MAKED as LAST37_40_2_,\t\t\t\t\tcarriers3_.MAKER_CHECKER_STATUS as MAKER38_40_2_,\t\t\t\t\tcarriers3_.SHADOW_ID as SHADOW39_40_2_,\t\t\t\t\t--(select now()) as formula36_2_,\t\t\t\t\tshipmentro1_.MOD_ID as MOD2_33_3_,\t\t\t\t\tshipmentro1_.REGION_ID as REGION3_33_3_,\t\t\t\t\tshipmentro1_.SHIPMENT_SCHEDULE_ID as SHIPMENT4_33_3_,\t\t\t\t\tshipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5_33_3_,\t\t\t\t\tshipmentro1_.AIRWAY_BILL_NO as AIRWAY6_33_3_,\t\t\t\t\tshipmentro1_.SHIPMENT_DATE as SHIPMENT7_33_3_,\t\t\t\t\tshipmentro1_.ARRIVAL_DATE as ARRIVAL8_33_3_,\t\t\t\t\tshipmentro1_.LEG_NO as LEG9_33_3_,\t\t\t\t\tshipmentro1_.NO_OF_PCS as NO10_33_3_,\t\t\t\t\tshipmentro1_.CHARGEABLE_WEIGHT as CHARGEABLE11_33_3_,\t\t\t\t\tshipmentro1_.CARRIER_CREW_EXTN_ID as CARRIER12_33_3_,\t\t\t\t\tshipmentro1_.IS_DELETED as IS13_33_3_,\t\t\t\t\tshipmentro1_.CREATED as CREATED33_3_,\t\t\t\t\tshipmentro1_.CREATED_BY as CREATED15_33_3_,\t\t\t\t\tshipmentro1_.LAST_UPDATED as LAST16_33_3_,\t\t\t\t\tshipmentro1_.LAST_UPDATED_BY as LAST17_33_3_,\t\t\t\t\tshipmentro1_.LAST_CHECKED_BY as LAST18_33_3_,\t\t\t\t\tshipmentro1_.LAST_MAKED as LAST19_33_3_,\t\t\t\t\tshipmentro1_.MAKER_CHECKER_STATUS as MAKER20_33_3_,\t\t\t\t\tshipmentro1_.SHADOW_ID as SHADOW21_33_3_,\t\t\t\t\t--(select now()) as formula29_3_,\t\t\t\t\tshipmentme11_.MOD_ID as MOD2_5_4_,\t\t\t\t\tshipmentme11_.CODE as CODE5_4_,\t\t\t\t\tshipmentme11_.NAME as NAME5_4_,\t\t\t\t\tshipmentme11_.SHIPMENT_METHOD_TYPE as SHIPMENT5_5_4_,\t\t\t\t\tshipmentme11_.IS_DELETED as IS6_5_4_,\t\t\t\t\tshipmentme11_.CREATED as CREATED5_4_,\t\t\t\t\tshipmentme11_.CREATED_BY as CREATED8_5_4_,\t\t\t\t\tshipmentme11_.LAST_UPDATED as LAST9_5_4_,\t\t\t\t\tshipmentme11_.LAST_UPDATED_BY as LAST10_5_4_,\t\t\t\t\tshipmentme11_.LAST_CHECKED_BY as LAST11_5_4_,\t\t\t\t\tshipmentme11_.LAST_MAKED as LAST12_5_4_,\t\t\t\t\tshipmentme11_.MAKER_CHECKER_STATUS as MAKER13_5_4_,\t\t\t\t\tshipmentme11_.SHADOW_ID as SHADOW14_5_4_,\t\t\t\t\t--(select now()) as formula4_4_,\t\t\t\t\tworkflowst9_.WORKFLOW_MODULE as WORKFLOW2_57_5_,\t\t\t\t\tworkflowst9_.NAME as NAME57_5_,\t\t\t\t\tworkflowst9_.DEAL_DISPLAY_MODULE as DEAL4_57_5_,\t\t\t\t\tworkflowst9_.WORKFLOW_LEVEL as WORKFLOW5_57_5_,\t\t\t\t\tworkflowst9_.IS_DEAL_EDITABLE as IS6_57_5_,\t\t\t\t\tworkflowst9_.GEN_CONFO as GEN7_57_5_,\t\t\t\t\tworkflowst9_.GEN_DEAL_TICKET as GEN8_57_5_,\t\t\t\t\tworkflowst9_.GEN_SETTLEMENTS as GEN9_57_5_,\t\t\t\t\tworkflowst9_.VAULT_START as VAULT10_57_5_,\t\t\t\t\tworkflowst9_.UPDATE_MAIN_INV as UPDATE11_57_5_,\t\t\t\t\tworkflowst9_.UPDATE_OTHER_INV as UPDATE12_57_5_,\t\t\t\t\tworkflowst9_.RELEASE_SHIPMENT as RELEASE13_57_5_,\t\t\t\t\tworkflowst9_.IS_DEAL_SPLITTABLE as IS14_57_5_,\t\t\t\t\tworkflowst9_.SEND_EMAIL as SEND15_57_5_,\t\t\t\t\tworkflowst9_.IS_DELETED as IS16_57_5_,\t\t\t\t\tworkflowst9_.CREATED as CREATED57_5_,\t\t\t\t\tworkflowst9_.CREATED_BY as CREATED18_57_5_,\t\t\t\t\tworkflowst9_.LAST_UPDATED as LAST19_57_5_,\t\t\t\t\tworkflowst9_.LAST_UPDATED_BY as LAST20_57_5_,\t\t\t\t\tworkflowst9_.LAST_CHECKED_BY as LAST21_57_5_,\t\t\t\t\tworkflowst9_.LAST_MAKED as LAST22_57_5_,\t\t\t\t\tworkflowst9_.MOD_ID as MOD23_57_5_,\t\t\t\t\tworkflowst9_.MAKER_CHECKER_STATUS as MAKER24_57_5_,\t\t\t\t\tworkflowst9_.SHADOW_ID as SHADOW25_57_5_,\t\t\t\t\t--(select now()) as formula52_5_,\t\t\t\t\tworkflowst8_.WORKFLOW_MODULE as WORKFLOW2_57_6_,\t\t\t\t\tworkflowst8_.NAME as NAME57_6_,\t\t\t\t\tworkflowst8_.DEAL_DISPLAY_MODULE as DEAL4_57_6_,\t\t\t\t\tworkflowst8_.WORKFLOW_LEVEL as 
WORKFLOW5_57_6_,\t\t\t\t\tworkflowst8_.IS_DEAL_EDITABLE as IS6_57_6_,\t\t\t\t\tworkflowst8_.GEN_CONFO as GEN7_57_6_,\t\t\t\t\tworkflowst8_.GEN_DEAL_TICKET as GEN8_57_6_,\t\t\t\t\tworkflowst8_.GEN_SETTLEMENTS as GEN9_57_6_,\t\t\t\t\tworkflowst8_.VAULT_START as VAULT10_57_6_,\t\t\t\t\tworkflowst8_.UPDATE_MAIN_INV as UPDATE11_57_6_,\t\t\t\t\tworkflowst8_.UPDATE_OTHER_INV as UPDATE12_57_6_,\t\t\t\t\tworkflowst8_.RELEASE_SHIPMENT as RELEASE13_57_6_,\t\t\t\t\tworkflowst8_.IS_DEAL_SPLITTABLE as IS14_57_6_,\t\t\t\t\tworkflowst8_.SEND_EMAIL as SEND15_57_6_,\t\t\t\t\tworkflowst8_.IS_DELETED as IS16_57_6_,\t\t\t\t\tworkflowst8_.CREATED as CREATED57_6_,\t\t\t\t\tworkflowst8_.CREATED_BY as CREATED18_57_6_,\t\t\t\t\tworkflowst8_.LAST_UPDATED as LAST19_57_6_,\t\t\t\t\tworkflowst8_.LAST_UPDATED_BY as LAST20_57_6_,\t\t\t\t\tworkflowst8_.LAST_CHECKED_BY as LAST21_57_6_,\t\t\t\t\tworkflowst8_.LAST_MAKED as LAST22_57_6_,\t\t\t\t\tworkflowst8_.MOD_ID as MOD23_57_6_,\t\t\t\t\tworkflowst8_.MAKER_CHECKER_STATUS as MAKER24_57_6_,\t\t\t\t\tworkflowst8_.SHADOW_ID as SHADOW25_57_6_,\t\t\t\t\t--(select now()) as formula52_6_,\t\t\t\t\tworkflowst7_.WORKFLOW_MODULE as WORKFLOW2_57_7_,\t\t\t\t\tworkflowst7_.NAME as NAME57_7_,\t\t\t\t\tworkflowst7_.DEAL_DISPLAY_MODULE as DEAL4_57_7_,\t\t\t\t\tworkflowst7_.WORKFLOW_LEVEL as WORKFLOW5_57_7_,\t\t\t\t\tworkflowst7_.IS_DEAL_EDITABLE as IS6_57_7_,\t\t\t\t\tworkflowst7_.GEN_CONFO as GEN7_57_7_,\t\t\t\t\tworkflowst7_.GEN_DEAL_TICKET as GEN8_57_7_,\t\t\t\t\tworkflowst7_.GEN_SETTLEMENTS as GEN9_57_7_,\t\t\t\t\tworkflowst7_.VAULT_START as VAULT10_57_7_,\t\t\t\t\tworkflowst7_.UPDATE_MAIN_INV as UPDATE11_57_7_,\t\t\t\t\tworkflowst7_.UPDATE_OTHER_INV as UPDATE12_57_7_,\t\t\t\t\tworkflowst7_.RELEASE_SHIPMENT as RELEASE13_57_7_,\t\t\t\t\tworkflowst7_.IS_DEAL_SPLITTABLE as IS14_57_7_,\t\t\t\t\tworkflowst7_.SEND_EMAIL as SEND15_57_7_,\t\t\t\t\tworkflowst7_.IS_DELETED as IS16_57_7_,\t\t\t\t\tworkflowst7_.CREATED as CREATED57_7_,\t\t\t\t\tworkflowst7_.CREATED_BY as CREATED18_57_7_,\t\t\t\t\tworkflowst7_.LAST_UPDATED as LAST19_57_7_,\t\t\t\t\tworkflowst7_.LAST_UPDATED_BY as LAST20_57_7_,\t\t\t\t\tworkflowst7_.LAST_CHECKED_BY as LAST21_57_7_,\t\t\t\t\tworkflowst7_.LAST_MAKED as LAST22_57_7_,\t\t\t\t\tworkflowst7_.MOD_ID as MOD23_57_7_,\t\t\t\t\tworkflowst7_.MAKER_CHECKER_STATUS as MAKER24_57_7_,\t\t\t\t\tworkflowst7_.SHADOW_ID as SHADOW25_57_7_,\t\t\t\t\t--(select now()) as formula52_7_,\t\t\t\t\tconsignees5_.MOD_ID as MOD2_81_8_,\t\t\t\t\tconsignees5_.COUNTRIES_ID as COUNTRIES3_81_8_,\t\t\t\t\tconsignees5_.CITIES_ID as CITIES4_81_8_,\t\t\t\t\tconsignees5_.REGIONS_ID as REGIONS5_81_8_,\t\t\t\t\tconsignees5_.SHORT_NAME as SHORT6_81_8_,\t\t\t\t\tconsignees5_.IS_COUNTERPARTY as IS7_81_8_,\t\t\t\t\tconsignees5_.NAME as NAME81_8_,\t\t\t\t\tconsignees5_.AIRPORTS_ID as AIRPORTS9_81_8_,\t\t\t\t\tconsignees5_.ADDRESS1 as ADDRESS10_81_8_,\t\t\t\t\tconsignees5_.ADDRESS2 as ADDRESS11_81_8_,\t\t\t\t\tconsignees5_.ADDRESS3 as ADDRESS12_81_8_,\t\t\t\t\tconsignees5_.ADDRESS4 as ADDRESS13_81_8_,\t\t\t\t\tconsignees5_.AWB_SPECIAL_CLAUSE as AWB14_81_8_,\t\t\t\t\tconsignees5_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_8_,\t\t\t\t\tconsignees5_.AGENT_ADDRESS1 as AGENT16_81_8_,\t\t\t\t\tconsignees5_.AGENT_ADDRESS2 as AGENT17_81_8_,\t\t\t\t\tconsignees5_.POSTAL_CODE as POSTAL18_81_8_,\t\t\t\t\tconsignees5_.IS_DELETED as IS19_81_8_,\t\t\t\t\tconsignees5_.CREATED as CREATED81_8_,\t\t\t\t\tconsignees5_.CREATED_BY as CREATED21_81_8_,\t\t\t\t\tconsignees5_.LAST_UPDATED as 
LAST22_81_8_,\t\t\t\t\tconsignees5_.LAST_UPDATED_BY as LAST23_81_8_,\t\t\t\t\tconsignees5_.LAST_CHECKED_BY as LAST24_81_8_,\t\t\t\t\tconsignees5_.LAST_MAKED as LAST25_81_8_,\t\t\t\t\tconsignees5_.MAKER_CHECKER_STATUS as MAKER26_81_8_,\t\t\t\t\tconsignees5_.SHADOW_ID as SHADOW27_81_8_,\t\t\t\t\t--(select now()) as formula74_8_,\t\t\t\t\tconsignees6_.MOD_ID as MOD2_81_9_,\t\t\t\t\tconsignees6_.COUNTRIES_ID as COUNTRIES3_81_9_,\t\t\t\t\tconsignees6_.CITIES_ID as CITIES4_81_9_,\t\t\t\t\tconsignees6_.REGIONS_ID as REGIONS5_81_9_,\t\t\t\t\tconsignees6_.SHORT_NAME as SHORT6_81_9_,\t\t\t\t\tconsignees6_.IS_COUNTERPARTY as IS7_81_9_,\t\t\t\t\tconsignees6_.NAME as NAME81_9_,\t\t\t\t\tconsignees6_.AIRPORTS_ID as AIRPORTS9_81_9_,\t\t\t\t\tconsignees6_.ADDRESS1 as ADDRESS10_81_9_,\t\t\t\t\tconsignees6_.ADDRESS2 as ADDRESS11_81_9_,\t\t\t\t\tconsignees6_.ADDRESS3 as ADDRESS12_81_9_,\t\t\t\t\tconsignees6_.ADDRESS4 as ADDRESS13_81_9_,\t\t\t\t\tconsignees6_.AWB_SPECIAL_CLAUSE as AWB14_81_9_,\t\t\t\t\tconsignees6_.ISSUING_CARRIER_AGENT_NAME as ISSUING15_81_9_,\t\t\t\t\tconsignees6_.AGENT_ADDRESS1 as AGENT16_81_9_,\t\t\t\t\tconsignees6_.AGENT_ADDRESS2 as AGENT17_81_9_,\t\t\t\t\tconsignees6_.POSTAL_CODE as POSTAL18_81_9_,\t\t\t\t\tconsignees6_.IS_DELETED as IS19_81_9_,\t\t\t\t\tconsignees6_.CREATED as CREATED81_9_,\t\t\t\t\tconsignees6_.CREATED_BY as CREATED21_81_9_,\t\t\t\t\tconsignees6_.LAST_UPDATED as LAST22_81_9_,\t\t\t\t\tconsignees6_.LAST_UPDATED_BY as LAST23_81_9_,\t\t\t\t\tconsignees6_.LAST_CHECKED_BY as LAST24_81_9_,\t\t\t\t\tconsignees6_.LAST_MAKED as LAST25_81_9_,\t\t\t\t\tconsignees6_.MAKER_CHECKER_STATUS as MAKER26_81_9_,\t\t\t\t\tconsignees6_.SHADOW_ID as SHADOW27_81_9_,\t\t\t\t\t--(select now()) as formula74_9_,\t\t\t\t\tshipmentty4_.MOD_ID as MOD2_8_10_,\t\t\t\t\tshipmentty4_.CODE as CODE8_10_,\t\t\t\t\tshipmentty4_.NAME as NAME8_10_,\t\t\t\t\tshipmentty4_.REGIONS_ID as REGIONS5_8_10_,\t\t\t\t\tshipmentty4_.IS_DELETED as IS6_8_10_,\t\t\t\t\tshipmentty4_.CREATED as CREATED8_10_,\t\t\t\t\tshipmentty4_.CREATED_BY as CREATED8_8_10_,\t\t\t\t\tshipmentty4_.LAST_UPDATED as LAST9_8_10_,\t\t\t\t\tshipmentty4_.LAST_UPDATED_BY as LAST10_8_10_,\t\t\t\t\tshipmentty4_.LAST_CHECKED_BY as LAST11_8_10_,\t\t\t\t\tshipmentty4_.LAST_MAKED as LAST12_8_10_,\t\t\t\t\tshipmentty4_.MAKER_CHECKER_STATUS as MAKER13_8_10_,\t\t\t\t\tshipmentty4_.SHADOW_ID as SHADOW14_8_10_,\t\t\t\t\t--(select now()) as formula6_10_,\t\t\t\t\tshipmentsc2_.MOD_ID as MOD2_78_11_,\t\t\t\t\tshipmentsc2_.CARRIER_ID as CARRIER3_78_11_,\t\t\t\t\tshipmentsc2_.ORIGIN_AIRPORTS_ID as ORIGIN4_78_11_,\t\t\t\t\tshipmentsc2_.DEST_AIRPORTS_ID as DEST5_78_11_,\t\t\t\t\tshipmentsc2_.SCHEDULE as SCHEDULE78_11_,\t\t\t\t\tshipmentsc2_.ARRIVAL_DATE as ARRIVAL7_78_11_,\t\t\t\t\tshipmentsc2_.EST_TIME_DEPARTURE as EST8_78_11_,\t\t\t\t\tshipmentsc2_.EST_TIME_ARRIVAL as EST9_78_11_,\t\t\t\t\tshipmentsc2_.ROUTE_LEG_SEQ_NO as ROUTE10_78_11_,\t\t\t\t\tshipmentsc2_.CUTOFF_HOURS_BEFORE_DEPARTURE as CUTOFF11_78_11_,\t\t\t\t\tshipmentsc2_.AVAILABLE_IN_A_WEEK as AVAILABLE12_78_11_,\t\t\t\t\tshipmentsc2_.REMARKS as REMARKS78_11_,\t\t\t\t\tshipmentsc2_.STATUS as STATUS78_11_,\t\t\t\t\tshipmentsc2_.REGION_ID as REGION15_78_11_,\t\t\t\t\tshipmentsc2_.IS_DELETED as IS16_78_11_,\t\t\t\t\tshipmentsc2_.CREATED as CREATED78_11_,\t\t\t\t\tshipmentsc2_.CREATED_BY as CREATED18_78_11_,\t\t\t\t\tshipmentsc2_.LAST_UPDATED as LAST19_78_11_,\t\t\t\t\tshipmentsc2_.LAST_UPDATED_BY as LAST20_78_11_,\t\t\t\t\tshipmentsc2_.LAST_CHECKED_BY as LAST21_78_11_,\t\t\t\t\tshipmentsc2_.LAST_MAKED as 
LAST22_78_11_,\t\t\t\t\tshipmentsc2_.MAKER_CHECKER_STATUS as MAKER23_78_11_,\t\t\t\t\tshipmentsc2_.SHADOW_ID as SHADOW24_78_11_,\t\t\t\t\t--(select now()) as formula71_11_,\t\t\t\t\tshipmentro1_.SHIPMENT_RECORD_ID as SHIPMENT5___,\t\t\t\t\tshipmentro1_.FIN_ID as FIN1___\tfrom TBLS_SHIPMENT_RECORDS shipmentre0_\t\t\t inner join TBLS_SHIPMENT_RECORD_ROUTING shipmentro1_ on shipmentre0_.FIN_ID = shipmentro1_.SHIPMENT_RECORD_ID\t\t\t inner join TBLS_SHIPMENT_SCHEDULES shipmentsc2_ on shipmentro1_.SHIPMENT_SCHEDULE_ID = shipmentsc2_.FIN_ID\t\t\t inner join TBLS_CARRIERS carriers3_ on shipmentsc2_.CARRIER_ID = carriers3_.FIN_ID\t\t\t inner join TBLS_SHIPMENT_TYPES shipmentty4_ on shipmentre0_.SHIPMENT_TYPE_ID = shipmentty4_.FIN_ID\t\t\t inner join TBLS_CONSIGNEES consignees5_ on shipmentre0_.SHIPPER_ID = consignees5_.FIN_ID\t\t\t inner join TBLS_CONSIGNEES consignees6_ on shipmentre0_.CONSIGNEES_ID = consignees6_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst7_ on shipmentre0_.SHIPMENT_STATUS_ID = workflowst7_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst8_ on shipmentre0_.SHIPMENT_CHARGE_STATUS = workflowst8_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst9_ on shipmentre0_.SHIPMENT_DOCUMENT_STATUS = workflowst9_.FIN_ID\t\t\t inner join TBLS_WORKFLOW_STATES workflowst10_ on shipmentre0_.VAULT_STATUS_ID = workflowst10_.FIN_ID\t\t\t left outer join TBLS_SHIPMENT_METHODS shipmentme11_ on shipmentre0_.SHIPMENT_METHOD_ID = shipmentme11_.FIN_ID\t\t\t left outer join TBLS_BANK_NOTES_DEALS_LEGS deallegs12_ on shipmentre0_.FIN_ID = deallegs12_.SHIPMENT_RECORDS_ID\twhere (shipmentro1_.LEG_NO = (select min(shipmentro13_.LEG_NO)\t\t\t\t\t\t\t\t from TBLS_SHIPMENT_RECORD_ROUTING shipmentro13_\t\t\t\t\t\t\t\t where shipmentre0_.FIN_ID = shipmentro13_.SHIPMENT_RECORD_ID\t\t\t\t\t\t\t\t\tand ((shipmentro13_.IS_DELETED = 'N'))))\t and (shipmentre0_.IS_DELETED = 'N')\t and (TO_CHAR(shipmentro1_.ARRIVAL_DATE, 'YYYY-MM-DD') <= '2019-08-29')\torder by shipmentre0_.SHIPMENT_DATE \tlimit 25\t;On Mon, Sep 9, 2019 at 2:00 PM yash mehta <[email protected]> wrote:We have a query that takes 1min to execute in postgres 10.6 and the same executes in 4 sec in Oracle database. The query is doing 'select distinct'. If I add a 'group by' clause, performance in postgres improves significantly and fetches results in 2 sec (better than oracle). But unfortunately, we cannot modify the query. Could you please suggest a way to improve performance in Postgres without modifying the query. Original condition: time taken 1min\nSort Method: external merge Disk: 90656kB\n \nAfter removing distinct from query: time taken 2sec\nSort Method: top-N heapsort Memory: 201kB\n \nAfter increasing work_mem to 180MB; it takes 20sec\nSort Method: quicksort Memory: 172409kB\n \nSELECT\r\n* FROM pg_stat_statements ORDER BY total_time DESC limit 1;-[\r\nRECORD 1\r\n]-------+-----------------------------------------------------------------------------------------------------------------------------------------userid \r\n| 174862dbid \r\n| 174861queryid \r\n| 1469376470query \r\n| <query is too long. 
It selects around 300 columns>calls \r\n| 1total_time \r\n| 59469.972661min_time \r\n| 59469.972661max_time \r\n| 59469.972661mean_time \r\n| 59469.972661stddev_time \r\n| 0rows \r\n| 25shared_blks_hit \r\n| 27436shared_blks_read |\r\n2542shared_blks_dirtied\r\n| 0shared_blks_written\r\n| 0local_blks_hit \r\n| 0local_blks_read \r\n| 0local_blks_dirtied \r\n| 0local_blks_written \r\n| 0temp_blks_read \r\n| 257temp_blks_written | 11333blk_read_time \r\n| 0\nblk_write_time \r\n| 0IMO, an explain analyze of the query would be useful in order for people to help you.e.g. https://explain.depesz.comRegards,Flo",
"msg_date": "Mon, 9 Sep 2019 15:59:53 +0530",
"msg_from": "yash mehta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
{
"msg_contents": "On Mon, Sep 09, 2019 at 02:00:01PM +0530, yash mehta wrote:\n> We have a query that takes 1min to execute in postgres 10.6 and the same\n> executes in 4 sec in Oracle database. The query is doing 'select distinct'.\n> If I add a 'group by' clause, performance in postgres improves\n> significantly and fetches results in 2 sec (better than oracle). But\n> unfortunately, we cannot modify the query. Could you please suggest a way\n> to improve performance in Postgres without modifying the query.\n\nNot sure it helps, but I remember this:\nhttps://www.postgresql.org/message-id/CAKJS1f9q0j3BgMUsDbtf9%3DecfVLnqvkYB44MXj0gpVuamcN8Xw%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 9 Sep 2019 05:39:30 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
{
"msg_contents": "There are few things to consider:\n- you don't need to use distinct on all columns (and therefore sort all\ncolumns)\n- you should try to sort in memory, better than on-disk\n- it seems that the planner doesn't predict the good number of rows\n\nRegards,\nFlorian\n\nOn Mon, Sep 9, 2019 at 12:46 PM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Sep 09, 2019 at 02:00:01PM +0530, yash mehta wrote:\n> > We have a query that takes 1min to execute in postgres 10.6 and the same\n> > executes in 4 sec in Oracle database. The query is doing 'select\n> distinct'.\n> > If I add a 'group by' clause, performance in postgres improves\n> > significantly and fetches results in 2 sec (better than oracle). But\n> > unfortunately, we cannot modify the query. Could you please suggest a way\n> > to improve performance in Postgres without modifying the query.\n>\n> Not sure it helps, but I remember this:\n>\n> https://www.postgresql.org/message-id/CAKJS1f9q0j3BgMUsDbtf9%3DecfVLnqvkYB44MXj0gpVuamcN8Xw%40mail.gmail.com\n>\n>\n>\n\nThere are few things to consider:- you don't need to use distinct on all columns (and therefore sort all columns)- you should try to sort in memory, better than on-disk- it seems that the planner doesn't predict the good number of rowsRegards,FlorianOn Mon, Sep 9, 2019 at 12:46 PM Justin Pryzby <[email protected]> wrote:On Mon, Sep 09, 2019 at 02:00:01PM +0530, yash mehta wrote:\n> We have a query that takes 1min to execute in postgres 10.6 and the same\n> executes in 4 sec in Oracle database. The query is doing 'select distinct'.\n> If I add a 'group by' clause, performance in postgres improves\n> significantly and fetches results in 2 sec (better than oracle). But\n> unfortunately, we cannot modify the query. Could you please suggest a way\n> to improve performance in Postgres without modifying the query.\n\nNot sure it helps, but I remember this:\nhttps://www.postgresql.org/message-id/CAKJS1f9q0j3BgMUsDbtf9%3DecfVLnqvkYB44MXj0gpVuamcN8Xw%40mail.gmail.com",
"msg_date": "Mon, 9 Sep 2019 15:16:10 +0200",
"msg_from": "Flo Rance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
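A throwaway sketch of the in-memory-sort and row-estimate points above, against a hypothetical demo table rather than the original schema; the 180MB value only mirrors what was already tried in this thread, and 1MB is purely illustrative:

    -- Hypothetical demo table; not part of the original schema.
    CREATE TEMP TABLE distinct_demo AS
    SELECT g AS id, repeat('x', 200) || g AS payload
    FROM generate_series(1, 100000) AS g;
    ANALYZE distinct_demo;   -- keeps the planner's row estimates accurate

    SET work_mem = '1MB';
    -- Typically reports: Sort Method: external merge  Disk: ...
    EXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT id, payload FROM distinct_demo;

    SET work_mem = '180MB';
    -- With more memory the same step can sort in memory or switch to hashing.
    EXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT id, payload FROM distinct_demo;
    RESET work_mem;

Comparing the two EXPLAIN outputs shows whether a given work_mem is enough to keep the de-duplication step off disk.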
{
"msg_contents": "If you can't modify the query, then there is nothing more to be done to\noptimize the execution afaik. Distinct is much slower than group by in\nscenarios like this with many columns. You already identified the disk sort\nand increased work mem to get it faster by 3x. There are not any other\ntricks of which I am aware.\n\nIf you can't modify the query, then there is nothing more to be done to optimize the execution afaik. Distinct is much slower than group by in scenarios like this with many columns. You already identified the disk sort and increased work mem to get it faster by 3x. There are not any other tricks of which I am aware.",
"msg_date": "Mon, 9 Sep 2019 09:39:27 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
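For readers following the thread, the DISTINCT-versus-GROUP-BY equivalence being discussed looks like this in isolation, sketched over just two columns of tbls_shipment_records rather than the full ~300-column select list:

    -- Both statements return the same set of rows; the GROUP BY spelling is
    -- the one the planner handled better in this thread.
    SELECT DISTINCT fin_id, shipment_date
    FROM tbls_shipment_records;

    SELECT fin_id, shipment_date
    FROM tbls_shipment_records
    GROUP BY fin_id, shipment_date;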
{
"msg_contents": "Hi Michael/Justin/Flo,\n\nThank you all for your assistance. As Michael said, looks like there are no\nmore tricks left.\n\nOn Mon, Sep 9, 2019 at 9:09 PM Michael Lewis <[email protected]> wrote:\n\n> If you can't modify the query, then there is nothing more to be done to\n> optimize the execution afaik. Distinct is much slower than group by in\n> scenarios like this with many columns. You already identified the disk sort\n> and increased work mem to get it faster by 3x. There are not any other\n> tricks of which I am aware.\n>\n\nHi Michael/Justin/Flo,Thank you all for your assistance. As Michael said, looks like there are no more tricks left. On Mon, Sep 9, 2019 at 9:09 PM Michael Lewis <[email protected]> wrote:If you can't modify the query, then there is nothing more to be done to optimize the execution afaik. Distinct is much slower than group by in scenarios like this with many columns. You already identified the disk sort and increased work mem to get it faster by 3x. There are not any other tricks of which I am aware.",
"msg_date": "Tue, 10 Sep 2019 10:23:01 +0530",
"msg_from": "yash mehta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
{
"msg_contents": "On Tue, Sep 10, 2019 at 12:53 AM yash mehta <[email protected]> wrote:\n\n> Hi Michael/Justin/Flo,\n>\n> Thank you all for your assistance. As Michael said, looks like there are\n> no more tricks left.\n>\n> On Mon, Sep 9, 2019 at 9:09 PM Michael Lewis <[email protected]> wrote:\n>\n>> If you can't modify the query, then there is nothing more to be done to\n>> optimize the execution afaik. Distinct is much slower than group by in\n>> scenarios like this with many columns. You already identified the disk sort\n>> and increased work mem to get it faster by 3x. There are not any other\n>> tricks of which I am aware.\n>>\n>\nCould you put a view in between the real table and the query that does the\ngroup by ? (since you can't change the query)\nI'm wondering if the sort/processing time would be faster when that\ndistinct is invoked if the rows are already distinct.\n\nOn Tue, Sep 10, 2019 at 12:53 AM yash mehta <[email protected]> wrote:Hi Michael/Justin/Flo,Thank you all for your assistance. As Michael said, looks like there are no more tricks left. On Mon, Sep 9, 2019 at 9:09 PM Michael Lewis <[email protected]> wrote:If you can't modify the query, then there is nothing more to be done to optimize the execution afaik. Distinct is much slower than group by in scenarios like this with many columns. You already identified the disk sort and increased work mem to get it faster by 3x. There are not any other tricks of which I am aware.Could you put a view in between the real table and the query that does the group by ? (since you can't change the query)I'm wondering if the sort/processing time would be faster when that distinct is invoked if the rows are already distinct.",
"msg_date": "Tue, 10 Sep 2019 07:56:54 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
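A hedged sketch of that view idea, as a rename-in-place variant and with purely illustrative object names (the search_path-shadowing alternative is ruled out later in the thread because the application's queries schema-qualify their tables):

    -- Illustrative names only; DML against the view would need INSTEAD OF
    -- triggers or rules, since a GROUP BY view is not auto-updatable.
    ALTER TABLE orders RENAME TO orders_base;

    CREATE VIEW orders AS
    SELECT order_id, customer_id, status
    FROM orders_base
    GROUP BY order_id, customer_id, status;

    -- The unchanged application query "SELECT DISTINCT ... FROM orders"
    -- now reads rows that are already de-duplicated.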
{
"msg_contents": "On Mon, Sep 9, 2019 at 3:55 AM yash mehta <[email protected]> wrote:\n>\n> We have a query that takes 1min to execute in postgres 10.6 and the same executes in 4 sec in Oracle database. The query is doing 'select distinct'. If I add a 'group by' clause, performance in postgres improves significantly and fetches results in 2 sec (better than oracle). But unfortunately, we cannot modify the query. Could you please suggest a way to improve performance in Postgres without modifying the query.\n\nWell, here's the bad news. Postgres doesn't optimize this specific\nformulation as well as oracle does. Normally tweaking the query along\nwith some creativity would get the expected result; it's pretty rare\nthat I can't coerce the planner to do something fairly optimally. I'm\nguessing this is an Oracle conversion app, and we do not have the\nability to change the underlying source code? Can you elaborate why\nnot?\n\nIn lieu of changing the query in the application, we have high level\nstrategies to consider.\n*) Eat the 20 seconds, and gripe to your oracle buddies (they will\nappreciate this)\n\n*) Mess around with with planner variables to get a better plan.\nUnfortunately, since we can't do tricks like SET before running the\nquery, the changes will be global, and I'm not expecting this to bear\nfruit, unless we can have this query be separated from other queries\nat the connection level (we might be able to intervene on connect and\nset influential non-global planner settings there)\n\n*) Experiment with pg11/pg12 to see if upcoming versions can handle\nthis strategy better. pg12 is in beta obviously, but an upgrade\nstrategy would be the easiest out.\n\n*) Attempt to intervene with views. I think this is out, since all\nthe tables are schema qualified. To avoid a global change, the typical\nstrategy is to tuck some views into a private schema and manipulate\nsearch_path to have them resolve first, but that won't work if you\ndon't have control of the query string.\n\n*) Try to change the query string anyways. Say, this is a compiled\napplication for which you don't have the code, we might be able to\nlocate the query text within the compiled binary and modify it. This\nis actually a pretty effective trick (although in many scenarios we'd\nwant the query string to be the same length as before but you have\nplenty of whitespace to play with) although in certain\nlegal/regulatory contexts we might not be able to do it.\n\n*) Hack some C to adjust the query in flight. This is *SUPER* hacky,\nbut let's say that the application was dynamically linked against the\nlibpq driver, but with some C change and a fearless attitude we could\nadjust the query after it leaves the application but before it hits\nthe database. Other candidate interventions might be in the database\nitself or in pgbouncer. We could also do this in jdbc if your\napplication connects via that driver. This is would be 'absolutely\nlast resort' tactics, but sometimes you simply must find a solution.\n\nmerlin\n\n\n",
"msg_date": "Wed, 11 Sep 2019 10:54:01 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
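The "intervene on connect" option can be done without touching the application if its connections use a dedicated role, since role-level settings are applied automatically at login; a sketch, with app_user as a hypothetical role name:

    -- Stays local to this application's connections; not a global change.
    ALTER ROLE app_user SET work_mem = '256MB';
    -- Other planner GUCs can be pinned the same way if a better plan needs
    -- them, e.g.: ALTER ROLE app_user SET random_page_cost = 1.1;

    -- To undo:
    ALTER ROLE app_user RESET work_mem;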
{
"msg_contents": "I think Merlin has outlined pretty much all the options and very neatly.\n(As an asides Merlin could you possibly elaborate on the \"C Hack\" how that\nmight be accomplished.)\n\nTo OP, I am curious if the performance changes were the query rewritten\nsuch that all timestamp columns were listed first in the selection. I\nunderstand it might not be feasible to make this change in your real\napplication without breaking the contract.\n\nRegards\nDinesh\n\nOn Wed, Sep 11, 2019 at 8:54 AM Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Sep 9, 2019 at 3:55 AM yash mehta <[email protected]> wrote:\n> >\n> > We have a query that takes 1min to execute in postgres 10.6 and the same\n> executes in 4 sec in Oracle database. The query is doing 'select distinct'.\n> If I add a 'group by' clause, performance in postgres improves\n> significantly and fetches results in 2 sec (better than oracle). But\n> unfortunately, we cannot modify the query. Could you please suggest a way\n> to improve performance in Postgres without modifying the query.\n>\n> Well, here's the bad news. Postgres doesn't optimize this specific\n> formulation as well as oracle does. Normally tweaking the query along\n> with some creativity would get the expected result; it's pretty rare\n> that I can't coerce the planner to do something fairly optimally. I'm\n> guessing this is an Oracle conversion app, and we do not have the\n> ability to change the underlying source code? Can you elaborate why\n> not?\n>\n> In lieu of changing the query in the application, we have high level\n> strategies to consider.\n> *) Eat the 20 seconds, and gripe to your oracle buddies (they will\n> appreciate this)\n>\n> *) Mess around with with planner variables to get a better plan.\n> Unfortunately, since we can't do tricks like SET before running the\n> query, the changes will be global, and I'm not expecting this to bear\n> fruit, unless we can have this query be separated from other queries\n> at the connection level (we might be able to intervene on connect and\n> set influential non-global planner settings there)\n>\n> *) Experiment with pg11/pg12 to see if upcoming versions can handle\n> this strategy better. pg12 is in beta obviously, but an upgrade\n> strategy would be the easiest out.\n>\n> *) Attempt to intervene with views. I think this is out, since all\n> the tables are schema qualified. To avoid a global change, the typical\n> strategy is to tuck some views into a private schema and manipulate\n> search_path to have them resolve first, but that won't work if you\n> don't have control of the query string.\n>\n> *) Try to change the query string anyways. Say, this is a compiled\n> application for which you don't have the code, we might be able to\n> locate the query text within the compiled binary and modify it. This\n> is actually a pretty effective trick (although in many scenarios we'd\n> want the query string to be the same length as before but you have\n> plenty of whitespace to play with) although in certain\n> legal/regulatory contexts we might not be able to do it.\n>\n> *) Hack some C to adjust the query in flight. This is *SUPER* hacky,\n> but let's say that the application was dynamically linked against the\n> libpq driver, but with some C change and a fearless attitude we could\n> adjust the query after it leaves the application but before it hits\n> the database. Other candidate interventions might be in the database\n> itself or in pgbouncer. 
We could also do this in jdbc if your\n> application connects via that driver. This is would be 'absolutely\n> last resort' tactics, but sometimes you simply must find a solution.\n>\n> merlin\n>\n>\n>\n\nI think Merlin has outlined pretty much all the options and very neatly. (As an asides Merlin could you possibly elaborate on the \"C Hack\" how that might be accomplished.)To OP, I am curious if the performance changes were the query rewritten such that all timestamp columns were listed first in the selection. I understand it might not be feasible to make this change in your real application without breaking the contract.RegardsDineshOn Wed, Sep 11, 2019 at 8:54 AM Merlin Moncure <[email protected]> wrote:On Mon, Sep 9, 2019 at 3:55 AM yash mehta <[email protected]> wrote:\n>\n> We have a query that takes 1min to execute in postgres 10.6 and the same executes in 4 sec in Oracle database. The query is doing 'select distinct'. If I add a 'group by' clause, performance in postgres improves significantly and fetches results in 2 sec (better than oracle). But unfortunately, we cannot modify the query. Could you please suggest a way to improve performance in Postgres without modifying the query.\n\nWell, here's the bad news. Postgres doesn't optimize this specific\nformulation as well as oracle does. Normally tweaking the query along\nwith some creativity would get the expected result; it's pretty rare\nthat I can't coerce the planner to do something fairly optimally. I'm\nguessing this is an Oracle conversion app, and we do not have the\nability to change the underlying source code? Can you elaborate why\nnot?\n\nIn lieu of changing the query in the application, we have high level\nstrategies to consider.\n*) Eat the 20 seconds, and gripe to your oracle buddies (they will\nappreciate this)\n\n*) Mess around with with planner variables to get a better plan.\nUnfortunately, since we can't do tricks like SET before running the\nquery, the changes will be global, and I'm not expecting this to bear\nfruit, unless we can have this query be separated from other queries\nat the connection level (we might be able to intervene on connect and\nset influential non-global planner settings there)\n\n*) Experiment with pg11/pg12 to see if upcoming versions can handle\nthis strategy better. pg12 is in beta obviously, but an upgrade\nstrategy would be the easiest out.\n\n*) Attempt to intervene with views. I think this is out, since all\nthe tables are schema qualified. To avoid a global change, the typical\nstrategy is to tuck some views into a private schema and manipulate\nsearch_path to have them resolve first, but that won't work if you\ndon't have control of the query string.\n\n*) Try to change the query string anyways. Say, this is a compiled\napplication for which you don't have the code, we might be able to\nlocate the query text within the compiled binary and modify it. This\nis actually a pretty effective trick (although in many scenarios we'd\nwant the query string to be the same length as before but you have\nplenty of whitespace to play with) although in certain\nlegal/regulatory contexts we might not be able to do it.\n\n*) Hack some C to adjust the query in flight. This is *SUPER* hacky,\nbut let's say that the application was dynamically linked against the\nlibpq driver, but with some C change and a fearless attitude we could\nadjust the query after it leaves the application but before it hits\nthe database. Other candidate interventions might be in the database\nitself or in pgbouncer. 
We could also do this in jdbc if your\napplication connects via that driver. This is would be 'absolutely\nlast resort' tactics, but sometimes you simply must find a solution.\n\nmerlin",
"msg_date": "Wed, 11 Sep 2019 09:38:19 -0700",
"msg_from": "Dinesh Somani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 12:38 PM Dinesh Somani <[email protected]> wrote:\n\n> I think Merlin has outlined pretty much all the options and very neatly.\n> (As an asides Merlin could you possibly elaborate on the \"C Hack\" how that\n> might be accomplished.)\n>\n> To OP, I am curious if the performance changes were the query rewritten\n> such that all timestamp columns were listed first in the selection. I\n> understand it might not be feasible to make this change in your real\n> application without breaking the contract.\n>\n> Regards\n> Dinesh\n>\n\nIt looks like AWS has a pgbouncer query re-writer service that might be a\nstarting point:\nhttps://aws.amazon.com/blogs/big-data/query-routing-and-rewrite-introducing-pgbouncer-rr-for-amazon-redshift-and-postgresql/\n\nI've never used it.\n\nOn Wed, Sep 11, 2019 at 12:38 PM Dinesh Somani <[email protected]> wrote:I think Merlin has outlined pretty much all the options and very neatly. (As an asides Merlin could you possibly elaborate on the \"C Hack\" how that might be accomplished.)To OP, I am curious if the performance changes were the query rewritten such that all timestamp columns were listed first in the selection. I understand it might not be feasible to make this change in your real application without breaking the contract.RegardsDineshIt looks like AWS has a pgbouncer query re-writer service that might be a starting point:https://aws.amazon.com/blogs/big-data/query-routing-and-rewrite-introducing-pgbouncer-rr-for-amazon-redshift-and-postgresql/I've never used it.",
"msg_date": "Wed, 11 Sep 2019 13:57:25 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 12:57 PM Rick Otten <[email protected]> wrote:\n>\n> On Wed, Sep 11, 2019 at 12:38 PM Dinesh Somani <[email protected]> wrote:\n>>\n>> I think Merlin has outlined pretty much all the options and very neatly. (As an asides Merlin could you possibly elaborate on the \"C Hack\" how that might be accomplished.)\n>>\n>> To OP, I am curious if the performance changes were the query rewritten such that all timestamp columns were listed first in the selection. I understand it might not be feasible to make this change in your real application without breaking the contract.\n>>\n>> Regards\n>> Dinesh\n>\n>\n> It looks like AWS has a pgbouncer query re-writer service that might be a starting point:\n> https://aws.amazon.com/blogs/big-data/query-routing-and-rewrite-introducing-pgbouncer-rr-for-amazon-redshift-and-postgresql/\n>\n> I've never used it.\n\nYeah, I haven't either. Side note: this system also provides the\nability to load balance queries across distributed system; that's a\nhuge benefit. Say you have master server and five replica, it seems\nthat you can round robin the read only queries using this system or\nother neat little tricks. I would be cautious about pgbouncer-rr\nbecoming the bottleneck itself for certain workloads though.\n\nAnyways, a 'hack' strategy on linux might be to:\n*) Check and verify that libpq is dynamically linked (which is almost\nalwasys the case). ldd /your/application should give the dynamic\nlibrary dependency to libpq.\n*) Grab postgres sources for same version as production\n*) configure\n*) switch to interfaces/libpq\n*) figure out which interface routine(s) being called into. The\napproach will be slightly different if the query is\nprepared/paramterized or not. Assuming it isn't, you'd have to modify\nthe PQsendQuery routine to check for the signature (say, with\nstrcmp), create a new string, and have that be put instead of the\nincoming const char* query. The parameterized versions\n(PQsendQueryParams) would be easier since you'd be able to use a\nstatic string rather than parsing it out.\n*) Build the library, do some testing with hand written C program\n*) inject the modified libpq with LD_LIBRARY_PATH\n\nIt must be stated that some people might read this and be compelled to\nbarf :-) -- it's pretty gross. Having said that, sometimes you have to\nfind a solution. I would definitely try the pgbouncer-rr approach\nfirst however; this has a *lot* of potential benefit.\n\nmerlin\n\n\n",
"msg_date": "Thu, 12 Sep 2019 09:19:11 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
},
{
"msg_contents": "Thanks a lot, Merlin.\n\nYes, it could appear kinda gross to some ;-)\n\nOn Thu, Sep 12, 2019 at 7:19 AM Merlin Moncure <[email protected]> wrote:\n\n> On Wed, Sep 11, 2019 at 12:57 PM Rick Otten <[email protected]>\n> wrote:\n> >\n> > On Wed, Sep 11, 2019 at 12:38 PM Dinesh Somani <[email protected]>\n> wrote:\n> >>\n> >> I think Merlin has outlined pretty much all the options and very\n> neatly. (As an asides Merlin could you possibly elaborate on the \"C Hack\"\n> how that might be accomplished.)\n> >>\n> >> To OP, I am curious if the performance changes were the query rewritten\n> such that all timestamp columns were listed first in the selection. I\n> understand it might not be feasible to make this change in your real\n> application without breaking the contract.\n> >>\n> >> Regards\n> >> Dinesh\n> >\n> >\n> > It looks like AWS has a pgbouncer query re-writer service that might be\n> a starting point:\n> >\n> https://aws.amazon.com/blogs/big-data/query-routing-and-rewrite-introducing-pgbouncer-rr-for-amazon-redshift-and-postgresql/\n> >\n> > I've never used it.\n>\n> Yeah, I haven't either. Side note: this system also provides the\n> ability to load balance queries across distributed system; that's a\n> huge benefit. Say you have master server and five replica, it seems\n> that you can round robin the read only queries using this system or\n> other neat little tricks. I would be cautious about pgbouncer-rr\n> becoming the bottleneck itself for certain workloads though.\n>\n> Anyways, a 'hack' strategy on linux might be to:\n> *) Check and verify that libpq is dynamically linked (which is almost\n> alwasys the case). ldd /your/application should give the dynamic\n> library dependency to libpq.\n> *) Grab postgres sources for same version as production\n> *) configure\n> *) switch to interfaces/libpq\n> *) figure out which interface routine(s) being called into. The\n> approach will be slightly different if the query is\n> prepared/paramterized or not. Assuming it isn't, you'd have to modify\n> the PQsendQuery routine to check for the signature (say, with\n> strcmp), create a new string, and have that be put instead of the\n> incoming const char* query. The parameterized versions\n> (PQsendQueryParams) would be easier since you'd be able to use a\n> static string rather than parsing it out.\n> *) Build the library, do some testing with hand written C program\n> *) inject the modified libpq with LD_LIBRARY_PATH\n>\n> It must be stated that some people might read this and be compelled to\n> barf :-) -- it's pretty gross. Having said that, sometimes you have to\n> find a solution. I would definitely try the pgbouncer-rr approach\n> first however; this has a *lot* of potential benefit.\n>\n> merlin\n>\n>\n>\n\nThanks a lot, Merlin.Yes, it could appear kinda gross to some ;-) On Thu, Sep 12, 2019 at 7:19 AM Merlin Moncure <[email protected]> wrote:On Wed, Sep 11, 2019 at 12:57 PM Rick Otten <[email protected]> wrote:\n>\n> On Wed, Sep 11, 2019 at 12:38 PM Dinesh Somani <[email protected]> wrote:\n>>\n>> I think Merlin has outlined pretty much all the options and very neatly. (As an asides Merlin could you possibly elaborate on the \"C Hack\" how that might be accomplished.)\n>>\n>> To OP, I am curious if the performance changes were the query rewritten such that all timestamp columns were listed first in the selection. 
I understand it might not be feasible to make this change in your real application without breaking the contract.\n>>\n>> Regards\n>> Dinesh\n>\n>\n> It looks like AWS has a pgbouncer query re-writer service that might be a starting point:\n> https://aws.amazon.com/blogs/big-data/query-routing-and-rewrite-introducing-pgbouncer-rr-for-amazon-redshift-and-postgresql/\n>\n> I've never used it.\n\nYeah, I haven't either. Side note: this system also provides the\nability to load balance queries across distributed system; that's a\nhuge benefit. Say you have master server and five replica, it seems\nthat you can round robin the read only queries using this system or\nother neat little tricks. I would be cautious about pgbouncer-rr\nbecoming the bottleneck itself for certain workloads though.\n\nAnyways, a 'hack' strategy on linux might be to:\n*) Check and verify that libpq is dynamically linked (which is almost\nalwasys the case). ldd /your/application should give the dynamic\nlibrary dependency to libpq.\n*) Grab postgres sources for same version as production\n*) configure\n*) switch to interfaces/libpq\n*) figure out which interface routine(s) being called into. The\napproach will be slightly different if the query is\nprepared/paramterized or not. Assuming it isn't, you'd have to modify\nthe PQsendQuery routine to check for the signature (say, with\nstrcmp), create a new string, and have that be put instead of the\nincoming const char* query. The parameterized versions\n(PQsendQueryParams) would be easier since you'd be able to use a\nstatic string rather than parsing it out.\n*) Build the library, do some testing with hand written C program\n*) inject the modified libpq with LD_LIBRARY_PATH\n\nIt must be stated that some people might read this and be compelled to\nbarf :-) -- it's pretty gross. Having said that, sometimes you have to\nfind a solution. I would definitely try the pgbouncer-rr approach\nfirst however; this has a *lot* of potential benefit.\n\nmerlin",
"msg_date": "Thu, 12 Sep 2019 09:25:39 -0700",
"msg_from": "Dinesh Somani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct runs slow on pg 10.6"
}
] |
[
{
"msg_contents": "Hi,\nAs part of one query tuning, it was observed that query execution time was\nmore even though cost was decreased.\n\n*Initial Query :* Nested Loop Left Join (cost=159.88..*38530.02* rows=1\nwidth=8) (actual time=0.387..*40.766* rows=300 loops=1)\n\n*Changed Query :* Nested Loop Anti Join (cost=171.66..*5961.96* rows=1\nwidth=8) (actual time=0.921..*110.862* rows=300 loops=1)\n\nMay i know the reason behind in increase in response time, even though cost\nwas reduced by 6.4 times.\n\nDetailed execution plans can be found below along with the queries\n\n*Initial Query*\n\n=> explain(analyze,buffers,costs) SELECT ku.user_id\n> FROM konotor_user ku\n> LEFT JOIN agent_details ad\n> ON ku.user_id = ad.user_id\n> WHERE ku.app_id = '12132818272260'\n> AND (ku.user_type = 1 OR ku.user_type = 2)\n> AND (ad.deleted isnull OR ad.deleted = 0)\n> AND ku.user_id NOT IN (\n> SELECT gu.user_id\n> FROM group_user gu\n> INNER JOIN groups\n> ON gu.group_id = groups.group_id\n> AND app_id = ku.app_id\n> WHERE gu.user_id = ku.user_id\n> AND groups.app_id = ku.app_id\n> AND groups.deleted = false);\n\n\n\n\nQUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=159.88..38530.02 rows=1 width=8) (actual\ntime=0.387..40.766 rows=300 loops=1)\n Filter: ((ad.deleted IS NULL) OR (ad.deleted = 0))\n Buffers: shared hit=52138\n -> Bitmap Heap Scan on konotor_user ku (cost=159.73..38383.64 rows=712\nwidth=8) (actual time=0.383..40.221 rows=300 loops=1)\n Recheck Cond: (((app_id = '12132818272260'::bigint) AND (user_type\n= 1)) OR ((app_id = '12132818272260'::bigint) AND (user_type = 2)))\n Filter: (NOT (SubPlan 1))\n Rows Removed by Filter: 485\n Heap Blocks: exact=729\n Buffers: shared hit=51838\n -> BitmapOr (cost=159.73..159.73 rows=1425 width=0) (actual\ntime=0.112..0.112 rows=0 loops=1)\n Buffers: shared hit=11\n -> Bitmap Index Scan on konotor_user_app_id_user_type_idx\n (cost=0.00..88.42 rows=786 width=0) (actual time=0.009..0.009 rows=1\nloops=1)\n Index Cond: ((app_id = '12132818272260'::bigint) AND\n(user_type = 1))\n Buffers: shared hit=4\n -> Bitmap Index Scan on konotor_user_app_id_user_type_idx\n (cost=0.00..70.95 rows=639 width=0) (actual time=0.101..0.101 rows=784\nloops=1)\n Index Cond: ((app_id = '12132818272260'::bigint) AND\n(user_type = 2))\n Buffers: shared hit=7\n SubPlan 1\n -> Nested Loop (cost=0.57..45.28 rows=1 width=8) (actual\ntime=0.049..0.049 rows=1 loops=785)\n Buffers: shared hit=51098\n -> Index Scan using groups_app_id_group_id_idx on groups\n (cost=0.28..20.33 rows=3 width=8) (actual time=0.002..0.014 rows=20\nloops=785)\n Index Cond: (app_id = ku.app_id)\n Filter: (NOT deleted)\n Rows Removed by Filter: 2\n Buffers: shared hit=18888\n -> Index Only Scan using uk_groupid_userid on group_user\ngu (cost=0.29..8.30 rows=1 width=16) (actual time=0.001..0.001 rows=0\nloops=15832)\n Index Cond: ((group_id = groups.group_id) AND\n(user_id = ku.user_id))\n Heap Fetches: 455\n Buffers: shared hit=32210\n -> Index Scan using agent_details_user_id_idx on agent_details ad\n (cost=0.15..0.19 rows=1 width=10) (actual time=0.001..0.001 rows=0\nloops=300)\n Index Cond: (ku.user_id = user_id)\n Buffers: shared hit=300\n Planning time: 0.493 ms\n Execution time: 40.901 ms\n\n\n*Changed Query *\n\n=> explain(analyze,buffers,costs) SELECT ku.user_id FROM konotor_user ku\n> LEFT OUTER JOIN agent_details ad ON ku.user_id = ad.user_id LEFT 
OUTER JOIN\n> (SELECT gu.user_id\n> FROM group_user gu INNER JOIN groups ON\n> gu.group_id = groups.group_id WHERE app_id='12132818272260' AND\n> groups.deleted = false)t ON t.user_id= ku.user_id\n> WHERE ku.app_id = '12132818272260'\n> AND (ku.user_type = 1 OR ku.user_type = 2) AND\n> (ad.deleted isnull OR ad.deleted = 0) AND t.user_id is NULL;\n\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Anti Join (cost=171.66..5961.96 rows=1 width=8) (actual\ntime=0.921..110.862 rows=300 loops=1)\n Buffers: shared hit=47730\n -> Hash Left Join (cost=171.10..5730.86 rows=1 width=8) (actual\ntime=0.435..2.201 rows=785 loops=1)\n Hash Cond: (ku.user_id = ad.user_id)\n Filter: ((ad.deleted IS NULL) OR (ad.deleted = 0))\n Buffers: shared hit=743\n -> Bitmap Heap Scan on konotor_user ku (cost=160.09..5714.50\nrows=1424 width=8) (actual time=0.208..1.327 rows=785 loops=1)\n Recheck Cond: (((app_id = '12132818272260'::bigint) AND\n(user_type = 1)) OR ((app_id = '12132818272260'::bigint) AND (user_type =\n2)))\n Heap Blocks: exact=729\n Buffers: shared hit=740\n -> BitmapOr (cost=160.09..160.09 rows=1425 width=0)\n(actual time=0.116..0.116 rows=0 loops=1)\n Buffers: shared hit=11\n -> Bitmap Index Scan on\nkonotor_user_app_id_user_type_idx (cost=0.00..88.42 rows=786 width=0)\n(actual time=0.010..0.010 rows=1 loops=1)\n Index Cond: ((app_id = '12132818272260'::bigint)\nAND (user_type = 1))\n Buffers: shared hit=4\n -> Bitmap Index Scan on\nkonotor_user_app_id_user_type_idx (cost=0.00..70.95 rows=639 width=0)\n(actual time=0.105..0.105 rows=784 loops=1)\n Index Cond: ((app_id = '12132818272260'::bigint)\nAND (user_type = 2))\n Buffers: shared hit=7\n -> Hash (cost=6.56..6.56 rows=356 width=10) (actual\ntime=0.220..0.220 rows=356 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 23kB\n Buffers: shared hit=3\n -> Seq Scan on agent_details ad (cost=0.00..6.56 rows=356\nwidth=10) (actual time=0.003..0.101 rows=356 loops=1)\n Buffers: shared hit=3\n -> Nested Loop (cost=0.57..115.82 rows=1 width=8) (actual\ntime=0.138..0.138 rows=1 loops=785)\n Buffers: shared hit=46987\n -> Index Only Scan using uk_groupid_userid on group_user gu\n (cost=0.29..115.12 rows=2 width=16) (actual time=0.135..0.135 rows=1\nloops=785)\n Index Cond: (user_id = ku.user_id)\n Heap Fetches: 456\n Buffers: shared hit=45529\n -> Index Scan using groups_pkey on groups (cost=0.28..0.34\nrows=1 width=8) (actual time=0.002..0.002 rows=1 loops=486)\n Index Cond: (group_id = gu.group_id)\n Filter: ((NOT deleted) AND (app_id =\n'12132818272260'::bigint))\n Rows Removed by Filter: 0\n Buffers: shared hit=1458\n Planning time: 0.534 ms\n Execution time: 110.999 ms\n(36 rows)\n\n\n*PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by\ngcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit*\n\n\nThanks in advance for your valuable time and inputs.\n\nRegards, Amarendra\n\nHi,As part of one query tuning, it was observed that query execution time was more even though cost was decreased.Initial Query : Nested Loop Left Join (cost=159.88..38530.02 rows=1 width=8) (actual time=0.387..40.766 rows=300 loops=1)Changed Query : Nested Loop Anti Join (cost=171.66..5961.96 rows=1 width=8) (actual time=0.921..110.862 rows=300 loops=1)May i know the reason behind in increase in response time, even though cost was reduced by 6.4 times.Detailed execution plans can be found below along with the queriesInitial 
Query => explain(analyze,buffers,costs) SELECT ku.user_id FROM konotor_user ku LEFT JOIN agent_details ad ON ku.user_id = ad.user_id WHERE ku.app_id = '12132818272260' AND (ku.user_type = 1 OR ku.user_type = 2) AND (ad.deleted isnull OR ad.deleted = 0) AND ku.user_id NOT IN ( SELECT gu.user_id FROM group_user gu INNER JOIN groups ON gu.group_id = groups.group_id AND app_id = ku.app_id WHERE gu.user_id = ku.user_id AND groups.app_id = ku.app_id AND groups.deleted = false); QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop Left Join (cost=159.88..38530.02 rows=1 width=8) (actual time=0.387..40.766 rows=300 loops=1) Filter: ((ad.deleted IS NULL) OR (ad.deleted = 0)) Buffers: shared hit=52138 -> Bitmap Heap Scan on konotor_user ku (cost=159.73..38383.64 rows=712 width=8) (actual time=0.383..40.221 rows=300 loops=1) Recheck Cond: (((app_id = '12132818272260'::bigint) AND (user_type = 1)) OR ((app_id = '12132818272260'::bigint) AND (user_type = 2))) Filter: (NOT (SubPlan 1)) Rows Removed by Filter: 485 Heap Blocks: exact=729 Buffers: shared hit=51838 -> BitmapOr (cost=159.73..159.73 rows=1425 width=0) (actual time=0.112..0.112 rows=0 loops=1) Buffers: shared hit=11 -> Bitmap Index Scan on konotor_user_app_id_user_type_idx (cost=0.00..88.42 rows=786 width=0) (actual time=0.009..0.009 rows=1 loops=1) Index Cond: ((app_id = '12132818272260'::bigint) AND (user_type = 1)) Buffers: shared hit=4 -> Bitmap Index Scan on konotor_user_app_id_user_type_idx (cost=0.00..70.95 rows=639 width=0) (actual time=0.101..0.101 rows=784 loops=1) Index Cond: ((app_id = '12132818272260'::bigint) AND (user_type = 2)) Buffers: shared hit=7 SubPlan 1 -> Nested Loop (cost=0.57..45.28 rows=1 width=8) (actual time=0.049..0.049 rows=1 loops=785) Buffers: shared hit=51098 -> Index Scan using groups_app_id_group_id_idx on groups (cost=0.28..20.33 rows=3 width=8) (actual time=0.002..0.014 rows=20 loops=785) Index Cond: (app_id = ku.app_id) Filter: (NOT deleted) Rows Removed by Filter: 2 Buffers: shared hit=18888 -> Index Only Scan using uk_groupid_userid on group_user gu (cost=0.29..8.30 rows=1 width=16) (actual time=0.001..0.001 rows=0 loops=15832) Index Cond: ((group_id = groups.group_id) AND (user_id = ku.user_id)) Heap Fetches: 455 Buffers: shared hit=32210 -> Index Scan using agent_details_user_id_idx on agent_details ad (cost=0.15..0.19 rows=1 width=10) (actual time=0.001..0.001 rows=0 loops=300) Index Cond: (ku.user_id = user_id) Buffers: shared hit=300 Planning time: 0.493 ms Execution time: 40.901 msChanged Query => explain(analyze,buffers,costs) SELECT ku.user_id FROM konotor_user ku LEFT OUTER JOIN agent_details ad ON ku.user_id = ad.user_id LEFT OUTER JOIN (SELECT gu.user_id FROM group_user gu INNER JOIN groups ON gu.group_id = groups.group_id WHERE app_id='12132818272260' AND groups.deleted = false)t ON t.user_id= ku.user_id WHERE ku.app_id = '12132818272260' AND (ku.user_type = 1 OR ku.user_type = 2) AND (ad.deleted isnull OR ad.deleted = 0) AND t.user_id is NULL; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop Anti Join (cost=171.66..5961.96 rows=1 width=8) (actual time=0.921..110.862 rows=300 loops=1) Buffers: shared hit=47730 -> Hash Left Join (cost=171.10..5730.86 rows=1 width=8) (actual time=0.435..2.201 rows=785 loops=1) Hash Cond: (ku.user_id 
= ad.user_id) Filter: ((ad.deleted IS NULL) OR (ad.deleted = 0)) Buffers: shared hit=743 -> Bitmap Heap Scan on konotor_user ku (cost=160.09..5714.50 rows=1424 width=8) (actual time=0.208..1.327 rows=785 loops=1) Recheck Cond: (((app_id = '12132818272260'::bigint) AND (user_type = 1)) OR ((app_id = '12132818272260'::bigint) AND (user_type = 2))) Heap Blocks: exact=729 Buffers: shared hit=740 -> BitmapOr (cost=160.09..160.09 rows=1425 width=0) (actual time=0.116..0.116 rows=0 loops=1) Buffers: shared hit=11 -> Bitmap Index Scan on konotor_user_app_id_user_type_idx (cost=0.00..88.42 rows=786 width=0) (actual time=0.010..0.010 rows=1 loops=1) Index Cond: ((app_id = '12132818272260'::bigint) AND (user_type = 1)) Buffers: shared hit=4 -> Bitmap Index Scan on konotor_user_app_id_user_type_idx (cost=0.00..70.95 rows=639 width=0) (actual time=0.105..0.105 rows=784 loops=1) Index Cond: ((app_id = '12132818272260'::bigint) AND (user_type = 2)) Buffers: shared hit=7 -> Hash (cost=6.56..6.56 rows=356 width=10) (actual time=0.220..0.220 rows=356 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 23kB Buffers: shared hit=3 -> Seq Scan on agent_details ad (cost=0.00..6.56 rows=356 width=10) (actual time=0.003..0.101 rows=356 loops=1) Buffers: shared hit=3 -> Nested Loop (cost=0.57..115.82 rows=1 width=8) (actual time=0.138..0.138 rows=1 loops=785) Buffers: shared hit=46987 -> Index Only Scan using uk_groupid_userid on group_user gu (cost=0.29..115.12 rows=2 width=16) (actual time=0.135..0.135 rows=1 loops=785) Index Cond: (user_id = ku.user_id) Heap Fetches: 456 Buffers: shared hit=45529 -> Index Scan using groups_pkey on groups (cost=0.28..0.34 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=486) Index Cond: (group_id = gu.group_id) Filter: ((NOT deleted) AND (app_id = '12132818272260'::bigint)) Rows Removed by Filter: 0 Buffers: shared hit=1458 Planning time: 0.534 ms Execution time: 110.999 ms(36 rows)PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\nThanks in advance for your valuable time and inputs.Regards, Amarendra",
"msg_date": "Fri, 13 Sep 2019 16:38:50 +0530",
"msg_from": "Amarendra Konda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query execution time Vs Cost"
},
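As background for the question, plan cost is an abstract estimate built from table statistics and the planner's cost constants, not a prediction in milliseconds; the constants themselves can be inspected directly:

    -- Cost units are relative to these planner parameters, so a
    -- cheaper-looking plan is not guaranteed to run faster.
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
                   'cpu_index_tuple_cost', 'cpu_operator_cost');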
{
"msg_contents": "On Fri, Sep 13, 2019 at 04:38:50PM +0530, Amarendra Konda wrote:\n> As part of one query tuning, it was observed that query execution time was\n> more even though cost was decreased.\n\n..\n\n> May i know the reason behind in increase in response time, even though cost\n> was reduced by 6.4 times.\n\nThe \"cost\" is postgres model for how expensive a plan will be, based on table\nstatistics, and parameters like seq/rand_page_cost, etc. It's an imperfect\nmodel and not exact.\n\n> *Initial Query*\n> \n> => explain(analyze,buffers,costs) SELECT ku.user_id\n> > FROM konotor_user ku\n> > LEFT JOIN agent_details ad\n> > ON ku.user_id = ad.user_id\n> > WHERE ku.app_id = '12132818272260'\n> > AND (ku.user_type = 1 OR ku.user_type = 2)\n> > AND (ad.deleted isnull OR ad.deleted = 0)\n> > AND ku.user_id NOT IN (\n> > SELECT gu.user_id\n> > FROM group_user gu\n> > INNER JOIN groups\n> > ON gu.group_id = groups.group_id\n> > AND app_id = ku.app_id\n> > WHERE gu.user_id = ku.user_id\n> > AND groups.app_id = ku.app_id\n> > AND groups.deleted = false);\n\nIt seems to me the major difference is in group_user JOIN groups.\n\nIn the fast query, it did\n> -> Index Only Scan using uk_groupid_userid on group_user gu (cost=0.29..8.30 rows=1 width=16) (actual time=0.001..0.001 rows=0 loops=15832)\n> Index Cond: ((group_id = groups.group_id) AND (user_id = ku.user_id))\n> Heap Fetches: 455\n> Buffers: shared hit=32210\n\n=> 15832*0.001sec = 15ms \n\nIn the slow query it did:\n> -> Index Only Scan using uk_groupid_userid on group_user gu (cost=0.29..115.12 rows=2 width=16) (actual time=0.135..0.135 rows=1 loops=785)\n> Index Cond: (user_id = ku.user_id)\n> Heap Fetches: 456\n> Buffers: shared hit=45529\n\n=> 785*0.115sec = 90ms\n\nIt scanned using non-leading columns of index, so it took 6x longer even though\nit did 20x fewer loops. Also it did 456 heap fetches (which were probably\nnonsequential). Vacuuming the table will probably help; if so, you should\nconsider setting parameter to encourage more frequent autovacuums:\n| ALTER TABLE group_user SET (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005);\n\nJustin\n\n\n",
"msg_date": "Fri, 13 Sep 2019 18:06:15 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution time Vs Cost"
},
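A minimal sketch of how Justin's two suggestions could be acted on, assuming the table and index names from the plans above (uk_groupid_userid on group_user); the scale-factor value is his example, everything else is illustrative:

-- Check the column order of the index that the slow plan scanned by a non-leading column
SELECT indexdef FROM pg_indexes WHERE indexname = 'uk_groupid_userid';

-- Encourage more frequent autovacuum on the table, then do a one-off pass
-- so the visibility map is current and index-only scans need fewer heap fetches
ALTER TABLE group_user SET (autovacuum_vacuum_scale_factor = 0.005);
VACUUM (ANALYZE) group_user;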
{
"msg_contents": "Hi Justin,\n\nThanks a lot for the detailed analysis and explanation for slowness that\nwas seen. Pointed noted related to the vacuum tuning option.\n\nRegards, Amarendra\n\n\nOn Sat, Sep 14, 2019 at 4:36 AM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Sep 13, 2019 at 04:38:50PM +0530, Amarendra Konda wrote:\n> > As part of one query tuning, it was observed that query execution time\n> was\n> > more even though cost was decreased.\n>\n> ..\n>\n> > May i know the reason behind in increase in response time, even though\n> cost\n> > was reduced by 6.4 times.\n>\n> The \"cost\" is postgres model for how expensive a plan will be, based on\n> table\n> statistics, and parameters like seq/rand_page_cost, etc. It's an imperfect\n> model and not exact.\n>\n> > *Initial Query*\n> >\n> > => explain(analyze,buffers,costs) SELECT ku.user_id\n> > > FROM konotor_user ku\n> > > LEFT JOIN agent_details ad\n> > > ON ku.user_id = ad.user_id\n> > > WHERE ku.app_id = '12132818272260'\n> > > AND (ku.user_type = 1 OR ku.user_type = 2)\n> > > AND (ad.deleted isnull OR ad.deleted = 0)\n> > > AND ku.user_id NOT IN (\n> > > SELECT gu.user_id\n> > > FROM group_user gu\n> > > INNER JOIN groups\n> > > ON gu.group_id = groups.group_id\n> > > AND app_id = ku.app_id\n> > > WHERE gu.user_id = ku.user_id\n> > > AND groups.app_id = ku.app_id\n> > > AND groups.deleted = false);\n>\n> It seems to me the major difference is in group_user JOIN groups.\n>\n> In the fast query, it did\n> > -> Index Only Scan using uk_groupid_userid on\n> group_user gu (cost=0.29..8.30 rows=1 width=16) (actual time=0.001..0.001\n> rows=0 loops=15832)\n> > Index Cond: ((group_id = groups.group_id) AND\n> (user_id = ku.user_id))\n> > Heap Fetches: 455\n> > Buffers: shared hit=32210\n>\n> => 15832*0.001sec = 15ms\n>\n> In the slow query it did:\n> > -> Index Only Scan using uk_groupid_userid on group_user gu\n> (cost=0.29..115.12 rows=2 width=16) (actual time=0.135..0.135 rows=1\n> loops=785)\n> > Index Cond: (user_id = ku.user_id)\n> > Heap Fetches: 456\n> > Buffers: shared hit=45529\n>\n> => 785*0.115sec = 90ms\n>\n> It scanned using non-leading columns of index, so it took 6x longer even\n> though\n> it did 20x fewer loops. Also it did 456 heap fetches (which were probably\n> nonsequential). Vacuuming the table will probably help; if so, you should\n> consider setting parameter to encourage more frequent autovacuums:\n> | ALTER TABLE group_user SET (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005);\n>\n> Justin\n>\n\nHi Justin,Thanks a lot for the detailed analysis and explanation for slowness that was seen. Pointed noted related to the vacuum tuning option. Regards, AmarendraOn Sat, Sep 14, 2019 at 4:36 AM Justin Pryzby <[email protected]> wrote:On Fri, Sep 13, 2019 at 04:38:50PM +0530, Amarendra Konda wrote:\n> As part of one query tuning, it was observed that query execution time was\n> more even though cost was decreased.\n\n..\n\n> May i know the reason behind in increase in response time, even though cost\n> was reduced by 6.4 times.\n\nThe \"cost\" is postgres model for how expensive a plan will be, based on table\nstatistics, and parameters like seq/rand_page_cost, etc. 
It's an imperfect\nmodel and not exact.\n\n> *Initial Query*\n> \n> => explain(analyze,buffers,costs) SELECT ku.user_id\n> > FROM konotor_user ku\n> > LEFT JOIN agent_details ad\n> > ON ku.user_id = ad.user_id\n> > WHERE ku.app_id = '12132818272260'\n> > AND (ku.user_type = 1 OR ku.user_type = 2)\n> > AND (ad.deleted isnull OR ad.deleted = 0)\n> > AND ku.user_id NOT IN (\n> > SELECT gu.user_id\n> > FROM group_user gu\n> > INNER JOIN groups\n> > ON gu.group_id = groups.group_id\n> > AND app_id = ku.app_id\n> > WHERE gu.user_id = ku.user_id\n> > AND groups.app_id = ku.app_id\n> > AND groups.deleted = false);\n\nIt seems to me the major difference is in group_user JOIN groups.\n\nIn the fast query, it did\n> -> Index Only Scan using uk_groupid_userid on group_user gu (cost=0.29..8.30 rows=1 width=16) (actual time=0.001..0.001 rows=0 loops=15832)\n> Index Cond: ((group_id = groups.group_id) AND (user_id = ku.user_id))\n> Heap Fetches: 455\n> Buffers: shared hit=32210\n\n=> 15832*0.001sec = 15ms \n\nIn the slow query it did:\n> -> Index Only Scan using uk_groupid_userid on group_user gu (cost=0.29..115.12 rows=2 width=16) (actual time=0.135..0.135 rows=1 loops=785)\n> Index Cond: (user_id = ku.user_id)\n> Heap Fetches: 456\n> Buffers: shared hit=45529\n\n=> 785*0.115sec = 90ms\n\nIt scanned using non-leading columns of index, so it took 6x longer even though\nit did 20x fewer loops. Also it did 456 heap fetches (which were probably\nnonsequential). Vacuuming the table will probably help; if so, you should\nconsider setting parameter to encourage more frequent autovacuums:\n| ALTER TABLE group_user SET (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005);\n\nJustin",
"msg_date": "Sat, 14 Sep 2019 09:54:43 +0530",
"msg_from": "Amarendra Konda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution time Vs Cost"
}
] |
[
{
"msg_contents": "Hi\n\nWe are trying to diagnose why postgres might be making poor decisions\nregarding query plans. One theory is that it does not assume it has\nthe memory suggested in effective_cache_size.\n\nWe do know that max_connections is set quite high (600) when we don't\nreally expect more than 100. I wonder does the planner take\nmax_connections x work_mem into account when considering the memory it\nhas potentially available?\n\nRegards\nBob\n\n\n",
"msg_date": "Tue, 17 Sep 2019 08:40:53 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "does max_connections affect the query planner"
},
{
"msg_contents": "Bob Jolliffe <[email protected]> writes:\n> We do know that max_connections is set quite high (600) when we don't\n> really expect more than 100. I wonder does the planner take\n> max_connections x work_mem into account when considering the memory it\n> has potentially available?\n\nNo. There have been discussions to the effect that it ought to have\na more holistic view about available memory; but nothing's been done\nabout that, and certainly no existing release does so.\n\nUsually the proximate cause of bad plan choices is bad rowcount\nestimates --- you can spot that by comparing estimated and actual\nrowcounts in EXPLAIN ANALYZE results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2019 10:13:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does max_connections affect the query planner"
},
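A small illustration of the check Tom describes; the table and filter are placeholders, not from the thread:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table WHERE some_column = 42;
-- Compare the planner's "rows=" estimate with the "actual ... rows=" figure on each
-- node; large mismatches point at the statistics, not at max_connections or work_mem.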
{
"msg_contents": "Thanks Tom. Will check that.\n\nOn Tue, 17 Sep 2019 at 14:13, Tom Lane <[email protected]> wrote:\n>\n> Bob Jolliffe <[email protected]> writes:\n> > We do know that max_connections is set quite high (600) when we don't\n> > really expect more than 100. I wonder does the planner take\n> > max_connections x work_mem into account when considering the memory it\n> > has potentially available?\n>\n> No. There have been discussions to the effect that it ought to have\n> a more holistic view about available memory; but nothing's been done\n> about that, and certainly no existing release does so.\n>\n> Usually the proximate cause of bad plan choices is bad rowcount\n> estimates --- you can spot that by comparing estimated and actual\n> rowcounts in EXPLAIN ANALYZE results.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2019 14:15:04 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: does max_connections affect the query planner"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 4:41 AM Bob Jolliffe <[email protected]> wrote:\n\n> Hi\n>\n> We are trying to diagnose why postgres might be making poor decisions\n> regarding query plans. One theory is that it does not assume it has\n> the memory suggested in effective_cache_size.\n\n\n> We do know that max_connections is set quite high (600) when we don't\n> really expect more than 100. I wonder does the planner take\n> max_connections x work_mem into account when considering the memory it\n> has potentially available?\n>\n\n\nNo, it doesn't try to guess how many connections might be sharing\neffective_cache_size. It assumes the entire thing is available to any use\nat any given time.\n\nBut it is only used for cases where a single query is going to be accessing\nblocks over and over again--it estimates that the block will still be in\ncache on subsequent visits. But this doesn't work for blocks visited\nrepeatedly in different queries, either on the same connection or different\nones. There is no notion that some objects might be hotter than others,\nother than within one query.\n\nCheers,\n\nJeff\n\nOn Tue, Sep 17, 2019 at 4:41 AM Bob Jolliffe <[email protected]> wrote:Hi\n\nWe are trying to diagnose why postgres might be making poor decisions\nregarding query plans. One theory is that it does not assume it has\nthe memory suggested in effective_cache_size. \n\nWe do know that max_connections is set quite high (600) when we don't\nreally expect more than 100. I wonder does the planner take\nmax_connections x work_mem into account when considering the memory it\nhas potentially available?No, it doesn't try to guess how many connections might be sharing effective_cache_size. It assumes the entire thing is available to any use at any given time.But it is only used for cases where a single query is going to be accessing blocks over and over again--it estimates that the block will still be in cache on subsequent visits. But this doesn't work for blocks visited repeatedly in different queries, either on the same connection or different ones. There is no notion that some objects might be hotter than others, other than within one query. Cheers,Jeff",
"msg_date": "Tue, 17 Sep 2019 10:25:19 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does max_connections affect the query planner"
}
] |
[
{
"msg_contents": "Hey there;\n\nI have a weird use case where I am basically taking data from many\ndifferent sources and merging it into a single table, while trying to avoid\nduplicates as much as possible. None of them share any kind of primary\nkey, but I have determined 3 columns that, together, will almost always be\nunique so I am planning on using those 3 columns as a composite primary key.\n\nTwo of those columns are integers, which is great. The third column is a\nstring, UTF-8, which may be quite long (though probably no longer than 50\ncharacters ... on average probably around 10 - 30 characters). The strings\ncould be practically anything, and they absolutely will not be unique on\ntheir own (these three data points are basically x, y coordinates and then\nsome name...for a given x,y coordinate there may be multiple names, but the\nlikihood of the same name at the same x, y is almost 0)\n\nI really don't want to do a string comparison if possible because this DB\nwill be getting very large, very quickly -- 20 million or so rows\nanticipated in the near future (i.e. next few weeks), with possible growth\nup to around 200 million (1+ year later).\n\nMy idea was to hash the string to a bigint, because the likelihood of all 3\ncolumns colliding is almost 0, and if a duplicate does crop up, it isn't\nthe end of the world.\n\nHowever, Postgresql doesn't seem to have any 'native' hashing calls that\nresult in a bigint. The closest I've found is pgcrypto's 'digest' call --\nI could theoretically take an md5 hash, and just use the first 8 bytes of\nit to make a bigint.\n\nHOWEVER... there is no straight forward way to do this. The most straight\nforward way I've seen is md5 -> hex string -> substring -> bigint. This is\nridiculous to me -- I'm basically converting binary to text, then\nconverting text to binary. However, you can't convert a bytea to a bigint\nin any fashion that I can tell so I have to eat a bunch of overhead for fun.\n\nWhat would be the fastest way to do this? I will be generating potentially\na LOT of these keys so I want to do it the least dumb way. I am using\nDigital Ocean's hosted PostgreSQL so I can't use my own C code -- but I can\nuse PL/Psql, PL/Perl or any of these extensions:\n\nhttps://www.digitalocean.com/docs/databases/postgresql/resources/supported-extensions/\n\nIf my concerns about string comparisons are unfounded and I'm working way\ntoo hard to avoid something that doesn't matter ... feel free to tell me\nthat as well. Basically, PostgreSQL performance guys, how would you tackle\nthis one?\n\n\nThanks,\n\nStephen\n\nHey there;I have a weird use case where I am basically taking data from many different sources and merging it into a single table, while trying to avoid duplicates as much as possible. None of them share any kind of primary key, but I have determined 3 columns that, together, will almost always be unique so I am planning on using those 3 columns as a composite primary key.Two of those columns are integers, which is great. The third column is a string, UTF-8, which may be quite long (though probably no longer than 50 characters ... on average probably around 10 - 30 characters). 
The strings could be practically anything, and they absolutely will not be unique on their own (these three data points are basically x, y coordinates and then some name...for a given x,y coordinate there may be multiple names, but the likihood of the same name at the same x, y is almost 0)I really don't want to do a string comparison if possible because this DB will be getting very large, very quickly -- 20 million or so rows anticipated in the near future (i.e. next few weeks), with possible growth up to around 200 million (1+ year later).My idea was to hash the string to a bigint, because the likelihood of all 3 columns colliding is almost 0, and if a duplicate does crop up, it isn't the end of the world.However, Postgresql doesn't seem to have any 'native' hashing calls that result in a bigint. The closest I've found is pgcrypto's 'digest' call -- I could theoretically take an md5 hash, and just use the first 8 bytes of it to make a bigint.HOWEVER... there is no straight forward way to do this. The most straight forward way I've seen is md5 -> hex string -> substring -> bigint. This is ridiculous to me -- I'm basically converting binary to text, then converting text to binary. However, you can't convert a bytea to a bigint in any fashion that I can tell so I have to eat a bunch of overhead for fun.What would be the fastest way to do this? I will be generating potentially a LOT of these keys so I want to do it the least dumb way. I am using Digital Ocean's hosted PostgreSQL so I can't use my own C code -- but I can use PL/Psql, PL/Perl or any of these extensions:https://www.digitalocean.com/docs/databases/postgresql/resources/supported-extensions/If my concerns about string comparisons are unfounded and I'm working way too hard to avoid something that doesn't matter ... feel free to tell me that as well. Basically, PostgreSQL performance guys, how would you tackle this one?Thanks,Stephen",
"msg_date": "Wed, 18 Sep 2019 12:41:28 -0400",
"msg_from": "Stephen Conley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question regarding fast-hashing in PGSQL"
},
{
"msg_contents": "I've had a similar issue in the past.\n\nI used the md5 hash function and stored it in a UUID column for my\ncomparisons. Bigger than a bigint, but still much faster than string\ncomparisons directly for my use case.\nUUID works fine for storing md5 hashes and gives you the ability to\npiggyback on all the index support built for them.\n\nHope that helps,\n-Adam\n\nI've had a similar issue in the past.I used the md5 hash function and stored it in a UUID column for my comparisons. Bigger than a bigint, but still much faster than string comparisons directly for my use case.UUID works fine for storing md5 hashes and gives you the ability to piggyback on all the index support built for them.Hope that helps,-Adam",
"msg_date": "Wed, 18 Sep 2019 12:49:58 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding fast-hashing in PGSQL"
},
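A rough sketch of Adam's approach, with hypothetical table and column names; md5() yields exactly 128 bits, so its hex text casts straight to uuid:

ALTER TABLE merged_points ADD COLUMN name_hash uuid;
UPDATE merged_points SET name_hash = md5(name)::uuid;
-- x, y and the 16-byte hash become the composite key instead of the raw string
ALTER TABLE merged_points ADD PRIMARY KEY (x, y, name_hash);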
{
"msg_contents": "This should work perfectly for me. Thank you so much!\n\nOn Wed, Sep 18, 2019 at 12:50 PM Adam Brusselback <[email protected]>\nwrote:\n\n> I've had a similar issue in the past.\n>\n> I used the md5 hash function and stored it in a UUID column for my\n> comparisons. Bigger than a bigint, but still much faster than string\n> comparisons directly for my use case.\n> UUID works fine for storing md5 hashes and gives you the ability to\n> piggyback on all the index support built for them.\n>\n> Hope that helps,\n> -Adam\n>\n\nThis should work perfectly for me. Thank you so much!On Wed, Sep 18, 2019 at 12:50 PM Adam Brusselback <[email protected]> wrote:I've had a similar issue in the past.I used the md5 hash function and stored it in a UUID column for my comparisons. Bigger than a bigint, but still much faster than string comparisons directly for my use case.UUID works fine for storing md5 hashes and gives you the ability to piggyback on all the index support built for them.Hope that helps,-Adam",
"msg_date": "Wed, 18 Sep 2019 12:56:32 -0400",
"msg_from": "Stephen Conley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question regarding fast-hashing in PGSQL"
},
{
"msg_contents": "Stephen Conley <[email protected]> writes:\n> My idea was to hash the string to a bigint, because the likelihood of all 3\n> columns colliding is almost 0, and if a duplicate does crop up, it isn't\n> the end of the world.\n\n> However, Postgresql doesn't seem to have any 'native' hashing calls that\n> result in a bigint.\n\nregression=# \\df hashtext*\n List of functions\n Schema | Name | Result data type | Argument data types | Type \n------------+------------------+------------------+---------------------+------\n pg_catalog | hashtext | integer | text | func\n pg_catalog | hashtextextended | bigint | text, bigint | func\n(2 rows)\n\nThe \"extended\" hash API has only been there since v11, so you\ncouldn't rely on it if you need portability to old servers.\nBut otherwise it seems to respond precisely to your question.\n\nIf you do need portability ... does the text string's part of the\nhash *really* have to be 64 bits wide? Why not just concatenate\nit with a 32-bit hash of the other fields?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2019 16:39:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding fast-hashing in PGSQL"
}
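For reference, a quick sketch of the two built-ins Tom points at (the extended variant needs v11+); the sample literal and seed are illustrative:

SELECT hashtextextended('some name', 0);  -- bigint, 64-bit hash (PostgreSQL 11+)
SELECT hashtext('some name');             -- integer, 32-bit hash, available on older releases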
] |
[
{
"msg_contents": "Hey,\nThanks to the new partitions features in pg12 (referencing partition table\nis possible) I was trying to migrate some of my tables into a partitions\nstructure.\n\nLets assume I have the following non partitions structure :\n\nProduct(id int PK,vendor int references Vendor(id),price int)\nProductPic(int picId PK,product int references product(id) )\nVendor(id int PK,name text)\n.... more tables that has references to the Product(id).\n\nI understand that the PK on the Product table must include also the\npartition column in order to ensure the uniqueness across all the\npartitions. However, on the other hand I'll need to add the partition\ncolumn (Vendor) to all the tables that has a reference to the Product(id) +\nupdate that column with the relevant data. This type of maintenance\nrequires a lot of time because I have a lot of references to the Product\ntable. Is there any other option in PG12 to allow references to partition\ntable ?\n\nHey,Thanks to the new partitions features in pg12 (referencing partition table is possible) I was trying to migrate some of my tables into a partitions structure.Lets assume I have the following non partitions structure : Product(id int PK,vendor int references Vendor(id),price int)ProductPic(int picId PK,product int references product(id) )Vendor(id int PK,name text).... more tables that has references to the Product(id).I understand that the PK on the Product table must include also the partition column in order to ensure the uniqueness across all the partitions. However, on the other hand I'll need to add the partition column (Vendor) to all the tables that has a reference to the Product(id) + update that column with the relevant data. This type of maintenance requires a lot of time because I have a lot of references to the Product table. Is there any other option in PG12 to allow references to partition table ?",
"msg_date": "Wed, 18 Sep 2019 21:02:22 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg12 - migrate tables to partitions structure"
},
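A minimal sketch of the structure being discussed, using the column names from the message; the partition method and everything else are illustrative. The point is that the PK on the partitioned table must contain the partition key, so a referencing table needs that column as well:

CREATE TABLE product (
    id      int,
    vendor  int REFERENCES vendor(id),
    price   int,
    PRIMARY KEY (vendor, id)
) PARTITION BY LIST (vendor);

CREATE TABLE product_v1 PARTITION OF product FOR VALUES IN (1);

CREATE TABLE product_pic (
    pic_id  int PRIMARY KEY,
    vendor  int,
    product int,
    FOREIGN KEY (vendor, product) REFERENCES product (vendor, id)  -- allowed from PG 12
);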
{
"msg_contents": "How many rows are you dealing with currently? What are your queries like?\nHave you looked at doing a hash partition on product.id? Is this on a test\nsystem or destined for a production environment in the near future? I ask\nbecause PG12 is still in beta.\n\n How many rows are you dealing with currently? What are your queries like? Have you looked at doing a hash partition on product.id? Is this on a test system or destined for a production environment in the near future? I ask because PG12 is still in beta.",
"msg_date": "Wed, 18 Sep 2019 13:09:04 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg12 - migrate tables to partitions structure"
},
{
"msg_contents": "Hey Michael,\nfirst of all thanks for the quick response.\nRight now the production env is on a different version(10). I'm doing all\nmy tests on a test environment. I'm familiar with the hash partitions but\nmy queries doesnt involve the product.id therefore iti isnt relevant. All\nthe queries uses the vendor product and thats why this column is a perfect\nfit as a partition column.\nMy main table is big (10M+) (Product), but other tables can also be\nbig(1M+)..\n\n>\n>\n\nHey Michael,first of all thanks for the quick response.Right now the production env is on a different version(10). I'm doing all my tests on a test environment. I'm familiar with the hash partitions but my queries doesnt involve the product.id therefore iti isnt relevant. All the queries uses the vendor product and thats why this column is a perfect fit as a partition column.My main table is big (10M+) (Product), but other tables can also be big(1M+)..",
"msg_date": "Wed, 18 Sep 2019 22:29:35 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg12 - migrate tables to partitions structure"
},
{
"msg_contents": ">\n> All the queries uses the vendor product and thats why this column is a\n> perfect fit as a partition column.\n> My main table is big (10M+) (Product), but other tables can also be\n> big(1M+)..\n>\n\nI assume you have query performance problems and are hoping partitioning\nwill help? Are you read heavy, or write intensive, or both? 10 million\nrows, especially if they aren't super wide, doesn't seem like a huge number\nto me. Do you have example queries with explain plans that you think would\nbenefit from the system being partitioned? I just know that as an engineer,\nsometimes I like to make use of new tools, even when it isn't the best\nsolution for the problem I am actually experiencing. How confident are you\nthat you NEED partitions is my real question.\n\nAll the queries uses the vendor product and thats why this column is a perfect fit as a partition column.My main table is big (10M+) (Product), but other tables can also be big(1M+)..I assume you have query performance problems and are hoping partitioning will help? Are you read heavy, or write intensive, or both? 10 million rows, especially if they aren't super wide, doesn't seem like a huge number to me. Do you have example queries with explain plans that you think would benefit from the system being partitioned? I just know that as an engineer, sometimes I like to make use of new tools, even when it isn't the best solution for the problem I am actually experiencing. How confident are you that you NEED partitions is my real question.",
"msg_date": "Wed, 18 Sep 2019 14:08:16 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg12 - migrate tables to partitions structure"
},
{
"msg_contents": "Well, if u have 10M rows, and all your queries use the same column in the\nquery and the data can split pretty even between the partitions, any\nspecific reason not to use is ? An index will help u reach a complexity of\n(logn) while partition + index can be in complexity of (logm) when m = rows\nin partition , n=total rows\n\n>\n\nWell, if u have 10M rows, and all your queries use the same column in the query and the data can split pretty even between the partitions, any specific reason not to use is ? An index will help u reach a complexity of (logn) while partition + index can be in complexity of (logm) when m = rows in partition , n=total rows",
"msg_date": "Wed, 18 Sep 2019 23:13:14 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg12 - migrate tables to partitions structure"
},
{
"msg_contents": "Is this being done because it can be, or is it solving a real-life pain\npoint? Just wondering what the perspective is here.\n\nMuch of partitioning strategy seems to me to revolve around how the system\nis used, and not just the schema and what is possible. For instance, you\ncan mimic primary and foreign key behavior with triggers as described here,\nand that would bypass some of the restrictions on what can be done.\nhttps://www.depesz.com/2018/10/02/foreign-key-to-partitioned-table/\n\nThis would allow you to change out the primary key for a simple index\nperhaps, and partition however you want. Just because something can be\ndone, does not mean it should be.\n\n>\n\nIs this being done because it can be, or is it solving a real-life pain point? Just wondering what the perspective is here.Much of partitioning strategy seems to me to revolve around how the system is used, and not just the schema and what is possible. For instance, you can mimic primary and foreign key behavior with triggers as described here, and that would bypass some of the restrictions on what can be done.https://www.depesz.com/2018/10/02/foreign-key-to-partitioned-table/This would allow you to change out the primary key for a simple index perhaps, and partition however you want. Just because something can be done, does not mean it should be.",
"msg_date": "Wed, 18 Sep 2019 14:56:22 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg12 - migrate tables to partitions structure"
}
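A very small sketch of the trigger-based alternative in the depesz post Michael links, with hypothetical names and without the locking and delete-side handling the article covers properly:

CREATE OR REPLACE FUNCTION check_product_exists() RETURNS trigger AS $$
BEGIN
    -- emulate the referencing side of a foreign key
    PERFORM 1 FROM product WHERE vendor = NEW.vendor AND id = NEW.product;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'product (%, %) does not exist', NEW.vendor, NEW.product;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_pic_fk
BEFORE INSERT OR UPDATE ON product_pic
FOR EACH ROW EXECUTE FUNCTION check_product_exists();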
] |
[
{
"msg_contents": "https://blog.jooq.org/2019/09/19/whats-faster-count-or-count1/\n\nIs there a reason why count(*) seems to be faster? \n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 12:09:32 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Surprising benchmark count(1) vs. count(*)"
},
{
"msg_contents": "On Thu, 2019-09-19 at 12:09 +0200, Thomas Kellerer wrote:\n> https://blog.jooq.org/2019/09/19/whats-faster-count-or-count1/\n> \n> Is there a reason why count(*) seems to be faster?\n\n\"count(*)\" is just the SQL standard's way of saying what you'd\nnormally call \"count()\", that is, an aggregate without arguments.\n\n\"count(1)\" has to check if 1 IS NULL for each row, because NULL\nvalues are not counted. \"count(*)\" doesn't have to do that.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 12:22:42 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising benchmark count(1) vs. count(*)"
},
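A throwaway way to see the effect locally; the table and row count are illustrative, not taken from the linked benchmark:

CREATE TEMP TABLE t AS SELECT g FROM generate_series(1, 10000000) g;
\timing on
SELECT count(*) FROM t;  -- no per-row argument to evaluate
SELECT count(1) FROM t;  -- the constant is evaluated and NULL-checked for every row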
{
"msg_contents": "Laurenz Albe schrieb am 19.09.2019 um 12:22:\n>> https://blog.jooq.org/2019/09/19/whats-faster-count-or-count1/\n>>\n>> Is there a reason why count(*) seems to be faster?\n> \n> \"count(*)\" is just the SQL standard's way of saying what you'd\n> normally call \"count()\", that is, an aggregate without arguments.\n> \n> \"count(1)\" has to check if 1 IS NULL for each row, because NULL\n> values are not counted. \"count(*)\" doesn't have to do that.\n\nBut 1 is a constant, why does it need to check it for each row? \n\n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 13:22:40 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Surprising benchmark count(1) vs. count(*)"
},
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> Laurenz Albe schrieb am 19.09.2019 um 12:22:\n>> \"count(1)\" has to check if 1 IS NULL for each row, because NULL\n>> values are not counted. \"count(*)\" doesn't have to do that.\n\n> But 1 is a constant, why does it need to check it for each row? \n\n[ shrug... ] There's no special optimization for that case.\nAnd I can't say that it seems attractive to add one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 10:11:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising benchmark count(1) vs. count(*)"
},
{
"msg_contents": "I will say I've seen count(1) in the wild a ton, as well as at my own\ncompany from developers who were used to it not making a difference.\n\nThere have been a couple queries in the hot path that I have had to changed\nfrom count(1) to count(*) as part of performance tuning, but in general\nit's not worth me worrying about. There are usually larger performance\nissues to track down in complex queries.\n\nIt would be nice if Postgres optimized this case though because it is\nreally really common from what i've seen.\n\nThanks,\n-Adam\n\nI will say I've seen count(1) in the wild a ton, as well as at my own company from developers who were used to it not making a difference.There have been a couple queries in the hot path that I have had to changed from count(1) to count(*) as part of performance tuning, but in general it's not worth me worrying about. There are usually larger performance issues to track down in complex queries.It would be nice if Postgres optimized this case though because it is really really common from what i've seen.Thanks,-Adam",
"msg_date": "Thu, 19 Sep 2019 16:38:33 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising benchmark count(1) vs. count(*)"
},
{
"msg_contents": "Adam Brusselback <[email protected]> writes:\n> It would be nice if Postgres optimized this case though because it is\n> really really common from what i've seen.\n\nSince the introduction of the \"planner support function\" infrastructure,\nit'd be possible to do this without it being a completely ugly kluge:\nwe could put the logic for it into a planner support function attached\nto count(any). Currently planner support functions are only called for\nregular functions, but we could certainly envision adding the ability to\ndo it for aggregates (and window functions too, why not).\n\nI'm not particularly planning to do that myself, but if someone else\nwants to write a patch, have at it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 17:36:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising benchmark count(1) vs. count(*)"
}
] |
[
{
"msg_contents": "Hey,\nI tried to get a list of all tables that has a reference to my_table. I\nused two different queries :\n\n1)select R.*\nfrom INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE u\ninner join INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS FK\n on U.CONSTRAINT_CATALOG = FK.UNIQUE_CONSTRAINT_CATALOG\n and U.CONSTRAINT_SCHEMA = FK.UNIQUE_CONSTRAINT_SCHEMA\n and U.CONSTRAINT_NAME = FK.UNIQUE_CONSTRAINT_NAME\ninner join INFORMATION_SCHEMA.KEY_COLUMN_USAGE R\n ON R.CONSTRAINT_CATALOG = FK.CONSTRAINT_CATALOG\n AND R.CONSTRAINT_SCHEMA = FK.CONSTRAINT_SCHEMA\n AND R.CONSTRAINT_NAME = FK.CONSTRAINT_NAME\n AND U.TABLE_NAME = '*my_table*'\n\n2)select conname, (select r.relname from pg_class r where r.oid =\nc.confrelid) as orig_table, (select array_agg(attname) from pg_attribute\n where attrelid = c.confrelid and ARRAY[attnum] <@ c.conkey) as\norig_cols, (select r.relname from pg_class r where r.oid = c.conrelid) as\nforeign_table, (select array_agg(attname) from pg_attribute where\nattrelid = c.conrelid and ARRAY[attnum] <@ c.conkey) as foreign_cols from\npg_constraint c where c.confrelid = (select oid from pg_class where\nrelname = '*my_table*') and c.contype='f'\n\nOn the second output in the orig_cols I got a few weird outputs like\n: {........pg.dropped.5........} or even a columns that doesnt have a\nunique index (just a random column from the orig_table).\n\ntried to vacuum the table but still didnt help. The db is at version 9, but\nI tried to upgrade it to 10/11/12 and in all versions it stayed the same.\n\n;\n\nHey,I tried to get a list of all tables that has a reference to my_table. I used two different queries : 1)select R.*from INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE uinner join INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS FK on U.CONSTRAINT_CATALOG = FK.UNIQUE_CONSTRAINT_CATALOG and U.CONSTRAINT_SCHEMA = FK.UNIQUE_CONSTRAINT_SCHEMA and U.CONSTRAINT_NAME = FK.UNIQUE_CONSTRAINT_NAMEinner join INFORMATION_SCHEMA.KEY_COLUMN_USAGE R ON R.CONSTRAINT_CATALOG = FK.CONSTRAINT_CATALOG AND R.CONSTRAINT_SCHEMA = FK.CONSTRAINT_SCHEMA AND R.CONSTRAINT_NAME = FK.CONSTRAINT_NAME AND U.TABLE_NAME = 'my_table'2)select conname, (select r.relname from pg_class r where r.oid = c.confrelid) as orig_table, (select array_agg(attname) from pg_attribute where attrelid = c.confrelid and ARRAY[attnum] <@ c.conkey) as orig_cols, (select r.relname from pg_class r where r.oid = c.conrelid) as foreign_table, (select array_agg(attname) from pg_attribute where attrelid = c.conrelid and ARRAY[attnum] <@ c.conkey) as foreign_cols from pg_constraint c where c.confrelid = (select oid from pg_class where relname = 'my_table') and c.contype='f'On the second output in the orig_cols I got a few weird outputs like : {........pg.dropped.5........} or even a columns that doesnt have a unique index (just a random column from the orig_table).tried to vacuum the table but still didnt help. The db is at version 9, but I tried to upgrade it to 10/11/12 and in all versions it stayed the same.;",
"msg_date": "Thu, 19 Sep 2019 18:50:44 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "comparing output of internal pg tables of referenced tables"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> 2)select conname, (select r.relname from pg_class r where r.oid =\n> c.confrelid) as orig_table, (select array_agg(attname) from pg_attribute\n> where attrelid = c.confrelid and ARRAY[attnum] <@ c.conkey) as\n> orig_cols, (select r.relname from pg_class r where r.oid = c.conrelid) as\n> foreign_table, (select array_agg(attname) from pg_attribute where\n> attrelid = c.conrelid and ARRAY[attnum] <@ c.conkey) as foreign_cols from\n> pg_constraint c where c.confrelid = (select oid from pg_class where\n> relname = '*my_table*') and c.contype='f'\n\n> On the second output in the orig_cols I got a few weird outputs like\n> : {........pg.dropped.5........} or even a columns that doesnt have a\n> unique index (just a random column from the orig_table).\n\nYou need to be looking at confkey not conkey for the columns in the\nconfrelid table.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 12:28:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: comparing output of internal pg tables of referenced tables"
},
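A sketch of how the second catalog query might look with Tom's correction applied (confkey for the referenced table's columns, conkey for the referencing table's):

SELECT c.conname,
       confrel.relname AS orig_table,
       (SELECT array_agg(a.attname)
          FROM pg_attribute a
         WHERE a.attrelid = c.confrelid
           AND a.attnum = ANY (c.confkey)) AS orig_cols,
       conrel.relname AS foreign_table,
       (SELECT array_agg(a.attname)
          FROM pg_attribute a
         WHERE a.attrelid = c.conrelid
           AND a.attnum = ANY (c.conkey))  AS foreign_cols
FROM pg_constraint c
JOIN pg_class confrel ON confrel.oid = c.confrelid
JOIN pg_class conrel  ON conrel.oid  = c.conrelid
WHERE c.confrelid = 'my_table'::regclass
  AND c.contype = 'f';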
{
"msg_contents": "Hi!\n\nI have a query that SELECT's only one tuple using a PK \n(https://explain.depesz.com/s/Hskt) <https://explain.depesz.com/s/Hskt>\n\nthe field I am selecting are a bigint and a text. Why does it read 1095 \nshared buffers read?\n\nIf I adda LIMIT 1 clause, the query runs much faster: \nhttps://explain.depesz.com/s/bSZn\n\nThis table has only one tuple anyway, so I can't understand why does it \ntakes so long without the LIMIT 1.\n<https://explain.depesz.com/s/Hskt>\n\n\n\n\n\n\n Hi!\n\n I have a query that SELECT's only one tuple using a PK (https://explain.depesz.com/s/Hskt)\n \n\n the field I am selecting are a bigint and a text. Why does it read\n 1095 shared buffers read? \n\n If I adda LIMIT 1 clause, the query runs much faster: https://explain.depesz.com/s/bSZn\n\n This table has only one tuple anyway, so I can't understand why does\n it takes so long without the LIMIT 1.",
"msg_date": "Thu, 19 Sep 2019 13:55:46 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Slow query on a one-tuple table"
},
{
"msg_contents": "Is this result able to be repeated?\n\nIs this result able to be repeated?",
"msg_date": "Thu, 19 Sep 2019 11:21:26 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": "\nEm 19/09/2019 14:21, Michael Lewis escreveu:\n> Is this result able to be repeated?\n\nYes, I can consistently repeat it.\n\nPostgres version is 11.1.\n\nOther executions:\n\nIndex Scan using assessoria_pkey on public.assessoria (cost=0.25..2.47 \nrows=1 width=62) (actual time=1.591..4.035 rows=1 loops=1)\n Output: asscod, asscambol\n Index Cond: (assessoria.asscod = 1)\n Buffers: shared hit=1187\nPlanning Time: 0.053 ms\nExecution Time: 4.055 ms\n\nIndex Scan using assessoria_pkey on public.assessoria (cost=0.25..2.47 \nrows=1 width=62) (actual time=1.369..3.838 rows=1 loops=1)\n Output: asscod, asscambol\n Index Cond: (assessoria.asscod = 1)\n Buffers: shared hit=1187\nPlanning Time: 0.033 ms\nExecution Time: 3.851 ms\n\n\n",
"msg_date": "Thu, 19 Sep 2019 15:30:08 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": "\r\n-----Original Message-----\r\nFrom: Luís Roberto Weck [mailto:[email protected]] \r\nSent: Thursday, September 19, 2019 2:30 PM\r\nTo: Michael Lewis <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: Slow query on a one-tuple table\r\n\r\nWARNING: This email originated from outside of Perceptron! Please be mindful of PHISHING and MALWARE risks.\r\n\r\nEm 19/09/2019 14:21, Michael Lewis escreveu:\r\n> Is this result able to be repeated?\r\n\r\nYes, I can consistently repeat it.\r\n\r\nPostgres version is 11.1.\r\n\r\nOther executions:\r\n\r\nIndex Scan using assessoria_pkey on public.assessoria (cost=0.25..2.47\r\nrows=1 width=62) (actual time=1.591..4.035 rows=1 loops=1)\r\n Output: asscod, asscambol\r\n Index Cond: (assessoria.asscod = 1)\r\n Buffers: shared hit=1187\r\nPlanning Time: 0.053 ms\r\nExecution Time: 4.055 ms\r\n\r\nIndex Scan using assessoria_pkey on public.assessoria (cost=0.25..2.47\r\nrows=1 width=62) (actual time=1.369..3.838 rows=1 loops=1)\r\n Output: asscod, asscambol\r\n Index Cond: (assessoria.asscod = 1)\r\n Buffers: shared hit=1187\r\nPlanning Time: 0.033 ms\r\nExecution Time: 3.851 ms\r\n\r\n________________________________________________________________________________________________________________\r\n\r\nBut can you repeat it with \"LIMIT 1\"?\r\nNotice huge difference in \"buffers hit\" while doing (the same) Index Scan in two plans.\r\n\r\nRegards,\r\nIgor Neyman\r\n",
"msg_date": "Thu, 19 Sep 2019 18:34:11 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow query on a one-tuple table"
},
{
"msg_contents": "Em 19/09/2019 15:34, Igor Neyman escreveu:\n> -----Original Message-----\n> From: Luís Roberto Weck [mailto:[email protected]]\n> Sent: Thursday, September 19, 2019 2:30 PM\n> To: Michael Lewis <[email protected]>\n> Cc: [email protected]\n> Subject: Re: Slow query on a one-tuple table\n>\n> WARNING: This email originated from outside of Perceptron! Please be mindful of PHISHING and MALWARE risks.\n>\n> Em 19/09/2019 14:21, Michael Lewis escreveu:\n>> Is this result able to be repeated?\n> Yes, I can consistently repeat it.\n>\n> Postgres version is 11.1.\n>\n> Other executions:\n>\n> Index Scan using assessoria_pkey on public.assessoria (cost=0.25..2.47\n> rows=1 width=62) (actual time=1.591..4.035 rows=1 loops=1)\n> Output: asscod, asscambol\n> Index Cond: (assessoria.asscod = 1)\n> Buffers: shared hit=1187\n> Planning Time: 0.053 ms\n> Execution Time: 4.055 ms\n>\n> Index Scan using assessoria_pkey on public.assessoria (cost=0.25..2.47\n> rows=1 width=62) (actual time=1.369..3.838 rows=1 loops=1)\n> Output: asscod, asscambol\n> Index Cond: (assessoria.asscod = 1)\n> Buffers: shared hit=1187\n> Planning Time: 0.033 ms\n> Execution Time: 3.851 ms\n>\n> ________________________________________________________________________________________________________________\n>\n> But can you repeat it with \"LIMIT 1\"?\n> Notice huge difference in \"buffers hit\" while doing (the same) Index Scan in two plans.\n>\n> Regards,\n> Igor Neyman\nWith LIMIT 1, I get 3 shared buffers hit, pretty much always.\n\n\n",
"msg_date": "Thu, 19 Sep 2019 16:59:01 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": "With LIMIT 1, I get 3 shared buffers hit, pretty much always.\r\n\r\n____________________________________________________________________________________\r\n\r\nCheck if assessoria_pkey index is bloated.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\r\n",
"msg_date": "Thu, 19 Sep 2019 20:11:11 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow query on a one-tuple table"
},
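One way to check that, assuming the pgstattuple extension can be installed on the server; the index name comes from the thread:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstatindex('assessoria_pkey');  -- avg_leaf_density / leaf_fragmentation hint at bloat

-- If it is badly bloated, rebuilding is the usual fix (takes an exclusive lock):
REINDEX INDEX assessoria_pkey;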
{
"msg_contents": "Em 19/09/2019 17:11, Igor Neyman escreveu:\n> With LIMIT 1, I get 3 shared buffers hit, pretty much always.\n>\n> ____________________________________________________________________________________\n>\n> Check if assessoria_pkey index is bloated.\n>\n> Regards,\n> Igor Neyman\n>\n>\n\nWith this query[1] it shows:\n\ncurrent_database|schemaname|tblname |idxname \n|real_size|extra_size|extra_ratio|fillfactor|bloat_size|bloat_ratio|is_na|\n----------------|----------|----------|---------------|---------|----------|-----------|----------|----------|-----------|-----|\ndatabase_name |public |assessoria|assessoria_pkey| 16384| \n0| 0.0| 90| 0.0| 0.0|false|\n\n[1]https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat-superuser.sql \n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 17:24:39 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": "Em 19/09/2019 17:24, Luís Roberto Weck escreveu:\n> Em 19/09/2019 17:11, Igor Neyman escreveu:\n>> With LIMIT 1, I get 3 shared buffers hit, pretty much always.\n>>\n>> ____________________________________________________________________________________ \n>>\n>>\n>> Check if assessoria_pkey index is bloated.\n>>\n>> Regards,\n>> Igor Neyman\n>>\n>>\n>\n> With this query[1] it shows:\n>\n> current_database|schemaname|tblname |idxname \n> |real_size|extra_size|extra_ratio|fillfactor|bloat_size|bloat_ratio|is_na|\n> ----------------|----------|----------|---------------|---------|----------|-----------|----------|----------|-----------|-----| \n>\n> database_name |public |assessoria|assessoria_pkey| 16384| \n> 0| 0.0| 90| 0.0| 0.0|false|\n>\n> [1]https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat-superuser.sql \n>\n>\n>\n\nUsing the quer provided here[1] I see this comment:\n\n /*\n * distinct_real_item_keys is how many distinct \"data\" fields on page\n * (excludes highkey).\n *\n * If this is less than distinct_block_pointers on an internal page, that\n * means that there are so many duplicates in its children that there are\n * duplicate high keys in children, so the index is probably pretty \nbloated.\n *\n * Even unique indexes can have duplicates. It's sometimes \ninteresting to\n * watch out for how many distinct real items there are within leaf \npages,\n * compared to the number of live items, or total number of items. \nIdeally,\n * these will all be exactly the same for unique indexes.\n */\n\nIn my case, I'm seeing:\n\ndistinct_real_item_keys|distinct_block_pointers|\n-----------------------|-----------------------|\n 1| 63|\n\nThis is about half an hour after running VACUUM FULL ANALYZE on the table.\n\nWhat can I do to reduce this?\n\n\n[1] \nhttps://wiki.postgresql.org/wiki/Index_Maintenance#Summarize_keyspace_of_a_B-Tree_index\n\n\n",
"msg_date": "Thu, 19 Sep 2019 17:41:19 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": "Em 19/09/2019 17:41, Luís Roberto Weck escreveu:\n> Em 19/09/2019 17:24, Luís Roberto Weck escreveu:\n>> Em 19/09/2019 17:11, Igor Neyman escreveu:\n>>> With LIMIT 1, I get 3 shared buffers hit, pretty much always.\n>>>\n>>> ____________________________________________________________________________________ \n>>>\n>>>\n>>> Check if assessoria_pkey index is bloated.\n>>>\n>>> Regards,\n>>> Igor Neyman\n>>>\n>>>\n>>\n>> With this query[1] it shows:\n>>\n>> current_database|schemaname|tblname |idxname \n>> |real_size|extra_size|extra_ratio|fillfactor|bloat_size|bloat_ratio|is_na|\n>> ----------------|----------|----------|---------------|---------|----------|-----------|----------|----------|-----------|-----| \n>>\n>> database_name |public |assessoria|assessoria_pkey| \n>> 16384| 0| 0.0| 90| 0.0| 0.0|false|\n>>\n>> [1]https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat-superuser.sql \n>>\n>>\n>>\n>\n> Using the quer provided here[1] I see this comment:\n>\n> /*\n> * distinct_real_item_keys is how many distinct \"data\" fields on page\n> * (excludes highkey).\n> *\n> * If this is less than distinct_block_pointers on an internal page, \n> that\n> * means that there are so many duplicates in its children that \n> there are\n> * duplicate high keys in children, so the index is probably pretty \n> bloated.\n> *\n> * Even unique indexes can have duplicates. It's sometimes \n> interesting to\n> * watch out for how many distinct real items there are within leaf \n> pages,\n> * compared to the number of live items, or total number of items. \n> Ideally,\n> * these will all be exactly the same for unique indexes.\n> */\n>\n> In my case, I'm seeing:\n>\n> distinct_real_item_keys|distinct_block_pointers|\n> -----------------------|-----------------------|\n> 1| 63|\n>\n> This is about half an hour after running VACUUM FULL ANALYZE on the \n> table.\n>\n> What can I do to reduce this?\n>\n>\n> [1] \n> https://wiki.postgresql.org/wiki/Index_Maintenance#Summarize_keyspace_of_a_B-Tree_inde\nLike Igor suggested, the index bloat seems to be at fault here. 
After \ndropping the PK, I'm getting these plans:\n\nFirst run (SELECT asscod, asscambol FROM ASSESSORIA WHERE asscod = 1 \nORDER BY asscod):\n\n Seq Scan on public.assessoria (cost=0.00..88.01 rows=1 width=62) \n(actual time=0.242..0.810 rows=1 loops=1)\n Output: asscod, asscambol\n Filter: (assessoria.asscod = 1)\n Buffers: shared hit=88\n Planning Time: 0.312 ms\n Execution Time: 0.876 ms\n(6 rows)\n\nSubsequent runs get increasingly faster, up to 0.080ms execution times.\n\nUsing LIMIT 1, I get on the first run:\n\n Limit (cost=0.00..88.01 rows=1 width=62) (actual time=0.252..0.254 \nrows=1 loops=1)\n Output: asscod, asscambol\n Buffers: shared hit=17\n -> Seq Scan on public.assessoria (cost=0.00..88.01 rows=1 \nwidth=62) (actual time=0.250..0.250 rows=1 loops=1)\n Output: asscod, asscambol\n Filter: (assessoria.asscod = 1)\n Buffers: shared hit=17\n Planning Time: 0.334 ms\n Execution Time: 0.296 ms\n\n\nSubsequent runs look more like this:\n\n Limit (cost=0.00..88.01 rows=1 width=62) (actual time=0.057..0.057 \nrows=1 loops=1)\n Output: asscod, asscambol\n Buffers: shared hit=17\n -> Seq Scan on public.assessoria (cost=0.00..88.01 rows=1 \nwidth=62) (actual time=0.056..0.056 rows=1 loops=1)\n Output: asscod, asscambol\n Filter: (assessoria.asscod = 1)\n Buffers: shared hit=17\n Planning Time: 0.082 ms\n Execution Time: 0.068 ms\n\nI have about 6 bigint fields in this table that are very frequently \nupdated, but none of these are indexed. I thought that by not having an \nindex on them, would make all updates HOT, therefore not bloating the \nprimary key index. Seems I was wrong?\n\n\n",
"msg_date": "Thu, 19 Sep 2019 19:27:34 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": ">\n> I have about 6 bigint fields in this table that are very frequently\n> updated, but none of these are indexed. I thought that by not having an\n> index on them, would make all updates HOT, therefore not bloating the\n> primary key index. Seems I was wrong?\n>\n\nHOT update is only possible if there is room in the page. How wide is your\nsingle tuple?\n\nHave you tuned autovacuum or are you running defaults? Not sure of your\nperception of \"very frequently\" updated values, but if you have bloat\nissue, vacuum early and often. Not sure how the math works out on a table\nwith single tuple in terms of calculating when it is time to vacuum, but it\ncertainly needs to be tuned differently than a table with millions of rows\nwhich is what I would be more used to.\n\nI have about 6 bigint fields in this table that are very frequently \nupdated, but none of these are indexed. I thought that by not having an \nindex on them, would make all updates HOT, therefore not bloating the \nprimary key index. Seems I was wrong?HOT update is only possible if there is room in the page. How wide is your single tuple?Have you tuned autovacuum or are you running defaults? Not sure of your perception of \"very frequently\" updated values, but if you have bloat issue, vacuum early and often. Not sure how the math works out on a table with single tuple in terms of calculating when it is time to vacuum, but it certainly needs to be tuned differently than a table with millions of rows which is what I would be more used to.",
"msg_date": "Thu, 19 Sep 2019 16:32:19 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
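A couple of quick ways to answer those questions, assuming the table name from the thread:

-- Approximate width of the single row, and the size of the relation on disk
SELECT pg_column_size(a.*)            AS row_bytes,
       pg_relation_size('assessoria') AS table_bytes
FROM assessoria a;

-- How many updates were HOT vs. regular
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'assessoria';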
{
"msg_contents": "Em 19/09/2019 19:32, Michael Lewis escreveu:\n>\n> I have about 6 bigint fields in this table that are very frequently\n> updated, but none of these are indexed. I thought that by not\n> having an\n> index on them, would make all updates HOT, therefore not bloating the\n> primary key index. Seems I was wrong?\n>\n>\n> HOT update is only possible if there is room in the page. How wide is \n> your single tuple?\n>\n> Have you tuned autovacuum or are you running defaults? Not sure of \n> your perception of \"very frequently\" updated values, but if you have \n> bloat issue, vacuum early and often. Not sure how the math works out \n> on a table with single tuple in terms of calculating when it is time \n> to vacuum, but it certainly needs to be tuned differently than a table \n> with millions of rows which is what I would be more used to.\n\nI'm not sure how to measure how wide the tuple is, can you point me in \nthe right direction?\n\nAs fas as autovacuum options, this is what I'm using:\n\nautovacuum_enabled=true,\nfillfactor=50,\nautovacuum_vacuum_threshold=25,\nautovacuum_vacuum_scale_factor=0,\nautovacuum_analyze_threshold=10,\nautovacuum_analyze_scale_factor=0.05,\nautovacuum_vacuum_cost_delay=10,\nautovacuum_vacuum_cost_limit=1000,\ntoast.autovacuum_enabled=true\n\nBy \"very frequently\" I mean I can update it up to 800000 times a day. \nUsually this number is closer to 100000.\n\n\n\n\n\n\n Em 19/09/2019 19:32, Michael Lewis escreveu:\n\n\n\n\nI have about 6 bigint\n fields in this table that are very frequently \n updated, but none of these are indexed. I thought that by\n not having an \n index on them, would make all updates HOT, therefore not\n bloating the \n primary key index. Seems I was wrong?\n\n\n\nHOT update is only possible if there is room in the page.\n How wide is your single tuple?\n\n\nHave you tuned autovacuum or are you running defaults?\n Not sure of your perception of \"very frequently\" updated\n values, but if you have bloat issue, vacuum early and often.\n Not sure how the math works out on a table with single tuple\n in terms of calculating when it is time to vacuum, but it\n certainly needs to be tuned differently than a table with\n millions of rows which is what I would be more used to.\n\n\n\n\n I'm not sure how to measure how wide the tuple is, can you point me\n in the right direction?\n\n As fas as autovacuum options, this is what I'm using:\n\n autovacuum_enabled=true, \n fillfactor=50, \n autovacuum_vacuum_threshold=25, \n autovacuum_vacuum_scale_factor=0, \n autovacuum_analyze_threshold=10,\n autovacuum_analyze_scale_factor=0.05, \n autovacuum_vacuum_cost_delay=10,\n autovacuum_vacuum_cost_limit=1000, \n toast.autovacuum_enabled=true\n\n By \"very frequently\" I mean I can update it up to 800000 times a\n day. Usually this number is closer to 100000.",
"msg_date": "Thu, 19 Sep 2019 19:46:46 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]> writes:\n> As fas as autovacuum options, this is what I'm using:\n\n> autovacuum_vacuum_scale_factor=0,\n\nUgh ... maybe I'm misremembering, but I *think* that has the effect\nof disabling autovac completely. You don't want zero.\n\nCheck in pg_stat_all_tables.last_autovacuum to see if anything\nis happening. If the dates seem reasonably current, then I'm wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 18:57:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
},
{
"msg_contents": "Hi all,\n\nI sometimes set autovacuum_vacuum_scale factor = 0 but only when I also \nset autovacuum_vacuum_threshold to some non-zero number to force vacuums \nafter a certain number of rows are updated. It takes the math out of it \nby setting the threshold explicitly.\n\nBut in this case he has also set autovacuum_vacuum_threshold to only \n25! So I think you have to fix your settings by increasing one or both \naccordingly.\n\nRegards,\nMichael Vitale\n\n\nTom Lane wrote on 9/19/2019 6:57 PM:\n> =?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]> writes:\n>> As fas as autovacuum options, this is what I'm using:\n>> autovacuum_vacuum_scale_factor=0,\n> Ugh ... maybe I'm misremembering, but I *think* that has the effect\n> of disabling autovac completely. You don't want zero.\n>\n> Check in pg_stat_all_tables.last_autovacuum to see if anything\n> is happening. If the dates seem reasonably current, then I'm wrong.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\n\n",
"msg_date": "Fri, 20 Sep 2019 09:09:35 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on a one-tuple table"
}
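A sketch combining Tom's check with the threshold-based tuning Michael describes; the threshold value is illustrative only:

-- Is autovacuum actually touching the table?
SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
FROM pg_stat_all_tables
WHERE relname = 'assessoria';

-- Scale factor 0 is reasonable only together with a sensible absolute threshold
ALTER TABLE assessoria SET (
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_vacuum_threshold    = 1000
);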
] |
[
{
"msg_contents": "Hi,\nI am running Postgresql 9.6 XFS as filesystem , kernel Linux 2.6.32.\n\nI have a table that Is not being use anymore, I want to drop it.\nThe table is huge, around 800GB and it has some index on it.\n\nWhen I execute the drop table command it goes very slow, I realised that\nthe problem is the filesystem.\nIt seems that XFS doesn't handle well big files, there are some\ndiscussion about it in some lists.\n\nI have to find a way do delete the table in chunks.\n\nMy first attempt was:\n\nIterate from the tail of the table until the beginning.\nDelete some blocks of the table.\nRun vacuum on it\niterate again....\n\nThe plan is delete some amount of blocks at the end of the table, in chunks\nof some size and vacuum it waiting for vacuum shrink the table.\nit seems work, the table has been shrink but each vacuum takes a huge\namount of time, I suppose it is because of the index. there is another\npoint, the index still huge and will be.\n\nI am thinking of another way of doing this.\nI can get the relfilenode of the table, in this way I can get the files\nthat belongs to the table and simply delete batches of files in a way that\ndon't put so much load on disk.\nDo the same for the index.\nOnce I delete all table's files and index's files, I could simply execute\nthe command drop table and the entries from the catalog would deleted.\n\nI would appreciate any kind of comments.\nthanks!\n\nHi, I am running Postgresql 9.6 XFS as filesystem , kernel Linux 2.6.32.I have a table that Is not being use anymore, I want to drop it.The table is huge, around 800GB and it has some index on it.When I execute the drop table command it goes very slow, I realised that the problem is the filesystem.It seems that XFS doesn't handle well big files, there are some discussion about it in some lists.I have to find a way do delete the table in chunks.My first attempt was:Iterate from the tail of the table until the beginning.Delete some blocks of the table.Run vacuum on ititerate again....The plan is delete some amount of blocks at the end of the table, in chunks of some size and vacuum it waiting for vacuum shrink the table.it seems work, the table has been shrink but each vacuum takes a huge amount of time, I suppose it is because of the index. there is another point, the index still huge and will be.I am thinking of another way of doing this.I can get the relfilenode of the table, in this way I can get the files that belongs to the table and simply delete batches of files in a way that don't put so much load on disk.Do the same for the index.Once I delete all table's files and index's files, I could simply execute the command drop table and the entries from the catalog would deleted.I would appreciate any kind of comments.thanks!",
"msg_date": "Thu, 19 Sep 2019 17:59:55 +0200",
"msg_from": "Joao Junior <[email protected]>",
"msg_from_op": true,
"msg_subject": "Delete huge Table under XFS"
},
{
"msg_contents": "\n\nAm 19.09.19 um 17:59 schrieb Joao Junior:\n>\n>\n> I have a table that Is not being use anymore, I want to drop it.\n> The table is huge, around 800GB and it has some index on it.\n>\n> When I execute the drop table command it goes very slow, I realised \n> that the problem is the filesystem.\n> It seems that XFS doesn't handle well big files, there are some \n> discussion about it in some lists.\n\nPG doesn't create one big file for this table, but about 800 files with \n1GB size each.\n\n>\n> I have to find a way do delete the table in chunks.\n\nWhy? If you want to delete all rows, just use TRUNCATE.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 18:50:11 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete huge Table under XFS"
},
{
"msg_contents": "A table with 800 gb means 800 files of 1 gb. When I use truncate or drop\ntable, xfs that is a log based filesystem, will write lots of data in its\nlog and this is the problem. The problem is not postgres, it is the way\nthat xfs works with big files , or being more clear, the way that it\nhandles lots of files.\n\nRegards,\nJoao\n\nOn Thu, Sep 19, 2019, 18:50 Andreas Kretschmer <[email protected]>\nwrote:\n\n>\n>\n> Am 19.09.19 um 17:59 schrieb Joao Junior:\n> >\n> >\n> > I have a table that Is not being use anymore, I want to drop it.\n> > The table is huge, around 800GB and it has some index on it.\n> >\n> > When I execute the drop table command it goes very slow, I realised\n> > that the problem is the filesystem.\n> > It seems that XFS doesn't handle well big files, there are some\n> > discussion about it in some lists.\n>\n> PG doesn't create one big file for this table, but about 800 files with\n> 1GB size each.\n>\n> >\n> > I have to find a way do delete the table in chunks.\n>\n> Why? If you want to delete all rows, just use TRUNCATE.\n>\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n>\n>\n\nA table with 800 gb means 800 files of 1 gb. When I use truncate or drop table, xfs that is a log based filesystem, will write lots of data in its log and this is the problem. The problem is not postgres, it is the way that xfs works with big files , or being more clear, the way that it handles lots of files.Regards,Joao On Thu, Sep 19, 2019, 18:50 Andreas Kretschmer <[email protected]> wrote:\n\nAm 19.09.19 um 17:59 schrieb Joao Junior:\n>\n>\n> I have a table that Is not being use anymore, I want to drop it.\n> The table is huge, around 800GB and it has some index on it.\n>\n> When I execute the drop table command it goes very slow, I realised \n> that the problem is the filesystem.\n> It seems that XFS doesn't handle well big files, there are some \n> discussion about it in some lists.\n\nPG doesn't create one big file for this table, but about 800 files with \n1GB size each.\n\n>\n> I have to find a way do delete the table in chunks.\n\nWhy? If you want to delete all rows, just use TRUNCATE.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com",
"msg_date": "Thu, 19 Sep 2019 19:00:01 +0200",
"msg_from": "Joao Junior <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete huge Table under XFS"
},
{
"msg_contents": "Re: Joao Junior 2019-09-19 <CABnPa_hdHsdypn7HtXU81B9HcrVcimotnwfzE-MWwO1etWYJzA@mail.gmail.com>\n> A table with 800 gb means 800 files of 1 gb. When I use truncate or drop\n> table, xfs that is a log based filesystem, will write lots of data in its\n> log and this is the problem. The problem is not postgres, it is the way\n> that xfs works with big files , or being more clear, the way that it\n> handles lots of files.\n\nWhy is the runtime of a DROP TABLE command important? Is anything\nwaiting for it?\n\nChristoph\n\n\n",
"msg_date": "Thu, 19 Sep 2019 19:27:13 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete huge Table under XFS"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 07:00:01PM +0200, Joao Junior wrote:\n>A table with 800 gb means 800 files of 1 gb. When I use truncate or drop\n>table, xfs that is a log based filesystem, will write lots of data in its\n>log and this is the problem. The problem is not postgres, it is the way\n>that xfs works with big files , or being more clear, the way that it\n>handles lots of files.\n>\n\nI'm a bit skeptical about this explanation. Yes, XFS has journalling,\nbut only for metadata - and I have a hard time believing deleting 800\nfiles (or a small multiple of that) would write \"lots of data\" into the\njornal, and noticeable performance issues. I wonder how you concluded\nthis is actually the problem.\n\nThat being said, TRUNCATE is unlikely to perform better than DROP,\nbecause it also deletes all the files at once. What you might try is\ndropping the indexes one by one, and then the table. That should delete\nfiles in smaller chunks.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 6 Oct 2019 22:54:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete huge Table under XFS"
}
] |
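A minimal sketch of Tomas's suggestion to delete the on-disk files in smaller steps by dropping the indexes one at a time before dropping the table (object names are hypothetical):

-- Each statement runs in its own transaction (autocommit), so the ~1 GB
-- segment files of each index are unlinked separately rather than all at once.
DROP INDEX big_table_idx1;
DROP INDEX big_table_idx2;
DROP INDEX big_table_idx3;

-- Finally drop the table itself; only the heap's segment files remain to unlink.
DROP TABLE big_table;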
[
{
"msg_contents": "Hey,\nI got the following partitions structure in pg12 beta 3 version :\n\npostgres=# \\d+ students\n Partitioned table \"public.students\"\n Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n--------+---------+-----------+----------+---------+----------+--------------+-------------\n id | integer | | not null | | plain |\n |\n name | text | | | | extended |\n |\n class | integer | | | | plain |\n |\nPartition key: HASH (id)\nIndexes:\n \"students_pkey\" PRIMARY KEY, btree (id)\nForeign-key constraints:\n \"students_class_fkey\" FOREIGN KEY (class) REFERENCES class(id)\nPartitions: students_00 FOR VALUES WITH (modulus 20, remainder 0),\n students_01 FOR VALUES WITH (modulus 20, remainder 1),\n students_02 FOR VALUES WITH (modulus 20, remainder 2),\n students_03 FOR VALUES WITH (modulus 20, remainder 3),\n students_04 FOR VALUES WITH (modulus 20, remainder 4)\n\npostgres=# insert into students values(20,'a',1);\nERROR: no partition of relation \"students\" found for row\nDETAIL: Partition key of the failing row contains (id) = (20).\n\n\nI'm trying to insert a few rows but some of them fail on the following\nerror : no partition of relation \"sutdents\" found for row ...\n\nfor example :\npostgres=# insert into students values(20,'a',1);\nERROR: no partition of relation \"students\" found for row\nDETAIL: Partition key of the failing row contains (id) = (20).\npostgres=# insert into students values(2,'a',1);\nERROR: no partition of relation \"students\" found for row\nDETAIL: Partition key of the failing row contains (id) = (2).\npostgres=# insert into students values(1,'a',1);\nERROR: duplicate key value violates unique constraint \"students_00_pkey\"\nDETAIL: Key (id)=(1) already exists.\npostgres=# insert into students values(2,'a',1);\nERROR: no partition of relation \"students\" found for row\nDETAIL: Partition key of the failing row contains (id) = (2).\npostgres=# insert into students values(3,'a',1);\nINSERT 0 1\npostgres=# insert into students values(4,'a',1);\nERROR: no partition of relation \"students\" found for row\nDETAIL: Partition key of the failing row contains (id) = (4).\n\nThe current content of the table :\npostgres=# select * from students;\n id | name | class\n----+------+-------\n 1 | a | 1\n 3 | a | 1\n(2 rows)\n\nwhat am I missing ?\n\nHey,I got the following partitions structure in pg12 beta 3 version : postgres=# \\d+ students Partitioned table \"public.students\" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description--------+---------+-----------+----------+---------+----------+--------------+------------- id | integer | | not null | | plain | | name | text | | | | extended | | class | integer | | | | plain | |Partition key: HASH (id)Indexes: \"students_pkey\" PRIMARY KEY, btree (id)Foreign-key constraints: \"students_class_fkey\" FOREIGN KEY (class) REFERENCES class(id)Partitions: students_00 FOR VALUES WITH (modulus 20, remainder 0), students_01 FOR VALUES WITH (modulus 20, remainder 1), students_02 FOR VALUES WITH (modulus 20, remainder 2), students_03 FOR VALUES WITH (modulus 20, remainder 3), students_04 FOR VALUES WITH (modulus 20, remainder 4)postgres=# insert into students values(20,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (20).I'm trying to insert a few rows but some of them fail on the following error : no partition of relation \"sutdents\" found for row ...for example : postgres=# insert 
into students values(20,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (20).postgres=# insert into students values(2,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (2).postgres=# insert into students values(1,'a',1);ERROR: duplicate key value violates unique constraint \"students_00_pkey\"DETAIL: Key (id)=(1) already exists.postgres=# insert into students values(2,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (2).postgres=# insert into students values(3,'a',1);INSERT 0 1postgres=# insert into students values(4,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (4).The current content of the table : postgres=# select * from students; id | name | class----+------+------- 1 | a | 1 3 | a | 1(2 rows)what am I missing ?",
"msg_date": "Mon, 23 Sep 2019 13:59:40 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg12 partitions question"
},
{
"msg_contents": "I understood my problem. thanks.\n\nבתאריך יום ב׳, 23 בספט׳ 2019 ב-13:59 מאת Mariel Cherkassky <\[email protected]>:\n\n> Hey,\n> I got the following partitions structure in pg12 beta 3 version :\n>\n> postgres=# \\d+ students\n> Partitioned table \"public.students\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats\n> target | Description\n>\n> --------+---------+-----------+----------+---------+----------+--------------+-------------\n> id | integer | | not null | | plain |\n> |\n> name | text | | | | extended |\n> |\n> class | integer | | | | plain |\n> |\n> Partition key: HASH (id)\n> Indexes:\n> \"students_pkey\" PRIMARY KEY, btree (id)\n> Foreign-key constraints:\n> \"students_class_fkey\" FOREIGN KEY (class) REFERENCES class(id)\n> Partitions: students_00 FOR VALUES WITH (modulus 20, remainder 0),\n> students_01 FOR VALUES WITH (modulus 20, remainder 1),\n> students_02 FOR VALUES WITH (modulus 20, remainder 2),\n> students_03 FOR VALUES WITH (modulus 20, remainder 3),\n> students_04 FOR VALUES WITH (modulus 20, remainder 4)\n>\n> postgres=# insert into students values(20,'a',1);\n> ERROR: no partition of relation \"students\" found for row\n> DETAIL: Partition key of the failing row contains (id) = (20).\n>\n>\n> I'm trying to insert a few rows but some of them fail on the following\n> error : no partition of relation \"sutdents\" found for row ...\n>\n> for example :\n> postgres=# insert into students values(20,'a',1);\n> ERROR: no partition of relation \"students\" found for row\n> DETAIL: Partition key of the failing row contains (id) = (20).\n> postgres=# insert into students values(2,'a',1);\n> ERROR: no partition of relation \"students\" found for row\n> DETAIL: Partition key of the failing row contains (id) = (2).\n> postgres=# insert into students values(1,'a',1);\n> ERROR: duplicate key value violates unique constraint \"students_00_pkey\"\n> DETAIL: Key (id)=(1) already exists.\n> postgres=# insert into students values(2,'a',1);\n> ERROR: no partition of relation \"students\" found for row\n> DETAIL: Partition key of the failing row contains (id) = (2).\n> postgres=# insert into students values(3,'a',1);\n> INSERT 0 1\n> postgres=# insert into students values(4,'a',1);\n> ERROR: no partition of relation \"students\" found for row\n> DETAIL: Partition key of the failing row contains (id) = (4).\n>\n> The current content of the table :\n> postgres=# select * from students;\n> id | name | class\n> ----+------+-------\n> 1 | a | 1\n> 3 | a | 1\n> (2 rows)\n>\n> what am I missing ?\n>\n>\n>\n\nI understood my problem. 
thanks.בתאריך יום ב׳, 23 בספט׳ 2019 ב-13:59 מאת Mariel Cherkassky <[email protected]>:Hey,I got the following partitions structure in pg12 beta 3 version : postgres=# \\d+ students Partitioned table \"public.students\" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description--------+---------+-----------+----------+---------+----------+--------------+------------- id | integer | | not null | | plain | | name | text | | | | extended | | class | integer | | | | plain | |Partition key: HASH (id)Indexes: \"students_pkey\" PRIMARY KEY, btree (id)Foreign-key constraints: \"students_class_fkey\" FOREIGN KEY (class) REFERENCES class(id)Partitions: students_00 FOR VALUES WITH (modulus 20, remainder 0), students_01 FOR VALUES WITH (modulus 20, remainder 1), students_02 FOR VALUES WITH (modulus 20, remainder 2), students_03 FOR VALUES WITH (modulus 20, remainder 3), students_04 FOR VALUES WITH (modulus 20, remainder 4)postgres=# insert into students values(20,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (20).I'm trying to insert a few rows but some of them fail on the following error : no partition of relation \"sutdents\" found for row ...for example : postgres=# insert into students values(20,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (20).postgres=# insert into students values(2,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (2).postgres=# insert into students values(1,'a',1);ERROR: duplicate key value violates unique constraint \"students_00_pkey\"DETAIL: Key (id)=(1) already exists.postgres=# insert into students values(2,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (2).postgres=# insert into students values(3,'a',1);INSERT 0 1postgres=# insert into students values(4,'a',1);ERROR: no partition of relation \"students\" found for rowDETAIL: Partition key of the failing row contains (id) = (4).The current content of the table : postgres=# select * from students; id | name | class----+------+------- 1 | a | 1 3 | a | 1(2 rows)what am I missing ?",
"msg_date": "Mon, 23 Sep 2019 14:02:10 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg12 partitions question"
}
] |
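Mariel doesn't spell out the fix, but from the \d+ output the likely cause is that only 5 of the 20 declared hash remainders have partitions, so any row whose key hashes to remainders 5-19 has nowhere to go. A sketch of creating the missing partitions (untested; names follow the thread's convention):

DO $$
BEGIN
    FOR r IN 5..19 LOOP
        EXECUTE format(
            'CREATE TABLE students_%s PARTITION OF students
                 FOR VALUES WITH (MODULUS 20, REMAINDER %s)',
            lpad(r::text, 2, '0'), r);
    END LOOP;
END $$;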
[
{
"msg_contents": "Hi!\n\nRecently I've been looking for bloat in my databases and found a query \nto show which tables are more bloated and by how much.\n\nThis is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\nAnd this is with v11: https://explain.depesz.com/s/diXY\n\nBoth databases have approx. the same size and have the same schema, but \non v12 I the query takes much longer to run.\n\n\n****\n\n\n\n\n\n\n Hi!\n\n Recently I've been looking for bloat in my databases and found a\n query to show which tables are more bloated and by how much.\n\n This is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\n And this is with v11: https://explain.depesz.com/s/diXY\n\n Both databases have approx. the same size and have the same schema,\n but on v12 I the query takes much longer to run.",
"msg_date": "Mon, 23 Sep 2019 15:42:05 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query on V12."
},
{
"msg_contents": "Hi,\n\nCan you check by vacuum analyze the database. And run the query.\n\n\n**Remember don't use Vacuum full.\n\nOn Tue, 24 Sep 2019, 12:07 am Luís Roberto Weck, <\[email protected]> wrote:\n\n> Hi!\n>\n> Recently I've been looking for bloat in my databases and found a query to\n> show which tables are more bloated and by how much.\n>\n> This is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\n> And this is with v11: https://explain.depesz.com/s/diXY\n>\n> Both databases have approx. the same size and have the same schema, but on\n> v12 I the query takes much longer to run.\n>\n>\n>\n\nHi,Can you check by vacuum analyze the database. And run the query.**Remember don't use Vacuum full.On Tue, 24 Sep 2019, 12:07 am Luís Roberto Weck, <[email protected]> wrote:\n\n Hi!\n\n Recently I've been looking for bloat in my databases and found a\n query to show which tables are more bloated and by how much.\n\n This is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\n And this is with v11: https://explain.depesz.com/s/diXY\n\n Both databases have approx. the same size and have the same schema,\n but on v12 I the query takes much longer to run.",
"msg_date": "Tue, 24 Sep 2019 00:13:12 +0530",
"msg_from": "nikhil raj <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on V12."
},
{
"msg_contents": "Em 23/09/2019 15:43, nikhil raj escreveu:\n> Hi,\n>\n> Can you check by vacuum analyze the database. And run the query.\n>\n>\n> **Remember don't use Vacuum full.\n>\n> On Tue, 24 Sep 2019, 12:07 am Luís Roberto Weck, \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi!\n>\n> Recently I've been looking for bloat in my databases and found a\n> query to show which tables are more bloated and by how much.\n>\n> This is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\n> And this is with v11: https://explain.depesz.com/s/diXY\n>\n> Both databases have approx. the same size and have the same\n> schema, but on v12 I the query takes much longer to run.\n>\n>\nHi!\n\nThanks for the reply!\n\nHere's the plan after running vacuum analyze: \nhttps://explain.depesz.com/s/lhcl\n\nThere was no difference in execution time.\n\n\n\n\n\n\n Em 23/09/2019 15:43, nikhil raj escreveu:\n\n\nHi,\n \n\nCan you check by vacuum analyze the database.\n And run the query.\n\n\n\n\n**Remember don't use Vacuum full.\n\n\n\nOn Tue, 24 Sep 2019, 12:07 am\n Luís Roberto Weck, <[email protected]>\n wrote:\n\n\n Hi!\n\n Recently I've been looking for bloat in my databases and\n found a query to show which tables are more bloated and by\n how much.\n\n This is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\n And this is with v11: https://explain.depesz.com/s/diXY\n\n Both databases have approx. the same size and have the same\n schema, but on v12 I the query takes much longer to run.\n\n\n\n\n\n\n Hi!\n\n Thanks for the reply!\n\n Here's the plan after running vacuum analyze: https://explain.depesz.com/s/lhcl\n\n There was no difference in execution time.",
"msg_date": "Mon, 23 Sep 2019 16:03:54 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query on V12."
},
{
"msg_contents": "Em 23/09/2019 16:03, Luís Roberto Weck escreveu:\n> Em 23/09/2019 15:43, nikhil raj escreveu:\n>> Hi,\n>>\n>> Can you check by vacuum analyze the database. And run the query.\n>>\n>>\n>> **Remember don't use Vacuum full.\n>>\n>> On Tue, 24 Sep 2019, 12:07 am Luís Roberto Weck, \n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>> Hi!\n>>\n>> Recently I've been looking for bloat in my databases and found a\n>> query to show which tables are more bloated and by how much.\n>>\n>> This is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\n>> And this is with v11: https://explain.depesz.com/s/diXY\n>>\n>> Both databases have approx. the same size and have the same\n>> schema, but on v12 I the query takes much longer to run.\n>>\n>>\n> Hi!\n>\n> Thanks for the reply!\n>\n> Here's the plan after running vacuum analyze: \n> https://explain.depesz.com/s/lhcl\n>\n> There was no difference in execution time.\n\nThis is the query that is actually slow:\n\n-- EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nSELECT table_schema, table_name,\n n_live_tup::numeric as est_rows,\n pg_table_size(relid)::numeric as table_size\n FROM information_schema.columns\n JOIN pg_stat_user_tables as psut ON table_schema = \npsut.schemanameAND table_name = psut.relname\n LEFT JOIN pg_statsON table_schema = pg_stats.schemanameAND \ntable_name = pg_stats.tablenameAND column_name = attname\n WHERE attname IS NULL\n AND table_schema NOT IN ('pg_catalog', 'information_schema')\n GROUP BY table_schema, table_name, relid, n_live_tup\n\nIf I turn the left join to a inner join, the query runs very fast.\n\nPlans:\n LEFT JOIN: https://explain.depesz.com/s/i88x\n INNER JOIN: https://explain.depesz.com/s/ciSu\n\nOfcourse, that's not what the full query needs\n\n\n\n\n\n\n Em 23/09/2019 16:03, Luís Roberto Weck escreveu:\n\n\n Em 23/09/2019 15:43, nikhil raj escreveu:\n\n\nHi,\n \n\nCan you check by vacuum analyze the database.\n And run the query.\n\n\n\n\n**Remember don't use Vacuum full.\n\n\n\nOn Tue, 24 Sep 2019, 12:07\n am Luís Roberto Weck, <[email protected]>\n wrote:\n\n\n Hi!\n\n Recently I've been looking for bloat in my databases and\n found a query to show which tables are more bloated and by\n how much.\n\n This is the explain plan on v12.3: https://explain.depesz.com/s/8dW8C\n And this is with v11: https://explain.depesz.com/s/diXY\n\n Both databases have approx. the same size and have the\n same schema, but on v12 I the query takes much longer to\n run.\n\n\n\n\n\n\n Hi!\n\n Thanks for the reply!\n\n Here's the plan after running vacuum analyze: https://explain.depesz.com/s/lhcl\n\n There was no difference in execution time.\n\n\n This is the query that is actually slow:\n\n-- EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nSELECT table_schema, table_name, \n n_live_tup::numeric as est_rows,\n pg_table_size(relid)::numeric as table_size\n FROM information_schema.columns\n JOIN pg_stat_user_tables as psut ON\n table_schema = psut.schemaname AND table_name =\n psut.relname\n LEFT JOIN pg_stats ON\n table_schema = pg_stats.schemaname AND\n table_name = pg_stats.tablename AND column_name\n = attname \n WHERE attname IS NULL\n AND table_schema NOT IN ('pg_catalog',\n 'information_schema')\n GROUP BY table_schema, table_name, relid, n_live_tup\n\nIf I turn the left join to a inner join, the query runs very\n fast.\n\n Plans:\n LEFT JOIN: https://explain.depesz.com/s/i88x\n INNER JOIN: https://explain.depesz.com/s/ciSu\n\nOfcourse, that's not what the full query needs",
"msg_date": "Mon, 23 Sep 2019 16:12:02 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query on V12."
},
{
"msg_contents": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]> writes:\n> This is the query that is actually slow:\n\n> -- EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT table_schema, table_name,\n> n_live_tup::numeric as est_rows,\n> pg_table_size(relid)::numeric as table_size\n> FROM information_schema.columns\n> JOIN pg_stat_user_tables as psut ON table_schema = \n> psut.schemanameAND table_name = psut.relname\n> LEFT JOIN pg_statsON table_schema = pg_stats.schemanameAND \n> table_name = pg_stats.tablenameAND column_name = attname\n> WHERE attname IS NULL\n> AND table_schema NOT IN ('pg_catalog', 'information_schema')\n> GROUP BY table_schema, table_name, relid, n_live_tup\n\nAs a rule of thumb, mixing information_schema views and native\nPG catalog accesses in one query is a Bad Idea (TM). There are\na number of reasons for this, some of which have been alleviated\nas of v12, but it's still not going to be something you really\nwant to do if you have an alternative. I'd try replacing the\nuse of information_schema.columns with something like\n\n (pg_class c join pg_attribute a on c.oid = a.attrelid\n and a.attnum > 0 and not a.attisdropped)\n\n(Hm, I guess you also need to join to pg_namespace to get the\nschema name.) You could simplify the join condition with psut\nto be c.oid = psut.relid, though you're still stuck with doing\nschemaname+tablename comparison to join to pg_stats.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Sep 2019 15:44:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query on V12."
},
{
"msg_contents": "Em 23/09/2019 16:44, Tom Lane escreveu:\n> =?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]> writes:\n>> This is the query that is actually slow:\n>> -- EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n>> SELECT table_schema, table_name,\n>> n_live_tup::numeric as est_rows,\n>> pg_table_size(relid)::numeric as table_size\n>> FROM information_schema.columns\n>> JOIN pg_stat_user_tables as psut ON table_schema =\n>> psut.schemanameAND table_name = psut.relname\n>> LEFT JOIN pg_statsON table_schema = pg_stats.schemanameAND\n>> table_name = pg_stats.tablenameAND column_name = attname\n>> WHERE attname IS NULL\n>> AND table_schema NOT IN ('pg_catalog', 'information_schema')\n>> GROUP BY table_schema, table_name, relid, n_live_tup\n> As a rule of thumb, mixing information_schema views and native\n> PG catalog accesses in one query is a Bad Idea (TM). There are\n> a number of reasons for this, some of which have been alleviated\n> as of v12, but it's still not going to be something you really\n> want to do if you have an alternative. I'd try replacing the\n> use of information_schema.columns with something like\n>\n> (pg_class c join pg_attribute a on c.oid = a.attrelid\n> and a.attnum > 0 and not a.attisdropped)\n>\n> (Hm, I guess you also need to join to pg_namespace to get the\n> schema name.) You could simplify the join condition with psut\n> to be c.oid = psut.relid, though you're still stuck with doing\n> schemaname+tablename comparison to join to pg_stats.\n>\n> \t\t\tregards, tom lane\n\nThanks for the reply, but performance is still pretty bad:\n\nRegular query: https://explain.depesz.com/s/CiPS\nTom's optimization: https://explain.depesz.com/s/kKE0\n\nSure, 37 seconds down to 8 seems pretty good, but on V11:\n\nRegular query: https://explain.depesz.com/s/MMM9\nTom's optimization: https://explain.depesz.com/s/v2M8\n\n\n\n",
"msg_date": "Mon, 23 Sep 2019 17:23:10 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query on V12."
}
] |
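Following Tom's hint, one possible rewrite of the bloat query that avoids information_schema.columns entirely looks roughly like this (a sketch, not verified against the poster's data):

SELECT n.nspname AS table_schema,
       c.relname AS table_name,
       psut.n_live_tup::numeric      AS est_rows,
       pg_table_size(c.oid)::numeric AS table_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN pg_attribute a ON a.attrelid = c.oid
                     AND a.attnum > 0
                     AND NOT a.attisdropped
  JOIN pg_stat_user_tables psut ON psut.relid = c.oid
  LEFT JOIN pg_stats s ON s.schemaname = n.nspname
                      AND s.tablename  = c.relname
                      AND s.attname    = a.attname
 WHERE s.attname IS NULL
   AND n.nspname NOT IN ('pg_catalog', 'information_schema')
 GROUP BY n.nspname, c.relname, c.oid, psut.n_live_tup;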
[
{
"msg_contents": "Hi team,\n\nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n\nBut I am not sure how we can do that with docker.\n\nThanks,\nDaulat\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi team,\n \nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n \nBut I am not sure how we can do that with docker. \n \nThanks,\nDaulat",
"msg_date": "Tue, 24 Sep 2019 09:17:49 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Monitor Postgres database status on Docker"
},
{
"msg_contents": "Hi,\n\nI am not from PostgreSQL team.\nJust let you know that when we run PostgreSQL in Kubernetes, we use below command for liveness check.\n\npg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n\nBRs,\nFan Liu\n\n\nFrom: Daulat Ram <[email protected]>\nSent: Tuesday, September 24, 2019 5:18 PM\nTo: [email protected]\nSubject: Monitor Postgres database status on Docker\n\nHi team,\n\nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n\nBut I am not sure how we can do that with docker.\n\nThanks,\nDaulat\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI am not from PostgreSQL team. \nJust let you know that when we run PostgreSQL in Kubernetes, we use below command for liveness check.\n \npg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n \n\nBRs,\nFan Liu\n \n \n\n\nFrom: Daulat Ram <[email protected]> \nSent: Tuesday, September 24, 2019 5:18 PM\nTo: [email protected]\nSubject: Monitor Postgres database status on Docker\n\n\n \nHi team,\n \nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n \nBut I am not sure how we can do that with docker. \n \nThanks,\nDaulat",
"msg_date": "Tue, 24 Sep 2019 09:31:45 +0000",
"msg_from": "Fan Liu <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Monitor Postgres database status on Docker"
},
{
"msg_contents": "Thanks but how we can use it for docker container.\n\nRegards,\nDaulat\n\nFrom: Fan Liu <[email protected]>\nSent: Tuesday, September 24, 2019 3:02 PM\nTo: Daulat Ram <[email protected]>; [email protected]\nSubject: RE: Monitor Postgres database status on Docker\n\nHi,\n\nI am not from PostgreSQL team.\nJust let you know that when we run PostgreSQL in Kubernetes, we use below command for liveness check.\n\npg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n\n\nBRs,\nFan Liu\n\n\nFrom: Daulat Ram <[email protected]<mailto:[email protected]>>\nSent: Tuesday, September 24, 2019 5:18 PM\nTo: [email protected]<mailto:[email protected]>\nSubject: Monitor Postgres database status on Docker\n\nHi team,\n\nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n\nBut I am not sure how we can do that with docker.\n\nThanks,\nDaulat\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThanks but how we can use it for docker container.\n \nRegards,\nDaulat\n \n\n\nFrom: Fan Liu <[email protected]> \nSent: Tuesday, September 24, 2019 3:02 PM\nTo: Daulat Ram <[email protected]>; [email protected]\nSubject: RE: Monitor Postgres database status on Docker\n\n\n \nHi,\n \nI am not from PostgreSQL team. \nJust let you know that when we run PostgreSQL in Kubernetes, we use below command for liveness check.\n \npg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n \n \nBRs,\nFan Liu\n \n \n\n\nFrom: Daulat Ram <[email protected]>\n\nSent: Tuesday, September 24, 2019 5:18 PM\nTo: [email protected]\nSubject: Monitor Postgres database status on Docker\n\n\n \nHi team,\n \nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n \nBut I am not sure how we can do that with docker. \n \nThanks,\nDaulat",
"msg_date": "Tue, 24 Sep 2019 10:05:25 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Monitor Postgres database status on Docker"
},
{
"msg_contents": "Hi Fan Liu,\n\nI am able to make the connection to the Postgres database created in docker container via psql from postgres10 client but not able to connect through pg_isready.\n\npsql -c 'select count (*) from pg_stat_activity' -h localhost -p 5432 -U postgres -W\nPassword for user postgres:\ncount\n-------\n 7\n\nGive me suggestions.\n Thanks,\n\n\nFrom: Daulat Ram\nSent: Tuesday, September 24, 2019 3:35 PM\nTo: Fan Liu <[email protected]>; [email protected]\nSubject: RE: Monitor Postgres database status on Docker\n\nThanks but how we can use it for docker container.\n\nRegards,\nDaulat\n\nFrom: Fan Liu <[email protected]<mailto:[email protected]>>\nSent: Tuesday, September 24, 2019 3:02 PM\nTo: Daulat Ram <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>\nSubject: RE: Monitor Postgres database status on Docker\n\nHi,\n\nI am not from PostgreSQL team.\nJust let you know that when we run PostgreSQL in Kubernetes, we use below command for liveness check.\n\npg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n\n\nBRs,\nFan Liu\n\n\nFrom: Daulat Ram <[email protected]<mailto:[email protected]>>\nSent: Tuesday, September 24, 2019 5:18 PM\nTo: [email protected]<mailto:[email protected]>\nSubject: Monitor Postgres database status on Docker\n\nHi team,\n\nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n\nBut I am not sure how we can do that with docker.\n\nThanks,\nDaulat\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Fan Liu,\n \nI am able to make the connection to the Postgres database created in docker container via psql from postgres10 client but not able to connect through pg_isready.\n \npsql -c 'select count (*) from pg_stat_activity' -h localhost -p 5432 -U postgres -W\nPassword for user postgres:\ncount\n-------\n 7\n \nGive me suggestions.\n\n Thanks,\n \n \n\n\nFrom: Daulat Ram \nSent: Tuesday, September 24, 2019 3:35 PM\nTo: Fan Liu <[email protected]>; [email protected]\nSubject: RE: Monitor Postgres database status on Docker\n\n\n \nThanks but how we can use it for docker container.\n \nRegards,\nDaulat\n \n\n\nFrom: Fan Liu <[email protected]>\n\nSent: Tuesday, September 24, 2019 3:02 PM\nTo: Daulat Ram <[email protected]>;\[email protected]\nSubject: RE: Monitor Postgres database status on Docker\n\n\n \nHi,\n \nI am not from PostgreSQL team. \nJust let you know that when we run PostgreSQL in Kubernetes, we use below command for liveness check.\n \npg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n \n \nBRs,\nFan Liu\n \n \n\n\nFrom: Daulat Ram <[email protected]>\n\nSent: Tuesday, September 24, 2019 5:18 PM\nTo: [email protected]\nSubject: Monitor Postgres database status on Docker\n\n\n \nHi team,\n \nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n \nBut I am not sure how we can do that with docker. \n \nThanks,\nDaulat",
"msg_date": "Fri, 27 Sep 2019 05:25:28 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Monitor Postgres database status on Docker"
},
{
"msg_contents": "Hi Daulat,\n\nEl mar., 24 de septiembre de 2019 07:05, Daulat Ram <\[email protected]> escribió:\n\n> Thanks but how we can use it for docker container.\n>\n\nYou have basically 2 ways:\n\n1) Publish the port 5432 on the container and access it from the host, or\n\n2) Use \"docker exec\" to run the commands natively inside the container.\n\n\n\n> Regards,\n>\n> Daulat\n>\n>\n>\n> *From:* Fan Liu <[email protected]>\n> *Sent:* Tuesday, September 24, 2019 3:02 PM\n> *To:* Daulat Ram <[email protected]>;\n> [email protected]\n> *Subject:* RE: Monitor Postgres database status on Docker\n>\n>\n>\n> Hi,\n>\n>\n>\n> I am not from PostgreSQL team.\n>\n> Just let you know that when we run PostgreSQL in Kubernetes, we use below\n> command for liveness check.\n>\n>\n>\n> pg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n>\n>\n>\n>\n>\n> BRs,\n>\n> Fan Liu\n>\n>\n>\n>\n>\n> *From:* Daulat Ram <[email protected]>\n> *Sent:* Tuesday, September 24, 2019 5:18 PM\n> *To:* [email protected]\n> *Subject:* Monitor Postgres database status on Docker\n>\n>\n>\n> Hi team,\n>\n>\n>\n> We want to check the postgres database status on docker container just\n> like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n>\n>\n>\n> But I am not sure how we can do that with docker.\n>\n>\n>\n> Thanks,\n>\n> Daulat\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\nHi Daulat,El mar., 24 de septiembre de 2019 07:05, Daulat Ram <[email protected]> escribió:\n\n\nThanks but how we can use it for docker container.You have basically 2 ways:1) Publish the port 5432 on the container and access it from the host, or2) Use \"docker exec\" to run the commands natively inside the container. \nRegards,\nDaulat\n \n\n\nFrom: Fan Liu <[email protected]> \nSent: Tuesday, September 24, 2019 3:02 PM\nTo: Daulat Ram <[email protected]>; [email protected]\nSubject: RE: Monitor Postgres database status on Docker\n\n\n \nHi,\n \nI am not from PostgreSQL team. \nJust let you know that when we run PostgreSQL in Kubernetes, we use below command for liveness check.\n \npg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME\n \n \nBRs,\nFan Liu\n \n \n\n\nFrom: Daulat Ram <[email protected]>\n\nSent: Tuesday, September 24, 2019 5:18 PM\nTo: [email protected]\nSubject: Monitor Postgres database status on Docker\n\n\n \nHi team,\n \nWe want to check the postgres database status on docker container just like we monitor Postgres (up / down) via /etc/init.d/postgresql status\n \nBut I am not sure how we can do that with docker. \n \nThanks,\nDaulat",
"msg_date": "Fri, 27 Sep 2019 09:36:13 -0300",
"msg_from": "Olivier Gautherot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitor Postgres database status on Docker"
}
] |
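Once the container's port is published or a psql session is opened with docker exec, as Olivier describes, a simple SQL probe can serve as the up/down check; this particular query is only a suggestion, not something from the thread:

-- Succeeds only if the server is accepting connections and executing queries.
SELECT now() AS checked_at,
       pg_is_in_recovery() AS in_recovery,
       current_setting('server_version') AS server_version;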
[
{
"msg_contents": "Hey,\nI'm handling a very weird situation. I tried to check which sequences\nbelong to a specific table (table_A) with the following query :\n WITH\nsequences AS\n(\nSELECT oid,relname FROM pg_class WHERE relkind = 'S'\n)\nSELECT s.oid as seq_oid,d.objid as objid,d.refobjid\nFROM pg_depend d,sequences s\nwhere\ns.oid = d.objid\nand d.deptype = 'a' and d.refobjid::regclass::text='table_A';\n seq_oid | objid | refobjid\n---------+-------+----------\n 17188 | 17188 | 17190\n 16566 | 16566 | 17190\n 16704 | 16704 | 17190\n 16704 | 16704 | 17190\n 16704 | 16704 | 17190\n(5 rows)\n\n17188 - The sequence of table_A(id)\n16566 and 16704 are sequences that belong to different tables and arent\nused by table_A.\n16566 - The sequence of table_c(id)\n16704 - The sequence of tableB(id)\n\n\nIn all my environments I got exactly one rows (one seq owned by the id\ncolumn (pk) of the table). In one specific environment I got a weird\noutput(The one u see here). The output indicates that 2 other sequences\nbelongs to the current table when one of them have two rows that indicate\nit.\n\nThe next step was checking why it happened. I run the following query :\nselect objid,refobjid::regclass from pg_depend where objid=16704;\n objid | refobjid\n-------+-------------------------\n 16704 | 2200\n 16704 | table_A\n 16704 | table_A\n 16704 | table_A\n 16704 | table_B\n(5 rows)\n\nfor unclear reason, both table A and table B depends on the sequence. When\nI check table_A I dont see any column that might use it..\n\nI also checked who else depends on the 16556 objid :\n select objid,refobjid::regclass from pg_depend where objid=16566;\n objid | refobjid\n-------+-----------------------\n 16566 | 2200\n 16566 | table_C\n 16566 | table_A\n 16566 | table_A_seq\n(4 rows)\n\nany idea how to handle this issue ? I checked this on both pg 9.6/12\nversions and I got the same weird results.\n\nHey,I'm handling a very weird situation. I tried to check which sequences belong to a specific table (table_A) with the following query : WITHsequences AS(SELECT oid,relname FROM pg_class WHERE relkind = 'S')SELECT s.oid as seq_oid,d.objid as objid,d.refobjidFROM pg_depend d,sequences swheres.oid = d.objidand d.deptype = 'a' and d.refobjid::regclass::text='table_A'; seq_oid | objid | refobjid---------+-------+---------- 17188 | 17188 | 17190 16566 | 16566 | 17190 16704 | 16704 | 17190 16704 | 16704 | 17190 16704 | 16704 | 17190(5 rows)17188 - The sequence of table_A(id)16566 and 16704 are sequences that belong to different tables and arent used by table_A.16566 - The sequence of table_c(id)16704 - The sequence of tableB(id)In all my environments I got exactly one rows (one seq owned by the id column (pk) of the table). In one specific environment I got a weird output(The one u see here). The output indicates that 2 other sequences belongs to the current table when one of them have two rows that indicate it.The next step was checking why it happened. I run the following query : select objid,refobjid::regclass from pg_depend where objid=16704; objid | refobjid-------+------------------------- 16704 | 2200 16704 | table_A 16704 | table_A 16704 | table_A 16704 | table_B(5 rows)for unclear reason, both table A and table B depends on the sequence. 
When I check table_A I dont see any column that might use it..I also checked who else depends on the 16556 objid : select objid,refobjid::regclass from pg_depend where objid=16566; objid | refobjid-------+----------------------- 16566 | 2200 16566 | table_C 16566 | table_A 16566 | table_A_seq(4 rows)any idea how to handle this issue ? I checked this on both pg 9.6/12 versions and I got the same weird results.",
"msg_date": "Wed, 25 Sep 2019 15:39:46 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "sequence depends on many tables"
},
{
"msg_contents": "On Wed, 2019-09-25 at 15:39 +0300, Mariel Cherkassky wrote:\n> select objid,refobjid::regclass from pg_depend where objid=16704;\n> objid | refobjid\n> -------+-------------------------\n> 16704 | 2200\n> 16704 | table_A\n> 16704 | table_A\n> 16704 | table_A\n> 16704 | table_B\n> (5 rows)\n> \n> for unclear reason, both table A and table B depends on the sequence.\n> When I check table_A I dont see any column that might use it..\n\nCould you select all rows from pg_depend so that it is easier to see\nwhat is going on?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 25 Sep 2019 21:19:22 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence depends on many tables"
},
{
"msg_contents": ">\n> There are many rows, anything specific u want to see ?\n\nThere are many rows, anything specific u want to see ?",
"msg_date": "Wed, 25 Sep 2019 22:20:21 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequence depends on many tables"
},
{
"msg_contents": "On Wed, 2019-09-25 at 22:20 +0300, Mariel Cherkassky wrote:\n[problems with sequence dependencies]\n> There are many rows, anything specific u want to see ?\n\nSorry, I didn't mean all of pg_depend, but your query\nwith all *columns* of pg_depend.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 25 Sep 2019 21:23:23 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence depends on many tables"
},
{
"msg_contents": "Hey,\nThis is the full output with all the columns :\n WITH\nsequences AS\n(\nSELECT oid,relname FROM pg_class WHERE relkind = 'S'\n)\nSELECT s.oid as seq_oid,d.*\nFROM pg_depend d,sequences s\nwhere\ns.oid = d.objid\nand d.deptype = 'a' and d.refobjid::regclass::text='table_A';\n seq_oid | classid | objid | objsubid | refclassid | refobjid | refobjsubid\n| deptype\n---------+---------+-------+----------+------------+----------+-------------+---------\n 17188 | 1259 | 17188 | 0 | 1259 | 17190 | 1\n| a\n 16566 | 2604 | 16566 | 0 | 1259 | 17190 | 1\n| a\n 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 3\n| a\n 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 5\n| a\n 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 4\n| a\n(5 rows)\n\n\nselect *,refobjid::regclass from pg_depend where objid=16704;\n classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype\n| refobjid\n---------+-------+----------+------------+----------+-------------+---------+-------------------------\n 1259 | 16704 | 0 | 2615 | 2200 | 0 | n\n| 2200\n 2606 | 16704 | 0 | 1259 | 17190 | 3 | a\n| table_A\n 2606 | 16704 | 0 | 1259 | 17190 | 5 | a\n| table_A\n 2606 | 16704 | 0 | 1259 | 17190 | 4 | a\n| table_A\n 1259 | 16704 | 0 | 1259 | 16706 | 1 | a\n| table_B\n(5 rows)\n\n select *,refobjid::regclass from pg_depend where objid=16566;\n classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype\n| refobjid\n---------+-------+----------+------------+----------+-------------+---------+-----------------------\n 1259 | 16566 | 0 | 2615 | 2200 | 0 | n\n| 2200\n 1259 | 16566 | 0 | 1259 | 16568 | 2 | a\n| table_C\n 2604 | 16566 | 0 | 1259 | 17190 | 1 | a\n| table_A\n 2604 | 16566 | 0 | 1259 | 17188 | 0 | n\n| table_A_seq\n(4 rows)\n\nHey,This is the full output with all the columns : WITHsequences AS(SELECT oid,relname FROM pg_class WHERE relkind = 'S')SELECT s.oid as seq_oid,d.*FROM pg_depend d,sequences swheres.oid = d.objidand d.deptype = 'a' and d.refobjid::regclass::text='table_A'; seq_oid | classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype---------+---------+-------+----------+------------+----------+-------------+--------- 17188 | 1259 | 17188 | 0 | 1259 | 17190 | 1 | a 16566 | 2604 | 16566 | 0 | 1259 | 17190 | 1 | a 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 3 | a 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 5 | a 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 4 | a(5 rows)select *,refobjid::regclass from pg_depend where objid=16704; classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype | refobjid---------+-------+----------+------------+----------+-------------+---------+------------------------- 1259 | 16704 | 0 | 2615 | 2200 | 0 | n | 2200 2606 | 16704 | 0 | 1259 | 17190 | 3 | a | table_A 2606 | 16704 | 0 | 1259 | 17190 | 5 | a | table_A 2606 | 16704 | 0 | 1259 | 17190 | 4 | a | table_A 1259 | 16704 | 0 | 1259 | 16706 | 1 | a | table_B(5 rows) select *,refobjid::regclass from pg_depend where objid=16566; classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype | refobjid---------+-------+----------+------------+----------+-------------+---------+----------------------- 1259 | 16566 | 0 | 2615 | 2200 | 0 | n | 2200 1259 | 16566 | 0 | 1259 | 16568 | 2 | a | table_C 2604 | 16566 | 0 | 1259 | 17190 | 1 | a | table_A 2604 | 16566 | 0 | 1259 | 17188 | 0 | n | table_A_seq(4 rows)",
"msg_date": "Sun, 29 Sep 2019 09:05:15 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequence depends on many tables"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> seq_oid | classid | objid | objsubid | refclassid | refobjid | refobjsubid\n> | deptype\n> ---------+---------+-------+----------+------------+----------+-------------+---------\n> 17188 | 1259 | 17188 | 0 | 1259 | 17190 | 1\n> | a\n> 16566 | 2604 | 16566 | 0 | 1259 | 17190 | 1\n> | a\n> 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 3\n> | a\n> 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 5\n> | a\n> 16704 | 2606 | 16704 | 0 | 1259 | 17190 | 4\n> | a\n> (5 rows)\n\nWell, those entries with objid = 16566 and 16704 are not for sequences,\nbecause the classid is wrong: 2604 is pg_attrdef, and 2606 is\npg_constraint, so the second row is for a default expression belonging\nto table 17190 column 1, and the rest are for some kind of constraint\ninvolving columns 3,4,5 (maybe a check constraint?)\n\nIn itself there's nothing wrong with these pg_depend entries, but it\nis odd that you have different objects with identical OIDs. Normally\nI'd only expect that to be possible once the OID counter has wrapped\naround ... but all these OIDs are small, which makes it seem unlikely\nthat you've consumed enough OIDs to reach wraparound. Maybe you had\na system crash, or did something weird with backup/recovery, causing\nthe counter to get reset?\n\nAnyway, the short answer here is that neither objid nor refobjid\nshould be considered sufficient to identify an object by themselves.\nYou need to also check classid (refclassid), because OIDs are only\nguaranteed unique within a given system catalog.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Sep 2019 11:32:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence depends on many tables"
}
] |
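Building on Tom's explanation, a version of the original query that also checks classid (and so cannot confuse a sequence with a default expression or a constraint that happens to share its OID) might look like this sketch:

SELECT d.objid::regclass    AS owned_sequence,
       d.refobjid::regclass AS owning_table,
       a.attname            AS owning_column
  FROM pg_depend d
  JOIN pg_class s     ON s.oid = d.objid
                     AND s.relkind = 'S'           -- the dependent object is a sequence
  JOIN pg_attribute a ON a.attrelid = d.refobjid
                     AND a.attnum   = d.refobjsubid
 WHERE d.classid    = 'pg_class'::regclass         -- objid really is a pg_class OID
   AND d.refclassid = 'pg_class'::regclass
   AND d.deptype    = 'a'
   AND d.refobjid   = 'table_A'::regclass;         -- table name as in the thread; quote if mixed case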
[
{
"msg_contents": "Hi,\nIs this necessary to run analyze on a slave using streaming replication\nafter promotion??\n\nHi,Is this necessary to run analyze on a slave using streaming replication after promotion??",
"msg_date": "Thu, 26 Sep 2019 09:54:40 +0200",
"msg_from": "Joao Junior <[email protected]>",
"msg_from_op": true,
"msg_subject": "Analyze on slave promoted."
},
{
"msg_contents": "Hi\n\nčt 26. 9. 2019 v 9:55 odesílatel Joao Junior <[email protected]> napsal:\n\n> Hi,\n> Is this necessary to run analyze on a slave using streaming replication\n> after promotion??\n>\n\nNo - column statistics are come from master - and are persistent - promote\nchange nothing.\n\nPavel\n\nHičt 26. 9. 2019 v 9:55 odesílatel Joao Junior <[email protected]> napsal:Hi,Is this necessary to run analyze on a slave using streaming replication after promotion??No - column statistics are come from master - and are persistent - promote change nothing. Pavel",
"msg_date": "Thu, 26 Sep 2019 10:31:10 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze on slave promoted."
},
{
"msg_contents": "On 26/09/2019 09:54, Joao Junior wrote:\n> Hi,\n> Is this necessary to run analyze on a slave using streaming\n> replication after promotion??\n>\n\nYes, you should run ANALYZE on all of your tables in all of your\ndatabases after a promotion. The data distribution statistics are\nreplicated, as Pavel mentioned, but other statistics are not. In\nparticular, pg_stat_all_tables.n_dead_tup is not replicated and so\nautovacuum has no idea when it needs to run.\n\n\n\n",
"msg_date": "Fri, 27 Sep 2019 12:41:05 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze on slave promoted."
}
] |
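A minimal way to act on Vik's advice right after promotion is to analyze each database (vacuumdb --all --analyze-only does the same from the shell); for a single database in psql:

-- Refreshes planner statistics and the n_live_tup/n_dead_tup estimates
-- that autovacuum scheduling relies on.
ANALYZE;

-- Then check which tables autovacuum now sees as having work to do:
SELECT relname, last_analyze, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;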
[
{
"msg_contents": "Hi,\n\nI use PG 11.5 into CentOS6 server, with 50 schemas, exactly equals in \ntables structure, and more than 400 tables/schema. Then, there is more \nthan 20000 tables.\n\nI found the discussion in pgsql-general thread:\n\nhttps://www.postgresql.org/message-id/flat/11566.1558463253%40sss.pgh.pa.us#ec144ebcd8a829010fc82a7fe2abfd3f\n\nbut thread was closed.\n\nThen, I sent here in performance list my problem.\n\n-------------------------------------\n\nI changed the original PG view like said in the above thread:\n\nCREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\nSELECT\nP.pubname AS pubname,\nN.nspname AS schemaname,\nC.relname AS tablename\nFROM pg_publication P, pg_class C\nJOIN pg_namespace N ON (N.oid = C.relnamespace),\nLATERAL pg_get_publication_tables(P.pubname)\nWHERE C.oid = pg_get_publication_tables.relid;\n\nbut the problem continues. It is very slow to process the query used by \nreplication system:\n\nSELECT DISTINCT t.schemaname, t.tablename FROM \npg_catalog.pg_publication_tables t WHERE t.pubname IN ('mypubschema');\n\n-------------------------------------\n\nThen, in my case I created a publication for each schema and all tables \nwith the same same of the schema, creating 50 publications.\n\nAfter this, I changed the view above to this:\n\nCREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\nSELECT p.pubname, c.schemaname, c.tablename\nFROM pg_publication p\nJOIN pg_tables c ON p.pubname = c.schemaname;\n\nAnd the query below became very fast:\n\nSELECT DISTINCT t.schemaname, t.tablename FROM \npg_catalog.pg_publication_tables t WHERE t.pubname IN ('mypubschema');\n\nMy problem was solved but I think next version of pg should verify this \nproblem to find a general solution.\n\n\n\n\n\n",
"msg_date": "Thu, 26 Sep 2019 16:37:02 -0400",
"msg_from": "Edilmar Alves <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow pg_publication_tables with many schemas and tables"
},
{
"msg_contents": "Edilmar Alves <[email protected]> writes:\n> I use PG 11.5 into CentOS6 server, with 50 schemas, exactly equals in \n> tables structure, and more than 400 tables/schema. Then, there is more \n> than 20000 tables.\n\nPossibly you should rethink that design, but ...\n\n> I changed the original PG view like said in the above thread:\n> CREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\n> SELECT\n> P.pubname AS pubname,\n> N.nspname AS schemaname,\n> C.relname AS tablename\n> FROM pg_publication P, pg_class C\n> JOIN pg_namespace N ON (N.oid = C.relnamespace),\n> LATERAL pg_get_publication_tables(P.pubname)\n> WHERE C.oid = pg_get_publication_tables.relid;\n> but the problem continues. It is very slow to process the query used by \n> replication system:\n> SELECT DISTINCT t.schemaname, t.tablename FROM \n> pg_catalog.pg_publication_tables t WHERE t.pubname IN ('mypubschema');\n\nWhat do you get from EXPLAIN ANALYZE for that?\n\n> After this, I changed the view above to this:\n> CREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\n> SELECT p.pubname, c.schemaname, c.tablename\n> FROM pg_publication p\n> JOIN pg_tables c ON p.pubname = c.schemaname;\n> And the query below became very fast:\n\nAs a wise man once said, I can make my program arbitrarily fast\nif it doesn't have to give the right answer ... and this query\nobviously doesn't produce the correct answer, except in the\ncontrived special case where the content of a publication is\nexactly the content of a schema. So I don't see what your\npoint is here.\n\nPlease see\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nregarding useful ways to present performance problems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Sep 2019 19:11:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow pg_publication_tables with many schemas and tables"
},
{
"msg_contents": "Hi,\n\nEm 26/09/2019 19:11, Tom Lane escreveu:\n> Edilmar Alves <[email protected]> writes:\n>> I use PG 11.5 into CentOS6 server, with 50 schemas, exactly equals in\n>> tables structure, and more than 400 tables/schema. Then, there is more\n>> than 20000 tables.\n> Possibly you should rethink that design, but ...\n\nMy design is this because I have a system with 50 enterprises using the \nsame server.\n\nBefore each enterprise used a separated database, and my webapp had a \nconnection pool\n\nfor each database. Then, if for example, my connection pool had \nminconn=10 and maxconn=20,\n\nit was totalminconn=500 and totalmaxconn=1000. When I migrated to just \none database and 50\n\nschemas, it was so better to manage just one connection pool, minor \nhardware resource usage.\n\n>\n>> I changed the original PG view like said in the above thread:\n>> CREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\n>> SELECT\n>> P.pubname AS pubname,\n>> N.nspname AS schemaname,\n>> C.relname AS tablename\n>> FROM pg_publication P, pg_class C\n>> JOIN pg_namespace N ON (N.oid = C.relnamespace),\n>> LATERAL pg_get_publication_tables(P.pubname)\n>> WHERE C.oid = pg_get_publication_tables.relid;\n>> but the problem continues. It is very slow to process the query used by\n>> replication system:\n>> SELECT DISTINCT t.schemaname, t.tablename FROM\n>> pg_catalog.pg_publication_tables t WHERE t.pubname IN ('mypubschema');\n> What do you get from EXPLAIN ANALYZE for that?\n\nThe Analyze from original VIEW and the VIEW suggested below\n\nfor PGv12 update have a flow diagram very similar, just one\n\nstep better in the updated version, for my cenario with 50 schemas.\n\n>\n>> After this, I changed the view above to this:\n>> CREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\n>> SELECT p.pubname, c.schemaname, c.tablename\n>> FROM pg_publication p\n>> JOIN pg_tables c ON p.pubname = c.schemaname;\n>> And the query below became very fast:\n> As a wise man once said, I can make my program arbitrarily fast\n> if it doesn't have to give the right answer ... and this query\n> obviously doesn't produce the correct answer, except in the\n> contrived special case where the content of a publication is\n> exactly the content of a schema. So I don't see what your\n> point is here.\n\nI know my VIEW is not a general purpose solution.\n\nI just submitted this message to the group because\n\nin this kind of situation of many schemas and tables/schema,\n\nthe original VIEW and the VIEW below suggested to become\n\nthe new on in PGv12 run very slow.\n\n>\n> Please see\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> regarding useful ways to present performance problems.\n>\n> \t\t\tregards, tom lane\n--",
"msg_date": "Fri, 27 Sep 2019 10:25:24 -0400",
"msg_from": "Edilmar Alves <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow pg_publication_tables with many schemas and tables"
}
] |
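For anyone trying to reproduce this, the measurements Tom is asking for come from running the subscription-side query under EXPLAIN in the publisher database, e.g.:

EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT t.schemaname, t.tablename
  FROM pg_catalog.pg_publication_tables t
 WHERE t.pubname IN ('mypubschema');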
[
{
"msg_contents": "Hi,\n\nAs part of vacuum tuning, We have set the below set of parameters.\n\n\n\n\n\n*> select relname,reloptions, pg_namespace.nspname from pg_class join\npg_namespace on pg_namespace.oid=pg_class.relnamespace where relname\nIN('process_instance') and pg_namespace.nspname='public'; relname |\n\n reloptions\n |\nnspname--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+---------\nprocess_instance\n|\n{autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=20000,autovacuum_vacuum_cost_limit=1000,autovacuum_vacuum_cost_delay=10}\n | public*\n\n\nautovaccumm threshold was set for 20,000. However after the vacuuming, it\nis cleaning up less than 2,000 tuples only. And also vacuuming frequncy was\nincreased as it is becoming eligible for the autovacuuming.\nHowever n_dead_tup value from pg_stat_user_tables was always showing very\nhigh value. Most of the time, it is greater than 100K dead tuples.\n\nOverall, we couldn't able to correlate on why autovacuum was able to\ncleanup only < 2K tuples, even though there are mode dead tuples based on\nthe statistics ? Can you please explain on why we are notcing huge\ndifference and what steps needs to taken to minimize the gap ?\n\n\n*Log message*\n\n* 2019-09-25 00:06:31 UTC::@:[80487]:LOG: automatic vacuum of table\n\"fc_db_web_2.public.*process_instance\n\n\n\n\n*\": index scans: 1 pages: 0 removed, 854445 remain, 0 skipped due to pins,\n774350 skipped frozen tuples: 1376 removed, 16819201 remain, 21 are dead\nbut not yet removable buffer usage: 553118 hits, 9070720 misses, 14175\ndirtied avg read rate: 13.926 MB/s, avg write rate: 0.022 MB/s system\nusage: CPU 44.57s/33.04u sec elapsed 5088.65 sec*\n\n\n*Table Information*\n\n\n\n\n\n\n\n\n\n\n\n\n*SELECT nspname || '.' || relname AS \"relation\",\npg_size_pretty(pg_total_relation_size(C.oid)) AS \"total_size\" FROM\npg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE\nnspname NOT IN ('pg_catalog', 'information_schema') AND C.relkind <>\n'i' AND nspname !~ '^pg_toast' AND relname='process_instance';\nrelation | total_size\n---------------------+------------ public.process_instance | 77 GB*\n\n\n\n*Live and Dead tuples*\n\n\n\n\n\n*select relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables WHERE\nrelname='process_instance'; relname | n_live_tup | n_dead_tup\n--------------+------------+------------ conversation | 16841596 |\n144202*\n\n show track_counts;\n track_counts\n--------------\n on\n\n\n show default_statistics_target;\n default_statistics_target\n---------------------------\n 100\n\n*Version*\n\nPostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3\n20140911 (Red Hat 4.8.3-9), 64-bit\n\n\nThanks in advance.\n\nRegards, Amarendra\n\nHi,As part of vacuum tuning, We have set the below set of parameters. > select relname,reloptions, pg_namespace.nspname from pg_class join pg_namespace on pg_namespace.oid=pg_class.relnamespace where relname IN('process_instance') and pg_namespace.nspname='public'; relname | reloptions | nspname--------------+------------------------------------------------------------------------------------------------------------------------------------------------------+--------- process_instance | {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=20000,autovacuum_vacuum_cost_limit=1000,autovacuum_vacuum_cost_delay=10} | publicautovaccumm threshold was set for 20,000. 
However after the vacuuming, it is cleaning up less than 2,000 tuples only. And also vacuuming frequncy was increased as it is becoming eligible for the autovacuuming. However n_dead_tup value from pg_stat_user_tables was always showing very high value. Most of the time, it is greater than 100K dead tuples. Overall, we couldn't able to correlate on why autovacuum was able to cleanup only < 2K tuples, even though there are mode dead tuples based on the statistics ? Can you please explain on why we are notcing huge difference and what steps needs to taken to minimize the gap ?Log message 2019-09-25 00:06:31 UTC::@:[80487]:LOG: automatic vacuum of table \"fc_db_web_2.public.process_instance\": index scans: 1\tpages: 0 removed, 854445 remain, 0 skipped due to pins, 774350 skipped frozen\ttuples: 1376 removed, 16819201 remain, 21 are dead but not yet removable\tbuffer usage: 553118 hits, 9070720 misses, 14175 dirtied\tavg read rate: 13.926 MB/s, avg write rate: 0.022 MB/s\tsystem usage: CPU 44.57s/33.04u sec elapsed 5088.65 secTable InformationSELECT nspname || '.' || relname AS \"relation\", pg_size_pretty(pg_total_relation_size(C.oid)) AS \"total_size\" FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname NOT IN ('pg_catalog', 'information_schema') AND C.relkind <> 'i' AND nspname !~ '^pg_toast' AND relname='process_instance'; relation | total_size ---------------------+------------ public.process_instance | 77 GBLive and Dead tuplesselect relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables WHERE relname='process_instance'; relname | n_live_tup | n_dead_tup --------------+------------+------------ conversation | 16841596 | 144202 show track_counts; track_counts -------------- on show default_statistics_target; default_statistics_target --------------------------- 100Version PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bitThanks in advance. Regards, Amarendra",
"msg_date": "Fri, 27 Sep 2019 11:10:50 +0530",
"msg_from": "Amarendra Konda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autovacuum is cleaning very less dead tuples"
},
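[Editor's note: one quick way to see why this table keeps qualifying for autovacuum is to compare n_dead_tup with the effective per-table trigger (autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * n_live_tup; with scale_factor = 0 that is simply 20,000 here). A minimal sketch against pg_stat_user_tables, reusing the table name from the post:]

```sql
-- Dead tuples vs. the per-table autovacuum trigger for this table.
-- With autovacuum_vacuum_scale_factor = 0 and autovacuum_vacuum_threshold = 20000,
-- the table becomes eligible for autovacuum as soon as n_dead_tup exceeds 20000.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       autovacuum_count
FROM pg_stat_user_tables
WHERE relname = 'process_instance';
```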
{
"msg_contents": "On Fri, 2019-09-27 at 11:10 +0530, Amarendra Konda wrote:\n> As part of vacuum tuning, We have set the below set of parameters. \n> \n> > select relname,reloptions, pg_namespace.nspname from pg_class join\n> pg_namespace on pg_namespace.oid=pg_class.relnamespace where relname\n> IN('process_instance') and pg_namespace.nspname='public';\n> relname | \n> reloptions \n> | nspname\n> --------------+----------------------------------------------------\n> -------------------------------------------------------------------\n> -------------------------------+---------\n> process_instance |\n> {autovacuum_vacuum_scale_factor=0,autovacuum_vacuum_threshold=20000,a\n> utovacuum_vacuum_cost_limit=1000,autovacuum_vacuum_cost_delay=10} \n> | public\n\nThat's not so much tuning as breaking.\n\nYou have set autovacuum to run all the time at a snail's pace.\nThat way, it will have trouble getting any work done.\n\nDon't touch autovacuum_vacuum_scale_factor and\nautovacuum_vacuum_threshold. Don't raise autovacuum_vacuum_cost_limit.\nIf anything, lower autovacuum_vacuum_cost_delay.\n \n> However n_dead_tup value from pg_stat_user_tables was always showing\n> very high value. Most of the time, it is greater than 100K dead\n> tuples. \n\nThat is only a problem if the number of live tuples is less than\n500000.\n\n> Overall, we couldn't able to correlate on why autovacuum was able to\n> cleanup only < 2K tuples, even though there are mode dead tuples\n> based on the statistics ? Can you please explain on why we are\n> notcing huge difference and what steps needs to taken to minimize the\n> gap ?\n\nIt is questionable if there is a problem at all.\n\n> Log message\n> \n> 2019-09-25 00:06:31 UTC::@:[80487]:LOG: automatic vacuum of table\n> \"fc_db_web_2.public.process_instance\": index scans: 1\n> pages: 0 removed, 854445 remain, 0 skipped due to pins, 774350\n> skipped frozen\n> tuples: 1376 removed, 16819201 remain, 21 are dead but not yet\n> removable\n> system usage: CPU 44.57s/33.04u sec elapsed 5088.65 sec\n\nThis shows that at least this table has no problem.\n\nEven with your settings, autovacuum finished in 5 seconds and\ncould clean up almost everything.\n\n> Live and Dead tuples\n> \n> select relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables WHERE\n> relname='process_instance';\n> relname | n_live_tup | n_dead_tup \n> --------------+------------+------------\n> conversation | 16841596 | 144202\n\nPerfect. There is no problem at all.\n\nThe table has less than 20% dead tuples, so everything is in perfect\norder.\n\nJust stop fighting windmills.\n\nGive up your tuning attempts and reset all parameters back to the\ndefault.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 27 Sep 2019 07:55:06 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum is cleaning very less dead tuples"
}
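[Editor's note: if you follow the advice above and drop the per-table overrides, the standard way to do that is ALTER TABLE ... RESET. A minimal sketch, assuming the table lives in the public schema as shown in the original post:]

```sql
-- Remove the per-table autovacuum overrides so the table falls back
-- to the global autovacuum settings.
ALTER TABLE public.process_instance RESET (
    autovacuum_vacuum_scale_factor,
    autovacuum_vacuum_threshold,
    autovacuum_vacuum_cost_limit,
    autovacuum_vacuum_cost_delay
);
```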
] |
[
{
"msg_contents": "Hey,\n\nI'm working on PG12.\nI have the following table :\n\\d dates_table\n Table \"public. dates_table \"\n Column | Type | Collation | Nullable | Default\n----------+---------+-----------+----------+-----------------------------------------------\n id | integer | | not null | nextval('\ndates_table_seq'::regclass)\n end_time | date | | |\n\nI tried to get all the quarters of the dates(and the years) in order to\ncreate a range partition by quarters. I used the following query :\nselect distinct(extract(year from end_time),extract(quarter from end_time))\n from dates_table where end_time is not null;\n row\n----------\n (2017,3)\n (2017,4)\n (2018,1)\n (2018,2)\n (2018,3)\n (2018,4)\n (2019,1)\n (2019,2)\n (2019,3)\n(9 rows)\n\nI'm keep getting composite type (row) instead of two columns. Is there any\nsql way to convert the row type into two columns ? I want to get the first\nand last dates of each quarter with those columns and with this composite\ntype I failed doing it\n\nThanks.\n\nHey,I'm working on PG12.I have the following table : \\d dates_table Table \"public.\n\ndates_table\n\n\" Column | Type | Collation | Nullable | Default----------+---------+-----------+----------+----------------------------------------------- id | integer | | not null | nextval('\n\ndates_table_seq'::regclass) end_time | date | | |I tried to get all the quarters of the dates(and the years) in order to create a range partition by quarters. I used the following query : select distinct(extract(year from end_time),extract(quarter from end_time)) from \n\ndates_table\n\n\nwhere end_time is not null; row---------- (2017,3) (2017,4) (2018,1) (2018,2) (2018,3) (2018,4) (2019,1) (2019,2) (2019,3)(9 rows)I'm keep getting composite type (row) instead of two columns. Is there any sql way to convert the row type into two columns ? I want to get the first and last dates of each quarter with those columns and with this composite type I failed doing itThanks.",
"msg_date": "Sun, 29 Sep 2019 12:46:31 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "distinct on extract returns composite type"
},
{
"msg_contents": "Hello,\n\nOn Sun, Sep 29, 2019 at 11:46 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> I'm keep getting composite type (row) instead of two columns. Is there any\n> sql way to convert the row type into two columns ? I want to get the first\n> and last dates of each quarter with those columns and with this composite\n> type I failed doing it\n>\n\nThis seems to work as you expect:\n\nselect distinct extract(year from end_time) as year, extract(quarter from\nend_time) quarter from generate_series\n ( '2017-09-01'::timestamp\n , '2019-04-01'::timestamp\n , '3 month'::interval) end_time\n;\n\nhttps://www.postgresql.org/docs/current/sql-select.html#SQL-DISTINCT\n\n--\n\nFélix\n\nHello,On Sun, Sep 29, 2019 at 11:46 AM Mariel Cherkassky <[email protected]> wrote:I'm keep getting composite type (row) instead of two columns. Is there any sql way to convert the row type into two columns ? I want to get the first and last dates of each quarter with those columns and with this composite type I failed doing itThis seems to work as you expect:select distinct extract(year from end_time) as year, extract(quarter from end_time) quarter from generate_series ( '2017-09-01'::timestamp , '2019-04-01'::timestamp , '3 month'::interval) end_time;https://www.postgresql.org/docs/current/sql-select.html#SQL-DISTINCT--Félix",
"msg_date": "Sun, 29 Sep 2019 12:34:30 +0200",
"msg_from": "=?UTF-8?Q?F=C3=A9lix_GERZAGUET?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: distinct on extract returns composite type"
},
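[Editor's note: since the original goal was the first and last date of each quarter, one possible sketch that sidesteps the composite-type issue is to truncate to the quarter instead of extracting year and quarter separately. Table and column names are taken from the post; this is only one way to do it:]

```sql
-- First and last day of each quarter that appears in the data.
SELECT DISTINCT
       date_trunc('quarter', end_time)::date AS quarter_start,
       (date_trunc('quarter', end_time) + interval '3 months' - interval '1 day')::date AS quarter_end
FROM dates_table
WHERE end_time IS NOT NULL
ORDER BY quarter_start;
```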
{
"msg_contents": "In my query I wrapped the columns with distinct : distinct (extract year...\n, extract quarter..).\nIn your query you didnt wrap the columns with distinct but you just\nmentioned it. I guess this is the difference, thanks !\n\n>\n\nIn my query I wrapped the columns with distinct : distinct (extract year... , extract quarter..).In your query you didnt wrap the columns with distinct but you just mentioned it. I guess this is the difference, thanks !",
"msg_date": "Mon, 30 Sep 2019 16:42:02 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: distinct on extract returns composite type"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> In my query I wrapped the columns with distinct : distinct (extract year...\n> , extract quarter..).\n> In your query you didnt wrap the columns with distinct but you just\n> mentioned it. I guess this is the difference, thanks !\n\nYeah. DISTINCT does not have an argument, it's just a keyword you\ncan stick in after SELECT. So what you had as the select's targetlist\nwas (expr,expr), which is read as an implicit row constructor, that\nis the same as ROW(expr,expr). One of many arguably not-well-designed\nthings about SQL syntax :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Sep 2019 09:48:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: distinct on extract returns composite type"
},
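[Editor's note: to make the difference concrete, a small side-by-side sketch, reusing the table from the original post:]

```sql
-- One composite column: DISTINCT is only a keyword, so the parenthesized
-- list is parsed as an implicit ROW(...) constructor.
SELECT DISTINCT (extract(year FROM end_time), extract(quarter FROM end_time))
FROM dates_table;

-- Two separate columns: just list the expressions after DISTINCT.
SELECT DISTINCT extract(year FROM end_time) AS year,
                extract(quarter FROM end_time) AS quarter
FROM dates_table;
```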
{
"msg_contents": "Understood, thanks for explanation Tom!\n\nUnderstood, thanks for explanation Tom!",
"msg_date": "Mon, 30 Sep 2019 16:51:14 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: distinct on extract returns composite type"
},
{
"msg_contents": "As long as we are on the performance list and not general, it might be\nworth noting that partitioning should be defined directly on the data and\nnot on a function result I believe. If you always do the extract year and\nextract quarter thing, it may work out just fine. But just a regular btree\nindex on the date/timestamp/timestamptz field and partitions like the below\nmight be much easier to work with.\n\nMINVALUE to 2018-01-01 /* the top end is always exclusive so it gets\nreferenced as top on this partition and start of the next partition */\n2018-01-01 to 2018-04-01\n2018-04-01 to 2018-07-01\n2018-07-01 to 2018-10-01\n2018-10-01 to 2019-01-01\n2019-01-01 to 2019-04-01\n2019-04-01 to 2019-07-01\n2019-07-01 to 2019-10-01\n2019-10-01 to 2020-01-01\n2020-01-01 to MAXVALUE\n\nAs long as we are on the performance list and not general, it might be worth noting that partitioning should be defined directly on the data and not on a function result I believe. If you always do the extract year and extract quarter thing, it may work out just fine. But just a regular btree index on the date/timestamp/timestamptz field and partitions like the below might be much easier to work with.MINVALUE to 2018-01-01 /* the top end is always exclusive so it gets referenced as top on this partition and start of the next partition */2018-01-01 to 2018-04-012018-04-01 to 2018-07-012018-07-01 to 2018-10-012018-10-01 to 2019-01-012019-01-01 to 2019-04-012019-04-01 to 2019-07-012019-07-01 to 2019-10-012019-10-01 to 2020-01-012020-01-01 to MAXVALUE",
"msg_date": "Mon, 7 Oct 2019 15:53:17 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: distinct on extract returns composite type"
}
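[Editor's note: on PG12 the layout described above could look roughly like the sketch below. The parent and partition names are made up for illustration, only a couple of quarters are written out, and the boundary dates follow the list in the reply (upper bounds exclusive):]

```sql
-- Range partitioning directly on the date column, one partition per quarter.
CREATE TABLE dates_table_part (
    id       integer NOT NULL,
    end_time date
) PARTITION BY RANGE (end_time);

-- Catch-all for anything before the first defined quarter.
CREATE TABLE dates_table_p_min PARTITION OF dates_table_part
    FOR VALUES FROM (MINVALUE) TO ('2018-01-01');

CREATE TABLE dates_table_2018q1 PARTITION OF dates_table_part
    FOR VALUES FROM ('2018-01-01') TO ('2018-04-01');

CREATE TABLE dates_table_2018q2 PARTITION OF dates_table_part
    FOR VALUES FROM ('2018-04-01') TO ('2018-07-01');

-- ...one table per remaining quarter...

-- Catch-all for anything at or after the last defined boundary.
CREATE TABLE dates_table_p_max PARTITION OF dates_table_part
    FOR VALUES FROM ('2020-01-01') TO (MAXVALUE);
```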
] |
[
{
"msg_contents": "Hardware\n\n - CPU: Core i7 6700\n - OS: Ubuntu 19.04\n - RAM: 32GB (limited to 2GB for this test)\n\nAlso reproducible on a 2018 MacBook Pro.\nDetails\n\nOn my machine, this query that is generated by Hibernate runs in about 57\nms on MySQL 8 but it takes more than 1 second to run on PostgreSQL:\n\nSELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\nbranch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\ncompany_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id =\ntbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN\ntbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\n\nPostgreSQL does not seem to be choosing the best plan to execute this query\ndue to the IN( <subquery> ) expression. Adding indexes does not seem to\neliminate this particular bottleneck.\n\nAs the query is generated bt Hibernate, it is not possible to tweak it\neasily (there's a way to parse the generated SQL and modify it before it is\nexecuted, but ideally I would like to avoid that). Otherwise it was\npossible to rewrite the query without the subquery. Another tweak that\nseems to work (but again not supported by JPA/Hibernate) is adding a dummy\norder by clause to the sub query:\n\n```\nEXPLAIN ( ANALYZE , COSTS , VERBOSE , BUFFERS )\nSELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\nbranch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\ncompany_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id =\ntbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN\ntbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n ORDER BY b.id\n);\n\nHash Right Join (cost=69.70..105.15 rows=108 width=48) (actual\ntime=1.814..1.893 rows=324 loops=1)\n\" Output: bills.id, bills.bill_date, bills.bill_number,\nbranch_bills.branch_id, company_bills.company_id\"\n Hash Cond: (company_bills.bill_id = bills.id)\n Buffers: shared hit=1320 read=6\n -> Seq Scan on public.tbl_company_bills company_bills (cost=0.00..28.50\nrows=1850 width=16) (actual time=0.003..0.003 rows=0 loops=1)\n\" Output: company_bills.company_id, company_bills.bill_id\"\n -> Hash (cost=68.35..68.35 rows=108 width=40) (actual time=1.805..1.806\nrows=324 loops=1)\n\" Output: bills.id, bills.bill_date, bills.bill_number,\nbranch_bills.branch_id\"\n Buckets: 1024 Batches: 1 Memory Usage: 31kB\n Buffers: shared hit=1320 read=6\n -> Nested Loop (cost=6.87..68.35 rows=108 width=40) (actual\ntime=0.141..1.692 rows=324 loops=1)\n\" Output: 
bills.id, bills.bill_date, bills.bill_number,\nbranch_bills.branch_id\"\n Inner Unique: true\n Buffers: shared hit=1320 read=6\n -> Nested Loop (cost=6.44..15.55 rows=108 width=16) (actual\ntime=0.135..0.299 rows=324 loops=1)\n\" Output: branch_bills.branch_id, branch_bills.bill_id\"\n Buffers: shared hit=25 read=3\n -> Nested Loop (cost=6.01..10.04 rows=1 width=16)\n(actual time=0.086..0.094 rows=3 loops=1)\n\" Output: tbl_branches.id, b.id\"\n Inner Unique: true\n Buffers: shared hit=17\n -> HashAggregate (cost=5.73..5.74 rows=1\nwidth=8) (actual time=0.081..0.083 rows=3 loops=1)\n Output: b.id\n Group Key: b.id\n Buffers: shared hit=10\n -> Nested Loop (cost=1.40..5.72 rows=1\nwidth=8) (actual time=0.064..0.077 rows=3 loops=1)\n Output: b.id\n Buffers: shared hit=10\n -> Nested Loop (cost=1.40..4.69\nrows=1 width=16) (actual time=0.062..0.070 rows=3 loops=1)\n\" Output: b.id, r.user_id\"\n Join Filter: (r.group_id = g.id)\n Buffers: shared hit=7\n -> Merge Join\n (cost=1.40..1.55 rows=3 width=24) (actual time=0.050..0.054 rows=3 loops=1)\n\" Output: b.id,\nr.group_id, r.user_id\"\n Merge Cond: (b.id =\nr.branch_id)\n Buffers: shared hit=4\n -> Index Only Scan using\ntbl_branches_pkey on public.tbl_branches b (cost=0.29..270.29 rows=10000\nwidth=8) (actual time=0.021..0.022 rows=6 loops=1)\n Output: b.id\n Heap Fetches: 0\n Buffers: shared\nhit=3\n -> Sort\n (cost=1.11..1.12 rows=3 width=24) (actual time=0.023..0.024 rows=3 loops=1)\n\" Output:\nr.branch_id, r.group_id, r.user_id\"\n Sort Key:\nr.branch_id\n Sort Method:\nquicksort Memory: 25kB\n Buffers: shared\nhit=1\n -> Seq Scan on\npublic.tbl_rules r (cost=0.00..1.09 rows=3 width=24) (actual\ntime=0.010..0.013 rows=3 loops=1)\n\" Output:\nr.branch_id, r.group_id, r.user_id\"\n Filter:\n((r.user_id = 1) AND ((r.rule_type)::text = 'BRANCH'::text))\n Rows Removed\nby Filter: 3\n Buffers:\nshared hit=1\n -> Materialize\n (cost=0.00..3.10 rows=1 width=16) (actual time=0.004..0.004 rows=1 loops=3)\n\" Output: g.id,\ngp.group_id\"\n Buffers: shared hit=3\n -> Nested Loop\n (cost=0.00..3.10 rows=1 width=16) (actual time=0.010..0.011 rows=1 loops=1)\n\" Output: g.id,\ngp.group_id\"\n Inner Unique: true\n Join Filter:\n(gp.permission_id = p.id)\n Buffers: shared\nhit=3\n -> Nested Loop\n (cost=0.00..2.03 rows=1 width=24) (actual time=0.006..0.007 rows=1 loops=1)\n\" Output: g.id,\ngp.permission_id, gp.group_id\"\n Join Filter: (\ng.id = gp.group_id)\n Buffers:\nshared hit=2\n -> Seq Scan\non public.tbl_groups g (cost=0.00..1.01 rows=1 width=8) (actual\ntime=0.003..0.003 rows=1 loops=1)\n\"\n Output: g.id, g.name\"\n\nBuffers: shared hit=1\n -> Seq Scan\non public.tbl_group_permissions gp (cost=0.00..1.01 rows=1 width=16)\n(actual time=0.002..0.003 rows=1 loops=1)\n\"\n Output: gp.group_id, gp.permission_id\"\n\nBuffers: shared hit=1\n -> Seq Scan on\npublic.tbl_permissions p (cost=0.00..1.05 rows=1 width=8) (actual\ntime=0.002..0.003 rows=1 loops=1)\n\" Output: p.id,\np.name\"\n Filter: ((\np.name)::text = 'Permission W'::text)\n Buffers:\nshared hit=1\n -> Seq Scan on public.tbl_users u\n (cost=0.00..1.01 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=3)\n\" Output: u.id, u.user_email\"\n Filter: (u.id = 1)\n Buffers: shared hit=3\n -> Index Only Scan using tbl_branches_pkey on\npublic.tbl_branches (cost=0.29..4.30 rows=1 width=8) (actual\ntime=0.002..0.002 rows=1 loops=3)\n Output: tbl_branches.id\n Index Cond: (tbl_branches.id = b.id)\n Heap Fetches: 0\n Buffers: shared hit=7\n -> Index Only Scan using tbl_branch_bills_pkey 
on\npublic.tbl_branch_bills branch_bills (cost=0.43..4.43 rows=108 width=16)\n(actual time=0.020..0.047 rows=108 loops=3)\n\" Output: branch_bills.branch_id,\nbranch_bills.bill_id\"\n Index Cond: (branch_bills.branch_id =\ntbl_branches.id)\n Heap Fetches: 0\n Buffers: shared hit=8 read=3\n -> Index Scan using tbl_bills_pkey on public.tbl_bills bills\n (cost=0.43..0.49 rows=1 width=32) (actual time=0.004..0.004 rows=1\nloops=324)\n\" Output: bills.id, bills.bill_date, bills.bill_number\"\n Index Cond: (bills.id = branch_bills.bill_id)\n Buffers: shared hit=1295 read=3\nPlanning time: 1.999 ms\nExecution time: 2.005 ms\n\n```\n\nThis will reduce execution time from more than 1s to under 3ms.\n\nIs there a way to make PostgreSQL to choose the same plan as when the order\nby clause is present without changing it?\n\nHere are the necessary steps to reproduce this issue.\n1.1 Run MySQL 8 and PostgreSQL 10.6 locally\n\n$ docker run --name mysql8 \\\n -e MYSQL_ROOT_PASSWORD=password -p 13306:3306 \\\n -d mysql:8\n\n$ docker update --cpus 2 --memory 2GB mysql8\n\n1.2. Create the MySQL database\n\nCREATE TABLE `tbl_bills`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `bill_date` date NOT NULL,\n `bill_number` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_branch_bills`\n(\n `branch_id` bigint(20) DEFAULT NULL,\n `bill_id` bigint(20) NOT NULL,\n PRIMARY KEY (`bill_id`),\n KEY `FKjr0egr9t34sxr1pv2ld1ux174` (`branch_id`),\n CONSTRAINT `FK7ekkvq33j12dw8a8bwx90a0gb` FOREIGN KEY (`bill_id`)\nREFERENCES `tbl_bills` (`id`),\n CONSTRAINT `FKjr0egr9t34sxr1pv2ld1ux174` FOREIGN KEY (`branch_id`)\nREFERENCES `tbl_branches` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_branches`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(255) NOT NULL,\n `company_id` bigint(20) DEFAULT NULL,\n PRIMARY KEY (`id`),\n KEY `FK1fde50hcsaf4os3fq6isshf23` (`company_id`),\n CONSTRAINT `FK1fde50hcsaf4os3fq6isshf23` FOREIGN KEY\n(`company_id`) REFERENCES `tbl_companies` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_companies`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_company_bills`\n(\n `company_id` bigint(20) DEFAULT NULL,\n `bill_id` bigint(20) NOT NULL,\n PRIMARY KEY (`bill_id`),\n KEY `FKet3kkl9d16jeb5v8ic5pvq89` (`company_id`),\n CONSTRAINT `FK6d3r6to4orsc0mgflgt7aefsh` FOREIGN KEY (`bill_id`)\nREFERENCES `tbl_bills` (`id`),\n CONSTRAINT `FKet3kkl9d16jeb5v8ic5pvq89` FOREIGN KEY (`company_id`)\nREFERENCES `tbl_companies` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_group_permissions`\n(\n `group_id` bigint(20) NOT NULL,\n `permission_id` bigint(20) NOT NULL,\n PRIMARY KEY (`group_id`, `permission_id`),\n KEY `FKocxt78iv4ufox094sdr1pudf7` (`permission_id`),\n CONSTRAINT `FKe4adr2lkq2s61ju3pnbiq5m14` FOREIGN KEY (`group_id`)\nREFERENCES `tbl_groups` (`id`),\n CONSTRAINT `FKocxt78iv4ufox094sdr1pudf7` FOREIGN KEY\n(`permission_id`) REFERENCES `tbl_permissions` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_groups`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(255) DEFAULT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = 
InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_permissions`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(256) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_rules`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `rule_type` varchar(255) NOT NULL,\n `branch_id` bigint(20) DEFAULT NULL,\n `company_id` bigint(20) DEFAULT NULL,\n `group_id` bigint(20) DEFAULT NULL,\n `user_id` bigint(20) DEFAULT NULL,\n PRIMARY KEY (`id`),\n KEY `FK18sr791qaonsmvodm1v7g8vyr` (`branch_id`),\n KEY `FKtjjtlnfuxmbj4xij3j9t0m99m` (`company_id`),\n KEY `FKldsvxs2qijr9quon4srw627ky` (`group_id`),\n KEY `FKp28tcx68kdbb8flhl1xdtl0hp` (`user_id`),\n CONSTRAINT `FK18sr791qaonsmvodm1v7g8vyr` FOREIGN KEY (`branch_id`)\nREFERENCES `tbl_branches` (`id`),\n CONSTRAINT `FKldsvxs2qijr9quon4srw627ky` FOREIGN KEY (`group_id`)\nREFERENCES `tbl_groups` (`id`),\n CONSTRAINT `FKp28tcx68kdbb8flhl1xdtl0hp` FOREIGN KEY (`user_id`)\nREFERENCES `tbl_users` (`id`),\n CONSTRAINT `FKtjjtlnfuxmbj4xij3j9t0m99m` FOREIGN KEY\n(`company_id`) REFERENCES `tbl_companies` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_users`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `user_email` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE OR REPLACE VIEW generator_16\nAS SELECT 0 n\n UNION ALL SELECT 1\n UNION ALL SELECT 2\n UNION ALL SELECT 3\n UNION ALL SELECT 4\n UNION ALL SELECT 5\n UNION ALL SELECT 6\n UNION ALL SELECT 7\n UNION ALL SELECT 8\n UNION ALL SELECT 9\n UNION ALL SELECT 10\n UNION ALL SELECT 11\n UNION ALL SELECT 12\n UNION ALL SELECT 13\n UNION ALL SELECT 14\n UNION ALL SELECT 15;\n\nCREATE OR REPLACE VIEW generator_256\nAS\nSELECT ((hi.n << 4) | lo.n) AS n\nFROM generator_16 lo,\n generator_16 hi;\n\nCREATE OR REPLACE VIEW generator_4k\nAS\nSELECT ((hi.n << 8) | lo.n) AS n\nFROM generator_256 lo,\n generator_16 hi;\n\nCREATE OR REPLACE VIEW generator_64k\nAS\nSELECT ((hi.n << 8) | lo.n) AS n\nFROM generator_256 lo,\n generator_256 hi;\n\nCREATE OR REPLACE VIEW generator_1m\nAS\nSELECT ((hi.n << 16) | lo.n) AS n\nFROM generator_64k lo,\n generator_16 hi;\n\nCREATE OR replace view dates_10y AS\n SELECT date('2010-01-01') d\n UNION ALL SELECT date('2010-02-01')\n UNION ALL SELECT date('2010-03-01')\n UNION ALL SELECT date('2010-04-01')\n UNION ALL SELECT date('2010-05-01')\n UNION ALL SELECT date('2010-06-01')\n UNION ALL SELECT date('2010-07-01')\n UNION ALL SELECT date('2010-08-01')\n UNION ALL SELECT date('2010-09-01')\n UNION ALL SELECT date('2010-10-01')\n UNION ALL SELECT date('2010-12-01')\n UNION ALL SELECT date('2010-12-01')\n UNION ALL SELECT date('2011-01-01')\n UNION ALL SELECT date('2011-02-01')\n UNION ALL SELECT date('2011-03-01')\n UNION ALL SELECT date('2011-04-01')\n UNION ALL SELECT date('2011-05-01')\n UNION ALL SELECT date('2011-06-01')\n UNION ALL SELECT date('2011-07-01')\n UNION ALL SELECT date('2011-08-01')\n UNION ALL SELECT date('2011-09-01')\n UNION ALL SELECT date('2011-10-01')\n UNION ALL SELECT date('2011-12-01')\n UNION ALL SELECT date('2011-12-01')\n UNION ALL SELECT date('2012-01-01')\n UNION ALL SELECT date('2012-02-01')\n UNION ALL SELECT date('2012-03-01')\n UNION ALL SELECT date('2012-04-01')\n UNION ALL SELECT date('2012-05-01')\n UNION ALL SELECT date('2012-06-01')\n UNION ALL SELECT date('2012-07-01')\n UNION ALL SELECT 
date('2012-08-01')\n UNION ALL SELECT date('2012-09-01')\n UNION ALL SELECT date('2012-10-01')\n UNION ALL SELECT date('2012-12-01')\n UNION ALL SELECT date('2012-12-01')\n UNION ALL SELECT date('2013-01-01')\n UNION ALL SELECT date('2013-02-01')\n UNION ALL SELECT date('2013-03-01')\n UNION ALL SELECT date('2013-04-01')\n UNION ALL SELECT date('2013-05-01')\n UNION ALL SELECT date('2013-06-01')\n UNION ALL SELECT date('2013-07-01')\n UNION ALL SELECT date('2013-08-01')\n UNION ALL SELECT date('2013-09-01')\n UNION ALL SELECT date('2013-10-01')\n UNION ALL SELECT date('2013-12-01')\n UNION ALL SELECT date('2013-12-01')\n UNION ALL SELECT date('2014-01-01')\n UNION ALL SELECT date('2014-02-01')\n UNION ALL SELECT date('2014-03-01')\n UNION ALL SELECT date('2014-04-01')\n UNION ALL SELECT date('2014-05-01')\n UNION ALL SELECT date('2014-06-01')\n UNION ALL SELECT date('2014-07-01')\n UNION ALL SELECT date('2014-08-01')\n UNION ALL SELECT date('2014-09-01')\n UNION ALL SELECT date('2014-10-01')\n UNION ALL SELECT date('2014-12-01')\n UNION ALL SELECT date('2014-12-01')\n UNION ALL SELECT date('2015-01-01')\n UNION ALL SELECT date('2015-02-01')\n UNION ALL SELECT date('2015-03-01')\n UNION ALL SELECT date('2015-04-01')\n UNION ALL SELECT date('2015-05-01')\n UNION ALL SELECT date('2015-06-01')\n UNION ALL SELECT date('2015-07-01')\n UNION ALL SELECT date('2015-08-01')\n UNION ALL SELECT date('2015-09-01')\n UNION ALL SELECT date('2015-10-01')\n UNION ALL SELECT date('2015-12-01')\n UNION ALL SELECT date('2015-12-01')\n UNION ALL SELECT date('2016-01-01')\n UNION ALL SELECT date('2016-02-01')\n UNION ALL SELECT date('2016-03-01')\n UNION ALL SELECT date('2016-04-01')\n UNION ALL SELECT date('2016-05-01')\n UNION ALL SELECT date('2016-06-01')\n UNION ALL SELECT date('2016-07-01')\n UNION ALL SELECT date('2016-08-01')\n UNION ALL SELECT date('2016-09-01')\n UNION ALL SELECT date('2016-10-01')\n UNION ALL SELECT date('2016-12-01')\n UNION ALL SELECT date('2016-12-01')\n UNION ALL SELECT date('2017-01-01')\n UNION ALL SELECT date('2017-02-01')\n UNION ALL SELECT date('2017-03-01')\n UNION ALL SELECT date('2017-04-01')\n UNION ALL SELECT date('2017-05-01')\n UNION ALL SELECT date('2017-06-01')\n UNION ALL SELECT date('2017-07-01')\n UNION ALL SELECT date('2017-08-01')\n UNION ALL SELECT date('2017-09-01')\n UNION ALL SELECT date('2017-10-01')\n UNION ALL SELECT date('2017-12-01')\n UNION ALL SELECT date('2017-12-01')\n UNION ALL SELECT date('2018-01-01')\n UNION ALL SELECT date('2018-02-01')\n UNION ALL SELECT date('2018-03-01')\n UNION ALL SELECT date('2018-04-01')\n UNION ALL SELECT date('2018-05-01')\n UNION ALL SELECT date('2018-06-01')\n UNION ALL SELECT date('2018-07-01')\n UNION ALL SELECT date('2018-08-01')\n UNION ALL SELECT date('2018-09-01')\n UNION ALL SELECT date('2018-10-01')\n UNION ALL SELECT date('2018-12-01')\n UNION ALL SELECT date('2018-12-01')\n UNION ALL SELECT date('2019-01-01')\n UNION ALL SELECT date('2019-02-01')\n UNION ALL SELECT date('2019-03-01')\n UNION ALL SELECT date('2019-04-01')\n UNION ALL SELECT date('2019-05-01')\n UNION ALL SELECT date('2019-06-01')\n UNION ALL SELECT date('2019-07-01')\n UNION ALL SELECT date('2019-08-01')\n UNION ALL SELECT date('2019-09-01')\n UNION ALL SELECT date('2019-10-01')\n UNION ALL SELECT date('2019-12-01')\n UNION ALL SELECT date('2019-12-01')\n UNION ALL SELECT date('2020-01-01')\n UNION ALL SELECT date('2020-02-01')\n UNION ALL SELECT date('2020-03-01')\n UNION ALL SELECT date('2020-04-01')\n UNION ALL SELECT date('2020-05-01')\n 
UNION ALL SELECT date('2020-06-01')\n UNION ALL SELECT date('2020-07-01')\n UNION ALL SELECT date('2020-08-01')\n UNION ALL SELECT date('2020-09-01')\n UNION ALL SELECT date('2020-10-01')\n UNION ALL SELECT date('2020-12-01')\n UNION ALL SELECT date('2020-12-01');\n\n1.3. Populate the MySQL database\n\nSET FOREIGN_KEY_CHECKS = 0;\n\nTRUNCATE tbl_users;\nTRUNCATE tbl_groups;\nTRUNCATE tbl_permissions;\nTRUNCATE tbl_group_permissions;\nTRUNCATE tbl_rules;\nTRUNCATE tbl_companies;\nTRUNCATE tbl_branches;\nTRUNCATE tbl_bills;\nTRUNCATE tbl_company_bills;\nTRUNCATE tbl_branch_bills;\n\nSET FOREIGN_KEY_CHECKS = 1;\n\nINSERT INTO tbl_companies(name)\nSELECT CONCAT('Company ', g.n)\nfrom generator_4k as g\nLIMIT 100;\n\nINSERT INTO tbl_branches(name, company_id)\nSELECT CONCAT('Branch ', b.n, ' (Company', c.id, ')'), c.id\nfrom generator_4k as b,\n tbl_companies c\nWHERE b.n < 100;\n\nINSERT INTO tbl_users(user_email)\nVALUES ('[email protected]');\n\nINSERT INTO tbl_groups(name)\nVALUES ('Group X');\n\nINSERT INTO tbl_permissions(name)\nVALUES ('Permission W'),\n ('Permission X'),\n ('Permission Y'),\n ('Permission Z');\n\nINSERT INTO tbl_group_permissions(group_id, permission_id)\nSELECT g.id, p.id\nFROM tbl_groups g,\n tbl_permissions p\nWHERE g.name = 'Group X'\n AND p.name = 'Permission W';\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'BRANCH', u.id, g.id, b.company_id, b.id\nFROM tbl_branches b,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND b.id IN (1, 3, 5));\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'COMPANY', u.id, g.id, c.id, NULL\nFROM tbl_companies c,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND c.id IN (2, 4, 6));\n\nSET FOREIGN_KEY_CHECKS = 0;\n\nINSERT INTO tbl_branch_bills(branch_id, bill_id)\nSELECT b.id, ROW_NUMBER() OVER ()\nfrom tbl_branches b,\n dates_10y d;\n\nINSERT INTO tbl_bills(id, bill_date, bill_number)\nSELECT ROW_NUMBER() OVER (), d.d, CONCAT('#NUM-', d.d, '-', b.id) from\ntbl_branches b,dates_10y d;\n\nSET FOREIGN_KEY_CHECKS = 1;\n\n1.4. 
Run the query\n\nEXPLAIN SELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\nbranch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\ncompany_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id =\ntbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN\ntbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\n\n1,SIMPLE,u,,const,PRIMARY,PRIMARY,8,const,1,100,Using index\n1,SIMPLE,g,,index,PRIMARY,PRIMARY,8,,1,100,Using index; Start temporary\n1,SIMPLE,gp,,ref,\"PRIMARY,FKocxt78iv4ufox094sdr1pudf7\",PRIMARY,8,companies_and_branches.g.id,1,100,Using\nindex\n1,SIMPLE,p,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.gp.permission_id,1,25,Using\nwhere\n1,SIMPLE,r,,ref,\"FK18sr791qaonsmvodm1v7g8vyr,FKldsvxs2qijr9quon4srw627ky,FKp28tcx68kdbb8flhl1xdtl0hp\",FKldsvxs2qijr9quon4srw627ky,9,companies_and_branches.g.id,1,16.67,Using\nwhere\n1,SIMPLE,b,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.r.branch_id,1,100,Using\nindex\n1,SIMPLE,tbl_branches,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.r.branch_id,1,100,Using\nindex\n1,SIMPLE,branch_bills,,ref,\"PRIMARY,FKjr0egr9t34sxr1pv2ld1ux174\",FKjr0egr9t34sxr1pv2ld1ux174,9,companies_and_branches.r.branch_id,1,100,Using\nwhere; Using index\n1,SIMPLE,bills,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.branch_bills.bill_id,1,100,\n1,SIMPLE,company_bills,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.branch_bills.bill_id,1,100,End\ntemporary\n\n10 rows retrieved starting from 1 in 50 ms (execution: 6 ms, fetching: 44 ms)\n\n2.1 Run PostgreSQL 10.6 locally\n\n$ docker run --name postgres106 \\\n -e POSTGRES_PASSWORD=password \\\n -p 15432:5432 \\\n -d postgres:10.6\n\n$ docker update --cpus 2 --memory 2GB postgres106\n\n2.2. 
Create the PostgreSQL database\n\nDROP TABLE IF EXISTS tbl_rules,\n tbl_permissions,\n tbl_groups,\n tbl_group_permissions,\n tbl_companies,\n tbl_branches,\n tbl_departments,\n tbl_users,\n tbl_company_bills,\n tbl_branch_bills,\n tbl_bills CASCADE;\n\nCREATE TABLE tbl_permissions\n(\n id bigserial NOT NULL PRIMARY KEY,\n name varchar(255) NOT NULL UNIQUE\n);\n\nCREATE TABLE tbl_groups\n(\n id bigserial NOT NULL PRIMARY KEY,\n name varchar(255) UNIQUE\n);\n\nCREATE TABLE tbl_group_permissions\n(\n group_id bigint NOT NULL REFERENCES tbl_groups (id),\n permission_id bigint NOT NULL REFERENCES tbl_permissions (id),\n PRIMARY KEY (group_id, permission_id)\n);\n\nCREATE TABLE tbl_companies\n(\n id bigserial NOT NULL PRIMARY KEY,\n name text NOT NULL\n);\n\nCREATE TABLE tbl_branches\n(\n id bigserial NOT NULL PRIMARY KEY,\n company_id bigint NOT NULL REFERENCES tbl_companies (id),\n name text NOT NULL\n);\n\nCREATE TABLE tbl_users\n(\n id bigserial NOT NULL PRIMARY KEY,\n user_email varchar(255) NOT NULL\n);\n\nCREATE TABLE tbl_rules\n(\n id bigserial NOT NULL PRIMARY KEY,\n rule_type varchar(255),\n user_id bigint REFERENCES tbl_users (id),\n group_id bigint REFERENCES tbl_groups (id),\n company_id bigint REFERENCES tbl_companies (id),\n branch_id bigint REFERENCES tbl_branches (id)\n);\n\nCREATE TABLE tbl_bills\n(\n id bigserial NOT NULL PRIMARY KEY,\n bill_date date NOT NULL,\n bill_number varchar(255) NOT NULL UNIQUE,\n CONSTRAINT bill_const1 UNIQUE (bill_date, bill_number)\n);\n\nCREATE TABLE tbl_company_bills\n(\n company_id bigint REFERENCES tbl_companies (id),\n bill_id bigint NOT NULL REFERENCES tbl_bills (id),\n PRIMARY KEY (company_id, bill_id)\n);\n\nCREATE TABLE tbl_branch_bills\n(\n branch_id bigint REFERENCES tbl_branches (id),\n bill_id bigint NOT NULL REFERENCES tbl_bills (id),\n PRIMARY KEY (branch_id, bill_id)\n);\n\n2.3. 
Populate the PostgreSQL database\n\nTRUNCATE tbl_users, tbl_companies, tbl_branches, tbl_groups,\ntbl_permissions, tbl_group_permissions, tbl_rules, tbl_bills,\ntbl_branch_bills, tbl_company_bills RESTART IDENTITY CASCADE;\n\nINSERT INTO tbl_users(user_email)\nVALUES ('[email protected]');\n\nWITH new_comps AS (INSERT INTO tbl_companies (id, name)\n SELECT nextval('tbl_companies_id_seq'),\n 'Company ' || currval('tbl_companies_id_seq')\n FROM generate_series(1, 100) num RETURNING id)\nINSERT\nINTO tbl_branches(id, company_id, name)\nSELECT nextval('tbl_branches_id_seq'),\n c.id,\n 'Branch ' || currval('tbl_branches_id_seq') || ' ( Company ' ||\nc.id || ')'\nFROM new_comps c,\n generate_series(1, 100) num;\n\nINSERT INTO tbl_groups(name)\nVALUES ('Group X');\n\nINSERT INTO tbl_permissions(name)\nVALUES ('Permission W'),\n ('Permission X'),\n ('Permission Y'),\n ('Permission Z');\n\nINSERT INTO tbl_group_permissions(group_id, permission_id)\nSELECT g.id, p.id\nFROM tbl_groups g,\n tbl_permissions p\nWHERE g.name = 'Group X'\n AND p.name = 'Permission W';\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'BRANCH', u.id, g.id, b.company_id, b.id\nFROM tbl_branches b,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND b.id IN (1, 3, 5));\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'COMPANY', u.id, g.id, c.id, NULL\nFROM tbl_companies c,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND c.id IN (2, 4, 6));\n\nWITH ids AS (SELECT nextval('tbl_bills_id_seq') AS bill_id,\n make_date(year, month, 1) AS bill_date,\n br.id AS branch_id\n FROM tbl_branches AS br,\n generate_series(2010, 2018) AS year,\n generate_series(1, 12) AS month\n),\n bills AS (INSERT INTO tbl_bills (id, bill_date, bill_number)\n SELECT ids.bill_id AS billl_id,\n ids.bill_date AS bill_date,\n '#NUM-' || ids.bill_date || '-' || ids.branch_id AS bill_num\n FROM ids RETURNING *)\nINSERT\nINTO tbl_branch_bills(branch_id, bill_id)\nSELECT branch_id, bill_id\nFROM ids;\n\nEXPLAIN ( ANALYZE , COSTS , VERBOSE , BUFFERS , FORMAT JSON )\nSELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\nbranch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\ncompany_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id =\ntbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN\ntbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\n\n2.4. 
Run the query\n\nVACUUM ANALYZE ;\n\nEXPLAIN ( ANALYZE , COSTS , VERBOSE , BUFFERS )\nSELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\nbranch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\ncompany_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id =\ntbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN\ntbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\n\nGather (cost=36865.05..89524.81 rows=108 width=48) (actual\ntime=667.105..1976.054 rows=324 loops=1)\n\" Output: bills.id, bills.bill_date, bills.bill_number,\nbranch_bills.branch_id, company_bills.company_id\"\n Workers Planned: 2\n Workers Launched: 2\n\" Buffers: shared hit=28392 read=4240 written=336, temp read=20821\nwritten=20635\"\n -> Hash Semi Join (cost=35865.05..88514.01 rows=45 width=48)\n(actual time=636.256..1948.638 rows=108 loops=3)\n\" Output: bills.id, bills.bill_date, bills.bill_number,\nbranch_bills.branch_id, company_bills.company_id\"\n Hash Cond: (branch_bills.branch_id = b.id)\n\" Buffers: shared hit=28392 read=4240 written=336, temp\nread=20821 written=20635\"\n Worker 0: actual time=563.702..1964.847 rows=105 loops=1\n\" Buffers: shared hit=10027 read=953 written=109, temp\nread=6971 written=6909\"\n Worker 1: actual time=679.468..1965.037 rows=122 loops=1\n\" Buffers: shared hit=9292 read=1628 written=114, temp\nread=6960 written=6898\"\n -> Hash Join (cost=35859.32..87326.53 rows=450000 width=56)\n(actual time=491.279..1875.725 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date, bills.bill_number,\nbranch_bills.branch_id, company_bills.company_id, tbl_branches.id\"\n Inner Unique: true\n Hash Cond: (branch_bills.branch_id = tbl_branches.id)\n\" Buffers: shared hit=28269 read=4239 written=336, temp\nread=20821 written=20635\"\n Worker 0: actual time=497.021..1870.969 rows=364536 loops=1\n\" Buffers: shared hit=9971 read=952 written=109, temp\nread=6971 written=6909\"\n Worker 1: actual time=479.286..1900.802 rows=363072 loops=1\n\" Buffers: shared hit=9235 read=1628 written=114, temp\nread=6960 written=6898\"\n -> Hash Join (cost=35541.32..85826.78 rows=450000\nwidth=48) (actual time=487.460..1545.962 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date,\nbills.bill_number, branch_bills.branch_id, company_bills.company_id\"\n Hash Cond: (bills.id = branch_bills.bill_id)\n\" Buffers: shared hit=27990 read=4239 written=336,\ntemp read=20821 written=20635\"\n Worker 0: actual time=493.881..1583.609 rows=364536 loops=1\n\" Buffers: shared hit=9878 read=952 written=109,\ntemp read=6971 written=6909\"\n Worker 1: actual time=474.878..1542.282 rows=363072 loops=1\n\" Buffers: shared hit=9142 read=1628 written=114,\ntemp read=6960 written=6898\"\n -> Merge Left Join (cost=129.32..31921.28\nrows=450000 width=40) (actual time=0.047..239.155 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date,\nbills.bill_number, company_bills.company_id\"\n Merge Cond: (bills.id = company_bills.bill_id)\n Buffers: shared hit=12327 read=2345 
written=336\n Worker 0: actual time=0.058..248.250\nrows=364536 loops=1\n Buffers: shared hit=4336 read=637 written=109\n Worker 1: actual time=0.065..222.495\nrows=363072 loops=1\n Buffers: shared hit=3979 read=929 written=114\n -> Parallel Index Scan using tbl_bills_pkey\non public.tbl_bills bills (cost=0.43..30650.43 rows=450000 width=32)\n(actual time=0.030..127.785 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date,\nbills.bill_number\"\n Buffers: shared hit=12327 read=2345 written=336\n Worker 0: actual time=0.037..105.247\nrows=364536 loops=1\n Buffers: shared hit=4336 read=637 written=109\n Worker 1: actual time=0.044..108.513\nrows=363072 loops=1\n Buffers: shared hit=3979 read=929 written=114\n -> Sort (cost=128.89..133.52 rows=1850\nwidth=16) (actual time=0.015..0.015 rows=0 loops=3)\n\" Output: company_bills.company_id,\ncompany_bills.bill_id\"\n Sort Key: company_bills.bill_id\n Sort Method: quicksort Memory: 25kB\n Worker 0: actual time=0.019..0.019\nrows=0 loops=1\n Worker 1: actual time=0.018..0.018\nrows=0 loops=1\n -> Seq Scan on\npublic.tbl_company_bills company_bills (cost=0.00..28.50 rows=1850\nwidth=16) (actual time=0.006..0.006 rows=0 loops=3)\n\" Output:\ncompany_bills.company_id, company_bills.bill_id\"\n Worker 0: actual\ntime=0.007..0.007 rows=0 loops=1\n Worker 1: actual\ntime=0.008..0.008 rows=0 loops=1\n -> Hash (cost=16638.00..16638.00 rows=1080000\nwidth=16) (actual time=486.822..486.822 rows=1080000 loops=3)\n\" Output: branch_bills.branch_id, branch_bills.bill_id\"\n Buckets: 131072 Batches: 32 Memory Usage: 2614kB\n\" Buffers: shared hit=15620 read=1894, temp\nwritten=13740\"\n Worker 0: actual time=493.045..493.045\nrows=1080000 loops=1\n\" Buffers: shared hit=5523 read=315, temp\nwritten=4580\"\n Worker 1: actual time=474.144..474.144\nrows=1080000 loops=1\n\" Buffers: shared hit=5139 read=699, temp\nwritten=4580\"\n -> Seq Scan on public.tbl_branch_bills\nbranch_bills (cost=0.00..16638.00 rows=1080000 width=16) (actual\ntime=0.025..158.450 rows=1080000 loops=3)\n\" Output: branch_bills.branch_id,\nbranch_bills.bill_id\"\n Buffers: shared hit=15620 read=1894\n Worker 0: actual time=0.032..182.305\nrows=1080000 loops=1\n Buffers: shared hit=5523 read=315\n Worker 1: actual time=0.022..144.461\nrows=1080000 loops=1\n Buffers: shared hit=5139 read=699\n -> Hash (cost=193.00..193.00 rows=10000 width=8)\n(actual time=3.769..3.769 rows=10000 loops=3)\n Output: tbl_branches.id\n Buckets: 16384 Batches: 1 Memory Usage: 519kB\n Buffers: shared hit=279\n Worker 0: actual time=3.077..3.077 rows=10000 loops=1\n Buffers: shared hit=93\n Worker 1: actual time=4.331..4.331 rows=10000 loops=1\n Buffers: shared hit=93\n -> Seq Scan on public.tbl_branches\n(cost=0.00..193.00 rows=10000 width=8) (actual time=0.006..1.755\nrows=10000 loops=3)\n Output: tbl_branches.id\n Buffers: shared hit=279\n Worker 0: actual time=0.007..1.485 rows=10000 loops=1\n Buffers: shared hit=93\n Worker 1: actual time=0.008..1.980 rows=10000 loops=1\n Buffers: shared hit=93\n -> Hash (cost=5.72..5.72 rows=1 width=16) (actual\ntime=0.117..0.117 rows=3 loops=3)\n\" Output: b.id, r.branch_id\"\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=40\n Worker 0: actual time=0.125..0.125 rows=3 loops=1\n Buffers: shared hit=15\n Worker 1: actual time=0.156..0.156 rows=3 loops=1\n Buffers: shared hit=15\n -> Nested Loop (cost=1.40..5.72 rows=1 width=16)\n(actual time=0.102..0.113 rows=3 loops=3)\n\" Output: b.id, r.branch_id\"\n Buffers: shared hit=40\n Worker 0: actual 
time=0.111..0.120 rows=3 loops=1\n Buffers: shared hit=15\n Worker 1: actual time=0.140..0.153 rows=3 loops=1\n Buffers: shared hit=15\n -> Nested Loop (cost=1.40..4.69 rows=1 width=24)\n(actual time=0.096..0.103 rows=3 loops=3)\n\" Output: b.id, r.branch_id, r.user_id\"\n Join Filter: (r.group_id = g.id)\n Buffers: shared hit=31\n Worker 0: actual time=0.107..0.112 rows=3 loops=1\n Buffers: shared hit=12\n Worker 1: actual time=0.131..0.139 rows=3 loops=1\n Buffers: shared hit=12\n -> Merge Join (cost=1.40..1.55 rows=3\nwidth=32) (actual time=0.073..0.077 rows=3 loops=3)\n\" Output: b.id, r.branch_id,\nr.group_id, r.user_id\"\n Merge Cond: (b.id = r.branch_id)\n Buffers: shared hit=22\n Worker 0: actual time=0.079..0.082\nrows=3 loops=1\n Buffers: shared hit=9\n Worker 1: actual time=0.102..0.107\nrows=3 loops=1\n Buffers: shared hit=9\n -> Index Only Scan using\ntbl_branches_pkey on public.tbl_branches b (cost=0.29..270.29\nrows=10000 width=8) (actual time=0.035..0.036 rows=6 loops=3)\n Output: b.id\n Heap Fetches: 0\n Buffers: shared hit=11\n Worker 0: actual\ntime=0.038..0.039 rows=6 loops=1\n Buffers: shared hit=4\n Worker 1: actual\ntime=0.049..0.051 rows=6 loops=1\n Buffers: shared hit=4\n -> Sort (cost=1.11..1.12 rows=3\nwidth=24) (actual time=0.035..0.036 rows=3 loops=3)\n\" Output: r.branch_id,\nr.group_id, r.user_id\"\n Sort Key: r.branch_id\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=11\n Worker 0: actual\ntime=0.039..0.039 rows=3 loops=1\n Buffers: shared hit=5\n Worker 1: actual\ntime=0.050..0.051 rows=3 loops=1\n Buffers: shared hit=5\n -> Seq Scan on public.tbl_rules\nr (cost=0.00..1.09 rows=3 width=24) (actual time=0.017..0.019 rows=3\nloops=3)\n\" Output: r.branch_id,\nr.group_id, r.user_id\"\n Filter: ((r.user_id = 1)\nAND ((r.rule_type)::text = 'BRANCH'::text))\n Rows Removed by Filter: 3\n Buffers: shared hit=3\n Worker 0: actual\ntime=0.015..0.016 rows=3 loops=1\n Buffers: shared hit=1\n Worker 1: actual\ntime=0.028..0.030 rows=3 loops=1\n Buffers: shared hit=1\n -> Materialize (cost=0.00..3.10 rows=1\nwidth=16) (actual time=0.008..0.008 rows=1 loops=9)\n\" Output: g.id, gp.group_id\"\n Buffers: shared hit=9\n Worker 0: actual time=0.009..0.010\nrows=1 loops=3\n Buffers: shared hit=3\n Worker 1: actual time=0.009..0.010\nrows=1 loops=3\n Buffers: shared hit=3\n -> Nested Loop (cost=0.00..3.10\nrows=1 width=16) (actual time=0.019..0.020 rows=1 loops=3)\n\" Output: g.id, gp.group_id\"\n Inner Unique: true\n Join Filter: (gp.permission_id = p.id)\n Buffers: shared hit=9\n Worker 0: actual\ntime=0.024..0.025 rows=1 loops=1\n Buffers: shared hit=3\n Worker 1: actual\ntime=0.024..0.025 rows=1 loops=1\n Buffers: shared hit=3\n -> Nested Loop\n(cost=0.00..2.03 rows=1 width=24) (actual time=0.012..0.012 rows=1\nloops=3)\n\" Output: g.id,\ngp.permission_id, gp.group_id\"\n Join Filter: (g.id = gp.group_id)\n Buffers: shared hit=6\n Worker 0: actual\ntime=0.013..0.014 rows=1 loops=1\n Buffers: shared hit=2\n Worker 1: actual\ntime=0.015..0.016 rows=1 loops=1\n Buffers: shared hit=2\n -> Seq Scan on\npublic.tbl_groups g (cost=0.00..1.01 rows=1 width=8) (actual\ntime=0.005..0.005 rows=1 loops=3)\n\" Output: g.id, g.name\"\n Buffers: shared hit=3\n Worker 0: actual\ntime=0.006..0.006 rows=1 loops=1\n Buffers: shared hit=1\n Worker 1: actual\ntime=0.006..0.006 rows=1 loops=1\n Buffers: shared hit=1\n -> Seq Scan on\npublic.tbl_group_permissions gp (cost=0.00..1.01 rows=1 width=16)\n(actual time=0.006..0.006 rows=1 loops=3)\n\" Output:\ngp.group_id, 
gp.permission_id\"\n Buffers: shared hit=3\n Worker 0: actual\ntime=0.006..0.007 rows=1 loops=1\n Buffers: shared hit=1\n Worker 1: actual\ntime=0.008..0.008 rows=1 loops=1\n Buffers: shared hit=1\n -> Seq Scan on\npublic.tbl_permissions p (cost=0.00..1.05 rows=1 width=8) (actual\ntime=0.007..0.007 rows=1 loops=3)\n\" Output: p.id, p.name\"\n Filter: ((p.name)::text =\n'Permission W'::text)\n Buffers: shared hit=3\n Worker 0: actual\ntime=0.010..0.010 rows=1 loops=1\n Buffers: shared hit=1\n Worker 1: actual\ntime=0.008..0.008 rows=1 loops=1\n Buffers: shared hit=1\n -> Seq Scan on public.tbl_users u\n(cost=0.00..1.01 rows=1 width=8) (actual time=0.002..0.002 rows=1\nloops=9)\n\" Output: u.id, u.user_email\"\n Filter: (u.id = 1)\n Buffers: shared hit=9\n Worker 0: actual time=0.001..0.002 rows=1 loops=3\n Buffers: shared hit=3\n Worker 1: actual time=0.003..0.004 rows=1 loops=3\n Buffers: shared hit=3\nPlanning time: 2.680 ms\nExecution time: 1976.277 ms\n\n\nBest regards,\nBehrang Saeedzadeh\nblog.behrang.org\n\nHardware\n\nCPU: Core i7 6700\nOS: Ubuntu 19.04\nRAM: 32GB (limited to 2GB for this test)\n\nAlso reproducible on a 2018 MacBook Pro.\nDetails\nOn my machine, this query that is generated by Hibernate runs in about 57 ms on MySQL 8 but it takes more than 1 second to run on PostgreSQL:\nSELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id = branch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id = company_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id = tbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN tbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\nPostgreSQL does not seem to be choosing the best plan to execute this query due to the IN( <subquery> ) expression. Adding indexes does not seem to eliminate this particular bottleneck.As the query is generated bt Hibernate, it is not possible to tweak it easily (there's a way to parse the generated SQL and modify it before it is executed, but ideally I would like to avoid that). Otherwise it was possible to rewrite the query without the subquery. 
Another tweak that seems to work (but again not supported by JPA/Hibernate) is adding a dummy order by clause to the sub query:```EXPLAIN ( ANALYZE , COSTS , VERBOSE , BUFFERS )SELECT bills.id AS bill_id, bills.bill_date AS bill_date, bills.bill_number AS bill_number, branch_bills.branch_id AS branch_id, company_bills.company_id AS company_idFROM tbl_bills bills LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id = branch_bills.bill_id LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id = company_bills.bill_id INNER JOIN tbl_branches ON branch_bills.branch_id = tbl_branches.idWHERE branch_bills.branch_id IN ( SELECT b.id FROM tbl_branches b INNER JOIN tbl_rules r ON b.id = r.branch_id INNER JOIN tbl_groups g ON r.group_id = g.id INNER JOIN (tbl_group_permissions gp INNER JOIN tbl_permissions p ON gp.permission_id = p.id) ON g.id = gp.group_id INNER JOIN tbl_users u ON r.user_id = u.id WHERE u.id = 1 AND r.rule_type = 'BRANCH' AND p.name = 'Permission W' ORDER BY b.id);Hash Right Join (cost=69.70..105.15 rows=108 width=48) (actual time=1.814..1.893 rows=324 loops=1)\" Output: bills.id, bills.bill_date, bills.bill_number, branch_bills.branch_id, company_bills.company_id\" Hash Cond: (company_bills.bill_id = bills.id) Buffers: shared hit=1320 read=6 -> Seq Scan on public.tbl_company_bills company_bills (cost=0.00..28.50 rows=1850 width=16) (actual time=0.003..0.003 rows=0 loops=1)\" Output: company_bills.company_id, company_bills.bill_id\" -> Hash (cost=68.35..68.35 rows=108 width=40) (actual time=1.805..1.806 rows=324 loops=1)\" Output: bills.id, bills.bill_date, bills.bill_number, branch_bills.branch_id\" Buckets: 1024 Batches: 1 Memory Usage: 31kB Buffers: shared hit=1320 read=6 -> Nested Loop (cost=6.87..68.35 rows=108 width=40) (actual time=0.141..1.692 rows=324 loops=1)\" Output: bills.id, bills.bill_date, bills.bill_number, branch_bills.branch_id\" Inner Unique: true Buffers: shared hit=1320 read=6 -> Nested Loop (cost=6.44..15.55 rows=108 width=16) (actual time=0.135..0.299 rows=324 loops=1)\" Output: branch_bills.branch_id, branch_bills.bill_id\" Buffers: shared hit=25 read=3 -> Nested Loop (cost=6.01..10.04 rows=1 width=16) (actual time=0.086..0.094 rows=3 loops=1)\" Output: tbl_branches.id, b.id\" Inner Unique: true Buffers: shared hit=17 -> HashAggregate (cost=5.73..5.74 rows=1 width=8) (actual time=0.081..0.083 rows=3 loops=1) Output: b.id Group Key: b.id Buffers: shared hit=10 -> Nested Loop (cost=1.40..5.72 rows=1 width=8) (actual time=0.064..0.077 rows=3 loops=1) Output: b.id Buffers: shared hit=10 -> Nested Loop (cost=1.40..4.69 rows=1 width=16) (actual time=0.062..0.070 rows=3 loops=1)\" Output: b.id, r.user_id\" Join Filter: (r.group_id = g.id) Buffers: shared hit=7 -> Merge Join (cost=1.40..1.55 rows=3 width=24) (actual time=0.050..0.054 rows=3 loops=1)\" Output: b.id, r.group_id, r.user_id\" Merge Cond: (b.id = r.branch_id) Buffers: shared hit=4 -> Index Only Scan using tbl_branches_pkey on public.tbl_branches b (cost=0.29..270.29 rows=10000 width=8) (actual time=0.021..0.022 rows=6 loops=1) Output: b.id Heap Fetches: 0 Buffers: shared hit=3 -> Sort (cost=1.11..1.12 rows=3 width=24) (actual time=0.023..0.024 rows=3 loops=1)\" Output: r.branch_id, r.group_id, r.user_id\" Sort Key: r.branch_id Sort Method: quicksort Memory: 25kB Buffers: shared hit=1 -> Seq Scan on public.tbl_rules r (cost=0.00..1.09 rows=3 width=24) (actual time=0.010..0.013 rows=3 loops=1)\" Output: r.branch_id, r.group_id, r.user_id\" Filter: ((r.user_id = 1) AND ((r.rule_type)::text = 
'BRANCH'::text)) Rows Removed by Filter: 3 Buffers: shared hit=1 -> Materialize (cost=0.00..3.10 rows=1 width=16) (actual time=0.004..0.004 rows=1 loops=3)\" Output: g.id, gp.group_id\" Buffers: shared hit=3 -> Nested Loop (cost=0.00..3.10 rows=1 width=16) (actual time=0.010..0.011 rows=1 loops=1)\" Output: g.id, gp.group_id\" Inner Unique: true Join Filter: (gp.permission_id = p.id) Buffers: shared hit=3 -> Nested Loop (cost=0.00..2.03 rows=1 width=24) (actual time=0.006..0.007 rows=1 loops=1)\" Output: g.id, gp.permission_id, gp.group_id\" Join Filter: (g.id = gp.group_id) Buffers: shared hit=2 -> Seq Scan on public.tbl_groups g (cost=0.00..1.01 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=1)\" Output: g.id, g.name\" Buffers: shared hit=1 -> Seq Scan on public.tbl_group_permissions gp (cost=0.00..1.01 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=1)\" Output: gp.group_id, gp.permission_id\" Buffers: shared hit=1 -> Seq Scan on public.tbl_permissions p (cost=0.00..1.05 rows=1 width=8) (actual time=0.002..0.003 rows=1 loops=1)\" Output: p.id, p.name\" Filter: ((p.name)::text = 'Permission W'::text) Buffers: shared hit=1 -> Seq Scan on public.tbl_users u (cost=0.00..1.01 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=3)\" Output: u.id, u.user_email\" Filter: (u.id = 1) Buffers: shared hit=3 -> Index Only Scan using tbl_branches_pkey on public.tbl_branches (cost=0.29..4.30 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=3) Output: tbl_branches.id Index Cond: (tbl_branches.id = b.id) Heap Fetches: 0 Buffers: shared hit=7 -> Index Only Scan using tbl_branch_bills_pkey on public.tbl_branch_bills branch_bills (cost=0.43..4.43 rows=108 width=16) (actual time=0.020..0.047 rows=108 loops=3)\" Output: branch_bills.branch_id, branch_bills.bill_id\" Index Cond: (branch_bills.branch_id = tbl_branches.id) Heap Fetches: 0 Buffers: shared hit=8 read=3 -> Index Scan using tbl_bills_pkey on public.tbl_bills bills (cost=0.43..0.49 rows=1 width=32) (actual time=0.004..0.004 rows=1 loops=324)\" Output: bills.id, bills.bill_date, bills.bill_number\" Index Cond: (bills.id = branch_bills.bill_id) Buffers: shared hit=1295 read=3Planning time: 1.999 msExecution time: 2.005 ms```This will reduce execution time from more than 1s to under 3ms.Is there a way to make PostgreSQL to choose the same plan as when the order by clause is present without changing it?Here are the necessary steps to reproduce this issue.\n1.1 Run MySQL 8 and PostgreSQL 10.6 locally\n$ docker run --name mysql8 \\\n -e MYSQL_ROOT_PASSWORD=password -p 13306:3306 \\\n -d mysql:8\n\n$ docker update --cpus 2 --memory 2GB mysql8\n1.2. 
Create the MySQL database\nCREATE TABLE `tbl_bills`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `bill_date` date NOT NULL,\n `bill_number` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_branch_bills`\n(\n `branch_id` bigint(20) DEFAULT NULL,\n `bill_id` bigint(20) NOT NULL,\n PRIMARY KEY (`bill_id`),\n KEY `FKjr0egr9t34sxr1pv2ld1ux174` (`branch_id`),\n CONSTRAINT `FK7ekkvq33j12dw8a8bwx90a0gb` FOREIGN KEY (`bill_id`) REFERENCES `tbl_bills` (`id`),\n CONSTRAINT `FKjr0egr9t34sxr1pv2ld1ux174` FOREIGN KEY (`branch_id`) REFERENCES `tbl_branches` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_branches`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(255) NOT NULL,\n `company_id` bigint(20) DEFAULT NULL,\n PRIMARY KEY (`id`),\n KEY `FK1fde50hcsaf4os3fq6isshf23` (`company_id`),\n CONSTRAINT `FK1fde50hcsaf4os3fq6isshf23` FOREIGN KEY (`company_id`) REFERENCES `tbl_companies` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_companies`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_company_bills`\n(\n `company_id` bigint(20) DEFAULT NULL,\n `bill_id` bigint(20) NOT NULL,\n PRIMARY KEY (`bill_id`),\n KEY `FKet3kkl9d16jeb5v8ic5pvq89` (`company_id`),\n CONSTRAINT `FK6d3r6to4orsc0mgflgt7aefsh` FOREIGN KEY (`bill_id`) REFERENCES `tbl_bills` (`id`),\n CONSTRAINT `FKet3kkl9d16jeb5v8ic5pvq89` FOREIGN KEY (`company_id`) REFERENCES `tbl_companies` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_group_permissions`\n(\n `group_id` bigint(20) NOT NULL,\n `permission_id` bigint(20) NOT NULL,\n PRIMARY KEY (`group_id`, `permission_id`),\n KEY `FKocxt78iv4ufox094sdr1pudf7` (`permission_id`),\n CONSTRAINT `FKe4adr2lkq2s61ju3pnbiq5m14` FOREIGN KEY (`group_id`) REFERENCES `tbl_groups` (`id`),\n CONSTRAINT `FKocxt78iv4ufox094sdr1pudf7` FOREIGN KEY (`permission_id`) REFERENCES `tbl_permissions` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_groups`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(255) DEFAULT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_permissions`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `name` varchar(256) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_rules`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `rule_type` varchar(255) NOT NULL,\n `branch_id` bigint(20) DEFAULT NULL,\n `company_id` bigint(20) DEFAULT NULL,\n `group_id` bigint(20) DEFAULT NULL,\n `user_id` bigint(20) DEFAULT NULL,\n PRIMARY KEY (`id`),\n KEY `FK18sr791qaonsmvodm1v7g8vyr` (`branch_id`),\n KEY `FKtjjtlnfuxmbj4xij3j9t0m99m` (`company_id`),\n KEY `FKldsvxs2qijr9quon4srw627ky` (`group_id`),\n KEY `FKp28tcx68kdbb8flhl1xdtl0hp` (`user_id`),\n CONSTRAINT `FK18sr791qaonsmvodm1v7g8vyr` FOREIGN KEY (`branch_id`) REFERENCES `tbl_branches` (`id`),\n CONSTRAINT `FKldsvxs2qijr9quon4srw627ky` FOREIGN KEY (`group_id`) REFERENCES `tbl_groups` (`id`),\n CONSTRAINT `FKp28tcx68kdbb8flhl1xdtl0hp` FOREIGN KEY (`user_id`) REFERENCES `tbl_users` (`id`),\n CONSTRAINT 
`FKtjjtlnfuxmbj4xij3j9t0m99m` FOREIGN KEY (`company_id`) REFERENCES `tbl_companies` (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE TABLE `tbl_users`\n(\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `user_email` varchar(255) NOT NULL,\n PRIMARY KEY (`id`)\n) ENGINE = InnoDB\n DEFAULT CHARSET = utf8mb4\n COLLATE = utf8mb4_0900_ai_ci;\n\nCREATE OR REPLACE VIEW generator_16\nAS SELECT 0 n\n UNION ALL SELECT 1\n UNION ALL SELECT 2\n UNION ALL SELECT 3\n UNION ALL SELECT 4\n UNION ALL SELECT 5\n UNION ALL SELECT 6\n UNION ALL SELECT 7\n UNION ALL SELECT 8\n UNION ALL SELECT 9\n UNION ALL SELECT 10\n UNION ALL SELECT 11\n UNION ALL SELECT 12\n UNION ALL SELECT 13\n UNION ALL SELECT 14\n UNION ALL SELECT 15;\n\nCREATE OR REPLACE VIEW generator_256\nAS\nSELECT ((hi.n << 4) | lo.n) AS n\nFROM generator_16 lo,\n generator_16 hi;\n\nCREATE OR REPLACE VIEW generator_4k\nAS\nSELECT ((hi.n << 8) | lo.n) AS n\nFROM generator_256 lo,\n generator_16 hi;\n\nCREATE OR REPLACE VIEW generator_64k\nAS\nSELECT ((hi.n << 8) | lo.n) AS n\nFROM generator_256 lo,\n generator_256 hi;\n\nCREATE OR REPLACE VIEW generator_1m\nAS\nSELECT ((hi.n << 16) | lo.n) AS n\nFROM generator_64k lo,\n generator_16 hi;\n\nCREATE OR replace view dates_10y AS\n SELECT date('2010-01-01') d\n UNION ALL SELECT date('2010-02-01')\n UNION ALL SELECT date('2010-03-01')\n UNION ALL SELECT date('2010-04-01')\n UNION ALL SELECT date('2010-05-01')\n UNION ALL SELECT date('2010-06-01')\n UNION ALL SELECT date('2010-07-01')\n UNION ALL SELECT date('2010-08-01')\n UNION ALL SELECT date('2010-09-01')\n UNION ALL SELECT date('2010-10-01')\n UNION ALL SELECT date('2010-12-01')\n UNION ALL SELECT date('2010-12-01')\n UNION ALL SELECT date('2011-01-01')\n UNION ALL SELECT date('2011-02-01')\n UNION ALL SELECT date('2011-03-01')\n UNION ALL SELECT date('2011-04-01')\n UNION ALL SELECT date('2011-05-01')\n UNION ALL SELECT date('2011-06-01')\n UNION ALL SELECT date('2011-07-01')\n UNION ALL SELECT date('2011-08-01')\n UNION ALL SELECT date('2011-09-01')\n UNION ALL SELECT date('2011-10-01')\n UNION ALL SELECT date('2011-12-01')\n UNION ALL SELECT date('2011-12-01')\n UNION ALL SELECT date('2012-01-01')\n UNION ALL SELECT date('2012-02-01')\n UNION ALL SELECT date('2012-03-01')\n UNION ALL SELECT date('2012-04-01')\n UNION ALL SELECT date('2012-05-01')\n UNION ALL SELECT date('2012-06-01')\n UNION ALL SELECT date('2012-07-01')\n UNION ALL SELECT date('2012-08-01')\n UNION ALL SELECT date('2012-09-01')\n UNION ALL SELECT date('2012-10-01')\n UNION ALL SELECT date('2012-12-01')\n UNION ALL SELECT date('2012-12-01')\n UNION ALL SELECT date('2013-01-01')\n UNION ALL SELECT date('2013-02-01')\n UNION ALL SELECT date('2013-03-01')\n UNION ALL SELECT date('2013-04-01')\n UNION ALL SELECT date('2013-05-01')\n UNION ALL SELECT date('2013-06-01')\n UNION ALL SELECT date('2013-07-01')\n UNION ALL SELECT date('2013-08-01')\n UNION ALL SELECT date('2013-09-01')\n UNION ALL SELECT date('2013-10-01')\n UNION ALL SELECT date('2013-12-01')\n UNION ALL SELECT date('2013-12-01')\n UNION ALL SELECT date('2014-01-01')\n UNION ALL SELECT date('2014-02-01')\n UNION ALL SELECT date('2014-03-01')\n UNION ALL SELECT date('2014-04-01')\n UNION ALL SELECT date('2014-05-01')\n UNION ALL SELECT date('2014-06-01')\n UNION ALL SELECT date('2014-07-01')\n UNION ALL SELECT date('2014-08-01')\n UNION ALL SELECT date('2014-09-01')\n UNION ALL SELECT date('2014-10-01')\n UNION ALL SELECT date('2014-12-01')\n UNION ALL SELECT date('2014-12-01')\n 
UNION ALL SELECT date('2015-01-01')\n UNION ALL SELECT date('2015-02-01')\n UNION ALL SELECT date('2015-03-01')\n UNION ALL SELECT date('2015-04-01')\n UNION ALL SELECT date('2015-05-01')\n UNION ALL SELECT date('2015-06-01')\n UNION ALL SELECT date('2015-07-01')\n UNION ALL SELECT date('2015-08-01')\n UNION ALL SELECT date('2015-09-01')\n UNION ALL SELECT date('2015-10-01')\n UNION ALL SELECT date('2015-12-01')\n UNION ALL SELECT date('2015-12-01')\n UNION ALL SELECT date('2016-01-01')\n UNION ALL SELECT date('2016-02-01')\n UNION ALL SELECT date('2016-03-01')\n UNION ALL SELECT date('2016-04-01')\n UNION ALL SELECT date('2016-05-01')\n UNION ALL SELECT date('2016-06-01')\n UNION ALL SELECT date('2016-07-01')\n UNION ALL SELECT date('2016-08-01')\n UNION ALL SELECT date('2016-09-01')\n UNION ALL SELECT date('2016-10-01')\n UNION ALL SELECT date('2016-12-01')\n UNION ALL SELECT date('2016-12-01')\n UNION ALL SELECT date('2017-01-01')\n UNION ALL SELECT date('2017-02-01')\n UNION ALL SELECT date('2017-03-01')\n UNION ALL SELECT date('2017-04-01')\n UNION ALL SELECT date('2017-05-01')\n UNION ALL SELECT date('2017-06-01')\n UNION ALL SELECT date('2017-07-01')\n UNION ALL SELECT date('2017-08-01')\n UNION ALL SELECT date('2017-09-01')\n UNION ALL SELECT date('2017-10-01')\n UNION ALL SELECT date('2017-12-01')\n UNION ALL SELECT date('2017-12-01')\n UNION ALL SELECT date('2018-01-01')\n UNION ALL SELECT date('2018-02-01')\n UNION ALL SELECT date('2018-03-01')\n UNION ALL SELECT date('2018-04-01')\n UNION ALL SELECT date('2018-05-01')\n UNION ALL SELECT date('2018-06-01')\n UNION ALL SELECT date('2018-07-01')\n UNION ALL SELECT date('2018-08-01')\n UNION ALL SELECT date('2018-09-01')\n UNION ALL SELECT date('2018-10-01')\n UNION ALL SELECT date('2018-12-01')\n UNION ALL SELECT date('2018-12-01')\n UNION ALL SELECT date('2019-01-01')\n UNION ALL SELECT date('2019-02-01')\n UNION ALL SELECT date('2019-03-01')\n UNION ALL SELECT date('2019-04-01')\n UNION ALL SELECT date('2019-05-01')\n UNION ALL SELECT date('2019-06-01')\n UNION ALL SELECT date('2019-07-01')\n UNION ALL SELECT date('2019-08-01')\n UNION ALL SELECT date('2019-09-01')\n UNION ALL SELECT date('2019-10-01')\n UNION ALL SELECT date('2019-12-01')\n UNION ALL SELECT date('2019-12-01')\n UNION ALL SELECT date('2020-01-01')\n UNION ALL SELECT date('2020-02-01')\n UNION ALL SELECT date('2020-03-01')\n UNION ALL SELECT date('2020-04-01')\n UNION ALL SELECT date('2020-05-01')\n UNION ALL SELECT date('2020-06-01')\n UNION ALL SELECT date('2020-07-01')\n UNION ALL SELECT date('2020-08-01')\n UNION ALL SELECT date('2020-09-01')\n UNION ALL SELECT date('2020-10-01')\n UNION ALL SELECT date('2020-12-01')\n UNION ALL SELECT date('2020-12-01');\n1.3. 
Populate the MySQL database\nSET FOREIGN_KEY_CHECKS = 0;\n\nTRUNCATE tbl_users;\nTRUNCATE tbl_groups;\nTRUNCATE tbl_permissions;\nTRUNCATE tbl_group_permissions;\nTRUNCATE tbl_rules;\nTRUNCATE tbl_companies;\nTRUNCATE tbl_branches;\nTRUNCATE tbl_bills;\nTRUNCATE tbl_company_bills;\nTRUNCATE tbl_branch_bills;\n\nSET FOREIGN_KEY_CHECKS = 1;\n\nINSERT INTO tbl_companies(name)\nSELECT CONCAT('Company ', g.n)\nfrom generator_4k as g\nLIMIT 100;\n\nINSERT INTO tbl_branches(name, company_id)\nSELECT CONCAT('Branch ', b.n, ' (Company', c.id, ')'), c.id\nfrom generator_4k as b,\n tbl_companies c\nWHERE b.n < 100;\n\nINSERT INTO tbl_users(user_email)\nVALUES ('[email protected]');\n\nINSERT INTO tbl_groups(name)\nVALUES ('Group X');\n\nINSERT INTO tbl_permissions(name)\nVALUES ('Permission W'),\n ('Permission X'),\n ('Permission Y'),\n ('Permission Z');\n\nINSERT INTO tbl_group_permissions(group_id, permission_id)\nSELECT g.id, p.id\nFROM tbl_groups g,\n tbl_permissions p\nWHERE g.name = 'Group X'\n AND p.name = 'Permission W';\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'BRANCH', u.id, g.id, b.company_id, b.id\nFROM tbl_branches b,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND b.id IN (1, 3, 5));\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'COMPANY', u.id, g.id, c.id, NULL\nFROM tbl_companies c,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND c.id IN (2, 4, 6));\n\nSET FOREIGN_KEY_CHECKS = 0;\n\nINSERT INTO tbl_branch_bills(branch_id, bill_id)\nSELECT b.id, ROW_NUMBER() OVER ()\nfrom tbl_branches b,\n dates_10y d;\n\nINSERT INTO tbl_bills(id, bill_date, bill_number)\nSELECT ROW_NUMBER() OVER (), d.d, CONCAT('#NUM-', d.d, '-', b.id) from tbl_branches b,dates_10y d;\n\nSET FOREIGN_KEY_CHECKS = 1;\n1.4. 
Run the query\nEXPLAIN SELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id = branch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id = company_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id = tbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN tbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\n\n1,SIMPLE,u,,const,PRIMARY,PRIMARY,8,const,1,100,Using index\n1,SIMPLE,g,,index,PRIMARY,PRIMARY,8,,1,100,Using index; Start temporary\n1,SIMPLE,gp,,ref,\"PRIMARY,FKocxt78iv4ufox094sdr1pudf7\",PRIMARY,8,companies_and_branches.g.id,1,100,Using index\n1,SIMPLE,p,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.gp.permission_id,1,25,Using where\n1,SIMPLE,r,,ref,\"FK18sr791qaonsmvodm1v7g8vyr,FKldsvxs2qijr9quon4srw627ky,FKp28tcx68kdbb8flhl1xdtl0hp\",FKldsvxs2qijr9quon4srw627ky,9,companies_and_branches.g.id,1,16.67,Using where\n1,SIMPLE,b,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.r.branch_id,1,100,Using index\n1,SIMPLE,tbl_branches,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.r.branch_id,1,100,Using index\n1,SIMPLE,branch_bills,,ref,\"PRIMARY,FKjr0egr9t34sxr1pv2ld1ux174\",FKjr0egr9t34sxr1pv2ld1ux174,9,companies_and_branches.r.branch_id,1,100,Using where; Using index\n1,SIMPLE,bills,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.branch_bills.bill_id,1,100,\n1,SIMPLE,company_bills,,eq_ref,PRIMARY,PRIMARY,8,companies_and_branches.branch_bills.bill_id,1,100,End temporary\n\n10 rows retrieved starting from 1 in 50 ms (execution: 6 ms, fetching: 44 ms)\n2.1 Run PostgreSQL 10.6 locally\n$ docker run --name postgres106 \\\n -e POSTGRES_PASSWORD=password \\\n -p 15432:5432 \\\n -d postgres:10.6\n\n$ docker update --cpus 2 --memory 2GB postgres106\n2.2. 
Create the PostgreSQL database\nDROP TABLE IF EXISTS tbl_rules,\n tbl_permissions,\n tbl_groups,\n tbl_group_permissions,\n tbl_companies,\n tbl_branches,\n tbl_departments,\n tbl_users,\n tbl_company_bills,\n tbl_branch_bills,\n tbl_bills CASCADE;\n\nCREATE TABLE tbl_permissions\n(\n id bigserial NOT NULL PRIMARY KEY,\n name varchar(255) NOT NULL UNIQUE\n);\n\nCREATE TABLE tbl_groups\n(\n id bigserial NOT NULL PRIMARY KEY,\n name varchar(255) UNIQUE\n);\n\nCREATE TABLE tbl_group_permissions\n(\n group_id bigint NOT NULL REFERENCES tbl_groups (id),\n permission_id bigint NOT NULL REFERENCES tbl_permissions (id),\n PRIMARY KEY (group_id, permission_id)\n);\n\nCREATE TABLE tbl_companies\n(\n id bigserial NOT NULL PRIMARY KEY,\n name text NOT NULL\n);\n\nCREATE TABLE tbl_branches\n(\n id bigserial NOT NULL PRIMARY KEY,\n company_id bigint NOT NULL REFERENCES tbl_companies (id),\n name text NOT NULL\n);\n\nCREATE TABLE tbl_users\n(\n id bigserial NOT NULL PRIMARY KEY,\n user_email varchar(255) NOT NULL\n);\n\nCREATE TABLE tbl_rules\n(\n id bigserial NOT NULL PRIMARY KEY,\n rule_type varchar(255),\n user_id bigint REFERENCES tbl_users (id),\n group_id bigint REFERENCES tbl_groups (id),\n company_id bigint REFERENCES tbl_companies (id),\n branch_id bigint REFERENCES tbl_branches (id)\n);\n\nCREATE TABLE tbl_bills\n(\n id bigserial NOT NULL PRIMARY KEY,\n bill_date date NOT NULL,\n bill_number varchar(255) NOT NULL UNIQUE,\n CONSTRAINT bill_const1 UNIQUE (bill_date, bill_number)\n);\n\nCREATE TABLE tbl_company_bills\n(\n company_id bigint REFERENCES tbl_companies (id),\n bill_id bigint NOT NULL REFERENCES tbl_bills (id),\n PRIMARY KEY (company_id, bill_id)\n);\n\nCREATE TABLE tbl_branch_bills\n(\n branch_id bigint REFERENCES tbl_branches (id),\n bill_id bigint NOT NULL REFERENCES tbl_bills (id),\n PRIMARY KEY (branch_id, bill_id)\n);\n2.3. 
Populate the PostgreSQL database\nTRUNCATE tbl_users, tbl_companies, tbl_branches, tbl_groups, tbl_permissions, tbl_group_permissions, tbl_rules, tbl_bills, tbl_branch_bills, tbl_company_bills RESTART IDENTITY CASCADE;\n\nINSERT INTO tbl_users(user_email)\nVALUES ('[email protected]');\n\nWITH new_comps AS (INSERT INTO tbl_companies (id, name)\n SELECT nextval('tbl_companies_id_seq'),\n 'Company ' || currval('tbl_companies_id_seq')\n FROM generate_series(1, 100) num RETURNING id)\nINSERT\nINTO tbl_branches(id, company_id, name)\nSELECT nextval('tbl_branches_id_seq'),\n c.id,\n 'Branch ' || currval('tbl_branches_id_seq') || ' ( Company ' || c.id || ')'\nFROM new_comps c,\n generate_series(1, 100) num;\n\nINSERT INTO tbl_groups(name)\nVALUES ('Group X');\n\nINSERT INTO tbl_permissions(name)\nVALUES ('Permission W'),\n ('Permission X'),\n ('Permission Y'),\n ('Permission Z');\n\nINSERT INTO tbl_group_permissions(group_id, permission_id)\nSELECT g.id, p.id\nFROM tbl_groups g,\n tbl_permissions p\nWHERE g.name = 'Group X'\n AND p.name = 'Permission W';\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'BRANCH', u.id, g.id, b.company_id, b.id\nFROM tbl_branches b,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND b.id IN (1, 3, 5));\n\nINSERT INTO tbl_rules(rule_type, user_id, group_id, company_id, branch_id)\nSELECT 'COMPANY', u.id, g.id, c.id, NULL\nFROM tbl_companies c,\n tbl_groups g,\n tbl_users u\nWHERE (g.name = 'Group X' AND c.id IN (2, 4, 6));\n\nWITH ids AS (SELECT nextval('tbl_bills_id_seq') AS bill_id,\n make_date(year, month, 1) AS bill_date,\n br.id AS branch_id\n FROM tbl_branches AS br,\n generate_series(2010, 2018) AS year,\n generate_series(1, 12) AS month\n),\n bills AS (INSERT INTO tbl_bills (id, bill_date, bill_number)\n SELECT ids.bill_id AS billl_id,\n ids.bill_date AS bill_date,\n '#NUM-' || ids.bill_date || '-' || ids.branch_id AS bill_num\n FROM ids RETURNING *)\nINSERT\nINTO tbl_branch_bills(branch_id, bill_id)\nSELECT branch_id, bill_id\nFROM ids;\n\nEXPLAIN ( ANALYZE , COSTS , VERBOSE , BUFFERS , FORMAT JSON )\nSELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id = branch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id = company_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id = tbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN tbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\n2.4. 
Run the query\nVACUUM ANALYZE ;\n\nEXPLAIN ( ANALYZE , COSTS , VERBOSE , BUFFERS )\nSELECT bills.id AS bill_id,\n bills.bill_date AS bill_date,\n bills.bill_number AS bill_number,\n branch_bills.branch_id AS branch_id,\n company_bills.company_id AS company_id\nFROM tbl_bills bills\n LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id = branch_bills.bill_id\n LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id = company_bills.bill_id\n INNER JOIN tbl_branches ON branch_bills.branch_id = tbl_branches.id\nWHERE branch_bills.branch_id IN (\n SELECT b.id\n FROM tbl_branches b\n INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n INNER JOIN tbl_groups g ON r.group_id = g.id\n INNER JOIN (tbl_group_permissions gp INNER JOIN tbl_permissions p ON gp.permission_id = p.id)\n ON g.id = gp.group_id\n INNER JOIN tbl_users u ON r.user_id = u.id\n WHERE u.id = 1\n AND r.rule_type = 'BRANCH'\n AND p.name = 'Permission W'\n);\n\nGather (cost=36865.05..89524.81 rows=108 width=48) (actual time=667.105..1976.054 rows=324 loops=1)\n\" Output: bills.id, bills.bill_date, bills.bill_number, branch_bills.branch_id, company_bills.company_id\"\n Workers Planned: 2\n Workers Launched: 2\n\" Buffers: shared hit=28392 read=4240 written=336, temp read=20821 written=20635\"\n -> Hash Semi Join (cost=35865.05..88514.01 rows=45 width=48) (actual time=636.256..1948.638 rows=108 loops=3)\n\" Output: bills.id, bills.bill_date, bills.bill_number, branch_bills.branch_id, company_bills.company_id\"\n Hash Cond: (branch_bills.branch_id = b.id)\n\" Buffers: shared hit=28392 read=4240 written=336, temp read=20821 written=20635\"\n Worker 0: actual time=563.702..1964.847 rows=105 loops=1\n\" Buffers: shared hit=10027 read=953 written=109, temp read=6971 written=6909\"\n Worker 1: actual time=679.468..1965.037 rows=122 loops=1\n\" Buffers: shared hit=9292 read=1628 written=114, temp read=6960 written=6898\"\n -> Hash Join (cost=35859.32..87326.53 rows=450000 width=56) (actual time=491.279..1875.725 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date, bills.bill_number, branch_bills.branch_id, company_bills.company_id, tbl_branches.id\"\n Inner Unique: true\n Hash Cond: (branch_bills.branch_id = tbl_branches.id)\n\" Buffers: shared hit=28269 read=4239 written=336, temp read=20821 written=20635\"\n Worker 0: actual time=497.021..1870.969 rows=364536 loops=1\n\" Buffers: shared hit=9971 read=952 written=109, temp read=6971 written=6909\"\n Worker 1: actual time=479.286..1900.802 rows=363072 loops=1\n\" Buffers: shared hit=9235 read=1628 written=114, temp read=6960 written=6898\"\n -> Hash Join (cost=35541.32..85826.78 rows=450000 width=48) (actual time=487.460..1545.962 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date, bills.bill_number, branch_bills.branch_id, company_bills.company_id\"\n Hash Cond: (bills.id = branch_bills.bill_id)\n\" Buffers: shared hit=27990 read=4239 written=336, temp read=20821 written=20635\"\n Worker 0: actual time=493.881..1583.609 rows=364536 loops=1\n\" Buffers: shared hit=9878 read=952 written=109, temp read=6971 written=6909\"\n Worker 1: actual time=474.878..1542.282 rows=363072 loops=1\n\" Buffers: shared hit=9142 read=1628 written=114, temp read=6960 written=6898\"\n -> Merge Left Join (cost=129.32..31921.28 rows=450000 width=40) (actual time=0.047..239.155 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date, bills.bill_number, company_bills.company_id\"\n Merge Cond: (bills.id = company_bills.bill_id)\n Buffers: shared hit=12327 read=2345 written=336\n Worker 0: actual 
time=0.058..248.250 rows=364536 loops=1\n Buffers: shared hit=4336 read=637 written=109\n Worker 1: actual time=0.065..222.495 rows=363072 loops=1\n Buffers: shared hit=3979 read=929 written=114\n -> Parallel Index Scan using tbl_bills_pkey on public.tbl_bills bills (cost=0.43..30650.43 rows=450000 width=32) (actual time=0.030..127.785 rows=360000 loops=3)\n\" Output: bills.id, bills.bill_date, bills.bill_number\"\n Buffers: shared hit=12327 read=2345 written=336\n Worker 0: actual time=0.037..105.247 rows=364536 loops=1\n Buffers: shared hit=4336 read=637 written=109\n Worker 1: actual time=0.044..108.513 rows=363072 loops=1\n Buffers: shared hit=3979 read=929 written=114\n -> Sort (cost=128.89..133.52 rows=1850 width=16) (actual time=0.015..0.015 rows=0 loops=3)\n\" Output: company_bills.company_id, company_bills.bill_id\"\n Sort Key: company_bills.bill_id\n Sort Method: quicksort Memory: 25kB\n Worker 0: actual time=0.019..0.019 rows=0 loops=1\n Worker 1: actual time=0.018..0.018 rows=0 loops=1\n -> Seq Scan on public.tbl_company_bills company_bills (cost=0.00..28.50 rows=1850 width=16) (actual time=0.006..0.006 rows=0 loops=3)\n\" Output: company_bills.company_id, company_bills.bill_id\"\n Worker 0: actual time=0.007..0.007 rows=0 loops=1\n Worker 1: actual time=0.008..0.008 rows=0 loops=1\n -> Hash (cost=16638.00..16638.00 rows=1080000 width=16) (actual time=486.822..486.822 rows=1080000 loops=3)\n\" Output: branch_bills.branch_id, branch_bills.bill_id\"\n Buckets: 131072 Batches: 32 Memory Usage: 2614kB\n\" Buffers: shared hit=15620 read=1894, temp written=13740\"\n Worker 0: actual time=493.045..493.045 rows=1080000 loops=1\n\" Buffers: shared hit=5523 read=315, temp written=4580\"\n Worker 1: actual time=474.144..474.144 rows=1080000 loops=1\n\" Buffers: shared hit=5139 read=699, temp written=4580\"\n -> Seq Scan on public.tbl_branch_bills branch_bills (cost=0.00..16638.00 rows=1080000 width=16) (actual time=0.025..158.450 rows=1080000 loops=3)\n\" Output: branch_bills.branch_id, branch_bills.bill_id\"\n Buffers: shared hit=15620 read=1894\n Worker 0: actual time=0.032..182.305 rows=1080000 loops=1\n Buffers: shared hit=5523 read=315\n Worker 1: actual time=0.022..144.461 rows=1080000 loops=1\n Buffers: shared hit=5139 read=699\n -> Hash (cost=193.00..193.00 rows=10000 width=8) (actual time=3.769..3.769 rows=10000 loops=3)\n Output: tbl_branches.id\n Buckets: 16384 Batches: 1 Memory Usage: 519kB\n Buffers: shared hit=279\n Worker 0: actual time=3.077..3.077 rows=10000 loops=1\n Buffers: shared hit=93\n Worker 1: actual time=4.331..4.331 rows=10000 loops=1\n Buffers: shared hit=93\n -> Seq Scan on public.tbl_branches (cost=0.00..193.00 rows=10000 width=8) (actual time=0.006..1.755 rows=10000 loops=3)\n Output: tbl_branches.id\n Buffers: shared hit=279\n Worker 0: actual time=0.007..1.485 rows=10000 loops=1\n Buffers: shared hit=93\n Worker 1: actual time=0.008..1.980 rows=10000 loops=1\n Buffers: shared hit=93\n -> Hash (cost=5.72..5.72 rows=1 width=16) (actual time=0.117..0.117 rows=3 loops=3)\n\" Output: b.id, r.branch_id\"\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=40\n Worker 0: actual time=0.125..0.125 rows=3 loops=1\n Buffers: shared hit=15\n Worker 1: actual time=0.156..0.156 rows=3 loops=1\n Buffers: shared hit=15\n -> Nested Loop (cost=1.40..5.72 rows=1 width=16) (actual time=0.102..0.113 rows=3 loops=3)\n\" Output: b.id, r.branch_id\"\n Buffers: shared hit=40\n Worker 0: actual time=0.111..0.120 rows=3 loops=1\n Buffers: shared hit=15\n Worker 1: 
actual time=0.140..0.153 rows=3 loops=1\n Buffers: shared hit=15\n -> Nested Loop (cost=1.40..4.69 rows=1 width=24) (actual time=0.096..0.103 rows=3 loops=3)\n\" Output: b.id, r.branch_id, r.user_id\"\n Join Filter: (r.group_id = g.id)\n Buffers: shared hit=31\n Worker 0: actual time=0.107..0.112 rows=3 loops=1\n Buffers: shared hit=12\n Worker 1: actual time=0.131..0.139 rows=3 loops=1\n Buffers: shared hit=12\n -> Merge Join (cost=1.40..1.55 rows=3 width=32) (actual time=0.073..0.077 rows=3 loops=3)\n\" Output: b.id, r.branch_id, r.group_id, r.user_id\"\n Merge Cond: (b.id = r.branch_id)\n Buffers: shared hit=22\n Worker 0: actual time=0.079..0.082 rows=3 loops=1\n Buffers: shared hit=9\n Worker 1: actual time=0.102..0.107 rows=3 loops=1\n Buffers: shared hit=9\n -> Index Only Scan using tbl_branches_pkey on public.tbl_branches b (cost=0.29..270.29 rows=10000 width=8) (actual time=0.035..0.036 rows=6 loops=3)\n Output: b.id\n Heap Fetches: 0\n Buffers: shared hit=11\n Worker 0: actual time=0.038..0.039 rows=6 loops=1\n Buffers: shared hit=4\n Worker 1: actual time=0.049..0.051 rows=6 loops=1\n Buffers: shared hit=4\n -> Sort (cost=1.11..1.12 rows=3 width=24) (actual time=0.035..0.036 rows=3 loops=3)\n\" Output: r.branch_id, r.group_id, r.user_id\"\n Sort Key: r.branch_id\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=11\n Worker 0: actual time=0.039..0.039 rows=3 loops=1\n Buffers: shared hit=5\n Worker 1: actual time=0.050..0.051 rows=3 loops=1\n Buffers: shared hit=5\n -> Seq Scan on public.tbl_rules r (cost=0.00..1.09 rows=3 width=24) (actual time=0.017..0.019 rows=3 loops=3)\n\" Output: r.branch_id, r.group_id, r.user_id\"\n Filter: ((r.user_id = 1) AND ((r.rule_type)::text = 'BRANCH'::text))\n Rows Removed by Filter: 3\n Buffers: shared hit=3\n Worker 0: actual time=0.015..0.016 rows=3 loops=1\n Buffers: shared hit=1\n Worker 1: actual time=0.028..0.030 rows=3 loops=1\n Buffers: shared hit=1\n -> Materialize (cost=0.00..3.10 rows=1 width=16) (actual time=0.008..0.008 rows=1 loops=9)\n\" Output: g.id, gp.group_id\"\n Buffers: shared hit=9\n Worker 0: actual time=0.009..0.010 rows=1 loops=3\n Buffers: shared hit=3\n Worker 1: actual time=0.009..0.010 rows=1 loops=3\n Buffers: shared hit=3\n -> Nested Loop (cost=0.00..3.10 rows=1 width=16) (actual time=0.019..0.020 rows=1 loops=3)\n\" Output: g.id, gp.group_id\"\n Inner Unique: true\n Join Filter: (gp.permission_id = p.id)\n Buffers: shared hit=9\n Worker 0: actual time=0.024..0.025 rows=1 loops=1\n Buffers: shared hit=3\n Worker 1: actual time=0.024..0.025 rows=1 loops=1\n Buffers: shared hit=3\n -> Nested Loop (cost=0.00..2.03 rows=1 width=24) (actual time=0.012..0.012 rows=1 loops=3)\n\" Output: g.id, gp.permission_id, gp.group_id\"\n Join Filter: (g.id = gp.group_id)\n Buffers: shared hit=6\n Worker 0: actual time=0.013..0.014 rows=1 loops=1\n Buffers: shared hit=2\n Worker 1: actual time=0.015..0.016 rows=1 loops=1\n Buffers: shared hit=2\n -> Seq Scan on public.tbl_groups g (cost=0.00..1.01 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=3)\n\" Output: g.id, g.name\"\n Buffers: shared hit=3\n Worker 0: actual time=0.006..0.006 rows=1 loops=1\n Buffers: shared hit=1\n Worker 1: actual time=0.006..0.006 rows=1 loops=1\n Buffers: shared hit=1\n -> Seq Scan on public.tbl_group_permissions gp (cost=0.00..1.01 rows=1 width=16) (actual time=0.006..0.006 rows=1 loops=3)\n\" Output: gp.group_id, gp.permission_id\"\n Buffers: shared hit=3\n Worker 0: actual time=0.006..0.007 rows=1 loops=1\n Buffers: shared hit=1\n 
Worker 1: actual time=0.008..0.008 rows=1 loops=1\n Buffers: shared hit=1\n -> Seq Scan on public.tbl_permissions p (cost=0.00..1.05 rows=1 width=8) (actual time=0.007..0.007 rows=1 loops=3)\n\" Output: p.id, p.name\"\n Filter: ((p.name)::text = 'Permission W'::text)\n Buffers: shared hit=3\n Worker 0: actual time=0.010..0.010 rows=1 loops=1\n Buffers: shared hit=1\n Worker 1: actual time=0.008..0.008 rows=1 loops=1\n Buffers: shared hit=1\n -> Seq Scan on public.tbl_users u (cost=0.00..1.01 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=9)\n\" Output: u.id, u.user_email\"\n Filter: (u.id = 1)\n Buffers: shared hit=9\n Worker 0: actual time=0.001..0.002 rows=1 loops=3\n Buffers: shared hit=3\n Worker 1: actual time=0.003..0.004 rows=1 loops=3\n Buffers: shared hit=3\nPlanning time: 2.680 ms\nExecution time: 1976.277 ms\nBest regards,Behrang Saeedzadehblog.behrang.org",
"msg_date": "Tue, 1 Oct 2019 22:37:03 +1000",
"msg_from": "Behrang Saeedzadeh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow PostgreSQL 10.6 query"
},
{
"msg_contents": "Behrang Saeedzadeh <[email protected]> writes:\n> On my machine, this query that is generated by Hibernate runs in about 57\n> ms on MySQL 8 but it takes more than 1 second to run on PostgreSQL:\n\n> SELECT bills.id AS bill_id,\n> bills.bill_date AS bill_date,\n> bills.bill_number AS bill_number,\n> branch_bills.branch_id AS branch_id,\n> company_bills.company_id AS company_id\n> FROM tbl_bills bills\n> LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\n> branch_bills.bill_id\n> LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\n> company_bills.bill_id\n> INNER JOIN tbl_branches ON branch_bills.branch_id =\n> tbl_branches.id\n> WHERE branch_bills.branch_id IN (\n> SELECT b.id\n> FROM tbl_branches b\n> INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n> INNER JOIN tbl_groups g ON r.group_id = g.id\n> INNER JOIN (tbl_group_permissions gp INNER JOIN\n> tbl_permissions p ON gp.permission_id = p.id)\n> ON g.id = gp.group_id\n> INNER JOIN tbl_users u ON r.user_id = u.id\n> WHERE u.id = 1\n> AND r.rule_type = 'BRANCH'\n> AND p.name = 'Permission W'\n> );\n\n[ counts the JOINs... ] You might try raising join_collapse_limit and\nfrom_collapse_limit to be 12 or so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Oct 2019 09:27:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow PostgreSQL 10.6 query"
},
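A minimal sketch of how the suggestion above could be tried in a psql session against the same database. Both settings are ordinary planner GUCs that default to 8; the role name in the last two statements is a made-up example.

```
-- Raise the planner's collapse limits for this session only, then re-run
-- the EXPLAIN (ANALYZE, BUFFERS) from the first message to compare plans.
SET join_collapse_limit = 12;
SET from_collapse_limit = 12;
SHOW join_collapse_limit;

-- If the new plan is the fast one, the settings can be persisted for the
-- application's role rather than globally ("app_user" is hypothetical):
ALTER ROLE app_user SET join_collapse_limit = 12;
ALTER ROLE app_user SET from_collapse_limit = 12;
```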
{
"msg_contents": "Thanks. That eliminated the bottleneck!\n\nAny ideas why adding ORDER BY to the subquery also changes the plan in a\nway that eliminates the bottleneck?\n\nBest regards,\nBehrang Saeedzadeh\nblog.behrang.org\n\n\nOn Tue, 1 Oct 2019 at 23:27, Tom Lane <[email protected]> wrote:\n\n> Behrang Saeedzadeh <[email protected]> writes:\n> > On my machine, this query that is generated by Hibernate runs in about 57\n> > ms on MySQL 8 but it takes more than 1 second to run on PostgreSQL:\n>\n> > SELECT bills.id AS bill_id,\n> > bills.bill_date AS bill_date,\n> > bills.bill_number AS bill_number,\n> > branch_bills.branch_id AS branch_id,\n> > company_bills.company_id AS company_id\n> > FROM tbl_bills bills\n> > LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\n> > branch_bills.bill_id\n> > LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\n> > company_bills.bill_id\n> > INNER JOIN tbl_branches ON branch_bills.branch_id =\n> > tbl_branches.id\n> > WHERE branch_bills.branch_id IN (\n> > SELECT b.id\n> > FROM tbl_branches b\n> > INNER JOIN tbl_rules r ON b.id = r.branch_id\n>\n> > INNER JOIN tbl_groups g ON r.group_id = g.id\n> > INNER JOIN (tbl_group_permissions gp INNER JOIN\n> > tbl_permissions p ON gp.permission_id = p.id)\n> > ON g.id = gp.group_id\n> > INNER JOIN tbl_users u ON r.user_id = u.id\n> > WHERE u.id = 1\n> > AND r.rule_type = 'BRANCH'\n> > AND p.name = 'Permission W'\n> > );\n>\n> [ counts the JOINs... ] You might try raising join_collapse_limit and\n> from_collapse_limit to be 12 or so.\n>\n> regards, tom lane\n>\n\nThanks. That eliminated the bottleneck!Any ideas why adding ORDER BY to the subquery also changes the plan in a way that eliminates the bottleneck?Best regards,Behrang Saeedzadehblog.behrang.orgOn Tue, 1 Oct 2019 at 23:27, Tom Lane <[email protected]> wrote:Behrang Saeedzadeh <[email protected]> writes:\n> On my machine, this query that is generated by Hibernate runs in about 57\n> ms on MySQL 8 but it takes more than 1 second to run on PostgreSQL:\n\n> SELECT bills.id AS bill_id,\n> bills.bill_date AS bill_date,\n> bills.bill_number AS bill_number,\n> branch_bills.branch_id AS branch_id,\n> company_bills.company_id AS company_id\n> FROM tbl_bills bills\n> LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id =\n> branch_bills.bill_id\n> LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id =\n> company_bills.bill_id\n> INNER JOIN tbl_branches ON branch_bills.branch_id =\n> tbl_branches.id\n> WHERE branch_bills.branch_id IN (\n> SELECT b.id\n> FROM tbl_branches b\n> INNER JOIN tbl_rules r ON b.id = r.branch_id\n\n> INNER JOIN tbl_groups g ON r.group_id = g.id\n> INNER JOIN (tbl_group_permissions gp INNER JOIN\n> tbl_permissions p ON gp.permission_id = p.id)\n> ON g.id = gp.group_id\n> INNER JOIN tbl_users u ON r.user_id = u.id\n> WHERE u.id = 1\n> AND r.rule_type = 'BRANCH'\n> AND p.name = 'Permission W'\n> );\n\n[ counts the JOINs... ] You might try raising join_collapse_limit and\nfrom_collapse_limit to be 12 or so.\n\n regards, tom lane",
"msg_date": "Tue, 1 Oct 2019 23:42:33 +1000",
"msg_from": "Behrang Saeedzadeh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow PostgreSQL 10.6 query"
},
{
"msg_contents": "On Tue, Oct 01, 2019 at 11:42:33PM +1000, Behrang Saeedzadeh wrote:\n>Thanks. That eliminated the bottleneck!\n>\n>Any ideas why adding ORDER BY to the subquery also changes the plan in a\n>way that eliminates the bottleneck?\n>\n\nIIRC the ORDER BY clause makes it impossible to \"collapse\" the subquery\ninto the main (upper) one, and it probably happens to constrict the\nchoices so that the planner ends up picking a good plan. I guess adding\n\"OFFSET 0\" to the subquery would have the same effect.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 6 Oct 2019 22:37:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow PostgreSQL 10.6 query"
},
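For completeness, a sketch of the OFFSET 0 variant Tomas mentions, applied to the query from the first message. OFFSET 0 removes no rows, and like the dummy ORDER BY it is expected to keep the subquery from being collapsed into the outer query, so the planner handles the two join problems separately.

```
SELECT bills.id                 AS bill_id,
       bills.bill_date          AS bill_date,
       bills.bill_number        AS bill_number,
       branch_bills.branch_id   AS branch_id,
       company_bills.company_id AS company_id
FROM tbl_bills bills
         LEFT OUTER JOIN tbl_branch_bills branch_bills ON bills.id = branch_bills.bill_id
         LEFT OUTER JOIN tbl_company_bills company_bills ON bills.id = company_bills.bill_id
         INNER JOIN tbl_branches ON branch_bills.branch_id = tbl_branches.id
WHERE branch_bills.branch_id IN (
    SELECT b.id
    FROM tbl_branches b
             INNER JOIN tbl_rules r ON b.id = r.branch_id
             INNER JOIN tbl_groups g ON r.group_id = g.id
             INNER JOIN (tbl_group_permissions gp
                 INNER JOIN tbl_permissions p ON gp.permission_id = p.id)
                        ON g.id = gp.group_id
             INNER JOIN tbl_users u ON r.user_id = u.id
    WHERE u.id = 1
      AND r.rule_type = 'BRANCH'
      AND p.name = 'Permission W'
    OFFSET 0    -- fence against subquery pull-up; does not change the result set
);
```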
{
"msg_contents": "Thanks for the tip!\n\nRegards,\nBehrang (sent from my mobile)\n\nOn Mon, Oct 7, 2019, 07:37 Tomas Vondra <[email protected]>\nwrote:\n\n> On Tue, Oct 01, 2019 at 11:42:33PM +1000, Behrang Saeedzadeh wrote:\n> >Thanks. That eliminated the bottleneck!\n> >\n> >Any ideas why adding ORDER BY to the subquery also changes the plan in a\n> >way that eliminates the bottleneck?\n> >\n>\n> IIRC the ORDER BY clause makes it impossible to \"collapse\" the subquery\n> into the main (upper) one, and it probably happens to constrict the\n> choices so that the planner ends up picking a good plan. I guess adding\n> \"OFFSET 0\" to the subquery would have the same effect.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nThanks for the tip!Regards,Behrang (sent from my mobile)On Mon, Oct 7, 2019, 07:37 Tomas Vondra <[email protected]> wrote:On Tue, Oct 01, 2019 at 11:42:33PM +1000, Behrang Saeedzadeh wrote:\n>Thanks. That eliminated the bottleneck!\n>\n>Any ideas why adding ORDER BY to the subquery also changes the plan in a\n>way that eliminates the bottleneck?\n>\n\nIIRC the ORDER BY clause makes it impossible to \"collapse\" the subquery\ninto the main (upper) one, and it probably happens to constrict the\nchoices so that the planner ends up picking a good plan. I guess adding\n\"OFFSET 0\" to the subquery would have the same effect.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 7 Oct 2019 19:27:42 +1100",
"msg_from": "Behrang Saeedzadeh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow PostgreSQL 10.6 query"
}
] |
[
{
"msg_contents": "Hey,\nIn PG12 I'm trying to create partitions by range on a date column that\nmight be null (indicates it is the most recent version of the object). My\nPK has to include the partition column, therefore I'm getting an error that\nI cant create a primary key with the specific column because it has null\nvalues.\n\nFor example :\n\\d object_revision\n Table \"public.object_revision\"\n Column | Type | Collation | Nullable |\n Default\n-------------+-----------------------------+-----------+----------+-----------------------------------------------\n id | integer | | not null |\nnextval('mariel_dates_test_id_seq'::regclass)\n end_time | timestamp without time zone | | |\n object_hash | text | | |\nIndexes:\n \"id_pk\" PRIMARY KEY, btree (id)\n\nLets say that the same object (object_hash) can have many revisions, the\nend_time is the time it was last updated. I'm trying to create this table\nas a range partition on the end_time. However, when I try to add the pk I'm\ngetting an error :\nALTER TABLE object_revision ADD CONSTRAINT object_revision_id_end_time\nPRIMARY KEY (id,end_time);\nERROR: column \"end_time\" contains null values\n\ndoes someone familiar with a workaround ? I know that in postgresql as part\nof the primary key definition unique and not null constraints are enforced\non each column and not on both of them. However, this might be problematic\nwith pg12 partitions..\n\nHey,In PG12 I'm trying to create partitions by range on a date column that might be null (indicates it is the most recent version of the object). My PK has to include the partition column, therefore I'm getting an error that I cant create a primary key with the specific column because it has null values.For example : \\d object_revision Table \"public.object_revision\" Column | Type | Collation | Nullable | Default-------------+-----------------------------+-----------+----------+----------------------------------------------- id | integer | | not null | nextval('mariel_dates_test_id_seq'::regclass) end_time | timestamp without time zone | | | object_hash | text | | |Indexes: \"id_pk\" PRIMARY KEY, btree (id)Lets say that the same object (object_hash) can have many revisions, the end_time is the time it was last updated. I'm trying to create this table as a range partition on the end_time. However, when I try to add the pk I'm getting an error : ALTER TABLE object_revision ADD CONSTRAINT object_revision_id_end_time PRIMARY KEY (id,end_time);ERROR: column \"end_time\" contains null valuesdoes someone familiar with a workaround ? I know that in postgresql as part of the primary key definition unique and not null constraints are enforced on each column and not on both of them. However, this might be problematic with pg12 partitions..",
"msg_date": "Wed, 2 Oct 2019 09:17:37 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg12 - partition by column that might have null values"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> In PG12 I'm trying to create partitions by range on a date column\n> that might be null (indicates it is the most recent version of the\n> object). My PK has to include the partition column, therefore I'm\n> getting an error that I cant create a primary key with the specific\n> column because it has null values.\n> \n> For example : \n> \\d object_revision\n> Table\n> \"public.object_revision\"\n> Column | Type | Collation | Nullable | \n> Default\n> -------------+-----------------------------+-----------+----------+\n> -----------------------------------------------\n> id | integer | | not null |\n> nextval('mariel_dates_test_id_seq'::regclass)\n> end_time | timestamp without time zone | | |\n> object_hash | text | | |\n> Indexes:\n> \"id_pk\" PRIMARY KEY, btree (id)\n> \n> Lets say that the same object (object_hash) can have many revisions,\n> the end_time is the time it was last updated. I'm trying to create\n> this table as a range partition on the end_time. However, when I try\n> to add the pk I'm getting an error : \n> ALTER TABLE object_revision ADD CONSTRAINT\n> object_revision_id_end_time PRIMARY KEY (id,end_time);\n> ERROR: column \"end_time\" contains null values\n> \n> does someone familiar with a workaround ? I know that in postgresql\n> as part of the primary key definition unique and not null constraints\n> are enforced on each column and not on both of them. However, this\n> might be problematic with pg12 partitions..\n\nCan \"end_time\" ever be modified?\n\nIf yes, it is a bad choice for a partitioning column.\n\nIf the NULL in \"end_time\" is meant to signify \"no end\", use the\nvalue \"infinity\" instead of NULL. Then you can define the column\nas NOT NULL.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 02 Oct 2019 09:29:58 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg12 - partition by column that might have null values"
},
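A sketch of what the 'infinity' suggestion could look like for the table in the question on PG12. Partition names and boundary dates are illustrative assumptions; the yearly partitions would be repeated for however much history is kept.

```
-- end_time is NOT NULL, with 'infinity' standing in for "no end yet",
-- so the partition key can be part of the primary key.
CREATE TABLE object_revision (
    id          bigserial NOT NULL,
    end_time    timestamp without time zone NOT NULL DEFAULT 'infinity',
    object_hash text,
    PRIMARY KEY (id, end_time)
) PARTITION BY RANGE (end_time);

-- Closed revisions, e.g. one partition per year (repeat as needed).
CREATE TABLE object_revision_2019 PARTITION OF object_revision
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

-- Open revisions: anything above an arbitrary far-future cutoff,
-- which includes 'infinity'.
CREATE TABLE object_revision_current PARTITION OF object_revision
    FOR VALUES FROM ('2100-01-01') TO (MAXVALUE);

-- "Most recent revision" lookups then use equality instead of IS NULL:
--   SELECT * FROM object_revision WHERE end_time = 'infinity';
```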
{
"msg_contents": "Whenever I have a new revision of that object, I update the end_time of the\nlatest revision to be now() and I add a new record of that object with\nend_date null.\nThe null value is used to find most recent revisions of objects..\nThanks for the suggestion of infinity ! I'll try it.\n\nWhenever I have a new revision of that object, I update the end_time of the latest revision to be now() and I add a new record of that object with end_date null.The null value is used to find most recent revisions of objects..Thanks for the suggestion of infinity ! I'll try it.",
"msg_date": "Wed, 2 Oct 2019 10:37:11 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg12 - partition by column that might have null values"
},
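Restated against the sketch after the previous reply (and therefore just as hypothetical), the write pattern described here would look like the following; 'abc123' is a made-up object_hash.

```
-- Close the current revision and open a new one.  Because end_time is the
-- partition key, the UPDATE moves the old row out of the "current"
-- partition; PostgreSQL 11 and later perform this row movement automatically.
BEGIN;

UPDATE object_revision
   SET end_time = now()
 WHERE object_hash = 'abc123'
   AND end_time = 'infinity';

INSERT INTO object_revision (object_hash)   -- end_time defaults to 'infinity'
VALUES ('abc123');

COMMIT;
```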
{
"msg_contents": "From: Mariel Cherkassky <[email protected]> \nSent: Wednesday, October 02, 2019 12:37 AM\nWhenever I have a new revision of that object, I update the end_time of the latest revision to be now() and I add a new record of that object with end_date null.\n\nThe null value is used to find most recent revisions of objects..\n\nThanks for the suggestion of infinity ! I'll try it.\n\n \n\nMy partitioning table design model always uses a partitioning column that is 100% static since that guarantees that rows are not constantly moving between partitions (with index update overhead etc). In this scenario I’d use a “StartTime” column to anchor the row in a partition. The relatively few rows with a null EndTime don’t need the power of partitioning, just an index to find them.\n\n \n\nMike Sofen\n\n\nFrom: Mariel Cherkassky <[email protected]> Sent: Wednesday, October 02, 2019 12:37 AMWhenever I have a new revision of that object, I update the end_time of the latest revision to be now() and I add a new record of that object with end_date null.The null value is used to find most recent revisions of objects..Thanks for the suggestion of infinity ! I'll try it. My partitioning table design model always uses a partitioning column that is 100% static since that guarantees that rows are not constantly moving between partitions (with index update overhead etc). In this scenario I’d use a “StartTime” column to anchor the row in a partition. The relatively few rows with a null EndTime don’t need the power of partitioning, just an index to find them. Mike Sofen",
"msg_date": "Wed, 2 Oct 2019 05:08:05 -0700",
"msg_from": "\"Mike Sofen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg12 - partition by column that might have null values"
},
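A sketch of the layout described above, with illustrative names. Here start_time is a hypothetical column (the table in the question does not have one) recording when the revision row was created, so it never changes and rows never move between partitions.

```
CREATE TABLE object_revision (
    id          bigserial NOT NULL,
    start_time  timestamp without time zone NOT NULL DEFAULT now(),
    end_time    timestamp without time zone,   -- NULL = still the current revision
    object_hash text,
    PRIMARY KEY (id, start_time)
) PARTITION BY RANGE (start_time);

-- One partition per year of creation date (repeat as needed).
CREATE TABLE object_revision_2019 PARTITION OF object_revision
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

-- The relatively few open rows are found through an index rather than
-- through the partition key.
CREATE INDEX object_revision_end_time_idx ON object_revision (end_time);
```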
{
"msg_contents": "but the start time doesnt indicates that the object is the most recent, it\njust indicates when the object was added to your table. If your queries\ninvolve the start_time I can understand why u set it as a partition column,\notherwise is isnt useful. In most of my queries I query by one of 2 options\n:\n1.Where end_time is null\n2.Where start_date>DATE and end_date <DATE\n\nI think that doing the following will be the best option :\npartition by list (end_time) - (1 for all non null (non infinity) and 1\ndefault for all those who has end_time that isnt null)\non each partition I'll create range partition on the end_date so that I can\nsearch for revisions faster.\n\nWhat do you think ?\n\nbut the start time doesnt indicates that the object is the most recent, it just indicates when the object was added to your table. If your queries involve the start_time I can understand why u set it as a partition column, otherwise is isnt useful. In most of my queries I query by one of 2 options :1.Where end_time is null 2.Where start_date>DATE and end_date <DATEI think that doing the following will be the best option : partition by list (end_time) - (1 for all non null (non infinity) and 1 default for all those who has end_time that isnt null)on each partition I'll create range partition on the end_date so that I can search for revisions faster.What do you think ?",
"msg_date": "Wed, 2 Oct 2019 16:44:31 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg12 - partition by column that might have null values"
},
{
"msg_contents": ">but the start time doesnt indicates that the object is the most recent, it just indicates when the object was added to your table…\n\n>on each partition I'll create range partition on the end_date so that I can search for revisions faster.\n\n \n\nI believe you are confusing data storage with query optimization. Rarely would there be more updated rows than aged/stable rows…in the normal system, having even 3% of the data in churn (updateable) state would be unusual and your description of the data dynamics on this table said that a row updated once, gets the end_date set and then a new row is created.\n\n \n\nTo me, that says, put an index on end_date so you can find/query them quickly, and create partitions on a static date so the rows (and indexes) aren’t always being updated.\n\n \n\nMike Sofen\n\n\n >but the start time doesnt indicates that the object is the most recent, it just indicates when the object was added to your table…>on each partition I'll create range partition on the end_date so that I can search for revisions faster. I believe you are confusing data storage with query optimization. Rarely would there be more updated rows than aged/stable rows…in the normal system, having even 3% of the data in churn (updateable) state would be unusual and your description of the data dynamics on this table said that a row updated once, gets the end_date set and then a new row is created. To me, that says, put an index on end_date so you can find/query them quickly, and create partitions on a static date so the rows (and indexes) aren’t always being updated. Mike Sofen",
"msg_date": "Wed, 2 Oct 2019 16:14:22 -0700",
"msg_from": "\"Mike Sofen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg12 - partition by column that might have null values"
},
{
"msg_contents": "Not sure how data storage is relevant here, I was only focusing on query\noptimization. Lets say that most of the data isnt moving (history data).\nHowever, objects can be changed and therefore new revisions are added and\nthe previous revisions updated (their end_date is updated). If you run\nqueries that involve the end_date very common (in order to get the most\nrecent revision of objects) it will be better to set this column as a\npartition column instead just having an index on this col. In this way,\ngetting all the recent revisions of a specific object is reached by log(m)\n[m is the number of most recent revisions] instead of logn [n is the number\nof revisions u have] and n is by far bigger than m. Correct me I'f I'm\nwrong, this topic is quite interesting ..\n\nNot sure how data storage is relevant here, I was only focusing on query optimization. Lets say that most of the data isnt moving (history data). However, objects can be changed and therefore new revisions are added and the previous revisions updated (their end_date is updated). If you run queries that involve the end_date very common (in order to get the most recent revision of objects) it will be better to set this column as a partition column instead just having an index on this col. In this way, getting all the recent revisions of a specific object is reached by log(m) [m is the number of most recent revisions] instead of logn [n is the number of revisions u have] and n is by far bigger than m. Correct me I'f I'm wrong, this topic is quite interesting ..",
"msg_date": "Thu, 3 Oct 2019 10:30:20 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg12 - partition by column that might have null values"
},
{
"msg_contents": "Just create a partial index on id column where end_date = infinity (or null\nif you really prefer that pattern) and the system can quickly find the rows\nthat are still most current revision. How many rows do you have in this\ntable? Or foresee ever having? What took you down the road of partitioning\nthe table? Theory only, or solving a real life optimization problem?\n\nJust create a partial index on id column where end_date = infinity (or null if you really prefer that pattern) and the system can quickly find the rows that are still most current revision. How many rows do you have in this table? Or foresee ever having? What took you down the road of partitioning the table? Theory only, or solving a real life optimization problem?",
"msg_date": "Thu, 3 Oct 2019 11:55:22 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg12 - partition by column that might have null values"
}
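A minimal version of the partial-index suggestion, written against the original, unpartitioned table from the first message. The indexed column follows the wording above (id); indexing object_hash the same way may suit "latest revision of a given object" lookups better.

```
-- Small index covering only the rows that are still open.
CREATE INDEX object_revision_current_idx
    ON object_revision (id)
    WHERE end_time IS NULL;

-- The planner can then satisfy the common "current revisions" query cheaply:
EXPLAIN
SELECT id, object_hash
FROM object_revision
WHERE end_time IS NULL;
```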
] |
[
{
"msg_contents": "I recently performed a pg_dump (data-only) of a relatively large database\nwhere we store intermediate results of calculations. It is approximately 3\nTB on disk and has about 20 billion rows.\n\nWe do the dump/restore about once a month and as the dataset has grown, the\nrestores have gotten very slow. So, this time I decided to do it a\ndifferent way and have some observations that puzzle me.\n\nBackground:\n\nThe data is extremely simple. The rows consist only of numbers and are all\nfixed length. There are no foreign keys, constraints, null values, or\ndefault values. There are no strings or arrays. There are 66 tables and the\nnumber of rows in each table forms a gaussian distribution; so there are 3\ntables which have about 3 billion rows each and the rest of the tables have\nsignificantly fewer rows.\n\nI used the directory format when doing the pg_dump. The compressed data of\nthe dump is 550 GB.\n\nI am using: (PostgreSQL) 11.5 (Ubuntu 11.5-1.pgdg18.04+1)\n\nThe machine that I attempted to do a pg_restore to is a dedicated server\njust for one instance of posgresql. It has 32 GB of memory and is running\nUbuntu 18.04 (headless). It physical hardware, not virtualized. Nothing\nelse runs on the machine and the postgresql.conf settings have been tuned\n(to the best of my postgresql abilities which are suspect). While the\noperating system is installed on an SSD, there is one extra large, fast HDD\nthat is dedicated to the posgresql server. It has been in use for this\nparticular purpose for a while and has not had performance issues. (Just\nwith pg_restore)\n\nAutovacuum is off and all indexes have been deleted before the restore is\nstarted. There is nothing in the db except for the empty data tables.\n\nRestoring over the net:\n\nIn the past we have always restored in a way where the dumped data is read\nover a gigabit connection while being restored to the local drive. But, the\nlast time we did it it took 2 days and I was looking for something faster.\nSo, I decided to copy the dumped directory to the local drive and restore\nfrom the dump locally. I knew that because the machine only had one drive\nthat would fit the data, there would be some I/O contention, but I hoped\nthat it might not be as bad as reading over the network.\n\nThe pg_restore went unbearably slowly... after many hours it had written\nless than 20GB to the database, so I started tracking it with iostat to see\nwhat was going on. The following is iostat output every 60 seconds. I\ntracked it for several hours and this is representative of what was\nhappening consistently.\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.39 0.00 0.40 43.10 0.00 56.11\n\nDevice tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nloop0 0.00 0.00 0.00 0 0\nloop1 0.00 0.00 0.00 0 0\nloop2 0.00 0.00 0.00 0 0\nsda 263.33 132.87 2990.93 7972 179456\nsdb 0.17 0.00 0.73 0 44\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.34 0.00 0.41 44.43 0.00 54.82\n\nDevice tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nloop0 0.00 0.00 0.00 0 0\nloop1 0.00 0.00 0.00 0 0\nloop2 0.00 0.00 0.00 0 0\nsda 262.95 140.47 2983.00 8428 178980\nsdb 0.08 0.00 0.40 0 24\n\nWhile I was tracking this I started experimenting with the IO scheduler to\nsee if it had a noticable impact. I had been using cfq (ubuntu 18.04\ndefault). Changing to deadline did not have a noticable difference.\nChanging to noop made things much slower. I went back to cfq. 
I also\nexperimented with turning fsync off; that did speed things up a bit but not\nenough for me to leave it off.\n\nWhat puzzled me is that the OS was spending such a large percentage of time\nin iowait, yet there was so little IO going on.\n\nSo, I decided to go back to restoring over the net. While the slow\npg_restore was still going on, and while I was still tracking iostat, I\ncopied the 550 GB dumps to an nfs drive. The copy happened pretty much at\nfull speed (limit being the gigabit ethernet) and interestingly, it did not\nslow down kb_wrtn and kb_wrtn/s numbers in iostat (which was the postgresql\nserver continuing with the restore). To me that seemed to indicate that it\nwas not really a disk I/O limitation.\n\nRestoring over the net:\n\nAfter copying the dump files to an NFS drive, I stopped the restore,\ntruncated the tables and started exactly the same command, but this time\ntaking its input from the nfs drive. I did not reboot the machine or\nrestart the postgresql server. I tracked iostate every 60 seconds and this\nis what it looks like:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 8.87 0.00 1.62 39.89 0.00 49.61\n\nDevice tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nloop0 0.00 0.00 0.00 0 0\nloop1 0.00 0.00 0.00 0 0\nloop2 0.00 0.00 0.00 0 0\nsda 252.77 527.87 37837.47 31672 2270248\nsdb 0.22 0.00 1.00 0 60\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 8.57 0.00 2.21 35.26 0.00 53.97\n\nDevice tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nloop0 0.00 0.00 0.00 0 0\nloop1 0.00 0.00 0.00 0 0\nloop2 0.00 0.00 0.00 0 0\nsda 236.10 465.27 54312.00 27916 3258720\nsdb 0.08 0.00 0.40 0 24\n\n\nNotice that the database is writing approximately 15 times as fast (and I\nhave verified that by tracking the size of the posgresql data directory\nover time) while the number of i/o transactions per second has actually\ndropped a little bit. It has now been running about 24 hours and has\nmaintained that speed.\n\nMy interpretation\n\nAt first sight this seems to me as being symptomatic of the pg_restore\nprocess doing a huge number of very small input operations when reading\nfrom the dump. If the proportion of input to output operations is the same\nnow as it was when trying to restore from the local drive, that implies\nthat the vast majority of i/o operations were inputs and not outputs.\n\nHowever, I am not sure that even that would cause such a slowdown because\nthe compressed data files in the directory format dump correspond to the\ntables and so there are 3 very large files that it starts with. So all of\nthese stats were gathered in the first 24 hours of the restore when it was\njust restoring the first 3 tables (I have verbose on, so I know). 
Because\nthose files are gzipped, we know that they are being read sequentially and\nbecause the machine has lots of memory we know that the OS has allocated a\nlot of space to disk buffers and so even if postgresql was doing lots of\nsmall reads, bouncing around between the 3 files, it would not hit the disk\nthat often.\n\nNow that restore is happening 15 times faster when reading from an nfs\ndrive, I looked at the nfsiostat output for a while and it does not show\nany indication of any untoward behavior:\n\n\n192.168.17.146:/volume1/Backups mounted on /nas/Backups:\n\n ops/s rpc bklog\n 27.000 0.000\n\nread: ops/s kB/s kB/op retrans\navg RTT (ms) avg exe (ms)\n 27.000 3464.332 128.309 0 (0.0%)\n 13.500 13.511\nwrite: ops/s kB/s kB/op retrans\navg RTT (ms) avg exe (ms)\n 0.000 0.000 0.000 0 (0.0%)\n 0.000 0.000\n\n192.168.17.146:/volume1/Backups mounted on /nas/Backups:\n\n ops/s rpc bklog\n 24.000 0.000\n\nread: ops/s kB/s kB/op retrans\navg RTT (ms) avg exe (ms)\n 24.000 3079.406 128.309 0 (0.0%)\n 28.492 28.504\nwrite: ops/s kB/s kB/op retrans\navg RTT (ms) avg exe (ms)\n 0.000 0.000 0.000 0 (0.0%)\n 0.000 0.000\n\n\nThe nubmer of operations per second (if those correspond to reads from\npostgresql, which I do not know for a fact) does not seem high at all.\n\nI actually do not have a great theory for what is going on but it might be\nmore obvious to someone who knows the postgresql implementation well. I\nwould love to hear any thoughts that would be helpful on how to get my\nrestores even faster.",
"msg_date": "Thu, 3 Oct 2019 13:30:30 -0700",
"msg_from": "Ogden Brash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some observations on very slow pg_restore operations"
},
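A small sketch, not part of the original post, of how the COPY sessions that pg_restore opens could be watched from inside the server rather than with iotop. pg_stat_activity and its wait_event columns are standard in PostgreSQL 11; the filter on COPY is only an assumption about what the restore workers are running.

    -- Hypothetical monitoring query: one row per restore worker, showing what it is
    -- executing and what, if anything, it is currently waiting on (I/O waits show up here).
    SELECT pid, state, wait_event_type, wait_event, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE query ILIKE 'COPY%'
    ORDER BY pid;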
{
"msg_contents": "Hi Ogden,\n\nYou didn't mention any details about your postgresql.conf settings. Why \ndon't you set them optimally for your loads and try again and see if \nthere is any difference. Make sure you do a DB restart since some of \nthese parameters require it.\n\n======================================\nparameter\t\tbefore\tafter\n----------------\t------\t-------\nshared_buffers\t\tReduce this value to about 25% of total memory\ntemp_buffers\t\tDecrease this value to 8MB since we are not using temporary tables or doing intermediate sorts\nwork_mem\t\tReduce significantly (1MB) since we are not doing memory sorts or hashes per SQL\nmaintenance_work_mem\tIncrease signficantly for DDL bulk loading, restore operations\nfsync \t\toff (so that time is not being spent waiting for stuff to be written to disk). Note: you may not be able to recover your database after a crash when set to off.\ncheckpoint_segments\tIncrease this significantly for DML bulk loading, restore operations\nmax_wal_size Increase significantly like you would to checkpoint_segments\nmin_wal_size Increase significantly like you would to checkpoint_segments\ncheckpoint_timeout Increase to at least 30min\narchive_mode \t\toff\nautovacuum \t\toff\nsynchronous_commit\toff\nwal_level\t\tminimal\nmax_wal_senders 0\nfull_page_writes off during DML bulk loading, restore operations\nwal_buffers 16MB during DML bulk loading, restore operations\n\n\nRegards,\nMichael Vitale\n\n\nOgden Brash wrote on 10/3/2019 4:30 PM:\n> I recently performed a pg_dump (data-only) of a relatively large \n> database where we store intermediate results of calculations. It is \n> approximately 3 TB on disk and has about 20 billion rows.\n>\n> We do the dump/restore about once a month and as the dataset has \n> grown, the restores have gotten very slow. So, this time I decided to \n> do it a different way and have some observations that puzzle me.\n>\n> Background:\n>\n> The data is extremely simple. The rows consist only of numbers and are \n> all fixed length. There are no foreign keys, constraints, null values, \n> or default values. There are no strings or arrays. There are 66 tables \n> and the number of rows in each table forms a gaussian distribution; so \n> there are 3 tables which have about 3 billion rows each and the rest \n> of the tables have significantly fewer rows.\n>\n> I used the directory format when doing the pg_dump. The compressed \n> data of the dump is 550 GB.\n>\n> I am using: (PostgreSQL) 11.5 (Ubuntu 11.5-1.pgdg18.04+1)\n>\n> The machine that I attempted to do a pg_restore to is a dedicated \n> server just for one instance of posgresql. It has 32 GB of memory and \n> is running Ubuntu 18.04 (headless). It physical hardware, not \n> virtualized. Nothing else runs on the machine and the postgresql.conf \n> settings have been tuned (to the best of my postgresql abilities which \n> are suspect). While the operating system is installed on an SSD, there \n> is one extra large, fast HDD that is dedicated to the posgresql \n> server. It has been in use for this particular purpose for a while and \n> has not had performance issues. (Just with pg_restore)\n>\n> Autovacuum is off and all indexes have been deleted before the restore \n> is started. There is nothing in the db except for the empty data tables.\n>\n> Restoring over the net:\n>\n> In the past we have always restored in a way where the dumped data is \n> read over a gigabit connection while being restored to the local \n> drive. 
But, the last time we did it it took 2 days and I was looking \n> for something faster. So, I decided to copy the dumped directory to \n> the local drive and restore from the dump locally. I knew that because \n> the machine only had one drive that would fit the data, there would be \n> some I/O contention, but I hoped that it might not be as bad as \n> reading over the network.\n>\n> The pg_restore went unbearably slowly... after many hours it had \n> written less than 20GB to the database, so I started tracking it with \n> iostat to see what was going on. The following is iostat output every \n> 60 seconds. I tracked it for several hours and this is representative \n> of what was happening consistently.\n>\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 0.39 0.00 0.40 43.10 0.00 56.11\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 263.33 132.87 2990.93 7972 179456\n> sdb 0.17 0.00 0.73 0 44\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 0.34 0.00 0.41 44.43 0.00 54.82\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 262.95 140.47 2983.00 8428 178980\n> sdb 0.08 0.00 0.40 0 24\n>\n> While I was tracking this I started experimenting with the IO \n> scheduler to see if it had a noticable impact. I had been using cfq \n> (ubuntu 18.04 default). Changing to deadline did not have a noticable \n> difference. Changing to noop made things much slower. I went back to \n> cfq. I also experimented with turning fsync off; that did speed things \n> up a bit but not enough for me to leave it off.\n>\n> What puzzled me is that the OS was spending such a large percentage of \n> time in iowait, yet there was so little IO going on.\n>\n> So, I decided to go back to restoring over the net. While the slow \n> pg_restore was still going on, and while I was still tracking iostat, \n> I copied the 550 GB dumps to an nfs drive. The copy happened pretty \n> much at full speed (limit being the gigabit ethernet) and \n> interestingly, it did not slow down kb_wrtn and kb_wrtn/s numbers in \n> iostat (which was the postgresql server continuing with the restore). \n> To me that seemed to indicate that it was not really a disk I/O \n> limitation.\n>\n> Restoring over the net:\n>\n> After copying the dump files to an NFS drive, I stopped the restore, \n> truncated the tables and started exactly the same command, but this \n> time taking its input from the nfs drive. I did not reboot the machine \n> or restart the postgresql server. 
I tracked iostate every 60 seconds \n> and this is what it looks like:\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 8.87 0.00 1.62 39.89 0.00 49.61\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 252.77 527.87 37837.47 31672 2270248\n> sdb 0.22 0.00 1.00 0 60\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 8.57 0.00 2.21 35.26 0.00 53.97\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 236.10 465.27 54312.00 27916 3258720\n> sdb 0.08 0.00 0.40 0 24\n>\n>\n> Notice that the database is writing approximately 15 times as fast \n> (and I have verified that by tracking the size of the posgresql data \n> directory over time) while the number of i/o transactions per second \n> has actually dropped a little bit. It has now been running about 24 \n> hours and has maintained that speed.\n>\n> My interpretation\n>\n> At first sight this seems to me as being symptomatic of the pg_restore \n> process doing a huge number of very small input operations when \n> reading from the dump. If the proportion of input to output operations \n> is the same now as it was when trying to restore from the local drive, \n> that implies that the vast majority of i/o operations were inputs and \n> not outputs.\n>\n> However, I am not sure that even that would cause such a slowdown \n> because the compressed data files in the directory format dump \n> correspond to the tables and so there are 3 very large files that it \n> starts with. So all of these stats were gathered in the first 24 hours \n> of the restore when it was just restoring the first 3 tables (I have \n> verbose on, so I know). Because those files are gzipped, we know that \n> they are being read sequentially and because the machine has lots of \n> memory we know that the OS has allocated a lot of space to disk \n> buffers and so even if postgresql was doing lots of small reads, \n> bouncing around between the 3 files, it would not hit the disk that often.\n>\n> Now that restore is happening 15 times faster when reading from an nfs \n> drive, I looked at the nfsiostat output for a while and it does not \n> show any indication of any untoward behavior:\n>\n>\n> 192.168.17.146:/volume1/Backups mounted on /nas/Backups:\n>\n> ops/s rpc bklog\n> 27.000 0.000\n>\n> read: ops/s kB/s kB/op retrans \n> avg RTT (ms) avg exe (ms)\n> 27.000 3464.332 128.309 0 (0.0%) \n> 13.500 13.511\n> write: ops/s kB/s kB/op \n> retrans avg RTT (ms) avg exe (ms)\n> 0.000 0.000 0.000 0 (0.0%) \n> 0.000 0.000\n>\n> 192.168.17.146:/volume1/Backups mounted on /nas/Backups:\n>\n> ops/s rpc bklog\n> 24.000 0.000\n>\n> read: ops/s kB/s kB/op retrans \n> avg RTT (ms) avg exe (ms)\n> 24.000 3079.406 128.309 0 (0.0%) \n> 28.492 28.504\n> write: ops/s kB/s kB/op \n> retrans avg RTT (ms) avg exe (ms)\n> 0.000 0.000 0.000 0 (0.0%) \n> 0.000 0.000\n>\n>\n> The nubmer of operations per second (if those correspond to reads from \n> postgresql, which I do not know for a fact) does not seem high at all.\n>\n> I actually do not have a great theory for what is going on but it \n> might be more obvious to someone who knows the postgresql \n> implementation well. I would love to hear any thoughts that would be \n> helpful on how to get my restores even faster.\n>\n>\n>",
"msg_date": "Thu, 3 Oct 2019 17:48:18 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some observations on very slow pg_restore operations"
},
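A minimal sketch, assuming PostgreSQL 11, of how a few of the settings listed above might be applied with ALTER SYSTEM before a bulk restore. The values are illustrative only, and checkpoint_segments no longer exists in 11, so only max_wal_size/min_wal_size apply there.

    -- Illustrative values, not recommendations from the thread.
    ALTER SYSTEM SET maintenance_work_mem = '2GB';
    ALTER SYSTEM SET max_wal_size = '32GB';
    ALTER SYSTEM SET checkpoint_timeout = '30min';
    ALTER SYSTEM SET synchronous_commit = 'off';
    ALTER SYSTEM SET autovacuum = 'off';
    ALTER SYSTEM SET wal_level = 'minimal';   -- needs max_wal_senders = 0 and a server restart
    ALTER SYSTEM SET max_wal_senders = 0;
    SELECT pg_reload_conf();                  -- reload covers the rest; restart for the restart-only ones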
{
"msg_contents": "Thanks Michael,\n\nI am sure that there is some optimization to be found in the config (and\nbelow are all the non-default values in the file). I suspect that they\nwon't explain the difference between restoring from nfs vs local drive, but\nthey could certainly speed up my best case. The dataset is so huge and the\nschema so simple that I can't imagine the various buffer sizes making too\nmuch difference - nothing is really going to fit into memory anyway.\n\nThis instance of the DB is currently used in read-only mode by only two\nclient processes running on other machines. So it is tuned for a small\nnumber of user and primarily simple queries.\n\nroot@tb-db:/etc/postgresql/11/main# grep '^[[:blank:]]*[^[:blank:]#;]'\npostgresql.conf\n\ndata_directory = '/var/lib/postgresql/11/main'\nhba_file = '/etc/postgresql/11/main/pg_hba.conf'\nident_file = '/etc/postgresql/11/main/pg_ident.conf'\nexternal_pid_file = '/var/run/postgresql/11-main.pid'\nlisten_addresses = '*'\nport = 5432\nmax_connections = 25\nunix_socket_directories = '/var/run/postgresql' # comma-separated list of\ndirectories\nssl = on\nssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'\nssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'\nshared_buffers = 8GB # min 128kB\nwork_mem = 1GB # min 64kB\nmaintenance_work_mem = 4GB\ndynamic_shared_memory_type = posix\neffective_io_concurrency = 2\nmax_worker_processes = 4\nmax_parallel_workers = 4\nfsync = on\nsynchronous_commit = off\nwal_buffers = 16MB\nwal_writer_delay = 2000ms\nwal_writer_flush_after = 100MB\nmax_wal_size = 8GB\nmin_wal_size = 4GB\nrandom_page_cost = 4.0\neffective_cache_size = 24GB\ndefault_statistics_target = 500\nlog_line_prefix = '%m [%p] %q%u@%d '\nlog_timezone = 'localtime'\ncluster_name = '11/main'\nstats_temp_directory = '/var/run/postgresql/11-main.pg_stat_tmp'\nautovacuum = off\ndatestyle = 'iso, mdy'\ntimezone = 'localtime'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\ninclude_dir = 'conf.d'\n\n\nOn Thu, Oct 3, 2019 at 2:48 PM MichaelDBA <[email protected]> wrote:\n\n> Hi Ogden,\n>\n> You didn't mention any details about your postgresql.conf settings. Why\n> don't you set them optimally for your loads and try again and see if there\n> is any difference. Make sure you do a DB restart since some of these\n> parameters require it.\n>\n> ======================================\n> parameter\t\tbefore\tafter\n> ----------------\t------\t-------\n> shared_buffers\t\tReduce this value to about 25% of total memory\n> temp_buffers\t\tDecrease this value to 8MB since we are not using temporary tables or doing intermediate sorts\n> work_mem\t\tReduce significantly (1MB) since we are not doing memory sorts or hashes per SQL\n> maintenance_work_mem\tIncrease signficantly for DDL bulk loading, restore operations\n> fsync \t\toff (so that time is not being spent waiting for stuff to be written to disk). 
Note: you may not be able to recover your database after a crash when set to off.\n> checkpoint_segments\tIncrease this significantly for DML bulk loading, restore operations\n> max_wal_size Increase significantly like you would to checkpoint_segments\n> min_wal_size Increase significantly like you would to checkpoint_segments\n> checkpoint_timeout Increase to at least 30min\n> archive_mode \t\toff\n> autovacuum \t\toff\n> synchronous_commit\toff\n> wal_level\t\tminimal\n> max_wal_senders 0\n> full_page_writes off during DML bulk loading, restore operations\n> wal_buffers 16MB during DML bulk loading, restore operations\n>\n>\n> Regards,\n> Michael Vitale\n>\n>\n> Ogden Brash wrote on 10/3/2019 4:30 PM:\n>\n> I recently performed a pg_dump (data-only) of a relatively large database\n> where we store intermediate results of calculations. It is approximately 3\n> TB on disk and has about 20 billion rows.\n>\n> We do the dump/restore about once a month and as the dataset has grown,\n> the restores have gotten very slow. So, this time I decided to do it a\n> different way and have some observations that puzzle me.\n>\n> Background:\n>\n> The data is extremely simple. The rows consist only of numbers and are all\n> fixed length. There are no foreign keys, constraints, null values, or\n> default values. There are no strings or arrays. There are 66 tables and the\n> number of rows in each table forms a gaussian distribution; so there are 3\n> tables which have about 3 billion rows each and the rest of the tables have\n> significantly fewer rows.\n>\n> I used the directory format when doing the pg_dump. The compressed data of\n> the dump is 550 GB.\n>\n> I am using: (PostgreSQL) 11.5 (Ubuntu 11.5-1.pgdg18.04+1)\n>\n> The machine that I attempted to do a pg_restore to is a dedicated server\n> just for one instance of posgresql. It has 32 GB of memory and is running\n> Ubuntu 18.04 (headless). It physical hardware, not virtualized. Nothing\n> else runs on the machine and the postgresql.conf settings have been tuned\n> (to the best of my postgresql abilities which are suspect). While the\n> operating system is installed on an SSD, there is one extra large, fast HDD\n> that is dedicated to the posgresql server. It has been in use for this\n> particular purpose for a while and has not had performance issues. (Just\n> with pg_restore)\n>\n> Autovacuum is off and all indexes have been deleted before the restore is\n> started. There is nothing in the db except for the empty data tables.\n>\n> Restoring over the net:\n>\n> In the past we have always restored in a way where the dumped data is read\n> over a gigabit connection while being restored to the local drive. But, the\n> last time we did it it took 2 days and I was looking for something faster.\n> So, I decided to copy the dumped directory to the local drive and restore\n> from the dump locally. I knew that because the machine only had one drive\n> that would fit the data, there would be some I/O contention, but I hoped\n> that it might not be as bad as reading over the network.\n>\n> The pg_restore went unbearably slowly... after many hours it had written\n> less than 20GB to the database, so I started tracking it with iostat to see\n> what was going on. The following is iostat output every 60 seconds. 
I\n> tracked it for several hours and this is representative of what was\n> happening consistently.\n>\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 0.39 0.00 0.40 43.10 0.00 56.11\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 263.33 132.87 2990.93 7972 179456\n> sdb 0.17 0.00 0.73 0 44\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 0.34 0.00 0.41 44.43 0.00 54.82\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 262.95 140.47 2983.00 8428 178980\n> sdb 0.08 0.00 0.40 0 24\n>\n> While I was tracking this I started experimenting with the IO scheduler to\n> see if it had a noticable impact. I had been using cfq (ubuntu 18.04\n> default). Changing to deadline did not have a noticable difference.\n> Changing to noop made things much slower. I went back to cfq. I also\n> experimented with turning fsync off; that did speed things up a bit but not\n> enough for me to leave it off.\n>\n> What puzzled me is that the OS was spending such a large percentage of\n> time in iowait, yet there was so little IO going on.\n>\n> So, I decided to go back to restoring over the net. While the slow\n> pg_restore was still going on, and while I was still tracking iostat, I\n> copied the 550 GB dumps to an nfs drive. The copy happened pretty much at\n> full speed (limit being the gigabit ethernet) and interestingly, it did not\n> slow down kb_wrtn and kb_wrtn/s numbers in iostat (which was the postgresql\n> server continuing with the restore). To me that seemed to indicate that it\n> was not really a disk I/O limitation.\n>\n> Restoring over the net:\n>\n> After copying the dump files to an NFS drive, I stopped the restore,\n> truncated the tables and started exactly the same command, but this time\n> taking its input from the nfs drive. I did not reboot the machine or\n> restart the postgresql server. I tracked iostate every 60 seconds and this\n> is what it looks like:\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 8.87 0.00 1.62 39.89 0.00 49.61\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 252.77 527.87 37837.47 31672 2270248\n> sdb 0.22 0.00 1.00 0 60\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 8.57 0.00 2.21 35.26 0.00 53.97\n>\n> Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> loop0 0.00 0.00 0.00 0 0\n> loop1 0.00 0.00 0.00 0 0\n> loop2 0.00 0.00 0.00 0 0\n> sda 236.10 465.27 54312.00 27916 3258720\n> sdb 0.08 0.00 0.40 0 24\n>\n>\n> Notice that the database is writing approximately 15 times as fast (and I\n> have verified that by tracking the size of the posgresql data directory\n> over time) while the number of i/o transactions per second has actually\n> dropped a little bit. It has now been running about 24 hours and has\n> maintained that speed.\n>\n> My interpretation\n>\n> At first sight this seems to me as being symptomatic of the pg_restore\n> process doing a huge number of very small input operations when reading\n> from the dump. 
If the proportion of input to output operations is the same\n> now as it was when trying to restore from the local drive, that implies\n> that the vast majority of i/o operations were inputs and not outputs.\n>\n> However, I am not sure that even that would cause such a slowdown because\n> the compressed data files in the directory format dump correspond to the\n> tables and so there are 3 very large files that it starts with. So all of\n> these stats were gathered in the first 24 hours of the restore when it was\n> just restoring the first 3 tables (I have verbose on, so I know). Because\n> those files are gzipped, we know that they are being read sequentially and\n> because the machine has lots of memory we know that the OS has allocated a\n> lot of space to disk buffers and so even if postgresql was doing lots of\n> small reads, bouncing around between the 3 files, it would not hit the disk\n> that often.\n>\n> Now that restore is happening 15 times faster when reading from an nfs\n> drive, I looked at the nfsiostat output for a while and it does not show\n> any indication of any untoward behavior:\n>\n>\n> 192.168.17.146:/volume1/Backups mounted on /nas/Backups:\n>\n> ops/s rpc bklog\n> 27.000 0.000\n>\n> read: ops/s kB/s kB/op retrans\n> avg RTT (ms) avg exe (ms)\n> 27.000 3464.332 128.309 0 (0.0%)\n> 13.500 13.511\n> write: ops/s kB/s kB/op retrans\n> avg RTT (ms) avg exe (ms)\n> 0.000 0.000 0.000 0 (0.0%)\n> 0.000 0.000\n>\n> 192.168.17.146:/volume1/Backups mounted on /nas/Backups:\n>\n> ops/s rpc bklog\n> 24.000 0.000\n>\n> read: ops/s kB/s kB/op retrans\n> avg RTT (ms) avg exe (ms)\n> 24.000 3079.406 128.309 0 (0.0%)\n> 28.492 28.504\n> write: ops/s kB/s kB/op retrans\n> avg RTT (ms) avg exe (ms)\n> 0.000 0.000 0.000 0 (0.0%)\n> 0.000 0.000\n>\n>\n> The nubmer of operations per second (if those correspond to reads from\n> postgresql, which I do not know for a fact) does not seem high at all.\n>\n> I actually do not have a great theory for what is going on but it might be\n> more obvious to someone who knows the postgresql implementation well. I\n> would love to hear any thoughts that would be helpful on how to get my\n> restores even faster.\n>\n>\n>\n>\n>",
"msg_date": "Thu, 3 Oct 2019 16:19:31 -0700",
"msg_from": "Ogden Brash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some observations on very slow pg_restore operations"
},
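A short sketch of an alternative to grepping postgresql.conf: asking the running server which settings differ from their built-in defaults. pg_settings and its source column are standard; nothing here is specific to this thread.

    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override')
    ORDER BY name;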
{
"msg_contents": "What is the state of KSM/THP? Did you try disabling them ? I've seen these\ncan cause high iowait (although that was a virtual environment). It would be\ninteresting to see vmstat output.\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag \nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\nI realize the postgres params don't seem related to the difference between\nlocal/remote/NFS, but did you see all these ?\nhttps://www.postgresql.org/docs/current/populate.html\n\nJustin\n\n\n",
"msg_date": "Fri, 4 Oct 2019 01:03:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some observations on very slow pg_restore operations"
}
] |
[
{
"msg_contents": "Hi All,\n\nAll of sudden the query went slow before the query was executing in 30- 35\nsec now even after 30 mins i am not getting any result.\n\nlater I have dropped a table ( t_meners) and recreated it and again it\nstarted working very fast.\n\nis there way to find what happen on that why is not any issue in table how\nto find out. i Have the same issue on the other databases also so that i\ncan check on it\n\nSELECT ((UID-1)/10000) AS BatchNo,\n * INTO \"temp_tt1\"\nFROM\n (SELECT ROW_NUMBER() OVER (\n ORDER BY a.\"rno\") AS UID,\n a.*\n FROM \"temp_10032019020721_4470\" AS a\n INNER JOIN \"t_ages\" AS b ON LOWER(a.\"cr\") = LOWER(b.\"c_pagealias\")\n LEFT JOIN \"t_meners\" AS c ON LOWER(a.\"cr\") = LOWER(c.\"c_id\")\n WHERE c.\"c_id\" IS NULL ) AS TempTable\n\nHi All,All of sudden the query went slow before the query was executing in 30- 35 sec now even after 30 mins i am not getting any result. later I have dropped a table (\nt_meners) and recreated it and again it started working very fast.is there way to find what happen on that why is not any issue in table how to find out. i Have the same issue on the other databases also so that i can check on itSELECT ((UID-1)/10000) AS BatchNo, * INTO \"temp_tt1\"FROM (SELECT ROW_NUMBER() OVER ( ORDER BY a.\"rno\") AS UID, a.* FROM \"temp_10032019020721_4470\" AS a INNER JOIN \"t_ages\" AS b ON LOWER(a.\"cr\") = LOWER(b.\"c_pagealias\") LEFT JOIN \"t_meners\" AS c ON LOWER(a.\"cr\") = LOWER(c.\"c_id\") WHERE c.\"c_id\" IS NULL ) AS TempTable",
"msg_date": "Fri, 4 Oct 2019 15:52:26 +0530",
"msg_from": "nikhil raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query went slow all of sudden. ON V 11.3"
},
{
"msg_contents": "On Fri, Oct 04, 2019 at 03:52:26PM +0530, nikhil raj wrote:\n> Hi All,\n> \n> All of sudden the query went slow before the query was executing in 30- 35\n> sec now even after 30 mins i am not getting any result.\n\nCan you show \"explain(analyze,buffers)\" when it's running fast, and at least\n\"explain\" when it's slow ?\n\n> later I have dropped a table ( t_meners) and recreated it and again it\n> started working very fast.\n\nWhat indexes exist on that table and on temp_10032019020721_4470 ?\n\nJustin\n\n\n",
"msg_date": "Fri, 4 Oct 2019 08:20:44 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query went slow all of sudden. ON V 11.3"
},
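A hedged illustration of what is being asked for above, reusing the table and column names from the original query. Plain EXPLAIN is safe to run even when the statement never finishes; adding ANALYZE, BUFFERS actually executes it.

    -- Plan only; does not execute the query.
    EXPLAIN
    SELECT a.*
    FROM "temp_10032019020721_4470" AS a
    INNER JOIN "t_ages" AS b ON LOWER(a."cr") = LOWER(b."c_pagealias")
    LEFT JOIN "t_meners" AS c ON LOWER(a."cr") = LOWER(c."c_id")
    WHERE c."c_id" IS NULL;
    -- Swap in EXPLAIN (ANALYZE, BUFFERS) for the same statement once it can be allowed to run to completion.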
{
"msg_contents": "Hi Justin,\n\nIts been executing for 35 + mins due to statement time out its getting\ncanceled.\n\nYes temp_10032019020721_4470table index is there on cr column.\n\n\nOn Fri, Oct 4, 2019 at 6:50 PM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Oct 04, 2019 at 03:52:26PM +0530, nikhil raj wrote:\n> > Hi All,\n> >\n> > All of sudden the query went slow before the query was executing in 30-\n> 35\n> > sec now even after 30 mins i am not getting any result.\n>\n> Can you show \"explain(analyze,buffers)\" when it's running fast, and at\n> least\n> \"explain\" when it's slow ?\n>\n> > later I have dropped a table ( t_meners) and recreated it and again it\n> > started working very fast.\n>\n> What indexes exist on that table and on temp_10032019020721_4470 ?\n>\n> Justin\n>\n\nHi Justin,Its been executing for 35 + mins due to statement time out its getting canceled.Yes \ntemp_10032019020721_4470table index is there on \ncr column.On Fri, Oct 4, 2019 at 6:50 PM Justin Pryzby <[email protected]> wrote:On Fri, Oct 04, 2019 at 03:52:26PM +0530, nikhil raj wrote:\n> Hi All,\n> \n> All of sudden the query went slow before the query was executing in 30- 35\n> sec now even after 30 mins i am not getting any result.\n\nCan you show \"explain(analyze,buffers)\" when it's running fast, and at least\n\"explain\" when it's slow ?\n\n> later I have dropped a table ( t_meners) and recreated it and again it\n> started working very fast.\n\nWhat indexes exist on that table and on temp_10032019020721_4470 ?\n\nJustin",
"msg_date": "Fri, 4 Oct 2019 19:28:54 +0530",
"msg_from": "nikhil raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query went slow all of sudden. ON V 11.3"
},
{
"msg_contents": "What are approx row counts and distribution of data in the concerned tables\nand columns? Have you run EXPLAIN (query plan) to get the plan that will be\nexecuted and can you paste on https://explain.depesz.com/ and share the\nlink that results?\n\nDo you have an index on LOWER( cr ) on table temp_10032019020721_4470?\nDo you have an index on LOWER( c_pagealias ) on table t_ages?\nDo you have an index on LOWER( c_id ) on table t_meners?\n\nIf temp_10032019020721_4470 is truly temp table, was it analyzed after\ncreating/inserting/updating/deleting data last, so that the optimizer knows\nthe number of distinct values, how many rows, most common values, etc?\n\nWhat are approx row counts and distribution of data in the concerned tables and columns? Have you run EXPLAIN (query plan) to get the plan that will be executed and can you paste on https://explain.depesz.com/ and share the link that results?Do you have an index on LOWER( cr ) on table temp_10032019020721_4470?Do you have an index on LOWER( c_pagealias ) on table t_ages?Do you have an index on LOWER( c_id ) on table t_meners?If temp_10032019020721_4470 is truly temp table, was it analyzed after creating/inserting/updating/deleting data last, so that the optimizer knows the number of distinct values, how many rows, most common values, etc?",
"msg_date": "Fri, 4 Oct 2019 13:07:24 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query went slow all of sudden. ON V 11.3"
},
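As a hedged illustration of the checks Michael asks about above, a minimal SQL sketch (the expression indexes and the explicit ANALYZE are assumptions built only from the column names quoted in the thread; the actual query and schema were never posted):

    -- Expression indexes matching the LOWER(...) predicates (names taken from the thread):
    CREATE INDEX ON temp_10032019020721_4470 (LOWER(cr));
    CREATE INDEX ON t_ages (LOWER(c_pagealias));
    CREATE INDEX ON t_meners (LOWER(c_id));

    -- Temp tables are never visited by autovacuum/autoanalyze, so analyze explicitly
    -- after bulk-loading so the planner has row counts and value distributions:
    ANALYZE temp_10032019020721_4470;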
{
"msg_contents": "On Fri, Oct 04, 2019 at 07:28:54PM +0530, nikhil raj wrote:\n>Hi Justin,\n>\n>Its been executing for 35 + mins due to statement time out its getting\n>canceled.\n>\n\nWell, without a query plan it's really hard to give you any advice. We\nneed to see at least EXPLAIN output (without analyze) to get an idea of\nhow the query will be executed. Even better, disable the statement\ntimeout in the session and dive use EXPLAIN ANALYZE. Of course, it's\nunclear how long it'll run.\n\nEarlier you mentioned the query started running fast after you recreated\none of the tables. That likely means the table (or the indexes on it)\nare getting bloated over time. Try looking at the sizes of those objects\n(and maybe use pgstattuple to get more detailed statistics before\nrebuilding it next time.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 6 Oct 2019 22:30:39 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query went slow all of sudden. ON V 11.3"
}
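A minimal sketch of the diagnostics Tomas recommends above (the count(*) query is only a stand-in for the real, unposted query, and the index name t_meners_pkey is a guess; neither appears in the thread):

    -- Session-only: lift the statement timeout and capture a full plan with buffer counts.
    SET statement_timeout = 0;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM t_meners;   -- stand-in for the real query

    -- Size and bloat of the suspect objects (pgstattuple ships as a contrib extension).
    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT pg_size_pretty(pg_total_relation_size('t_meners'));
    SELECT * FROM pgstattuple('t_meners');        -- dead_tuple_percent, free_percent
    SELECT * FROM pgstatindex('t_meners_pkey');   -- avg_leaf_density, leaf_fragmentation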
] |
[
{
"msg_contents": "I have a question about the files in .../data/postgresql/11/main/base,\nspecifically in relation to very large tables and how they are written.\n\nI have been attempting to restore a relatively large database with\npg_restore and it has been running for more than a week. (I also have\nanother thread about the same restore related to network vs. local disk I/O)\n\nI ran the pg_restore in verbose mode using multiple jobs so I can tell what\nhas finished and what has not. The database had 66 tables and most of them\nhave been restored. Two of the tables were quite large (billions of rows\ntranslating to between 1 and 2TB of data on disk for those two tables) and\nthose two tables are pretty much the only things remaining that has not\nbeen reported as finished by pg_restore.\n\nAs the process has been going for a week, I have been tracking the machine\n(a dedicated piece of hardware, non-virtualized) and have been noticing a\nprogressive slowdown (as tracked by iostat). There is nothing running on\nthe machine besides postgresql and the server is only doing this restore,\nnothing else. It is now, on average, running at less than 25% of the speed\nthat it was running four days ago (as measured by rate of I/O).\n\nI started to dig into what was happening on the machine and I noticed the\nfollowing:\n\niotop reports that two postgres processes (I assume each processing one of\nthe two tables that needs to be processed) are doing all the I/O. That\nmakes sense\n\nTotal DISK READ : 1473.81 K/s | Total DISK WRITE : 617.30 K/s\nActual DISK READ: 1473.81 K/s | Actual DISK WRITE: 0.00 B/s\n TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND\n 6601 be/4 postgres 586.44 K/s 7.72 K/s 0.00 % 97.39 % postgres:\n11/main: postg~s thebruteff [local] COPY\n 6600 be/4 postgres 887.37 K/s 601.87 K/s 0.00 % 93.42 % postgres:\n11/main: postg~s thebruteff [local] COPY\n 666 be/3 root 0.00 B/s 7.72 K/s 0.00 % 5.73 % [jbd2/sda1-8]\n 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init\n 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]\n 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H]\n\nSo, the next thing I though I would do is an \"lsof\" command for each of\nthose two processes to see what they were writing. That was a bit of a\nsurpise:\n\n# lsof -p 6600 | wc -l;\n840\n\n# lsof -p 6601 | wc -l;\n906\n\nIs that normal? That there be so many open file pointers? ~900 open file\npointers for each of the processes?\n\nThe next I did was go to see the actual data files, to see how many there\nare. In my case they are in postgresql/11/main/base/24576 and there are\n2076 files there. That made sense. 
However, I found that when I list them\nby modification date I see something interesting:\n\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.7\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.8\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.9\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.10\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.11\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.12\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.13\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.14\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.16\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.15\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.17\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.18\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.19\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.21\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.22\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.23\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.24\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.25\n-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.26\n-rw------- 1 postgres postgres 19062784 Oct 8 13:05 27082_fsm\n-rw------- 1 postgres postgres 544489472 Oct 8 13:05 27077.34\n-rw------- 1 postgres postgres 169705472 Oct 8 13:05 27082.72\n-rw------- 1 postgres postgres 978321408 Oct 8 13:05 27083.27\n-rw------- 1 postgres postgres 342925312 Oct 8 13:05 27076.88\n\nIf you notice, the file size is capped at 1 GB and as the giant table has\ngrown it has added more files in this directory. However, the mysterious\nthing to me is that it keeps modifying those files constantly - even the\nones that are completely full. So for the two large tables it has been\nrestoring all week, the modification time for the ever growing list of\nfiles is being updating constantly.\n\nCould it be that thats why I am seeing a slowdown over the course of the\nweek - that for some reason as the number of files for the table has grown,\nthe system is spending more and more time seeking around the disk to touch\nall those files for some reason?\n\nDoes anyone who understands the details of postgresql's interaction with\nthe file system have an explanation for why all those files which are full\nare being touched constantly?\n\nI have a question about the files in .../data/postgresql/11/main/base, specifically in relation to very large tables and how they are written.I have been attempting to restore a relatively large database with pg_restore and it has been running for more than a week. (I also have another thread about the same restore related to network vs. local disk I/O)I ran the pg_restore in verbose mode using multiple jobs so I can tell what has finished and what has not. The database had 66 tables and most of them have been restored. Two of the tables were quite large (billions of rows translating to between 1 and 2TB of data on disk for those two tables) and those two tables are pretty much the only things remaining that has not been reported as finished by pg_restore.As the process has been going for a week, I have been tracking the machine (a dedicated piece of hardware, non-virtualized) and have been noticing a progressive slowdown (as tracked by iostat). There is nothing running on the machine besides postgresql and the server is only doing this restore, nothing else. 
It is now, on average, running at less than 25% of the speed that it was running four days ago (as measured by rate of I/O). I started to dig into what was happening on the machine and I noticed the following:iotop reports that two postgres processes (I assume each processing one of the two tables that needs to be processed) are doing all the I/O. That makes senseTotal DISK READ : 1473.81 K/s | Total DISK WRITE : 617.30 K/sActual DISK READ: 1473.81 K/s | Actual DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 6601 be/4 postgres 586.44 K/s 7.72 K/s 0.00 % 97.39 % postgres: 11/main: postg~s thebruteff [local] COPY 6600 be/4 postgres 887.37 K/s 601.87 K/s 0.00 % 93.42 % postgres: 11/main: postg~s thebruteff [local] COPY 666 be/3 root 0.00 B/s 7.72 K/s 0.00 % 5.73 % [jbd2/sda1-8] 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H]So, the next thing I though I would do is an \"lsof\" command for each of those two processes to see what they were writing. That was a bit of a surpise:# lsof -p 6600 | wc -l;840# lsof -p 6601 | wc -l;906Is that normal? That there be so many open file pointers? ~900 open file pointers for each of the processes?The next I did was go to see the actual data files, to see how many there are. In my case they are in postgresql/11/main/base/24576 and there are 2076 files there. That made sense. However, I found that when I list them by modification date I see something interesting:-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.7-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.8-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.9-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.10-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.11-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.12-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.13-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.14-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.16-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.15-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.17-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.18-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.19-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.21-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.22-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.23-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.24-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.25-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.26-rw------- 1 postgres postgres 19062784 Oct 8 13:05 27082_fsm-rw------- 1 postgres postgres 544489472 Oct 8 13:05 27077.34-rw------- 1 postgres postgres 169705472 Oct 8 13:05 27082.72-rw------- 1 postgres postgres 978321408 Oct 8 13:05 27083.27-rw------- 1 postgres postgres 342925312 Oct 8 13:05 27076.88If you notice, the file size is capped at 1 GB and as the giant table has grown it has added more files in this directory. However, the mysterious thing to me is that it keeps modifying those files constantly - even the ones that are completely full. 
So for the two large tables it has been restoring all week, the modification time for the ever growing list of files is being updating constantly.Could it be that thats why I am seeing a slowdown over the course of the week - that for some reason as the number of files for the table has grown, the system is spending more and more time seeking around the disk to touch all those files for some reason?Does anyone who understands the details of postgresql's interaction with the file system have an explanation for why all those files which are full are being touched constantly?",
"msg_date": "Tue, 8 Oct 2019 13:20:43 -0700",
"msg_from": "Ogden Brash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Modification of data in base folder and very large tables"
},
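Before reaching for filesystem tools, the server itself can report what those two COPY backends are doing; a small sketch (the pids come from the iotop output above and assume the backend pid matches the iotop TID):

    -- State and wait events of the two loading backends:
    SELECT pid, state, wait_event_type, wait_event, query_start, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE pid IN (6600, 6601);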
{
"msg_contents": ">>>>> \"Ogden\" == Ogden Brash <[email protected]> writes:\n\n Ogden> I have a question about the files in\n Ogden> .../data/postgresql/11/main/base, specifically in relation to\n Ogden> very large tables and how they are written.\n\n Ogden> I have been attempting to restore a relatively large database\n Ogden> with pg_restore and it has been running for more than a week.\n\nDid you do the restore into a completely fresh database? Or did you make\nthe mistake of creating tables and indexes first?\n\nWhat relation does the filenode 27083 correspond to? You can find that\nwith:\n\nselect oid::regclass from pg_class\n where pg_relation_filenode(oid) = '27083';\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 09 Oct 2019 09:42:18 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Modification of data in base folder and very large tables"
},
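The mapping also works in the other direction, which can help confirm which on-disk segments belong to which relation; a hedged sketch (the relation name is a placeholder, since the thread has not yet identified what 27083 is):

    -- Filenode -> relation (as Andrew shows), and relation -> file path:
    SELECT oid::regclass FROM pg_class WHERE pg_relation_filenode(oid) = 27083;
    SELECT pg_relation_filepath('some_big_table');   -- placeholder relation name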
{
"msg_contents": "On Wed, Oct 9, 2019 at 4:33 AM Ogden Brash <[email protected]> wrote:\n\n> # lsof -p 6600 | wc -l;\n> 840\n>\n> # lsof -p 6601 | wc -l;\n> 906\n>\n> Is that normal? That there be so many open file pointers? ~900 open file\n> pointers for each of the processes?\n>\n\nI don't think PostgreSQL makes any effort to conserve file handles, until\nit starts reaching the max. So any file that has ever been opened will\nremain open, unless it was somehow invalidated (e.g. the file needs to be\ndeleted). If those processes were previously loading smaller tables before\nthe got bogged down in the huge ones, a large number of handles would not\nbe unexpected.\n\n\n\n> The next I did was go to see the actual data files, to see how many there\n> are. In my case they are in postgresql/11/main/base/24576 and there are\n> 2076 files there. That made sense. However, I found that when I list them\n> by modification date I see something interesting:\n>\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.7\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.8\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.9\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.10\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.11\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.12\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.13\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.14\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.16\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.15\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.17\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.18\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.19\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.21\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.22\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.23\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.24\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.25\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.26\n> -rw------- 1 postgres postgres 19062784 Oct 8 13:05 27082_fsm\n> -rw------- 1 postgres postgres 544489472 Oct 8 13:05 27077.34\n> -rw------- 1 postgres postgres 169705472 Oct 8 13:05 27082.72\n> -rw------- 1 postgres postgres 978321408 Oct 8 13:05 27083.27\n> -rw------- 1 postgres postgres 342925312 Oct 8 13:05 27076.88\n>\n> If you notice, the file size is capped at 1 GB and as the giant table has\n> grown it has added more files in this directory. However, the mysterious\n> thing to me is that it keeps modifying those files constantly - even the\n> ones that are completely full. So for the two large tables it has been\n> restoring all week, the modification time for the ever growing list of\n> files is being updating constantly.\n>\n\nThe bgwriter, the checkpointer, and autovac, plus any backends that decide\nthey need a clean page from the buffer cache can all touch those files.\nThey might touch them in ways that are not IO intensive, but still cause\nthe modification time to get updated. In my hands, one all dirty buffers a\ngiven file have been flushed and all contents in the file have been\nvacuumed, its mtime stops changing just due to copy in which is directed to\nlater files.\n\nIt is also squishy what it even means to modify a file. 
I think\nfilesystems have heuristics to avoid \"gratuitous\" updates to mtime, which\nmake it hard to recon with.\n\n\n>\n> Could it be that thats why I am seeing a slowdown over the course of the\n> week - that for some reason as the number of files for the table has grown,\n> the system is spending more and more time seeking around the disk to touch\n> all those files for some reason?\n>\n\nI don't think lsof or mtime are effective ways to research this. How about\nrunning strace -ttt -T -y on those processes?\n\nCheers,\n\nJeff\n\nOn Wed, Oct 9, 2019 at 4:33 AM Ogden Brash <[email protected]> wrote:# lsof -p 6600 | wc -l;840# lsof -p 6601 | wc -l;906Is that normal? That there be so many open file pointers? ~900 open file pointers for each of the processes?I don't think PostgreSQL makes any effort to conserve file handles, until it starts reaching the max. So any file that has ever been opened will remain open, unless it was somehow invalidated (e.g. the file needs to be deleted). If those processes were previously loading smaller tables before the got bogged down in the huge ones, a large number of handles would not be unexpected. The next I did was go to see the actual data files, to see how many there are. In my case they are in postgresql/11/main/base/24576 and there are 2076 files there. That made sense. However, I found that when I list them by modification date I see something interesting:-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.7-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.8-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.9-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.10-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.11-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.12-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.13-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.14-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.16-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.15-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.17-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.18-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.19-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.21-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.22-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.23-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.24-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.25-rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.26-rw------- 1 postgres postgres 19062784 Oct 8 13:05 27082_fsm-rw------- 1 postgres postgres 544489472 Oct 8 13:05 27077.34-rw------- 1 postgres postgres 169705472 Oct 8 13:05 27082.72-rw------- 1 postgres postgres 978321408 Oct 8 13:05 27083.27-rw------- 1 postgres postgres 342925312 Oct 8 13:05 27076.88If you notice, the file size is capped at 1 GB and as the giant table has grown it has added more files in this directory. However, the mysterious thing to me is that it keeps modifying those files constantly - even the ones that are completely full. So for the two large tables it has been restoring all week, the modification time for the ever growing list of files is being updating constantly.The bgwriter, the checkpointer, and autovac, plus any backends that decide they need a clean page from the buffer cache can all touch those files. 
They might touch them in ways that are not IO intensive, but still cause the modification time to get updated. In my hands, one all dirty buffers a given file have been flushed and all contents in the file have been vacuumed, its mtime stops changing just due to copy in which is directed to later files.It is also squishy what it even means to modify a file. I think filesystems have heuristics to avoid \"gratuitous\" updates to mtime, which make it hard to recon with. Could it be that thats why I am seeing a slowdown over the course of the week - that for some reason as the number of files for the table has grown, the system is spending more and more time seeking around the disk to touch all those files for some reason?I don't think lsof or mtime are effective ways to research this. How about running strace -ttt -T -y on those processes? Cheers,Jeff",
"msg_date": "Wed, 9 Oct 2019 14:42:31 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Modification of data in base folder and very large tables"
},
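A rough SQL-side check of which of those writers is doing the flushing, as a complement to strace (interpretation is approximate; these are cluster-wide counters accumulated since the last stats reset):

    -- Cumulative buffer writes by checkpoints, the background writer, and backends:
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend, stats_reset
    FROM pg_stat_bgwriter;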
{
"msg_contents": "Ogden Brash <[email protected]> writes:\n\n> I have a question about the files in .../data/postgresql/11/main/\n> base, specifically in relation to very large tables and how they are\n> written.\n>\n> I have been attempting to restore a relatively large database with\n> pg_restore and it has been running for more than a week. (I also \n> have another thread about the same restore related to network vs.\n> local disk I/O)\n>\n> I ran the pg_restore in verbose mode using multiple jobs so I can\n> tell what has finished and what has not. The database had 66 tables\n> and most of them have been restored. Two of the tables were quite\n> large (billions of rows translating to between 1 and 2TB of data on\n> disk for those two tables) and those two tables are pretty much the\n> only things remaining that has not been reported as finished by\n> pg_restore.\n>\n> As the process has been going for a week, I have been tracking the\n> machine (a dedicated piece of hardware, non-virtualized) and have\n> been noticing a progressive slowdown (as tracked by iostat). There is\n\nDo the tables that are being loaded have any indexes on them?\n\n> nothing running on the machine besides postgresql and the server is\n> only doing this restore, nothing else. It is now, on average, running\n> at less than 25% of the speed that it was running four days ago (as\n> measured by rate of I/O). \n>\n> I started to dig into what was happening on the machine and I noticed\n> the following:\n>\n> iotop reports that two postgres processes (I assume each processing\n> one of the two tables that needs to be processed) are doing all the I\n> /O. That makes sense\n>\n> Total DISK READ : 1473.81 K/s | Total DISK WRITE : 617.30 K/s\n> Actual DISK READ: 1473.81 K/s | Actual DISK WRITE: 0.00 B/s\n> TID PRIO USER DISK READ DISK WRITE SWAPIN IO> \n> COMMAND\n> 6601 be/4 postgres 586.44 K/s 7.72 K/s 0.00 % 97.39 % postgres:\n> 11/main: postg~s thebruteff [local] COPY\n> 6600 be/4 postgres 887.37 K/s 601.87 K/s 0.00 % 93.42 % postgres:\n> 11/main: postg~s thebruteff [local] COPY\n> 666 be/3 root 0.00 B/s 7.72 K/s 0.00 % 5.73 % [jbd2/\n> sda1-8]\n> 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init\n> 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 %\n> [kthreadd]\n> 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/\n> 0:0H]\n>\n> So, the next thing I though I would do is an \"lsof\" command for each\n> of those two processes to see what they were writing. That was a bit\n> of a surpise:\n>\n> # lsof -p 6600 | wc -l;\n> 840\n>\n> # lsof -p 6601 | wc -l;\n> 906\n>\n> Is that normal? That there be so many open file pointers? ~900 open\n> file pointers for each of the processes?\n>\n> The next I did was go to see the actual data files, to see how many\n> there are. In my case they are in postgresql/11/main/base/24576 and\n> there are 2076 files there. That made sense. 
However, I found that\n> when I list them by modification date I see something interesting:\n>\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.7\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.8\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.9\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.10\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.11\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.12\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.13\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.14\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.16\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.15\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.17\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.18\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.19\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.21\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.22\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.23\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.24\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.25\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.26\n> -rw------- 1 postgres postgres 19062784 Oct 8 13:05 27082_fsm\n> -rw------- 1 postgres postgres 544489472 Oct 8 13:05 27077.34\n> -rw------- 1 postgres postgres 169705472 Oct 8 13:05 27082.72\n> -rw------- 1 postgres postgres 978321408 Oct 8 13:05 27083.27\n> -rw------- 1 postgres postgres 342925312 Oct 8 13:05 27076.88\n>\n> If you notice, the file size is capped at 1 GB and as the giant table\n> has grown it has added more files in this directory. However, the\n> mysterious thing to me is that it keeps modifying those files\n> constantly - even the ones that are completely full. So for the two\n> large tables it has been restoring all week, the modification time\n> for the ever growing list of files is being updating constantly.\n>\n> Could it be that thats why I am seeing a slowdown over the course of\n> the week - that for some reason as the number of files for the table\n> has grown, the system is spending more and more time seeking around\n> the disk to touch all those files for some reason?\n>\n> Does anyone who understands the details of postgresql's interaction\n> with the file system have an explanation for why all those files\n> which are full are being touched constantly?\n>\n>\n>\n>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\n\n\n",
"msg_date": "Wed, 09 Oct 2019 18:05:20 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Modification of data in base folder and very large tables"
},
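Jerry's question can be answered directly from the catalogs; a minimal sketch (the table name is invented, since the two large tables are never named in the thread):

    -- Indexes (and their sizes) on one of the tables still being loaded:
    SELECT indexrelid::regclass AS index_name,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_index
    WHERE indrelid = 'some_big_table'::regclass;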
{
"msg_contents": "First of all, thanks to Jeff for pointing out strace. I had not used it\nbefore and it is quite informative. This is the rather depressing one\nminute summary for the pg_restore:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 80.23 0.176089 65 2726 read\n 16.51 0.036226 12 3040 lseek\n 1.69 0.003710 11 326 write\n 0.79 0.001733 7 232 recvfrom\n 0.77 0.001684 18 96 96 openat\n 0.02 0.000035 12 3 1 futex\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.219477 6423 97 total\n\nThe two pg_related processes which are doing 99% of the i/o on the machine\nare spending less than 2% of all that time actually writing data. No wonder\nit is going slowly!\n\nWhich takes me to Jerry's question as to whether there are any indexes. My\nimmediate off-the cuff answer would be \"no\". I deleted all indexes in the\ntarget database before I did the pg_restore, and the tables do not have any\nforeign keys or any other constraints or triggers. I did the restore as a\ndata only restore so that it would not try to recreate any tables.\n\nHowever, after studying the details of the strace output and reading about\nother people's experiences, I believe that the more accurate answer is that\nI did not take into consideration the index that gets created as a result\nof the primary key. In this case, the primary key is big int that is\nautomatically assigned by a sequence.\n\nIf each of the tables has about 3+ billion rows, the index is still going\nto be pretty large and spread over many files. In the source database that\nwas backed up, the primary key sequence was sequentially assigned and\nwritten, but as various posprocessing operations were applied and the rows\nmodified, the data, is probably in a relatively random evenly distributed\norder. So I now believe that all the files that are being constantly\ntouched are not actually the files for the data rows, but the files for the\nindex, and as the system is reading the data it is jumping around\nrecreating the index for the primary key based on the random order of the\ndta rows it reads.\n\nSound plausible? I'm still a postgres noob.\n\nAs an experiment, I am in the process of clustering the source database\ntables by the primary key constraint. I am hoping that if I redo the\npg_dump after that, it will contain the records in more-or-less primary key\norder and on the subsequent pg_restore it should not have to spend the vast\nmajority of the time on reading and seeking.\n\nIt is surprising to me that the cluster operations (which also have to\nchurn through the entire index and all the records) are going *much* faster\nthan pg_restore. At the rate they are going, they should be finished in\nabout 12 hours, whereas the pg_restore is nowhere close to finishing and\nhas been churning for 10 days.\n\n\nOn Wed, Oct 9, 2019 at 4:05 PM Jerry Sievers <[email protected]> wrote:\n\n> Ogden Brash <[email protected]> writes:\n>\n> > I have a question about the files in .../data/postgresql/11/main/\n> > base, specifically in relation to very large tables and how they are\n> > written.\n> >\n> > I have been attempting to restore a relatively large database with\n> > pg_restore and it has been running for more than a week. (I also\n> > have another thread about the same restore related to network vs.\n> > local disk I/O)\n> >\n> > I ran the pg_restore in verbose mode using multiple jobs so I can\n> > tell what has finished and what has not. 
The database had 66 tables\n> > and most of them have been restored. Two of the tables were quite\n> > large (billions of rows translating to between 1 and 2TB of data on\n> > disk for those two tables) and those two tables are pretty much the\n> > only things remaining that has not been reported as finished by\n> > pg_restore.\n> >\n> > As the process has been going for a week, I have been tracking the\n> > machine (a dedicated piece of hardware, non-virtualized) and have\n> > been noticing a progressive slowdown (as tracked by iostat). There is\n>\n> Do the tables that are being loaded have any indexes on them?\n>\n> > nothing running on the machine besides postgresql and the server is\n> > only doing this restore, nothing else. It is now, on average, running\n> > at less than 25% of the speed that it was running four days ago (as\n> > measured by rate of I/O).\n> >\n> > I started to dig into what was happening on the machine and I noticed\n> > the following:\n> >\n> > iotop reports that two postgres processes (I assume each processing\n> > one of the two tables that needs to be processed) are doing all the I\n> > /O. That makes sense\n> >\n> > Total DISK READ : 1473.81 K/s | Total DISK WRITE : 617.30 K/s\n> > Actual DISK READ: 1473.81 K/s | Actual DISK WRITE: 0.00 B/s\n> > TID PRIO USER DISK READ DISK WRITE SWAPIN IO>\n> > COMMAND\n> > 6601 be/4 postgres 586.44 K/s 7.72 K/s 0.00 % 97.39 % postgres:\n> > 11/main: postg~s thebruteff [local] COPY\n> > 6600 be/4 postgres 887.37 K/s 601.87 K/s 0.00 % 93.42 % postgres:\n> > 11/main: postg~s thebruteff [local] COPY\n> > 666 be/3 root 0.00 B/s 7.72 K/s 0.00 % 5.73 % [jbd2/\n> > sda1-8]\n> > 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init\n> > 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 %\n> > [kthreadd]\n> > 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/\n> > 0:0H]\n> >\n> > So, the next thing I though I would do is an \"lsof\" command for each\n> > of those two processes to see what they were writing. That was a bit\n> > of a surpise:\n> >\n> > # lsof -p 6600 | wc -l;\n> > 840\n> >\n> > # lsof -p 6601 | wc -l;\n> > 906\n> >\n> > Is that normal? That there be so many open file pointers? ~900 open\n> > file pointers for each of the processes?\n> >\n> > The next I did was go to see the actual data files, to see how many\n> > there are. In my case they are in postgresql/11/main/base/24576 and\n> > there are 2076 files there. That made sense. 
However, I found that\n> > when I list them by modification date I see something interesting:\n> >\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.7\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.8\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.9\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.10\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.11\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.12\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.13\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.14\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.16\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.15\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.17\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.18\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.19\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.21\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.22\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.23\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.24\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.25\n> > -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.26\n> > -rw------- 1 postgres postgres 19062784 Oct 8 13:05 27082_fsm\n> > -rw------- 1 postgres postgres 544489472 Oct 8 13:05 27077.34\n> > -rw------- 1 postgres postgres 169705472 Oct 8 13:05 27082.72\n> > -rw------- 1 postgres postgres 978321408 Oct 8 13:05 27083.27\n> > -rw------- 1 postgres postgres 342925312 Oct 8 13:05 27076.88\n> >\n> > If you notice, the file size is capped at 1 GB and as the giant table\n> > has grown it has added more files in this directory. However, the\n> > mysterious thing to me is that it keeps modifying those files\n> > constantly - even the ones that are completely full. So for the two\n> > large tables it has been restoring all week, the modification time\n> > for the ever growing list of files is being updating constantly.\n> >\n> > Could it be that thats why I am seeing a slowdown over the course of\n> > the week - that for some reason as the number of files for the table\n> > has grown, the system is spending more and more time seeking around\n> > the disk to touch all those files for some reason?\n> >\n> > Does anyone who understands the details of postgresql's interaction\n> > with the file system have an explanation for why all those files\n> > which are full are being touched constantly?\n> >\n> >\n> >\n> >\n>\n> --\n> Jerry Sievers\n> Postgres DBA/Development Consulting\n> e: [email protected]\n>\n\nFirst of all, thanks to Jeff for pointing out strace. I had not used it before and it is quite informative. This is the rather depressing one minute summary for the pg_restore:% time seconds usecs/call calls errors syscall------ ----------- ----------- --------- --------- ---------------- 80.23 0.176089 65 2726 read 16.51 0.036226 12 3040 lseek 1.69 0.003710 11 326 write 0.79 0.001733 7 232 recvfrom 0.77 0.001684 18 96 96 openat 0.02 0.000035 12 3 1 futex------ ----------- ----------- --------- --------- ----------------100.00 0.219477 6423 97 totalThe two pg_related processes which are doing 99% of the i/o on the machine are spending less than 2% of all that time actually writing data. No wonder it is going slowly!Which takes me to Jerry's question as to whether there are any indexes. 
My immediate off-the cuff answer would be \"no\". I deleted all indexes in the target database before I did the pg_restore, and the tables do not have any foreign keys or any other constraints or triggers. I did the restore as a data only restore so that it would not try to recreate any tables.However, after studying the details of the strace output and reading about other people's experiences, I believe that the more accurate answer is that I did not take into consideration the index that gets created as a result of the primary key. In this case, the primary key is big int that is automatically assigned by a sequence.If each of the tables has about 3+ billion rows, the index is still going to be pretty large and spread over many files. In the source database that was backed up, the primary key sequence was sequentially assigned and written, but as various posprocessing operations were applied and the rows modified, the data, is probably in a relatively random evenly distributed order. So I now believe that all the files that are being constantly touched are not actually the files for the data rows, but the files for the index, and as the system is reading the data it is jumping around recreating the index for the primary key based on the random order of the dta rows it reads.Sound plausible? I'm still a postgres noob.As an experiment, I am in the process of clustering the source database tables by the primary key constraint. I am hoping that if I redo the pg_dump after that, it will contain the records in more-or-less primary key order and on the subsequent pg_restore it should not have to spend the vast majority of the time on reading and seeking.It is surprising to me that the cluster operations (which also have to churn through the entire index and all the records) are going *much* faster than pg_restore. At the rate they are going, they should be finished in about 12 hours, whereas the pg_restore is nowhere close to finishing and has been churning for 10 days.On Wed, Oct 9, 2019 at 4:05 PM Jerry Sievers <[email protected]> wrote:Ogden Brash <[email protected]> writes:\n\n> I have a question about the files in .../data/postgresql/11/main/\n> base, specifically in relation to very large tables and how they are\n> written.\n>\n> I have been attempting to restore a relatively large database with\n> pg_restore and it has been running for more than a week. (I also \n> have another thread about the same restore related to network vs.\n> local disk I/O)\n>\n> I ran the pg_restore in verbose mode using multiple jobs so I can\n> tell what has finished and what has not. The database had 66 tables\n> and most of them have been restored. Two of the tables were quite\n> large (billions of rows translating to between 1 and 2TB of data on\n> disk for those two tables) and those two tables are pretty much the\n> only things remaining that has not been reported as finished by\n> pg_restore.\n>\n> As the process has been going for a week, I have been tracking the\n> machine (a dedicated piece of hardware, non-virtualized) and have\n> been noticing a progressive slowdown (as tracked by iostat). There is\n\nDo the tables that are being loaded have any indexes on them?\n\n> nothing running on the machine besides postgresql and the server is\n> only doing this restore, nothing else. It is now, on average, running\n> at less than 25% of the speed that it was running four days ago (as\n> measured by rate of I/O). 
\n>\n> I started to dig into what was happening on the machine and I noticed\n> the following:\n>\n> iotop reports that two postgres processes (I assume each processing\n> one of the two tables that needs to be processed) are doing all the I\n> /O. That makes sense\n>\n> Total DISK READ : 1473.81 K/s | Total DISK WRITE : 617.30 K/s\n> Actual DISK READ: 1473.81 K/s | Actual DISK WRITE: 0.00 B/s\n> TID PRIO USER DISK READ DISK WRITE SWAPIN IO> \n> COMMAND\n> 6601 be/4 postgres 586.44 K/s 7.72 K/s 0.00 % 97.39 % postgres:\n> 11/main: postg~s thebruteff [local] COPY\n> 6600 be/4 postgres 887.37 K/s 601.87 K/s 0.00 % 93.42 % postgres:\n> 11/main: postg~s thebruteff [local] COPY\n> 666 be/3 root 0.00 B/s 7.72 K/s 0.00 % 5.73 % [jbd2/\n> sda1-8]\n> 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init\n> 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 %\n> [kthreadd]\n> 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/\n> 0:0H]\n>\n> So, the next thing I though I would do is an \"lsof\" command for each\n> of those two processes to see what they were writing. That was a bit\n> of a surpise:\n>\n> # lsof -p 6600 | wc -l;\n> 840\n>\n> # lsof -p 6601 | wc -l;\n> 906\n>\n> Is that normal? That there be so many open file pointers? ~900 open\n> file pointers for each of the processes?\n>\n> The next I did was go to see the actual data files, to see how many\n> there are. In my case they are in postgresql/11/main/base/24576 and\n> there are 2076 files there. That made sense. However, I found that\n> when I list them by modification date I see something interesting:\n>\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.7\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.8\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.9\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.10\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.11\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.12\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.13\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.14\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.16\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.15\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.17\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.18\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.19\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.21\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.22\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.23\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.24\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.25\n> -rw------- 1 postgres postgres 1073741824 Oct 8 13:05 27083.26\n> -rw------- 1 postgres postgres 19062784 Oct 8 13:05 27082_fsm\n> -rw------- 1 postgres postgres 544489472 Oct 8 13:05 27077.34\n> -rw------- 1 postgres postgres 169705472 Oct 8 13:05 27082.72\n> -rw------- 1 postgres postgres 978321408 Oct 8 13:05 27083.27\n> -rw------- 1 postgres postgres 342925312 Oct 8 13:05 27076.88\n>\n> If you notice, the file size is capped at 1 GB and as the giant table\n> has grown it has added more files in this directory. However, the\n> mysterious thing to me is that it keeps modifying those files\n> constantly - even the ones that are completely full. 
So for the two\n> large tables it has been restoring all week, the modification time\n> for the ever growing list of files is being updating constantly.\n>\n> Could it be that thats why I am seeing a slowdown over the course of\n> the week - that for some reason as the number of files for the table\n> has grown, the system is spending more and more time seeking around\n> the disk to touch all those files for some reason?\n>\n> Does anyone who understands the details of postgresql's interaction\n> with the file system have an explanation for why all those files\n> which are full are being touched constantly?\n>\n>\n>\n>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]",
"msg_date": "Wed, 9 Oct 2019 22:01:35 -0700",
"msg_from": "Ogden Brash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Modification of data in base folder and very large tables"
},
{
"msg_contents": ">>>>> \"Ogden\" == Ogden Brash <[email protected]> writes:\n\n Ogden> I did the restore as a data only restore so that it would not\n Ogden> try to recreate any tables.\n\nDoing data-only restores is almost always a mistake.\n\npg_dump/pg_restore are very careful to create things in an order that\nallows the data part of the restore to run quickly: tables are created\nfirst without any indexes or constraints, then data is loaded, then\nindexes and constraints are created in bulk afterwards.\n\nIf you do a data-only restore into an existing table, then it's up to\nyou to avoid performance problems.\n\n Ogden> As an experiment, I am in the process of clustering the source\n Ogden> database tables by the primary key constraint. I am hoping that\n Ogden> if I redo the pg_dump after that, it will contain the records in\n Ogden> more-or-less primary key order and on the subsequent pg_restore\n Ogden> it should not have to spend the vast majority of the time on\n Ogden> reading and seeking.\n\nThis is a waste of time; just restore the data without the primary key\nin place and then create it at the end.\n\n Ogden> It is surprising to me that the cluster operations (which also\n Ogden> have to churn through the entire index and all the records) are\n Ogden> going *much* faster than pg_restore.\n\nThat's because cluster, as with creation of a fresh index, can do a bulk\nindex build: sequentially read the table, sort the values (spilling to\ntemporary files if need be, but these are also read and written\nsequentially), and write out the index data in one sequential pass.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 10 Oct 2019 13:05:24 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Modification of data in base folder and very large tables"
},
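A minimal sketch of the approach Andrew describes, i.e. restoring without the primary key and building it afterwards (table, constraint, and column names are invented for illustration; the maintenance_work_mem value is only a common bulk-build setting, not something taken from the thread):

    -- Before the data-only restore: drop the primary key so COPY does no index maintenance.
    ALTER TABLE some_big_table DROP CONSTRAINT some_big_table_pkey;

    -- ... run the data-only pg_restore / COPY here ...

    -- Afterwards: recreate it. This is a bulk, sort-based index build in one sequential pass.
    SET maintenance_work_mem = '2GB';
    ALTER TABLE some_big_table ADD CONSTRAINT some_big_table_pkey PRIMARY KEY (id);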
{
"msg_contents": "On Thu, Oct 10, 2019 at 3:40 AM Ogden Brash <[email protected]> wrote:\n\n> If each of the tables has about 3+ billion rows, the index is still going\n> to be pretty large and spread over many files. In the source database that\n> was backed up, the primary key sequence was sequentially assigned and\n> written, but as various posprocessing operations were applied and the rows\n> modified, the data, is probably in a relatively random evenly distributed\n> order. So I now believe that all the files that are being constantly\n> touched are not actually the files for the data rows, but the files for the\n> index, and as the system is reading the data it is jumping around\n> recreating the index for the primary key based on the random order of the\n> dta rows it reads.\n>\n> Sound plausible? I'm still a postgres noob.\n>\n\nYes, perfectly plausible.\n\n\n>\n> As an experiment, I am in the process of clustering the source database\n> tables by the primary key constraint. I am hoping that if I redo the\n> pg_dump after that, it will contain the records in more-or-less primary key\n> order and on the subsequent pg_restore it should not have to spend the vast\n> majority of the time on reading and seeking.\n>\n> It is surprising to me that the cluster operations (which also have to\n> churn through the entire index and all the records) are going *much* faster\n> than pg_restore.\n>\n\nThe cluster gets to lock the table against any concurrent changes, and then\nrebuild the indexes from scratch in bulk. You could have done more-or-less\nthe same thing just by dropping the primary key while doing the load. Alas,\nthere is no way to do that without losing the work already done. When you\ndo a data-only pg_restore, you are dis-inviting it from doing such\noptimizations. Isn't the whole point of data-only restore is that you\nleave the table open for other business while it happens? (Or maybe there\nis some other point to it that I am missing--if you want some halfway\nmeasure between creating the table from scratch, and leaving it completely\nopen for business as usual, then you have to evaluate each of those steps\nand implement them yourself, there is no way that pg_restore can reasonably\nguess which constraints and indexes it is allowed to drop and which it is\nnot).\n\nPerhaps\nhttps://www.postgresql.org/docs/current/populate.html#POPULATE-RM-INDEXES\nshould\nmention the index assocated with primary keys, since dropping them does\nrequire a different syntax and they might be overlooked.\n\nCheers,\n\nJeff\n\nOn Thu, Oct 10, 2019 at 3:40 AM Ogden Brash <[email protected]> wrote:If each of the tables has about 3+ billion rows, the index is still going to be pretty large and spread over many files. In the source database that was backed up, the primary key sequence was sequentially assigned and written, but as various posprocessing operations were applied and the rows modified, the data, is probably in a relatively random evenly distributed order. So I now believe that all the files that are being constantly touched are not actually the files for the data rows, but the files for the index, and as the system is reading the data it is jumping around recreating the index for the primary key based on the random order of the dta rows it reads.Sound plausible? I'm still a postgres noob.Yes, perfectly plausible. As an experiment, I am in the process of clustering the source database tables by the primary key constraint. 
I am hoping that if I redo the pg_dump after that, it will contain the records in more-or-less primary key order and on the subsequent pg_restore it should not have to spend the vast majority of the time on reading and seeking.It is surprising to me that the cluster operations (which also have to churn through the entire index and all the records) are going *much* faster than pg_restore. The cluster gets to lock the table against any concurrent changes, and then rebuild the indexes from scratch in bulk. You could have done more-or-less the same thing just by dropping the primary key while doing the load. Alas, there is no way to do that without losing the work already done. When you do a data-only pg_restore, you are dis-inviting it from doing such optimizations. Isn't the whole point of data-only restore is that you leave the table open for other business while it happens? (Or maybe there is some other point to it that I am missing--if you want some halfway measure between creating the table from scratch, and leaving it completely open for business as usual, then you have to evaluate each of those steps and implement them yourself, there is no way that pg_restore can reasonably guess which constraints and indexes it is allowed to drop and which it is not).Perhaps https://www.postgresql.org/docs/current/populate.html#POPULATE-RM-INDEXES should mention the index assocated with primary keys, since dropping them does require a different syntax and they might be overlooked. Cheers,Jeff",
"msg_date": "Thu, 10 Oct 2019 21:08:54 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Modification of data in base folder and very large tables"
}
] |
[
{
"msg_contents": "As my table has gotten bigger, it takes longer to get a single row back\nwhen querying a row by its btree index.\n\nRight now the database is running on a traditional HDD. SSDs have a much\nfaster seek time than traditional HDDs.\n\nWould switching to an SSD improve \"Index Only Scan\" time greatly? by at\nleast 3-4 times?\n\nAs my table has gotten bigger, it takes longer to get a single row back when querying a row by its btree index.Right now the database is running on a traditional HDD. SSDs have a much faster seek time than traditional HDDs. Would switching to an SSD improve \"Index Only Scan\" time greatly? by at least 3-4 times?",
"msg_date": "Tue, 8 Oct 2019 19:37:06 -0400",
"msg_from": "Arya F <[email protected]>",
"msg_from_op": true,
"msg_subject": "Would SSD improve Index Only Scan performance by a lot?"
},
{
"msg_contents": "På onsdag 09. oktober 2019 kl. 01:37:06, skrev Arya F <[email protected] \n<mailto:[email protected]>>: As my table has gotten bigger, it takes longer to \nget a single row back when querying a row by its btree index. Right now the \ndatabase is running on a traditional HDD. SSDs have a much faster seek time \nthan traditional HDDs. Would switching to an SSD improve \"Index Only Scan\" time \ngreatly? by at least 3-4 times? It depends on whether the index is accessed \noften or not (wrt. caching), and (of course) the size of the index, but yes - \n\"cold access\" to the index (or persistent data in general) ismuch faster with \nSSD. -- Andreas Joseph Krogh CTO / Partner - Visena AS Mobile: +47 909 56 963 \[email protected] <mailto:[email protected]> www.visena.com \n<https://www.visena.com> <https://www.visena.com>",
"msg_date": "Wed, 9 Oct 2019 01:42:26 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Sv: Would SSD improve Index Only Scan performance by a lot?"
},
{
"msg_contents": "On Tue, Oct 8, 2019 at 7:37 PM Arya F <[email protected]> wrote:\n\n> As my table has gotten bigger, it takes longer to get a single row back\n> when querying a row by its btree index.\n>\n> Right now the database is running on a traditional HDD. SSDs have a much\n> faster seek time than traditional HDDs.\n>\n> Would switching to an SSD improve \"Index Only Scan\" time greatly? by at\n> least 3-4 times?\n>\n\n*If* your query is disk I/O bound, SSD can help a lot.\n\nIf your data is already in memory, or file system cache, and your query is\nbound by CPU or bloated/corrupted indexes, or some query inefficiency, then\nfaster disks really won't do anything.\n\nDepending on the data type and size of the data you may be able to help\nyour query performance by choosing an index type other than the\nout-of-the-box btree as well (such as a hash or brin index) or maybe even a\ndifferent sort order on the index, or a partial index.\n\nOn Tue, Oct 8, 2019 at 7:37 PM Arya F <[email protected]> wrote:As my table has gotten bigger, it takes longer to get a single row back when querying a row by its btree index.Right now the database is running on a traditional HDD. SSDs have a much faster seek time than traditional HDDs. Would switching to an SSD improve \"Index Only Scan\" time greatly? by at least 3-4 times?*If* your query is disk I/O bound, SSD can help a lot.If your data is already in memory, or file system cache, and your query is bound by CPU or bloated/corrupted indexes, or some query inefficiency, then faster disks really won't do anything.Depending on the data type and size of the data you may be able to help your query performance by choosing an index type other than the out-of-the-box btree as well (such as a hash or brin index) or maybe even a different sort order on the index, or a partial index.",
"msg_date": "Tue, 8 Oct 2019 20:12:48 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Would SSD improve Index Only Scan performance by a lot?"
},
{
"msg_contents": "For indexes the SSDs are at least 4X faster but you won't get that to happen unless you fix the planner tunable for the random page fetch cost first. Super important change for SSDs. \n\nMatthew Hall\n\n> On Oct 8, 2019, at 5:12 PM, Rick Otten <[email protected]> wrote:\n> \n> \n>> On Tue, Oct 8, 2019 at 7:37 PM Arya F <[email protected]> wrote:\n>> As my table has gotten bigger, it takes longer to get a single row back when querying a row by its btree index.\n>> \n>> Right now the database is running on a traditional HDD. SSDs have a much faster seek time than traditional HDDs. \n>> \n>> Would switching to an SSD improve \"Index Only Scan\" time greatly? by at least 3-4 times?\n> \n> *If* your query is disk I/O bound, SSD can help a lot.\n> \n> If your data is already in memory, or file system cache, and your query is bound by CPU or bloated/corrupted indexes, or some query inefficiency, then faster disks really won't do anything.\n> \n> Depending on the data type and size of the data you may be able to help your query performance by choosing an index type other than the out-of-the-box btree as well (such as a hash or brin index) or maybe even a different sort order on the index, or a partial index.\n> \n> \n\nFor indexes the SSDs are at least 4X faster but you won't get that to happen unless you fix the planner tunable for the random page fetch cost first. Super important change for SSDs. Matthew HallOn Oct 8, 2019, at 5:12 PM, Rick Otten <[email protected]> wrote:On Tue, Oct 8, 2019 at 7:37 PM Arya F <[email protected]> wrote:As my table has gotten bigger, it takes longer to get a single row back when querying a row by its btree index.Right now the database is running on a traditional HDD. SSDs have a much faster seek time than traditional HDDs. Would switching to an SSD improve \"Index Only Scan\" time greatly? by at least 3-4 times?*If* your query is disk I/O bound, SSD can help a lot.If your data is already in memory, or file system cache, and your query is bound by CPU or bloated/corrupted indexes, or some query inefficiency, then faster disks really won't do anything.Depending on the data type and size of the data you may be able to help your query performance by choosing an index type other than the out-of-the-box btree as well (such as a hash or brin index) or maybe even a different sort order on the index, or a partial index.",
"msg_date": "Tue, 8 Oct 2019 20:17:13 -0700",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Would SSD improve Index Only Scan performance by a lot?"
},
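The planner tunable Matthew is referring to is random_page_cost. A minimal sketch of trying it out; 1.1 is a commonly suggested starting point for SSDs rather than a value taken from this thread, and the table name is hypothetical:

```
-- Try it for the current session first, so nothing changes globally.
SET random_page_cost = 1.1;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table WHERE id = 42;

-- If plans and timings improve, persist it cluster-wide and reload the config.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```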
{
"msg_contents": "On Tue, Oct 8, 2019 at 7:37 PM Arya F <[email protected]> wrote:\n\n> As my table has gotten bigger, it takes longer to get a single row back\n> when querying a row by its btree index.\n>\n>\nIs this in a probabilistic sense, they take longer on average, or has every\nsingle access gotten slower? If the increase in size has caused the index\nleaf pages to go from being almost entirely in cache to almost entirely not\nbeing in cache, the slow down would probably be a lot more 3 to 4 fold.\nBut maybe you went from 100% in cache, to 90% in cache and 10% out of cache\n(with a 40 fold slowdown for those ones), coming out to 4 fold slow down on\naverage. If that is the case, maybe you can get the performance back up by\ntweaking some settings, rather than changing hardware.\n\n\n> Right now the database is running on a traditional HDD. SSDs have a much\n> faster seek time than traditional HDDs.\n>\n> Would switching to an SSD improve \"Index Only Scan\" time greatly? by at\n> least 3-4 times?\n>\n\nIf drive access is truly the bottleneck on every single execution, then\nyes, probably far more than 3-4 times.\n\nCheers,\n\nJeff\n\nOn Tue, Oct 8, 2019 at 7:37 PM Arya F <[email protected]> wrote:As my table has gotten bigger, it takes longer to get a single row back when querying a row by its btree index.Is this in a probabilistic sense, they take longer on average, or has every single access gotten slower? If the increase in size has caused the index leaf pages to go from being almost entirely in cache to almost entirely not being in cache, the slow down would probably be a lot more 3 to 4 fold. But maybe you went from 100% in cache, to 90% in cache and 10% out of cache (with a 40 fold slowdown for those ones), coming out to 4 fold slow down on average. If that is the case, maybe you can get the performance back up by tweaking some settings, rather than changing hardware. Right now the database is running on a traditional HDD. SSDs have a much faster seek time than traditional HDDs. Would switching to an SSD improve \"Index Only Scan\" time greatly? by at least 3-4 times?If drive access is truly the bottleneck on every single execution, then yes, probably far more than 3-4 times.Cheers,Jeff",
"msg_date": "Wed, 9 Oct 2019 15:54:57 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Would SSD improve Index Only Scan performance by a lot?"
}
] |
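One way to tell which of the cases Jeff describes applies (index pages served from cache versus read from disk) is to look at buffer hits and reads directly. A sketch, with a hypothetical table name:

```
-- Per-query view: "shared hit" pages came from cache, "read" pages came from disk.
-- track_io_timing additionally reports I/O time (may require superuser to enable).
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table WHERE id = 42;

-- Cumulative view for one table, from the statistics collector.
SELECT heap_blks_hit, heap_blks_read, idx_blks_hit, idx_blks_read
FROM pg_statio_user_tables
WHERE relname = 'my_table';
```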
[
{
"msg_contents": "Is there a way to display the planner algorithm used by a query, either in\nEXPLAIN or in a different way?\n\nRegards,\nBehrang (sent from my mobile)\n\nIs there a way to display the planner algorithm used by a query, either in EXPLAIN or in a different way?Regards,Behrang (sent from my mobile)",
"msg_date": "Wed, 9 Oct 2019 17:21:23 +1100",
"msg_from": "Behrang Saeedzadeh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Get the planner used by a query?"
},
{
"msg_contents": "On Wed, 9 Oct 2019 at 19:21, Behrang Saeedzadeh <[email protected]> wrote:\n>\n> Is there a way to display the planner algorithm used by a query, either in EXPLAIN or in a different way?\n\nThere's not really any simple way to know. If the number of relations\nin the join search meets or exceeds geqo_threshold then it'll use the\ngenetic query optimizer. However, knowing exactly how many relations\nare in the join search is not often simple since certain types of\nsubqueries can be pulled up into the main query and that can increase\nthe number of relations in the search.\n\nIf you don't mind writing C code, then you could write an extension\nthat hooks into join_search_hook and somehow outputs this information\nto you before going on to call the geqo if the \"enable_geqo &&\nlevels_needed >= geqo_threshold\" condition is met. Besides that, I\ndon't really know if there's any way. You could try editing the\ngeqo_seed and seeing if the plan changes, but if it does not, then\nthat does not mean the geqo was not used, so doing it that way could\nbe quite error-prone. You'd only be able to tell the geqo was being\nused if you could confirm that changing geqo_seed did change the plan.\n(And you could be certain the plan did not change for some other\nreason like an auto-analyze).\n\n\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 10 Oct 2019 07:15:50 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Get the planner used by a query?"
}
] |
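A rough way to act on David's suggestion without writing C: check the settings that decide whether the genetic optimizer can be used at all, and probe with geqo_seed, keeping in mind that an unchanged plan proves nothing. The EXPLAIN lines are placeholders for the query being investigated:

```
-- Settings that control whether the genetic query optimizer can kick in.
SHOW geqo;
SHOW geqo_threshold;
SHOW join_collapse_limit;
SHOW from_collapse_limit;

-- Probe: EXPLAIN the query of interest, change the seed, and EXPLAIN it again.
-- A changed plan indicates geqo was involved; an unchanged plan is inconclusive.
SET geqo_seed = 0.0;
-- EXPLAIN <your query>;
SET geqo_seed = 0.5;
-- EXPLAIN <your query>;
```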
[
{
"msg_contents": "This is a follow up to\nhttps://www.postgresql.org/message-id/flat/CAERAJ%2B-1buiJ%2B_JWEo0a9Ao-CVMWpgp%3DEnFx1dJtnB3WmMi2zQ%40mail.gmail.com\n\nThe query (generated by Hibernate) got a bit more complex and performance\ndegraded again. I have uploaded all the details here (with changed table\nnames, etc.): https://github.com/behrangsa/slow-query\n\nIn short, the new query is:\n\n```\n\nSELECT inv.id AS i_id,\n inv.invoice_date AS inv_d,\n inv.invoice_xid AS inv_xid,\n inv.invoice_type AS inv_type,\n brs.branch_id AS br_id,\n cinvs.company_id AS c_idFROM invoices inv\n LEFT OUTER JOIN branch_invoices brs ON inv.id = brs.invoice_id\n LEFT OUTER JOIN company_invoices cinvs ON inv.id = cinvs.invoice_id\n INNER JOIN branches br ON brs.branch_id = br.idWHERE\nbrs.branch_id IN (SELECT br1.id\n FROM branches br1\n INNER JOIN access_rights ar1 ON\nbr1.id = ar1.branch_id\n INNER JOIN users usr1 ON ar1.user_id = usr1.id\n INNER JOIN groups grp1 ON\nar1.group_id = grp1.id\n INNER JOIN group_permissions gpr1 ON\ngrp1.id = gpr1.group_id\n INNER JOIN permissions prm1 ON\ngpr1.permission_id = prm1.id\n WHERE usr1.id = 1636\n AND prm1.code = 'C2'\n AND ar1.access_type = 'T1')\n OR brs.branch_id IN (SELECT br3.id\n FROM companies cmp\n INNER JOIN branches br3 ON cmp.id =\nbr3.company_id\n INNER JOIN access_rights ar2 ON\ncmp.id = ar2.company_id\n INNER JOIN users usr2 ON ar2.user_id = usr2.id\n INNER JOIN groups g2 ON ar2.group_id = g2.id\n INNER JOIN group_permissions gpr2 ON\ng2.id = gpr2.group_id\n INNER JOIN permissions prm2 ON\ngpr2.permission_id = prm2.id\n WHERE usr2.id = 1636\n AND prm2.code = 'C2'\n AND ar2.access_type = 'T1'\n ORDER BY br3.id)ORDER BY inv.invoice_date\nDESC, br.name ASCLIMIT 12;\n\n```\n\nI tried tweaking join_collapse_limit and from_collapse_limit (I tried up to\n30) but couldn't improve the performance (I also increased geqo_threshold to\njoin_collapse_limit + 2).\n\nAny chance of making PostgreSQL 10.6 choose a better plan without rewriting\nthe Hibernate generated query?\n\nBest regards,\nBehrang Saeedzadeh\nblog.behrang.org\n\nThis is a follow up to https://www.postgresql.org/message-id/flat/CAERAJ%2B-1buiJ%2B_JWEo0a9Ao-CVMWpgp%3DEnFx1dJtnB3WmMi2zQ%40mail.gmail.comThe query (generated by Hibernate) got a bit more complex and performance degraded again. 
I have uploaded all the details here (with changed table names, etc.): https://github.com/behrangsa/slow-queryIn short, the new query is:```SELECT inv.id AS i_id,\n inv.invoice_date AS inv_d,\n inv.invoice_xid AS inv_xid,\n inv.invoice_type AS inv_type,\n brs.branch_id AS br_id,\n cinvs.company_id AS c_id\nFROM invoices inv\n LEFT OUTER JOIN branch_invoices brs ON inv.id = brs.invoice_id\n LEFT OUTER JOIN company_invoices cinvs ON inv.id = cinvs.invoice_id\n INNER JOIN branches br ON brs.branch_id = br.id\nWHERE brs.branch_id IN (SELECT br1.id\n FROM branches br1\n INNER JOIN access_rights ar1 ON br1.id = ar1.branch_id\n INNER JOIN users usr1 ON ar1.user_id = usr1.id\n INNER JOIN groups grp1 ON ar1.group_id = grp1.id\n INNER JOIN group_permissions gpr1 ON grp1.id = gpr1.group_id\n INNER JOIN permissions prm1 ON gpr1.permission_id = prm1.id\n WHERE usr1.id = 1636\n AND prm1.code = 'C2'\n AND ar1.access_type = 'T1')\n OR brs.branch_id IN (SELECT br3.id\n FROM companies cmp\n INNER JOIN branches br3 ON cmp.id = br3.company_id\n INNER JOIN access_rights ar2 ON cmp.id = ar2.company_id\n INNER JOIN users usr2 ON ar2.user_id = usr2.id\n INNER JOIN groups g2 ON ar2.group_id = g2.id\n INNER JOIN group_permissions gpr2 ON g2.id = gpr2.group_id\n INNER JOIN permissions prm2 ON gpr2.permission_id = prm2.id\n WHERE usr2.id = 1636\n AND prm2.code = 'C2'\n AND ar2.access_type = 'T1'\n ORDER BY br3.id)\nORDER BY inv.invoice_date DESC, br.name ASC\nLIMIT 12;```I tried tweaking join_collapse_limit and from_collapse_limit (I tried up to 30) but couldn't improve the performance (I also increased geqo_threshold to join_collapse_limit + 2).Any chance of making PostgreSQL 10.6 choose a better plan without rewriting the Hibernate generated query?Best regards,Behrang Saeedzadehblog.behrang.org",
"msg_date": "Wed, 9 Oct 2019 23:07:02 +1100",
"msg_from": "Behrang Saeedzadeh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query slow again after adding an `OR` operation (was: Slow PostgreSQL\n 10.6 query)"
},
{
"msg_contents": "Are you prefixing this auto generated query with set join_collapse_limit =\n30, or are you changing the default and reloading config? That is, can you\nimpact ONLY this query with these config changes? I wouldn't assume so, so\nany hack/query hint like turning off hashjoins (which seem to be chosen\ninstead of nested loop because of bad estimates for this plan) will likely\nhave serious impact on other queries.\n\nI know you don't have the flexibility to change the query to be one that\nfollows best practices, but it is a bit disappointing that your ORM\ngenerates that OR condition instead of something like *brs.branch_id IN\n(query1 union all query2). *The join to branch_invoices also must function\nas inner join rather than left, but I am not sure if declaring a join type\nas left impacts the performance significantly.\n\nWhen performance matters, there's nothing quite like being able to\ncustomize the query directly.\n\nAre you prefixing this auto generated query with set join_collapse_limit = 30, or are you changing the default and reloading config? That is, can you impact ONLY this query with these config changes? I wouldn't assume so, so any hack/query hint like turning off hashjoins (which seem to be chosen instead of nested loop because of bad estimates for this plan) will likely have serious impact on other queries.I know you don't have the flexibility to change the query to be one that follows best practices, but it is a bit disappointing that your ORM generates that OR condition instead of something like brs.branch_id IN (query1 union all query2). The join to branch_invoices also must function as inner join rather than left, but I am not sure if declaring a join type as left impacts the performance significantly.When performance matters, there's nothing quite like being able to customize the query directly.",
"msg_date": "Wed, 9 Oct 2019 13:36:00 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow again after adding an `OR` operation (was: Slow\n PostgreSQL 10.6 query)"
},
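On the question of affecting only this query: planner settings can be scoped to a single transaction with SET LOCAL, so the ORM-generated statement can be wrapped without reloading config or touching other sessions. A sketch using the values discussed above:

```
BEGIN;
SET LOCAL join_collapse_limit = 30;
SET LOCAL from_collapse_limit = 30;
-- run the Hibernate-generated SELECT here
COMMIT;  -- SET LOCAL values revert automatically at transaction end
```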
{
"msg_contents": "On Thu, 10 Oct 2019 at 01:07, Behrang Saeedzadeh <[email protected]> wrote:\n>\n> This is a follow up to https://www.postgresql.org/message-id/flat/CAERAJ%2B-1buiJ%2B_JWEo0a9Ao-CVMWpgp%3DEnFx1dJtnB3WmMi2zQ%40mail.gmail.com\n>\n> The query (generated by Hibernate) got a bit more complex and performance degraded again. I have uploaded all the details here (with changed table names, etc.): https://github.com/behrangsa/slow-query\n>\n> In short, the new query is:\n\nThe query mostly appears slow due to the \"Rows Removed By Filter\" in\nthe OR condition. The only way to get around not scanning the entire\nbranch_invoices table would be to somehow write the way in such a way\nthat allows it to go on the inner side of the join.\n\nYou could do that if you ensure there's an index on branch_invoices\n(branch_id) and format the query as:\n\nSELECT inv.id AS i_id,\n inv.invoice_date AS inv_d,\n inv.invoice_xid AS inv_xid,\n inv.invoice_type AS inv_type,\n brs.branch_id AS br_id,\n cinvs.company_id AS c_id\nFROM invoices inv\n LEFT OUTER JOIN branch_invoices brs ON inv.id = brs.invoice_id\n LEFT OUTER JOIN company_invoices cinvs ON inv.id = cinvs.invoice_id\n INNER JOIN branches br ON brs.branch_id = br.id\nWHERE brs.branch_id IN (SELECT br1.id\n FROM branches br1\n INNER JOIN access_rights ar1 ON\nbr1.id = ar1.branch_id\n INNER JOIN users usr1 ON ar1.user_id = usr1.id\n INNER JOIN groups grp1 ON\nar1.group_id = grp1.id\n INNER JOIN group_permissions gpr1 ON\ngrp1.id = gpr1.group_id\n INNER JOIN permissions prm1 ON\ngpr1.permission_id = prm1.id\n WHERE usr1.id = 1636\n AND prm1.code = 'C2'\n AND ar1.access_type = 'T1')\nUNION ALL\nSELECT br3.id\n FROM companies cmp\n INNER JOIN branches br3 ON cmp.id =\nbr3.company_id\n INNER JOIN access_rights ar2 ON\ncmp.id = ar2.company_id\n INNER JOIN users usr2 ON ar2.user_id = usr2.id\n INNER JOIN groups g2 ON ar2.group_id = g2.id\n INNER JOIN group_permissions gpr2 ON\ng2.id = gpr2.group_id\n INNER JOIN permissions prm2 ON\ngpr2.permission_id = prm2.id\n WHERE usr2.id = 1636\n AND prm2.code = 'C2'\n AND ar2.access_type = 'T1')\nORDER BY inv.invoice_date DESC, br.name ASC\nLIMIT 12;\n\nThe planner may then choose to pullup the subquery and uniquify it\nthen put it on the outside of a nested loop join then lookup the\nbranch_invoices record using the index on branch_id. I think this is\nquite a likely plan since the planner estimates there's only going to\nbe 1 row from each of the subqueries.\n\nAlso note, that the LEFT JOIN you have to branch_invoices is not\nreally a left join since you're insisting that the branch_id must be\nin the first or 2nd sub-plan. There's no room for it to be NULL. The\nplanner will just convert that to an INNER JOIN with the above query\nsince that'll give it the flexibility to put the subquery in the IN\nclause on the outside of the join (after having uniquified it).\nYou'll need to decide what you actually want the behaviour to be here.\nIf you do need those NULL rows then you'd better move your WHERE quals\ndown into the join condition for branch_invoices table. I'd suggest\ntesting with some mock-up data if you're uncertain of what I mean.\n\nIf you find that is faster and you can't rewrite the query due to it\nhaving been generated by Hibernate, then that sounds like a problem\nwith Hibernate. PostgreSQL does not currently attempt to do any\nrewrites which convert OR clauses to use UNION or UNION ALL. 
No amount\nof tweaking the planner settings is going to change that fact.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 10 Oct 2019 11:06:25 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow again after adding an `OR` operation (was: Slow\n PostgreSQL 10.6 query)"
}
] |
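As posted, the rewritten query above appears to have its parentheses unbalanced: the ')' before UNION ALL closes the IN list early, so the second SELECT is no longer inside it. Below is a trimmed sketch of the shape being described, with the permissions joins omitted for brevity and the supporting index on branch_invoices (branch_id); treat it as illustrative rather than a drop-in replacement:

```
-- Index that lets branch_invoices be probed on the inner side of a nested loop.
CREATE INDEX IF NOT EXISTS branch_invoices_branch_id_idx ON branch_invoices (branch_id);

SELECT inv.id, inv.invoice_date, brs.branch_id
FROM invoices inv
JOIN branch_invoices brs ON inv.id = brs.invoice_id
JOIN branches br ON brs.branch_id = br.id
WHERE brs.branch_id IN (
          -- branches granted directly
          SELECT ar1.branch_id
          FROM access_rights ar1
          WHERE ar1.user_id = 1636 AND ar1.access_type = 'T1'
          UNION ALL
          -- branches granted via the company
          SELECT br3.id
          FROM branches br3
          JOIN access_rights ar2 ON ar2.company_id = br3.company_id
          WHERE ar2.user_id = 1636 AND ar2.access_type = 'T1'
      )
ORDER BY inv.invoice_date DESC, br.name ASC
LIMIT 12;
```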
[
{
"msg_contents": "Hi all,\n\nI have a problem with views. When I use view in my query it really slows down(1.7seconds)\nIf I use inside of view and add conditions and joins to it, it is really fast(0.7 milliseconds).\nI have no distinct/group/partition by in view so I have no idea why is this happening.\nI wrote queries and plans below.\nI would be very happy if you can help me.\n\nBest regards,\n\n\n\n\nQuery without view;\n\nexplain analyze select\n *\nfrom\n bss.prod_char_val\nleft join bss.prod on\n prod.prod_id = prod_char_val.prod_id,\n bss.gnl_st prodstatus,\n bss.gnl_char\nleft join bss.gnl_char_lang on\n gnl_char_lang.char_id = gnl_char.char_id,\n bss.gnl_char_val\nleft join bss.gnl_char_val_lang on\n gnl_char_val_lang.char_val_id = gnl_char_val.char_val_id,\n bss.gnl_st charvalstatus\n cross join bss.prod prodentity0_\ncross join bss.cust custentity2_\nwhere\n prod.st_id = prodstatus.gnl_st_id\n and (prodstatus.shrt_code::text = any (array['ACTV'::character varying::text,\n 'PNDG'::character varying::text]))\n and gnl_char_val_lang.is_actv = 1::numeric\n and gnl_char_lang.is_actv = 1::numeric\n and gnl_char_lang.lang::text = gnl_char_val_lang.lang::text\n and prod_char_val.char_id = gnl_char.char_id\n and prod_char_val.char_val_id = gnl_char_val.char_val_id\n and prod_char_val.st_id = charvalstatus.gnl_st_id\n and (charvalstatus.shrt_code::text = any (array['ACTV'::character varying::text,'PNDG'::character varying::text]))\n and gnl_char_val_lang.lang = 'en'\n and (charvalstatus.shrt_code = 'xxx'\n and prod_char_val.val = 'xxx'\n or charvalstatus.shrt_code = 'xxx'\n and prod_char_val.val = 'xxx')\n and prodentity0_.prod_id = prod_char_val.prod_id\n and custentity2_.party_id = 16424\n and prodentity0_.cust_id = custentity2_.cust_id\n order by\n prodentity0_.prod_id desc;\n\n\nSort (cost=373.92..373.93 rows=1 width=19509) (actual time=0.098..0.098 rows=0 loops=1)\n Sort Key: prod_char_val.prod_id DESC\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=2.57..373.91 rows=1 width=19509) (actual time=0.066..0.066 rows=0 loops=1)\n Join Filter: (gnl_char_val.char_val_id = gnl_char_val_lang.char_val_id)\n -> Nested Loop (cost=2.30..373.58 rows=1 width=19447) (actual time=0.066..0.066 rows=0 loops=1)\n -> Nested Loop (cost=2.15..373.42 rows=1 width=18571) (actual time=0.066..0.066 rows=0 loops=1)\n Join Filter: (gnl_char.char_id = gnl_char_lang.char_id)\n -> Nested Loop (cost=1.88..373.09 rows=1 width=18488) (actual time=0.066..0.066 rows=0 loops=1)\n -> Nested Loop (cost=1.73..372.92 rows=1 width=16002) (actual time=0.066..0.066 rows=0 loops=1)\n Join Filter: (charvalstatus.gnl_st_id = prod_char_val.st_id)\n -> Nested Loop (cost=1.29..214.51 rows=11 width=15914) (actual time=0.065..0.065 rows=0 loops=1)\n -> Nested Loop (cost=1.15..207.14 rows=44 width=15783) (actual time=0.065..0.065 rows=0 loops=1)\n -> Nested Loop (cost=0.72..180.73 rows=44 width=9586) (actual time=0.065..0.065 rows=0 loops=1)\n -> Seq Scan on gnl_st charvalstatus (cost=0.00..10.61 rows=1 width=131) (actual time=0.064..0.065 rows=0 loops=1)\n Filter: (((shrt_code)::text = ANY ('{ACTV,PNDG}'::text[])) AND ((shrt_code)::text = 'xxx'::text))\n Rows Removed by Filter: 307\n -> Nested Loop (cost=0.72..169.68 rows=44 width=9455) (never executed)\n -> Index Scan using idx_cust_party_id on cust custentity2_ (cost=0.29..8.31 rows=1 width=3258) (never executed)\n Index Cond: (party_id = '16424'::numeric)\n -> Index Scan using idx_prod_cust_id on prod prodentity0_ (cost=0.43..160.81 rows=57 width=6197) (never 
executed)\n Index Cond: (cust_id = custentity2_.cust_id)\n -> Index Scan using pk_prod on prod (cost=0.43..0.60 rows=1 width=6197) (never executed)\n Index Cond: (prod_id = prodentity0_.prod_id)\n -> Index Scan using gnl_st_pkey on gnl_st prodstatus (cost=0.15..0.17 rows=1 width=131) (never executed)\n Index Cond: (gnl_st_id = prod.st_id)\n Filter: ((shrt_code)::text = ANY ('{ACTV,PNDG}'::text[]))\n -> Index Scan using idx_prod_char_val_prod_id on prod_char_val (cost=0.44..14.38 rows=2 width=88) (never executed)\n Index Cond: (prod_id = prod.prod_id)\n Filter: (((val)::text = 'xxx'::text) OR ((val)::text = 'xxx'::text))\n -> Index Scan using gnl_char_pkey on gnl_char (cost=0.14..0.16 rows=1 width=2486) (never executed)\n Index Cond: (char_id = prod_char_val.char_id)\n -> Index Scan using idx_gnl_char_lang_char_id on gnl_char_lang (cost=0.27..0.32 rows=1 width=83) (never executed)\n Index Cond: (char_id = prod_char_val.char_id)\n Filter: ((is_actv = '1'::numeric) AND ((lang)::text = 'en'::text))\n -> Index Scan using gnl_char_val_pkey on gnl_char_val (cost=0.15..0.17 rows=1 width=876) (never executed)\n Index Cond: (char_val_id = prod_char_val.char_val_id)\n -> Index Scan using idx_gcvl_char_val_id on gnl_char_val_lang (cost=0.28..0.32 rows=1 width=56) (never executed)\n Index Cond: (char_val_id = prod_char_val.char_val_id)\n Filter: ((is_actv = '1'::numeric) AND ((lang)::text = 'en'::text))\nPlanning time: 12.275 ms\nExecution time: 0.770 ms\n\n\nQuery with view;\n\nexplain analyze select\n *\nfrom\n bss.prod prodentity0_\ncross join bss.v_prod_char_val vprodcharv1_\ncross join bss.cust custentity2_\nwhere\n vprodcharv1_.lang = 'en'\n and (vprodcharv1_.shrt_code = 'xxx'\n and vprodcharv1_.val = 'xxx'\n or vprodcharv1_.shrt_code = 'xxx'\n and vprodcharv1_.val = 'xxx')\n and prodentity0_.prod_id = vprodcharv1_.prod_id\n and custentity2_.party_id = 16424\n and prodentity0_.cust_id = custentity2_.cust_id\n order by prodentity0_.prod_id desc;\n\n\nSort (cost=19850.34..19850.34 rows=1 width=9616) (actual time=1661.094..1661.095 rows=6 loops=1)\n Sort Key: prodentity0_.prod_id DESC\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=6.72..19850.33 rows=1 width=9616) (actual time=527.507..1661.058 rows=6 loops=1)\n Join Filter: (prodentity0_.cust_id = custentity2_.cust_id)\n Rows Removed by Join Filter: 98999\n -> Index Scan using idx_cust_party_id on cust custentity2_ (cost=0.29..8.31 rows=1 width=3258) (actual time=0.007..0.008 rows=1 loops=1)\n Index Cond: (party_id = '16424'::numeric)\n -> Nested Loop (cost=6.43..19841.41 rows=49 width=6352) (actual time=0.066..1644.202 rows=99005 loops=1)\n -> Nested Loop (cost=6.00..19812.00 rows=49 width=161) (actual time=0.061..1347.225 rows=99005 loops=1)\n Join Filter: (gnl_char_val.char_val_id = gnl_char_val_lang.char_val_id)\n -> Nested Loop (cost=5.72..19795.69 rows=49 width=162) (actual time=0.055..1110.850 rows=99005 loops=1)\n -> Nested Loop (cost=5.58..19787.60 rows=49 width=142) (actual time=0.048..972.595 rows=99005 loops=1)\n -> Nested Loop (cost=5.43..19754.45 rows=198 width=149) (actual time=0.045..831.933 rows=101354 loops=1)\n -> Nested Loop (cost=5.00..19375.29 rows=198 width=128) (actual time=0.038..436.324 rows=101354 loops=1)\n -> Nested Loop (cost=4.85..19241.37 rows=799 width=122) (actual time=0.032..179.888 rows=188944 loops=1)\n -> Nested Loop (cost=4.29..15.95 rows=1 width=46) (actual time=0.014..0.044 rows=1 loops=1)\n -> Seq Scan on gnl_char (cost=0.00..6.83 rows=1 width=20) (actual time=0.006..0.034 rows=1 loops=1)\n 
Filter: ((shrt_code)::text = 'xxx'::text)\n Rows Removed by Filter: 225\n -> Bitmap Heap Scan on gnl_char_lang (cost=4.29..9.12 rows=1 width=26) (actual time=0.006..0.008 rows=1 loops=1)\n Recheck Cond: (char_id = gnl_char.char_id)\n Filter: ((is_actv = '1'::numeric) AND ((lang)::text = 'en'::text))\n Rows Removed by Filter: 1\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_gnl_char_lang_char_id (cost=0.00..4.29 rows=2 width=0) (actual time=0.003..0.003 rows=2 loops=1)\n Index Cond: (char_id = gnl_char.char_id)\n -> Index Scan using idx_prod_char_val_v02 on prod_char_val (cost=0.56..19213.05 rows=1237 width=88) (actual time=0.018..140.837 rows=188944 loops=1)\n Index Cond: (char_id = gnl_char_lang.char_id)\n Filter: (((val)::text = 'xxx'::text) OR ((val)::text = 'xxx'::text))\n Rows Removed by Filter: 3986\n -> Index Scan using gnl_st_pkey on gnl_st charvalstatus (cost=0.15..0.17 rows=1 width=11) (actual time=0.001..0.001 rows=1 loops=188944)\n Index Cond: (gnl_st_id = prod_char_val.st_id)\n Filter: ((shrt_code)::text = ANY ('{ACTV,PNDG}'::text[]))\n Rows Removed by Filter: 0\n -> Index Scan using pk_prod on prod (cost=0.43..1.91 rows=1 width=21) (actual time=0.003..0.003 rows=1 loops=101354)\n Index Cond: (prod_id = prod_char_val.prod_id)\n -> Index Scan using gnl_st_pkey on gnl_st prodstatus (cost=0.15..0.17 rows=1 width=5) (actual time=0.001..0.001 rows=1 loops=101354)\n Index Cond: (gnl_st_id = prod.st_id)\n Filter: ((shrt_code)::text = ANY ('{ACTV,PNDG}'::text[]))\n Rows Removed by Filter: 0\n -> Index Scan using gnl_char_val_pkey on gnl_char_val (cost=0.15..0.17 rows=1 width=20) (actual time=0.001..0.001 rows=1 loops=99005)\n Index Cond: (char_val_id = prod_char_val.char_val_id)\n -> Index Scan using idx_gcvl_char_val_id on gnl_char_val_lang (cost=0.28..0.32 rows=1 width=14) (actual time=0.001..0.002 rows=1 loops=99005)\n Index Cond: (char_val_id = prod_char_val.char_val_id)\n Filter: ((is_actv = '1'::numeric) AND ((lang)::text = 'en'::text))\n Rows Removed by Filter: 1\n -> Index Scan using pk_prod on prod prodentity0_ (cost=0.43..0.60 rows=1 width=6197) (actual time=0.002..0.002 rows=1 loops=99005)\n Index Cond: (prod_id = prod.prod_id)\nPlanning time: 6.947 ms\nExecution time: 1661.278 ms\n\n\nThis is the view;\ncreate or replace\nview bss.v_prod_char_val as select\n prod_char_val.prod_char_val_id,\n prod_char_val.prod_id,\n prod_char_val.char_id,\n prod_char_val.char_val_id,\n prod_char_val.val,\n prod_char_val.trnsc_id,\n prod_char_val.sdate,\n prod_char_val.edate,\n prod_char_val.st_id,\n prod_char_val.cdate,\n prod_char_val.cuser,\n prod_char_val.udate,\n prod_char_val.uuser,\n gnl_char_lang.name as char_name,\n gnl_char_val_lang.val_lbl as char_val_name,\n charvalstatus.shrt_code as prod_char_val_st_shrt_code,\n gnl_char_val_lang.lang,\n gnl_char.shrt_code,\n gnl_char_val.shrt_code as char_val_shrt_code,\n prod.bill_acct_id\nfrom\n bss.prod_char_val\nleft join bss.prod on\n prod.prod_id = prod_char_val.prod_id,\n bss.gnl_st prodstatus,\n bss.gnl_char\nleft join bss.gnl_char_lang on\n gnl_char_lang.char_id = gnl_char.char_id,\n bss.gnl_char_val\nleft join bss.gnl_char_val_lang on\n gnl_char_val_lang.char_val_id = gnl_char_val.char_val_id,\n bss.gnl_st charvalstatus\nwhere\n prod.st_id = prodstatus.gnl_st_id\n and (prodstatus.shrt_code::text = any (array['ACTV'::character varying::text,\n 'PNDG'::character varying::text]))\n and gnl_char_val_lang.is_actv = 1::numeric\n and gnl_char_lang.is_actv = 1::numeric\n and gnl_char_lang.lang::text = gnl_char_val_lang.lang::text\n 
and prod_char_val.char_id = gnl_char.char_id\n and prod_char_val.char_val_id = gnl_char_val.char_val_id\n and prod_char_val.st_id = charvalstatus.gnl_st_id\n and (charvalstatus.shrt_code::text = any (array['ACTV'::character varying::text,\n 'PNDG'::character varying::text]));",
"msg_date": "Wed, 9 Oct 2019 12:31:40 +0000",
"msg_from": "=?iso-8859-9?Q?Yavuz_Selim_Serto=F0lu_=28ETIYA=29?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Query slows when used with view"
},
{
"msg_contents": "=?iso-8859-9?Q?Yavuz_Selim_Serto=F0lu_=28ETIYA=29?= <[email protected]> writes:\n> I have a problem with views. When I use view in my query it really slows down(1.7seconds)\n> If I use inside of view and add conditions and joins to it, it is really fast(0.7 milliseconds).\n> I have no distinct/group/partition by in view so I have no idea why is this happening.\n> I wrote queries and plans below.\n\nThose are not equivalent queries. Read up on the syntax of FROM;\nparticularly, that JOIN binds more tightly than comma.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Oct 2019 09:57:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slows when used with view"
},
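A self-contained way to see what Tom means, using throwaway tables invented for the demo: because JOIN binds more tightly than the comma, a LEFT JOIN written after a comma attaches only to the item immediately to its left, and its ON clause cannot even see the earlier FROM entries.

```
CREATE TEMP TABLE t1 (id int);
CREATE TEMP TABLE t2 (id int);
CREATE TEMP TABLE t3 (id int);

-- Fails: the LEFT JOIN groups with t2 alone, so t1 is not visible in its ON clause.
-- ERROR:  invalid reference to FROM-clause entry for table "t1"
-- SELECT * FROM t1, t2 LEFT JOIN t3 ON t3.id = t1.id;

-- Works: explicit JOINs nest left-to-right, giving (t1 CROSS JOIN t2) LEFT JOIN t3.
SELECT * FROM t1 CROSS JOIN t2 LEFT JOIN t3 ON t3.id = t1.id;
```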
{
"msg_contents": "Thanks for the reply Tom,\n\nSorry, I couldn't understand. I just copied inside of view and add conditions from query that runs with view.\nThe comma parts are the same in two queries, one is inside of view the other is in the query.\n\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nSent: 09 October 2019 16:57\nTo: Yavuz Selim Sertoğlu (ETIYA) <[email protected]>\nCc: [email protected]\nSubject: Re: Query slows when used with view\n\n=?iso-8859-9?Q?Yavuz_Selim_Serto=F0lu_=28ETIYA=29?= <[email protected]> writes:\n> I have a problem with views. When I use view in my query it really slows down(1.7seconds)\n> If I use inside of view and add conditions and joins to it, it is really fast(0.7 milliseconds).\n> I have no distinct/group/partition by in view so I have no idea why is this happening.\n> I wrote queries and plans below.\n\nThose are not equivalent queries. Read up on the syntax of FROM;\nparticularly, that JOIN binds more tightly than comma.\n\nregards, tom lane\n[http://www.etiya.com/images/e-newsletter/signature/e_logo_1.png]\n[http://www.etiya.com/images/e-newsletter/signature/e_adres.png]<http://www.etiya.com>\n[http://www.etiya.com/images/e-newsletter/signature/facebook_icon.png]<https://www.facebook.com/Etiya-249050755136326/> [http://www.etiya.com/images/e-newsletter/signature/linkedin_icon.png] <https://www.linkedin.com/company/etiya?trk=tyah&trkInfo=tas%3Aetiya%2Cidx%3A1-1-1> [http://www.etiya.com/images/e-newsletter/signature/instagram_icon.png] <https://www.instagram.com/etiya_/> [http://www.etiya.com/images/e-newsletter/signature/youtube_icon.png] <https://www.youtube.com/channel/UCWjknu72sHoKKt2nujuU2kA> [http://www.etiya.com/images/e-newsletter/signature/twitter_icon.png] <https://twitter.com/etiya_>\n[http://www.etiya.com/images/e-newsletter/signature/0.png]\n\nYavuz Selim Sertoğlu\nSolution Support Specialist II\n\nT:+90 312 265 01 50\nM:+90 552 997 52 02\nE:[email protected]<mailto:[email protected]>\n\nÜniversiteler Mahallesi 1606.cadde No:4 Cyberpark C Blok Zemin kat ofis no :Z25A-Z44\n[http://www.etiya.com/images/e-newsletter/signature/tmf_award.jpg] <https://www.etiya.com/press/view/etiya-wins-tm-forum-excellence-award-for-disruptive-innovation>\n\n\nYasal Uyari :\nBu elektronik posta asagidaki adreste bulunan Kosul ve Sartlara tabidir;\nhttp://www.etiya.com/gizlilik<www.etiya.com/gizlilik>\n\nÇIKTI ALMADAN ÖNCE ÇEVREYE OLAN SORUMLULUGUMUZU BIR KEZ DAHA DÜSÜNELIM.\nPLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING ANY DOCUMENT.\n\n\n",
"msg_date": "Wed, 9 Oct 2019 14:55:58 +0000",
"msg_from": "=?iso-8859-9?Q?Yavuz_Selim_Serto=F0lu_=28ETIYA=29?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Query slows when used with view"
},
{
"msg_contents": ">\n> Those are not equivalent queries. Read up on the syntax of FROM;\n> particularly, that JOIN binds more tightly than comma.\n>\n\nI see this-\n\n\"A JOIN clause combines two FROM items, which for convenience we will refer\nto as “tables”, though in reality they can be any type of FROM item. Use\nparentheses if necessary to determine the order of nesting. In the absence\nof parentheses, JOINs nest left-to-right. In any case JOIN binds more\ntightly than the commas separating FROM-list items.\"\nhttps://www.postgresql.org/docs/current/sql-select.html\n\nWhat is meant by nesting? Or binding for that matter? I wouldn't expect\nincreasing from/join_collapse_limit to be helpful to the original poster\nsince they haven't exceeded default limit of 8. Any further clarification\nelsewhere you could point to?\n\nThose are not equivalent queries. Read up on the syntax of FROM;\nparticularly, that JOIN binds more tightly than comma.I see this-\"A JOIN clause combines two FROM items, which for convenience we will refer to as “tables”, though in reality they can be any type of FROM item. Use parentheses if necessary to determine the order of nesting. In the absence of parentheses, JOINs nest left-to-right. In any case JOIN binds more tightly than the commas separating FROM-list items.\"https://www.postgresql.org/docs/current/sql-select.htmlWhat is meant by nesting? Or binding for that matter? I wouldn't expect increasing from/join_collapse_limit to be helpful to the original poster since they haven't exceeded default limit of 8. Any further clarification elsewhere you could point to?",
"msg_date": "Wed, 9 Oct 2019 09:23:14 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slows when used with view"
},
{
"msg_contents": "On Wed, Oct 9, 2019 at 10:56 AM Yavuz Selim Sertoğlu (ETIYA) <\[email protected]> wrote:\n\n> Thanks for the reply Tom,\n>\n> Sorry, I couldn't understand. I just copied inside of view and add\n> conditions from query that runs with view.\n> The comma parts are the same in two queries, one is inside of view the\n> other is in the query.\n>\n\nWhen you join to a view, the view sticks together, as if they were all in\nparentheses. But when you substitute the text of a view into another\nquery, then they are all on the same level and can be parsed differently.\n\nConsider the difference between \"1+1 * 3\", and \"(1+1) * 3\"\n\nCheers,\n\nJeff\n\nOn Wed, Oct 9, 2019 at 10:56 AM Yavuz Selim Sertoğlu (ETIYA) <[email protected]> wrote:Thanks for the reply Tom,\n\nSorry, I couldn't understand. I just copied inside of view and add conditions from query that runs with view.\nThe comma parts are the same in two queries, one is inside of view the other is in the query.When you join to a view, the view sticks together, as if they were all in parentheses. But when you substitute the text of a view into another query, then they are all on the same level and can be parsed differently.Consider the difference between \"1+1 * 3\", and \"(1+1) * 3\"Cheers,Jeff",
"msg_date": "Wed, 9 Oct 2019 11:24:00 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slows when used with view"
},
{
"msg_contents": ">\n> When you join to a view, the view sticks together, as if they were all in\n> parentheses. But when you substitute the text of a view into another\n> query, then they are all on the same level and can be parsed differently.\n>\n> Consider the difference between \"1+1 * 3\", and \"(1+1) * 3\"\n>\n\nI thought from_collapse_limit being high enough meant that it will get\nre-written and inlined into the same level. To extend your metaphor, that\nit would be 1 * 3 + 1 * 3.\n\nWhen you join to a view, the view sticks together, as if they were all in parentheses. But when you substitute the text of a view into another query, then they are all on the same level and can be parsed differently.Consider the difference between \"1+1 * 3\", and \"(1+1) * 3\"I thought from_collapse_limit being high enough meant that it will get re-written and inlined into the same level. To extend your metaphor, that it would be 1 * 3 + 1 * 3.",
"msg_date": "Wed, 9 Oct 2019 09:32:37 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slows when used with view"
},
{
"msg_contents": "Michael Lewis <[email protected]> writes:\n>> When you join to a view, the view sticks together, as if they were all in\n>> parentheses. But when you substitute the text of a view into another\n>> query, then they are all on the same level and can be parsed differently.\n>> \n>> Consider the difference between \"1+1 * 3\", and \"(1+1) * 3\"\n\n> I thought from_collapse_limit being high enough meant that it will get\n> re-written and inlined into the same level. To extend your metaphor, that\n> it would be 1 * 3 + 1 * 3.\n\nThe point is that the semantics are actually different --- in Jeff's\nexample, the answer is 4 vs. 6, and in the OP's query, the joins have\ndifferent scopes. from_collapse_limit has to do with whether the\nplanner can rewrite the query into a different form, but it's not\nallowed to change the semantics by doing so.\n\nIn some cases you can re-order joins without changing the semantics,\njust as arithmetic has associative and commutative laws. But you\ncan't always re-order outer joins like that. I didn't dig into\nthe details of the OP's query too much, but I believe that the two\nforms of his join tree are semantically different, resulting\nin different runtimes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Oct 2019 12:07:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slows when used with view"
}
] |
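A tiny demo of the point about semantics, with tables invented for the example: re-grouping joins around an outer join can change the result, which is why the planner is not free to flatten the view's join tree arbitrarily.

```
CREATE TEMP TABLE p (id int);
CREATE TEMP TABLE q (id int);
CREATE TEMP TABLE r (id int);
INSERT INTO p VALUES (1);          -- q and r stay empty

-- (p LEFT JOIN q) JOIN r: the inner join to r discards p's row, so count(*) = 0.
SELECT count(*) FROM p LEFT JOIN q ON p.id = q.id JOIN r ON q.id = r.id;

-- p LEFT JOIN (q JOIN r): the outer join preserves p's row, so count(*) = 1.
SELECT count(*) FROM p LEFT JOIN (q JOIN r ON q.id = r.id) ON p.id = q.id;
```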
[
{
"msg_contents": "Hello,\n\nThere's a \"users\" table with the following structure:\n\nCREATE TABLE \"user\" (\n id SERIAL PRIMARY KEY,\n -- other fields\n);\n\nand there's a \"friends\" table with the following structure:\n\nCREATE TABLE friend (\n user1_id INTEGER NOT NULL REFERENCES \"user\"(id),\n user2_id INTEGER NOT NULL REFERENCES \"user\"(id),\n -- other fields\n CHECK (user1_id < user2_id),\n PRIMARY KEY (user1_id, user2_id)\n);\n\nAnd I'm running this query:\n\nSELECT user1_id,user2_id FROM friend WHERE user1_id=42 OR user2_id=42;\n\nWith seqscan disabled, I get this plan on 9.6:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n Recheck Cond: ((user1_id = 1) OR (user2_id = 2))\n -> BitmapOr (cost=8.42..8.42 rows=14 width=0)\n -> Bitmap Index Scan on friend_pkey (cost=0.00..4.21 rows=7\nwidth=0)\n Index Cond: (user1_id = 1)\n -> Bitmap Index Scan on friend_user2_id_user1_id_idx\n (cost=0.00..4.21 rows=7 width=0)\n Index Cond: (user2_id = 2)\n(7 rows)\n\nI expected to get an index-only scan in this situation, as that would be a\nvery common query. Is there a way to actually make this sort of query\nresolvable with an index-only scan? Maybe a different table structure would\nhelp?\n\nHello,There's a \"users\" table with the following structure:CREATE TABLE \"user\" ( id SERIAL PRIMARY KEY, -- other fields);and there's a \"friends\" table with the following structure:CREATE TABLE friend ( user1_id INTEGER NOT NULL REFERENCES \"user\"(id), user2_id INTEGER NOT NULL REFERENCES \"user\"(id), -- other fields CHECK (user1_id < user2_id), PRIMARY KEY (user1_id, user2_id));And I'm running this query:SELECT user1_id,user2_id FROM friend WHERE user1_id=42 OR user2_id=42;With seqscan disabled, I get this plan on 9.6: QUERY PLAN------------------------------------------------------------------------------------------------- Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8) Recheck Cond: ((user1_id = 1) OR (user2_id = 2)) -> BitmapOr (cost=8.42..8.42 rows=14 width=0) -> Bitmap Index Scan on friend_pkey (cost=0.00..4.21 rows=7 width=0) Index Cond: (user1_id = 1) -> Bitmap Index Scan on friend_user2_id_user1_id_idx (cost=0.00..4.21 rows=7 width=0) Index Cond: (user2_id = 2)(7 rows)I expected to get an index-only scan in this situation, as that would be a very common query. Is there a way to actually make this sort of query resolvable with an index-only scan? Maybe a different table structure would help?",
"msg_date": "Sat, 12 Oct 2019 16:39:56 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimising a two column OR check"
},
{
"msg_contents": "On Sat, Oct 12, 2019 at 04:39:56PM +0200, Ivan Voras wrote:\n> With seqscan disabled, I get this plan on 9.6:\n> Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n...\n> I expected to get an index-only scan in this situation, as that would be a\n> very common query. Is there a way to actually make this sort of query\n> resolvable with an index-only scan? Maybe a different table structure would\n> help?\n\nThe v11 release notes have this relevant item:\n\nhttps://www.postgresql.org/docs/11/release-11.html\n|Allow bitmap scans to perform index-only scans when possible (Alexander Kuzmenkov)\n\nJustin\n\n\n",
"msg_date": "Sat, 12 Oct 2019 09:43:36 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": ">>>>> \"Ivan\" == Ivan Voras <[email protected]> writes:\n\n Ivan> Hello,\n Ivan> There's a \"users\" table with the following structure:\n\n Ivan> CREATE TABLE \"user\" (\n Ivan> id SERIAL PRIMARY KEY,\n Ivan> -- other fields\n Ivan> );\n\n Ivan> and there's a \"friends\" table with the following structure:\n\n Ivan> CREATE TABLE friend (\n Ivan> user1_id INTEGER NOT NULL REFERENCES \"user\"(id),\n Ivan> user2_id INTEGER NOT NULL REFERENCES \"user\"(id),\n Ivan> -- other fields\n Ivan> CHECK (user1_id < user2_id),\n Ivan> PRIMARY KEY (user1_id, user2_id)\n Ivan> );\n\n Ivan> And I'm running this query:\n\n Ivan> SELECT user1_id,user2_id FROM friend WHERE user1_id=42 OR user2_id=42;\n\nTo get friends of user 42:\n\nSELECT user1_id FROM friend WHERE user2_id=42\nUNION ALL\nSELECT user2_id FROM friend WHERE user1_id=42;\n\nassuming you create the (user2_id,user1_id) index, this should get you\nan Append of two index-only scans. We can use UNION ALL here rather than\nUNION because the table constraints ensure there are no duplicates.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 12 Oct 2019 16:16:50 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
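Putting Andrew's suggestion together as one sketch (the index name is invented); whether both branches actually become index-only scans also depends on the visibility map, which comes up below in the thread:

```
-- Second index so lookups by user2_id can return user1_id from the index alone.
CREATE INDEX IF NOT EXISTS friend_user2_user1_idx ON friend (user2_id, user1_id);

-- Friends of user 42; duplicates are impossible thanks to the CHECK (user1_id < user2_id).
SELECT user1_id AS friend_id FROM friend WHERE user2_id = 42
UNION ALL
SELECT user2_id AS friend_id FROM friend WHERE user1_id = 42;
```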
{
"msg_contents": "On Sat, Oct 12, 2019 at 10:43 AM Justin Pryzby <[email protected]> wrote:\n\n> On Sat, Oct 12, 2019 at 04:39:56PM +0200, Ivan Voras wrote:\n> > With seqscan disabled, I get this plan on 9.6:\n> > Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n> ...\n> > I expected to get an index-only scan in this situation, as that would be\n> a\n> > very common query. Is there a way to actually make this sort of query\n> > resolvable with an index-only scan? Maybe a different table structure\n> would\n> > help?\n>\n\nIt would have to scan the entire index to find the cases where user2_id=42\nbut user1_id is not constrained. Technically User1_id would be constrained\nto be less than 42, but I don't think the planner will take that into\naccount.\n\n\n> The v11 release notes have this relevant item:\n>\n> https://www.postgresql.org/docs/11/release-11.html\n> |Allow bitmap scans to perform index-only scans when possible (Alexander\n> Kuzmenkov)\n>\n>\nBut this is not one of those cases. It is only possible when the only data\nneeded is whether the row exists or not.\n\nCheers,\n\nJeff\n\nOn Sat, Oct 12, 2019 at 10:43 AM Justin Pryzby <[email protected]> wrote:On Sat, Oct 12, 2019 at 04:39:56PM +0200, Ivan Voras wrote:\n> With seqscan disabled, I get this plan on 9.6:\n> Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n...\n> I expected to get an index-only scan in this situation, as that would be a\n> very common query. Is there a way to actually make this sort of query\n> resolvable with an index-only scan? Maybe a different table structure would\n> help?It would have to scan the entire index to find the cases where user2_id=42 but user1_id is not constrained. Technically User1_id would be constrained to be less than 42, but I don't think the planner will take that into account.\n\nThe v11 release notes have this relevant item:\n\nhttps://www.postgresql.org/docs/11/release-11.html\n|Allow bitmap scans to perform index-only scans when possible (Alexander Kuzmenkov)\nBut this is not one of those cases. It is only possible when the only data needed is whether the row exists or not.Cheers,Jeff",
"msg_date": "Sat, 12 Oct 2019 11:17:58 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": "Another thing to consider is the visibility map. From what I \nunderstand, index only scans are preferred for heavily updated tables, \nnot infrequently updated ones. Even though index only scans imply ONLY \nthey really aren't in the sense that they may need to visit the \nVisibility Map for the heap. This can be costly and the planner may \nremove index only scan consideration if the VM has tuples that are not \nvisible.\n\nBTW, to Andrew, the UNION ALL alternative still results in bitmap index \nscans from my testing.\n\nRegards,\nMichael Vitale\n\n\n\nJeff Janes wrote on 10/12/2019 11:17 AM:\n> On Sat, Oct 12, 2019 at 10:43 AM Justin Pryzby <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On Sat, Oct 12, 2019 at 04:39:56PM +0200, Ivan Voras wrote:\n> > With seqscan disabled, I get this plan on 9.6:\n> > Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n> ...\n> > I expected to get an index-only scan in this situation, as that\n> would be a\n> > very common query. Is there a way to actually make this sort of\n> query\n> > resolvable with an index-only scan? Maybe a different table\n> structure would\n> > help?\n>\n>\n> It would have to scan the entire index to find the cases where \n> user2_id=42 but user1_id is not constrained. Technically User1_id \n> would be constrained to be less than 42, but I don't think the planner \n> will take that into account.\n>\n>\n> The v11 release notes have this relevant item:\n>\n> https://www.postgresql.org/docs/11/release-11.html\n> |Allow bitmap scans to perform index-only scans when possible\n> (Alexander Kuzmenkov)\n>\n>\n> But this is not one of those cases. It is only possible when the only \n> data needed is whether the row exists or not.\n>\n> Cheers,\n>\n> Jeff\n\n\n\n\nAnother thing to consider is the visibility map. From \nwhat I understand, index only scans are preferred for heavily updated \ntables, not infrequently updated ones. Even though index only scans \nimply ONLY they really aren't in the sense that they may need to visit \nthe Visibility Map for the heap. This can be costly and the planner may \nremove index only scan consideration if the VM has tuples that are not \nvisible.\n\nBTW, to Andrew, the UNION ALL alternative still results in bitmap index \nscans from my testing.\n\nRegards,\nMichael Vitale\n\n\n\nJeff Janes wrote on 10/12/2019 11:17 AM:\n\n\nOn Sat, Oct 12, 2019 at 10:43 AM Justin \nPryzby <[email protected]>\n wrote:On Sat, Oct 12, 2019 at \n04:39:56PM +0200, Ivan Voras wrote:\n> With seqscan disabled, I get this plan on 9.6:\n> Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n...\n> I expected to get an index-only scan in this situation, as that \nwould be a\n> very common query. Is there a way to actually make this sort of \nquery\n> resolvable with an index-only scan? Maybe a different table \nstructure would\n> help?It would have to scan the\n entire index to find the cases where user2_id=42 but user1_id is not constrained. \nTechnically User1_id would be constrained to be less than 42, but I \ndon't think the planner will take that into account.\nThe v11 release notes have this relevant item:\n\nhttps://www.postgresql.org/docs/11/release-11.html\n|Allow bitmap scans to perform index-only scans when possible (Alexander\n Kuzmenkov)\nBut this is not one of those \ncases. It is only possible when the only data needed is whether the row\n exists or not.Cheers,Jeff",
"msg_date": "Sat, 12 Oct 2019 11:27:55 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": ">>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n\n MichaelDBA> BTW, to Andrew, the UNION ALL alternative still results in\n MichaelDBA> bitmap index scans from my testing.\n\nYou probably forgot to vacuum the table.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 12 Oct 2019 16:33:37 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": "Yikes, apologies to all, my wording is the opposite of what I meant!\n\nIndex only scans are preferred for infrequently updated ones, not \nheavily updated ones where the visibility map is updated often.\n\nRegards,\nMichael Vitale\n\n\nMichaelDBA wrote on 10/12/2019 11:27 AM:\n> Another thing to consider is the visibility map. From what I \n> understand, index only scans are preferred for heavily updated tables, \n> not infrequently updated ones. Even though index only scans imply \n> ONLY they really aren't in the sense that they may need to visit the \n> Visibility Map for the heap. This can be costly and the planner may \n> remove index only scan consideration if the VM has tuples that are not \n> visible.\n>\n> BTW, to Andrew, the UNION ALL alternative still results in bitmap \n> index scans from my testing.\n>\n> Regards,\n> Michael Vitale\n>\n>\n>\n> Jeff Janes wrote on 10/12/2019 11:17 AM:\n>> On Sat, Oct 12, 2019 at 10:43 AM Justin Pryzby <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>> On Sat, Oct 12, 2019 at 04:39:56PM +0200, Ivan Voras wrote:\n>> > With seqscan disabled, I get this plan on 9.6:\n>> > Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n>> ...\n>> > I expected to get an index-only scan in this situation, as that\n>> would be a\n>> > very common query. Is there a way to actually make this sort of\n>> query\n>> > resolvable with an index-only scan? Maybe a different table\n>> structure would\n>> > help?\n>>\n>>\n>> It would have to scan the entire index to find the cases where \n>> user2_id=42 but user1_id is not constrained. Technically User1_id \n>> would be constrained to be less than 42, but I don't think the \n>> planner will take that into account.\n>>\n>>\n>> The v11 release notes have this relevant item:\n>>\n>> https://www.postgresql.org/docs/11/release-11.html\n>> |Allow bitmap scans to perform index-only scans when possible\n>> (Alexander Kuzmenkov)\n>>\n>>\n>> But this is not one of those cases. It is only possible when the \n>> only data needed is whether the row exists or not.\n>>\n>> Cheers,\n>>\n>> Jeff\n>\n\n\n\n\nYikes, apologies to all, my wording is the \nopposite of what I meant!\n\nIndex only scans are preferred for infrequently updated ones, not \nheavily updated ones where the visibility map is updated often.\n\nRegards,\nMichael Vitale\n\n\nMichaelDBA wrote on 10/12/2019 11:27 AM:\n\n\nAnother thing to consider is the visibility map. From \nwhat I understand, index only scans are preferred for heavily updated \ntables, not infrequently updated ones. Even though index only scans \nimply ONLY they really aren't in the sense that they may need to visit \nthe Visibility Map for the heap. This can be costly and the planner may \nremove index only scan consideration if the VM has tuples that are not \nvisible.\n\n\nBTW, to Andrew, the UNION ALL alternative still results in bitmap index \nscans from my testing.\n\n\nRegards,\n\nMichael Vitale\n\n\n\nJeff Janes wrote on 10/12/2019 11:17 AM:\n\nOn Sat, Oct 12, 2019 at 10:43 AM Justin \nPryzby <[email protected]>\n wrote:On Sat, Oct 12, 2019 at \n04:39:56PM +0200, Ivan Voras wrote:\n> With seqscan disabled, I get this plan on 9.6:\n> Bitmap Heap Scan on friend (cost=8.42..19.01 rows=14 width=8)\n...\n> I expected to get an index-only scan in this situation, as that \nwould be a\n> very common query. Is there a way to actually make this sort of \nquery\n> resolvable with an index-only scan? 
Maybe a different table \nstructure would\n> help?It would have to scan the\n entire index to find the cases where user2_id=42 but user1_id is not constrained. \nTechnically User1_id would be constrained to be less than 42, but I \ndon't think the planner will take that into account.\nThe v11 release notes have this relevant item:\n\nhttps://www.postgresql.org/docs/11/release-11.html\n|Allow bitmap scans to perform index-only scans when possible (Alexander\n Kuzmenkov)\nBut this is not one of those \ncases. It is only possible when the only data needed is whether the row\n exists or not.Cheers,Jeff",
"msg_date": "Sat, 12 Oct 2019 11:33:53 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": "Nope, vacuumed it and still got the bitmap index scans.\n\nAndrew Gierth wrote on 10/12/2019 11:33 AM:\n>>>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n> MichaelDBA> BTW, to Andrew, the UNION ALL alternative still results in\n> MichaelDBA> bitmap index scans from my testing.\n>\n> You probably forgot to vacuum the table.\n>\n\n\n\n",
"msg_date": "Sat, 12 Oct 2019 11:35:22 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": "Perhaps the fix by Alexander Kuzmenkov in V11 added this VM \nconsideration for having a preference of bitmap index scan over an index \nonly scan. Looks like I'm goin' down the rabbit hole...\n\nRegards,\nMichael Vitale\n\nMichaelDBA wrote on 10/12/2019 11:35 AM:\n> Nope, vacuumed it and still got the bitmap index scans.\n>\n> Andrew Gierth wrote on 10/12/2019 11:33 AM:\n>>>>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n>> MichaelDBA> BTW, to Andrew, the UNION ALL alternative still results in\n>> MichaelDBA> bitmap index scans from my testing.\n>>\n>> You probably forgot to vacuum the table.\n>>\n>\n>\n>\n\n\n\n\nPerhaps the fix by Alexander Kuzmenkov in V11 \nadded this VM consideration for having a preference of bitmap index scan\n over an index only scan. Looks like I'm goin' down the rabbit hole...\n\nRegards,\nMichael Vitale\n\nMichaelDBA wrote on 10/12/2019 11:35 AM:\nNope, \nvacuumed it and still got the bitmap index scans.\n \n\nAndrew Gierth wrote on 10/12/2019 11:33 AM:\n \n\"MichaelDBA\" == MichaelDBA <[email protected]> \nwrites:\n \nMichaelDBA> BTW, to Andrew, the UNION ALL alternative still results \nin\n MichaelDBA> bitmap index scans from my testing.\n\nYou probably forgot to vacuum the table.",
"msg_date": "Sat, 12 Oct 2019 11:45:45 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": ">>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n\n MichaelDBA> Nope, vacuumed it and still got the bitmap index scans.\n\nLet's see your explains. Here's mine:\n\n# set enable_seqscan=false; -- because I only have a few rows\nSET\n# insert into friend values (1,2),(2,5);\nINSERT 0 2\n# vacuum analyze friend;\nVACUUM\n# explain analyze SELECT user1_id FROM friend WHERE user2_id=2 UNION ALL select user2_id FROM friend WHERE user1_id=2;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=0.13..8.32 rows=2 width=4) (actual time=0.009..0.014 rows=2 loops=1)\n -> Index Only Scan using friend_user2_id_user1_id_idx on friend (cost=0.13..4.15 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n Index Cond: (user2_id = 2)\n Heap Fetches: 0\n -> Index Only Scan using friend_pkey on friend friend_1 (cost=0.13..4.15 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1)\n Index Cond: (user1_id = 2)\n Heap Fetches: 0\n Planning Time: 0.271 ms\n Execution Time: 0.045 ms\n(9 rows)\n\nNote that you have to put some actual rows in the table; if it is\ncompletely empty, you'll not get a representative result.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 12 Oct 2019 16:46:44 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": "Yep, you're right, Andrew, adding a couple rows made it do the index \nonly scan. I reckon I got misled by turning off sequential scans, \nthinking that actual rows were not important anymore. Overly simplistic \nreasonings can get one into trouble, lol.\n\nRegards,\nMichael Vitale\n\n\nAndrew Gierth wrote on 10/12/2019 11:46 AM:\n>>>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n> MichaelDBA> Nope, vacuumed it and still got the bitmap index scans.\n>\n> Let's see your explains. Here's mine:\n>\n> # set enable_seqscan=false; -- because I only have a few rows\n> SET\n> # insert into friend values (1,2),(2,5);\n> INSERT 0 2\n> # vacuum analyze friend;\n> VACUUM\n> # explain analyze SELECT user1_id FROM friend WHERE user2_id=2 UNION ALL select user2_id FROM friend WHERE user1_id=2;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n> Append (cost=0.13..8.32 rows=2 width=4) (actual time=0.009..0.014 rows=2 loops=1)\n> -> Index Only Scan using friend_user2_id_user1_id_idx on friend (cost=0.13..4.15 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n> Index Cond: (user2_id = 2)\n> Heap Fetches: 0\n> -> Index Only Scan using friend_pkey on friend friend_1 (cost=0.13..4.15 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1)\n> Index Cond: (user1_id = 2)\n> Heap Fetches: 0\n> Planning Time: 0.271 ms\n> Execution Time: 0.045 ms\n> (9 rows)\n>\n> Note that you have to put some actual rows in the table; if it is\n> completely empty, you'll not get a representative result.\n>\n\n\n\n",
"msg_date": "Sat, 12 Oct 2019 11:54:07 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": ">>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n\n MichaelDBA> Yep, you're right, Andrew, adding a couple rows made it do\n MichaelDBA> the index only scan. I reckon I got misled by turning off\n MichaelDBA> sequential scans, thinking that actual rows were not\n MichaelDBA> important anymore. Overly simplistic reasonings can get\n MichaelDBA> one into trouble, lol.\n\nWe do some odd stuff with the statistics estimates for completely empty\ntables because (a) it's not common in practice for a table to be always\nempty (i.e. the emptiness is usually transient) and (b) if you take the\nemptiness of a table at face value, you end up generating insanely bad\nplans for certain FK check queries that may not get replanned quickly\nenough to mitigate the performance impact.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 12 Oct 2019 17:58:22 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a two column OR check"
},
{
"msg_contents": "On Sat, 12 Oct 2019 at 17:16, Andrew Gierth <[email protected]>\nwrote:\n\n> >>>>> \"Ivan\" == Ivan Voras <[email protected]> writes:\n>\n\n\n> Ivan> SELECT user1_id,user2_id FROM friend WHERE user1_id=42 OR\n> user2_id=42;\n>\n> To get friends of user 42:\n>\n> SELECT user1_id FROM friend WHERE user2_id=42\n> UNION ALL\n> SELECT user2_id FROM friend WHERE user1_id=42;\n>\n> assuming you create the (user2_id,user1_id) index, this should get you\n> an Append of two index-only scans. We can use UNION ALL here rather than\n> UNION because the table constraints ensure there are no duplicates.\n>\n\nThanks! That's a more elegant solution for my query than what I had in mind!\n\nOn Sat, 12 Oct 2019 at 17:16, Andrew Gierth <[email protected]> wrote:>>>>> \"Ivan\" == Ivan Voras <[email protected]> writes:\n \n Ivan> SELECT user1_id,user2_id FROM friend WHERE user1_id=42 OR user2_id=42;\n\nTo get friends of user 42:\n\nSELECT user1_id FROM friend WHERE user2_id=42\nUNION ALL\nSELECT user2_id FROM friend WHERE user1_id=42;\n\nassuming you create the (user2_id,user1_id) index, this should get you\nan Append of two index-only scans. We can use UNION ALL here rather than\nUNION because the table constraints ensure there are no duplicates. Thanks! That's a more elegant solution for my query than what I had in mind!",
"msg_date": "Sun, 13 Oct 2019 15:29:19 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimising a two column OR check"
},
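A minimal sketch of the schema this thread appears to assume, so the plans above can be reproduced. The integer column types and the CHECK constraint are guesses based on the discussion (user1_id is said to be constrained to be less than user2_id); only the index names are taken from the EXPLAIN output shown in the thread.

CREATE TABLE friend (
    user1_id integer NOT NULL,
    user2_id integer NOT NULL,
    PRIMARY KEY (user1_id, user2_id),
    CHECK (user1_id < user2_id)  -- assumed: the table constraint that rules out duplicate pairs
);

-- the extra index Andrew's UNION ALL plan uses for the user2_id lookup
CREATE INDEX friend_user2_id_user1_id_idx ON friend (user2_id, user1_id);

-- keep the visibility map current so the index-only scans stay free of heap fetches
VACUUM ANALYZE friend;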
{
"msg_contents": "On Sat, 12 Oct 2019 at 17:46, Andrew Gierth <[email protected]>\nwrote:\n\n> >>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n>\n> MichaelDBA> Nope, vacuumed it and still got the bitmap index scans.\n>\n> Let's see your explains. Here's mine:\n>\n> # set enable_seqscan=false; -- because I only have a few rows\n> SET\n> # insert into friend values (1,2),(2,5);\n> INSERT 0 2\n> # vacuum analyze friend;\n> VACUUM\n> # explain analyze SELECT user1_id FROM friend WHERE user2_id=2 UNION ALL\n> select user2_id FROM friend WHERE user1_id=2;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n> Append (cost=0.13..8.32 rows=2 width=4) (actual time=0.009..0.014 rows=2\n> loops=1)\n> -> Index Only Scan using friend_user2_id_user1_id_idx on friend\n> (cost=0.13..4.15 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n> Index Cond: (user2_id = 2)\n> Heap Fetches: 0\n> -> Index Only Scan using friend_pkey on friend friend_1\n> (cost=0.13..4.15 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1)\n> Index Cond: (user1_id = 2)\n> Heap Fetches: 0\n> Planning Time: 0.271 ms\n> Execution Time: 0.045 ms\n> (9 rows)\n>\n> Note that you have to put some actual rows in the table; if it is\n> completely empty, you'll not get a representative result.\n>\n\nConfirming what's been said - the whole thing works fine on 10. I can't get\nindex only scans on 9.6, but that's a dev machine anyway.\n\nNow if only hash indexes supported multiple column, that'd probably result\nin all my data being returned from a single read of a hash bucket, but\nthat's going into microoptimisation territory :)\n\nThanks!\n\nOn Sat, 12 Oct 2019 at 17:46, Andrew Gierth <[email protected]> wrote:>>>>> \"MichaelDBA\" == MichaelDBA <[email protected]> writes:\n\n MichaelDBA> Nope, vacuumed it and still got the bitmap index scans.\n\nLet's see your explains. Here's mine:\n\n# set enable_seqscan=false; -- because I only have a few rows\nSET\n# insert into friend values (1,2),(2,5);\nINSERT 0 2\n# vacuum analyze friend;\nVACUUM\n# explain analyze SELECT user1_id FROM friend WHERE user2_id=2 UNION ALL select user2_id FROM friend WHERE user1_id=2;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=0.13..8.32 rows=2 width=4) (actual time=0.009..0.014 rows=2 loops=1)\n -> Index Only Scan using friend_user2_id_user1_id_idx on friend (cost=0.13..4.15 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n Index Cond: (user2_id = 2)\n Heap Fetches: 0\n -> Index Only Scan using friend_pkey on friend friend_1 (cost=0.13..4.15 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1)\n Index Cond: (user1_id = 2)\n Heap Fetches: 0\n Planning Time: 0.271 ms\n Execution Time: 0.045 ms\n(9 rows)\n\nNote that you have to put some actual rows in the table; if it is\ncompletely empty, you'll not get a representative result.Confirming what's been said - the whole thing works fine on 10. I can't get index only scans on 9.6, but that's a dev machine anyway.Now if only hash indexes supported multiple column, that'd probably result in all my data being returned from a single read of a hash bucket, but that's going into microoptimisation territory :)Thanks!",
"msg_date": "Sun, 13 Oct 2019 15:38:49 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimising a two column OR check"
}
] |
[
{
"msg_contents": "Dear I would like to share with you to see what you think about the\nstatistics of pg_stat_bgwriter\n\npostgres = # select * from pg_stat_bgwriter;\n checkpoints_timed | checkpoints_req | checkpoint_write_time |\ncheckpoint_sync_time | buffers_checkpoint | buffers_clean | maxwritten_clean\n| buffers_backend | buffers_\nbackend_fsync | buffers_alloc | stats_reset\n------------------- + ----------------- + ------------ ----------- +\n---------------------- + --------------- ----- + --------------- +\n------------------ + --------- -------- + ---------\n-------------- + --------------- + ------------------- ------------\n 338 | 6 | 247061792 | 89418 | 2939561 | 19872289 | 54876 |\n6015787 |\n 0 | 710682240 | 2019-10-06 19: 25: 30.688186-03\n(1 row)\n\npostgres = # show bgwriter_delay;\n bgwriter_delay\n----------------\n 200ms\n(1 row)\n\npostgres = # show bgwriter_lru_maxpages;\n bgwriter_lru_maxpages\n-----------------------\n 100\n(1 row)\n\npostgres = # show bgwriter_lru_multiplier;\n bgwriter_lru_multiplier\n-------------------------\n 2\n(1 row)\n\n\nDo you think it should increase bgwriter_lru_maxpages due to the value of\nmaxwritten_clean?\nDo you think it should increase bgwriter_lru_maxpages,\nbgwriter_lru_multiplier, and decrease bgwriter_delay due to the value of\nbuffers_backend compared to buffers_alloc?\nDo you think a modification is necessary?\nWhat values would you recommend?\nthank you\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Sun, 13 Oct 2019 18:27:35 -0700 (MST)",
"msg_from": "dangal <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_bgwriter"
},
{
"msg_contents": "On Sun, Oct 13, 2019 at 06:27:35PM -0700, dangal wrote:\n>Dear I would like to share with you to see what you think about the\n>statistics of pg_stat_bgwriter\n>\n>postgres = # select * from pg_stat_bgwriter;\n> checkpoints_timed | checkpoints_req | checkpoint_write_time |\n>checkpoint_sync_time | buffers_checkpoint | buffers_clean | maxwritten_clean\n>| buffers_backend | buffers_\n>backend_fsync | buffers_alloc | stats_reset\n>------------------- + ----------------- + ------------ ----------- +\n>---------------------- + --------------- ----- + --------------- +\n>------------------ + --------- -------- + ---------\n>-------------- + --------------- + ------------------- ------------\n> 338 | 6 | 247061792 | 89418 | 2939561 | 19872289 | 54876 |\n>6015787 |\n> 0 | 710682240 | 2019-10-06 19: 25: 30.688186-03\n>(1 row)\n>\n>postgres = # show bgwriter_delay;\n> bgwriter_delay\n>----------------\n> 200ms\n>(1 row)\n>\n>postgres = # show bgwriter_lru_maxpages;\n> bgwriter_lru_maxpages\n>-----------------------\n> 100\n>(1 row)\n>\n>postgres = # show bgwriter_lru_multiplier;\n> bgwriter_lru_multiplier\n>-------------------------\n> 2\n>(1 row)\n>\n>\n>Do you think it should increase bgwriter_lru_maxpages due to the value of\n>maxwritten_clean?\n>Do you think it should increase bgwriter_lru_maxpages,\n>bgwriter_lru_multiplier, and decrease bgwriter_delay due to the value of\n>buffers_backend compared to buffers_alloc?\n>Do you think a modification is necessary?\n>What values would you recommend?\n\nbuffers_alloc does not really matter, here, IMO. You need to compare\nbuffers_checkpoint, buffers_backend and buffers_clean, and ideally you'd\nhave (checkpoints > clean > backend). In your case it's already\n\n buffers_checkpoint | buffers_clean | buffers_backend\n 2939561 | 19872289 | 6015787\n\nYou could make bgwriter even more aggressive, but that's unlikely to be\na huge improvement. You should investigate why buffers_checkpoint is so\nlow. This is usually a sign of shared_buffers being too small for the\nactive set, so perhaps you need to increase shared_buffers, or see which\nqueries are causing this and optimize them.\n\nNote: FWIW, a single snapshot of pg_stats* may be misleading, because\nit's cumulative, so it's not clear how accurately it reflects current\nstate. Next time take two snapshots and subtract them.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 14 Oct 2019 20:18:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter"
},
{
"msg_contents": "Thanks a lot, always helping\nI attached a snapshot that I take every 12 hours of the pg_stat_bgwriter\n\nselect now,buffers_checkpoint,buffers_clean, buffers_backend from\npg_stat_bgwriter_snapshot;\n now | buffers_checkpoint | buffers_clean |\nbuffers_backend \n-------------------------------+--------------------+---------------+-----------------\n 2019-10-07 12:00:01.312067-03 | 288343 | 1182944 | \n520101\n 2019-10-08 00:00:02.034129-03 | 475323 | 3890772 | \n975454\n 2019-10-08 12:00:01.500756-03 | 616154 | 4774924 | \n1205261\n 2019-10-09 00:00:01.520329-03 | 784840 | 7377771 | \n1601278\n 2019-10-09 12:00:01.388113-03 | 1149560 | 8395288 | \n2456249\n 2019-10-10 00:00:01.841054-03 | 1335747 | 11023014 | \n2824740\n 2019-10-10 12:00:01.354555-03 | 1486963 | 11919462 | \n2995211\n 2019-10-11 00:00:01.519538-03 | 1649066 | 14400593 | \n3360700\n 2019-10-11 12:00:01.468203-03 | 1979781 | 15332086 | \n4167663\n 2019-10-12 00:00:01.343714-03 | 2161116 | 17791871 | \n4525957\n 2019-10-12 12:00:01.991429-03 | 2323194 | 18324723 | \n5139418\n 2019-10-13 00:00:01.251191-03 | 2453939 | 19059149 | \n5306894\n 2019-10-13 12:00:01.677379-03 | 2782606 | 19391676 | \n5878981\n 2019-10-14 00:00:01.824249-03 | 2966021 | 19915346 | \n6040316\n 2019-10-14 12:00:01.869126-03 | 3117659 | 20675018 | \n6184214\n \nI tell you that we have a server with 24 gb of ram and 6gb of shared_buffers\nWhen you tell me that maybe I am running too low of shared_buffers, the\nquery I run to see what is happening is the following:\nThe first 10 are insert, update and an autovaccum\n\nselect calls, shared_blks_hit, shared_blks_read, shared_blks_dirtied\n from pg_stat_statements\n where shared_blks_dirtied> 0 order by shared_blks_dirtied desc\n limit 10\n \n\n calls | shared_blks_hit | shared_blks_read | shared_blks_dirtied \n-----------+-----------------+------------------+---------------------\n 41526844 | 1524091324 | 74477743 | 40568348\n 22707516 | 1317743612 | 33153916 | 28106071\n 517309 | 539285911 | 24583841 | 24408950\n 23 | 23135504 | 187638126 | 15301103\n 11287105 | 383864219 | 18369813 | 13879956\n 2247661 | 275357344 | 9252598 | 6084363\n 13070036 | 244904154 | 5557321 | 5871613\n 54158879 | 324425993 | 5054200 | 4676472\n 24955177 | 125421833 | 5775788 | 4517367\n 142807488 | 14401507751 | 81965894 | 2661358\n(10 filas)\n\nAnother query\n\nSELECT pg_size_pretty(count(*) * 8192) as buffered,\n round(100.0 * count(*) /\n (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')\n ::integer,\n 1) AS buffers_percent,\n round(100.0 * count(*) * 8192 / pg_table_size(c.oid), 1) AS\npercent_of_relation\n FROM pg_class c\n INNER JOIN pg_buffercache b\n ON b.relfilenode = c.relfilenode\n INNER JOIN pg_database d\n ON (b.reldatabase = d.oid AND d.datname = current_database())\n GROUP BY c.oid, c.relname\n ORDER BY 3 DESC LIMIT 10;\n\nbuffered\tbuffers_percent\t percent_of_relation\n3938 MB; \t64.1;\t\t\t53.2\n479 MB;\t\t7.8;\t\t\t\t21.3\n261 MB;\t\t4.3;\t\t\t\t99.3\n163 MB;\t\t2.6;\t\t\t\t0.1\n153 MB;\t\t2.5;\t\t\t\t6.7\n87 MB;\t\t1.4;\t\t\t\t1.2\n82 MB;\t\t1.3;\t\t\t\t81.6\n65 MB;\t\t1.1;\t\t\t\t100.0\n64 MB;\t\t1.0;\t\t\t\t0.1\n53 MB;\t\t0.9;\t\t\t\t73.5\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Mon, 14 Oct 2019 13:12:43 -0700 (MST)",
"msg_from": "dangal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter"
},
{
"msg_contents": "On Mon, Oct 14, 2019 at 08:18:47PM +0200, Tomas Vondra wrote:\n> Note: FWIW, a single snapshot of pg_stats* may be misleading, because\n> it's cumulative, so it's not clear how accurately it reflects current\n> state. Next time take two snapshots and subtract them.\n\nFor bonus points, capture it with timestamp and make RRD graphs.\n\nI took me awhile to get around to following this advice, but now I have 12+\nmonths of history at 5 minute granularity across all our customers, and I've\nused my own implementation to track down inefficient queries being run\nperiodically from cron, and notice other radical changes in writes/reads\n\nI recall seeing that the pgCluu project does this.\nhttp://pgcluu.darold.net/\n\nJustin\n\n\n",
"msg_date": "Mon, 14 Oct 2019 17:16:02 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter"
},
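A minimal sketch of the kind of timestamped capture Justin describes, reusing the pg_stat_bgwriter_snapshot table name that appears elsewhere in this thread; the table definition and the 5-minute interval are assumptions, not something prescribed by the posters.

-- one-time setup: copy the view's column layout and add a capture timestamp
CREATE TABLE pg_stat_bgwriter_snapshot AS
SELECT now() AS now, * FROM pg_stat_bgwriter
WITH NO DATA;

-- run from cron (for example every 5 minutes) to accumulate history
INSERT INTO pg_stat_bgwriter_snapshot
SELECT now(), * FROM pg_stat_bgwriter;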
{
"msg_contents": "On Mon, Oct 14, 2019 at 01:12:43PM -0700, dangal wrote:\n>Thanks a lot, always helping\n>I attached a snapshot that I take every 12 hours of the pg_stat_bgwriter\n>\n>select now,buffers_checkpoint,buffers_clean, buffers_backend from\n>pg_stat_bgwriter_snapshot;\n\nPlease show us the deltas, i.e. subtract the preceding value (using a\nwindow function, or something). FWIW 12 hours may be a bit too coarse,\nbut it's better than nothing.\n\n> now | buffers_checkpoint | buffers_clean |\n>buffers_backend\n>-------------------------------+--------------------+---------------+-----------------\n> 2019-10-07 12:00:01.312067-03 | 288343 | 1182944 |\n>520101\n> 2019-10-08 00:00:02.034129-03 | 475323 | 3890772 |\n>975454\n> 2019-10-08 12:00:01.500756-03 | 616154 | 4774924 |\n>1205261\n> 2019-10-09 00:00:01.520329-03 | 784840 | 7377771 |\n>1601278\n> 2019-10-09 12:00:01.388113-03 | 1149560 | 8395288 |\n>2456249\n> 2019-10-10 00:00:01.841054-03 | 1335747 | 11023014 |\n>2824740\n> 2019-10-10 12:00:01.354555-03 | 1486963 | 11919462 |\n>2995211\n> 2019-10-11 00:00:01.519538-03 | 1649066 | 14400593 |\n>3360700\n> 2019-10-11 12:00:01.468203-03 | 1979781 | 15332086 |\n>4167663\n> 2019-10-12 00:00:01.343714-03 | 2161116 | 17791871 |\n>4525957\n> 2019-10-12 12:00:01.991429-03 | 2323194 | 18324723 |\n>5139418\n> 2019-10-13 00:00:01.251191-03 | 2453939 | 19059149 |\n>5306894\n> 2019-10-13 12:00:01.677379-03 | 2782606 | 19391676 |\n>5878981\n> 2019-10-14 00:00:01.824249-03 | 2966021 | 19915346 |\n>6040316\n> 2019-10-14 12:00:01.869126-03 | 3117659 | 20675018 |\n>6184214\n>\n>I tell you that we have a server with 24 gb of ram and 6gb of shared_buffers\n>When you tell me that maybe I am running too low of shared_buffers, the\n>query I run to see what is happening is the following:\n\nThe question is how that compared to database size, and size of the\nactive set (fraction of the database accessed by the application /\nqueries).\n\nI suggest you also track & compute shared_buffers cache hit ratio.\n\n>The first 10 are insert, update and an autovaccum\n>\n>select calls, shared_blks_hit, shared_blks_read, shared_blks_dirtied\n>� from pg_stat_statements\n>� where shared_blks_dirtied> 0 order by shared_blks_dirtied desc\n>� limit 10\n>\n>\n> calls | shared_blks_hit | shared_blks_read | shared_blks_dirtied\n>-----------+-----------------+------------------+---------------------\n> 41526844 | 1524091324 | 74477743 | 40568348\n> 22707516 | 1317743612 | 33153916 | 28106071\n> 517309 | 539285911 | 24583841 | 24408950\n> 23 | 23135504 | 187638126 | 15301103\n> 11287105 | 383864219 | 18369813 | 13879956\n> 2247661 | 275357344 | 9252598 | 6084363\n> 13070036 | 244904154 | 5557321 | 5871613\n> 54158879 | 324425993 | 5054200 | 4676472\n> 24955177 | 125421833 | 5775788 | 4517367\n> 142807488 | 14401507751 | 81965894 | 2661358\n>(10 filas)\n>\n\nUnfortunately, this has the same issue as the data you shared in the\nfirst message - it's a snapshot with data accumulated since the database\nwas created. It's unclear whether the workload changed over time etc.\nBut I guess you can use this to identify queries producing the most\ndirty buffers and maybe see if you can optimize that somehow (e.g. 
by\nremoving unnecessary indexes or something).\n\n>Another query\n>\n>SELECT pg_size_pretty(count(*) * 8192) as buffered,\n> round(100.0 * count(*) /\n> (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')\n> ::integer,\n> 1) AS buffers_percent,\n> round(100.0 * count(*) * 8192 / pg_table_size(c.oid), 1) AS\n>percent_of_relation\n> FROM pg_class c\n> INNER JOIN pg_buffercache b\n> ON b.relfilenode = c.relfilenode\n> INNER JOIN pg_database d\n> ON (b.reldatabase = d.oid AND d.datname = current_database())\n> GROUP BY c.oid, c.relname\n> ORDER BY 3 DESC LIMIT 10;\n>\n>buffered\tbuffers_percent\t percent_of_relation\n>3938 MB; \t64.1;\t\t\t53.2\n>479 MB;\t\t7.8;\t\t\t\t21.3\n>261 MB;\t\t4.3;\t\t\t\t99.3\n>163 MB;\t\t2.6;\t\t\t\t0.1\n>153 MB;\t\t2.5;\t\t\t\t6.7\n>87 MB;\t\t1.4;\t\t\t\t1.2\n>82 MB;\t\t1.3;\t\t\t\t81.6\n>65 MB;\t\t1.1;\t\t\t\t100.0\n>64 MB;\t\t1.0;\t\t\t\t0.1\n>53 MB;\t\t0.9;\t\t\t\t73.5\n>\n\nIt's generally a good idea to explain what a query is supposed to do,\ninstead of just leaving the users to figure that out. In any case, this\nis a snapshot at a particular moment in time, it's unclear how how that\ncorrelates to the activity. The fact that you've removed names of tables\nand even queries is does not really help either.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 15 Oct 2019 02:39:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter"
},
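A sketch of the delta calculation Tomas asks for, assuming the pg_stat_bgwriter_snapshot table and columns dangal showed above: lag() subtracts each row's predecessor, so the output shows per-interval activity instead of cumulative counters.

SELECT now,
       buffers_checkpoint - lag(buffers_checkpoint) OVER w AS checkpoint_delta,
       buffers_clean      - lag(buffers_clean)      OVER w AS clean_delta,
       buffers_backend    - lag(buffers_backend)    OVER w AS backend_delta
FROM pg_stat_bgwriter_snapshot
WINDOW w AS (ORDER BY now)
ORDER BY now;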
{
"msg_contents": "Hi Tomas, restart the statistics and take 24-hour samples to see if you can\nhelp me\n24 gb server memory 6 gb sharred buffers \n\n# select now,\n# pg_size_pretty(buffers_checkpoint*8192)AS buffers_checkpoint,\n# pg_size_pretty(buffers_clean*8192)AS buffers_clean,\n# pg_size_pretty(buffers_backend*8192)AS buffers_backend,\n#\n(buffers_checkpoint*100)/(buffers_checkpoint+buffers_clean+buffers_backend)AS\nbuffers_checkpoint_pct,\n# (buffers_clean*100)/(buffers_checkpoint+buffers_clean+buffers_backend)AS\nbuffers_clean_pct,\n# (buffers_backend*100)/(buffers_checkpoint+buffers_clean+buffers_backend)AS\nbuffers_backend_pct,\n# pg_size_pretty(buffers_checkpoint * 8192 /(checkpoints_timed +\ncheckpoints_req)) AS avg_checkpoint_write,\n# pg_size_pretty(8192 *(buffers_checkpoint + buffers_clean +\nbuffers_backend)) AS total_write\n# from pg_stat_bgwriter_snapshot\n# ;\n now | buffers_checkpoint | buffers_clean |\nbuffers_backend | buffers_checkpoint_pct | buffers_clean_pct |\nbuffers_backend_pct | avg_checkpoint_write | total_write \n-------------------------------+--------------------+---------------+-----------------+------------------------+-------------------+---------------------+----------------------+-------------\n 2019-10-15 15:00:02.070105-03 | 33 MB | 1190 MB | 144 MB \n| 2 | 87 | 10 | 33 MB \n| 1367 MB\n 2019-10-15 16:00:01.477785-03 | 109 MB | 3543 MB | 393 MB \n| 2 | 87 | 9 | 36 MB \n| 4045 MB\n 2019-10-15 17:00:01.960162-03 | 179 MB | 6031 MB | 703 MB \n| 2 | 87 | 10 | 36 MB \n| 6913 MB\n 2019-10-15 18:00:01.558404-03 | 252 MB | 8363 MB | 1000\nMB | 2 | 86 | \n10 | 36 MB | 9615 MB\n 2019-10-15 19:00:01.170866-03 | 327 MB | 10019 MB | 1232\nMB | 2 | 86 | \n10 | 36 MB | 11 GB\n 2019-10-15 20:00:01.397473-03 | 417 MB | 11 GB | 1407\nMB | 3 | 85 | \n10 | 38 MB | 13 GB\n 2019-10-15 21:00:01.211047-03 | 522 MB | 12 GB | 1528\nMB | 3 | 85 | \n11 | 40 MB | 14 GB\n 2019-10-15 22:00:01.164853-03 | 658 MB | 12 GB | 1691\nMB | 4 | 83 | \n11 | 44 MB | 14 GB\n 2019-10-15 23:00:01.116564-03 | 782 MB | 13 GB | 1797\nMB | 5 | 83 | \n11 | 46 MB | 15 GB\n 2019-10-16 00:00:01.19203-03 | 887 MB | 13 GB | 2016\nMB | 5 | 82 | \n12 | 47 MB | 16 GB\n 2019-10-16 01:00:01.329851-03 | 1003 MB | 14 GB | 2104\nMB | 5 | 81 | \n12 | 48 MB | 17 GB\n 2019-10-16 02:00:01.518606-03 | 1114 MB | 14 GB | 2222\nMB | 6 | 81 | \n12 | 48 MB | 17 GB\n 2019-10-16 03:00:01.673498-03 | 1227 MB | 14 GB | 2314\nMB | 6 | 80 | \n12 | 49 MB | 18 GB\n 2019-10-16 04:00:01.936604-03 | 1354 MB | 15 GB | 2468\nMB | 7 | 79 | \n12 | 50 MB | 19 GB\n 2019-10-16 05:00:01.854888-03 | 1465 MB | 15 GB | 2518\nMB | 7 | 79 | \n13 | 51 MB | 19 GB\n 2019-10-16 06:00:01.804182-03 | 1585 MB | 15 GB | 2581\nMB | 8 | 78 | \n13 | 51 MB | 19 GB\n 2019-10-16 07:00:01.889345-03 | 1677 MB | 15 GB | 2649\nMB | 8 | 78 | \n13 | 51 MB | 20 GB\n 2019-10-16 08:00:01.248247-03 | 1756 MB | 16 GB | 2707\nMB | 8 | 78 | \n13 | 50 MB | 20 GB\n 2019-10-16 09:00:01.258408-03 | 1826 MB | 16 GB | 2763\nMB | 8 | 78 | \n13 | 49 MB | 21 GB\n 2019-10-16 10:00:01.418323-03 | 1881 MB | 17 GB | 2872\nMB | 8 | 78 | \n13 | 48 MB | 21 GB\n 2019-10-16 11:00:02.077084-03 | 1951 MB | 18 GB | 3140\nMB | 8 | 78 | \n13 | 48 MB | 23 GB\n 2019-10-16 12:00:01.83188-03 | 2026 MB | 20 GB | 3322\nMB | 7 | 79 | \n12 | 47 MB | 25 GB\n 2019-10-16 13:00:01.628877-03 | 2109 MB | 22 GB | 3638\nMB | 7 | 79 | \n12 | 47 MB | 28 GB\n 2019-10-16 14:00:02.351529-03 | 2179 MB | 24 GB | 3934\nMB | 6 | 80 | \n12 | 46 MB | 30 GB\n(24 filas)\n\n# SELECT \n# sum(heap_blks_read) as heap_read,\n# 
sum(heap_blks_hit) as heap_hit,\n# sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio\n# FROM \n# pg_statio_user_tables;\n heap_read | heap_hit | ratio \n-------------+---------------+------------------------\n 80203672248 | 4689023850651 | 0.98318308953328194824\n(1 fila)\n\n# SELECT \n# sum(idx_blks_read) as idx_read,\n# sum(idx_blks_hit) as idx_hit,\n# (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) as ratio\n# FROM \n# pg_statio_user_indexes;\n idx_read | idx_hit | ratio \n------------+--------------+------------------------\n 3307622770 | 653969845259 | 0.99494223962468783241\n(1 fila)\n\n =# -- perform a \"select pg_stat_reset();\" when you want to reset counter\nstatistics\n =# with \n -# all_tables as\n -# (\n (# SELECT *\n (# FROM (\n (# SELECT 'all'::text as table_name, \n (# sum( (coalesce(heap_blks_read,0) + coalesce(idx_blks_read,0) +\ncoalesce(toast_blks_read,0) + coalesce(tidx_blks_read,0)) ) as from_disk, \n (# sum( (coalesce(heap_blks_hit,0) + coalesce(idx_blks_hit,0) +\ncoalesce(toast_blks_hit,0) + coalesce(tidx_blks_hit,0)) ) as from_cache \n (# FROM pg_statio_all_tables --> change to pg_statio_USER_tables if\nyou want to check only user tables (excluding postgres's own tables)\n (# ) a\n (# WHERE (from_disk + from_cache) > 0 -- discard tables without hits\n (# ),\n -# tables as \n -# (\n (# SELECT *\n (# FROM (\n (# SELECT relname as table_name, \n (# ( (coalesce(heap_blks_read,0) + coalesce(idx_blks_read,0) +\ncoalesce(toast_blks_read,0) + coalesce(tidx_blks_read,0)) ) as from_disk, \n (# ( (coalesce(heap_blks_hit,0) + coalesce(idx_blks_hit,0) +\ncoalesce(toast_blks_hit,0) + coalesce(tidx_blks_hit,0)) ) as from_cache \n (# FROM pg_statio_all_tables --> change to pg_statio_USER_tables if\nyou want to check only user tables (excluding postgres's own tables)\n (# ) a\n (# WHERE (from_disk + from_cache) > 0 -- discard tables without hits\n (# )\n -# SELECT table_name as \"table name\",\n -# from_disk as \"disk hits\",\n -# round((from_disk::numeric / (from_disk +\nfrom_cache)::numeric)*100.0,2) as \"% disk hits\",\n -# round((from_cache::numeric / (from_disk +\nfrom_cache)::numeric)*100.0,2) as \"% cache hits\",\n -# (from_disk + from_cache) as \"total hits\"\n -# FROM (SELECT * FROM all_tables UNION ALL SELECT * FROM tables) a\n -# ORDER BY (case when table_name = 'all' then 0 else 1 end), from_disk\ndesc\n -# ;\n table name | disk hits | % disk hits | %\ncache hits | total hits \n---------------------------------------------+-------------+-------------+--------------+---------------\n all | 88000266877 | 1.60 | \n98.40 | 5489558628019\n b_e_i\t\t\t | 38269990257 | 2.88 | \n97.12 | 1329542407426\n n_c_r_o\t\t\t\t\t\t\t | 32839222402 | 1.44 | 98.56 |\n2278801314997\n b_e_i_a | 6372214550 | 4.76 | \n95.24 | 133916822424\n d_d | 2101245550 | 6.58 | \n93.42 | 31936220932\n pg_toast_550140 | 2055940284 | 32.63 | \n67.37 | 6300424824\n p_i\t\t | 1421254520 | 0.36 | \n99.64 | 393348432350\n n_c_e_s\t\t\t\t\t\t | 1164509701 | 27.85 | 72.15 | \n4180714300\n s_b_c_a\t\t\t | 1116814156 | 0.19 | \n99.81 | 595617511928\n b_e_i_l\t\t | 624945696 | 41.13 | \n58.87 | 1519594743\n p_e_i | 525580057 | 5.27 | \n94.73 | 9968414493\n \n =# select\n -# s.relname,\n -# pg_size_pretty(pg_relation_size(relid)),\n -# coalesce(n_tup_ins,0) + 2 * coalesce(n_tup_upd,0) -\n -# coalesce(n_tup_hot_upd,0) + coalesce(n_tup_del,0) AS total_writes,\n -# (coalesce(n_tup_hot_upd,0)::float * 100 / (case when n_tup_upd > 0\n (# then n_tup_upd else 1 end)::float)::numeric(10,2) 
AS hot_rate,\n -# (select v[1] FROM regexp_matches(reloptions::text,E'fillfactor=(\\\\d+)')\nas\n (# r(v) limit 1) AS fillfactor\n -# from pg_stat_all_tables s\n -# join pg_class c ON c.oid=relid\n -# order by total_writes desc limit 50;\n relname | pg_size_pretty | total_writes | hot_rate\n| fillfactor \n----------------------------------+----------------+--------------+----------+------------\n pg_toast_550140 | 1637 GB | 820414234 | 0.00\n| \n b_e_i_a\t\t\t\t | 168 GB | 454229502 | 0.00 | \n s_b_c_a \t\t\t | 26 MB | 419253909 | 96.94 | \n b_e_i_a_l \t\t\t\t | 71 GB | 305584644 | 0.00 | \n s_b_c_a_l \t\t\t\t | 965 MB | 203361185 | 0.00 | \n b_e_i \t\t\t | 7452 MB | 194861425 | 62.88 | \n b_e_i_l\t\t\t | 57 GB | 144929408 | 0.00 | \n o_i_n\t\t\t | 3344 kB | 98435081 | 99.38 | \n r_h\t\t | 1140 MB | 33209351 | 0.11 | \n b_e\t | 5808 kB | 29608085 | 99.65 | \n\n =# select\ncalls,shared_blks_hit,shared_blks_read,shared_blks_dirtied,query--,\nshared_blks_dirtied\n -# from pg_stat_statements\n -# where shared_blks_dirtied > 0 order by shared_blks_dirtied desc\n -# limit 10;\n calls | shared_blks_hit | shared_blks_read | shared_blks_dirtied | \n \nquery \n 43374691 | 1592513886 | 77060029 | 42096885 |\nINSERT INTO b_e_i_a\n 23762881 | 1367338973 | 34351016 | 29131240 |\nUPDATE b_e_i \n 541120 | 564550710 | 25726748 | 25551138 |\nINSERT INTO d_d\n 23 | 23135504 | 187638126 | 15301103 |\nVACUUM ANALYZE VERBOSE b_e_i;\n 11804481 | 401558460 | 19124307 | 14492182 |\nUPDATE b_e_i_a \n 2352159 | 287732134 | 9462460 | 6250734 |\nINSERT INTO b_e_i\n 13701688 | 256215340 | 5803881 | 6142119 |\nINSERT into I_C_M\n 56582737 | 338943996 | 5272879 | 4882863 |\nINSERT INTO b_e_i_a_l\n 26115040 | 131274217 | 6016404 | 4712060 |\nINSERT INTO b_e_i_l\n \n =# SELECT oid::REGCLASS::TEXT AS table_name,\n -# pg_size_pretty(\n (# pg_total_relation_size(oid)\n (# ) AS total_size\n -# FROM pg_class\n -# WHERE relkind = 'r'\n -# AND relpages > 0\n -# ORDER BY pg_total_relation_size(oid) DESC\n -# LIMIT 20;;\n table_name | total_size \n----------------------------------+------------\n d_d | 1656 GB\n b_e_i_a\t\t\t\t | 547 GB\n b_e_i_a_l\t\t\t\t\t | 107 GB\n b_e_i_l\t\t\t | 71 GB\n b_e_i\t\t\t | 66 GB\n n_c_e_s\t\t\t\t\t\t | 28 GB\n p_e_i\t\t\t | 7807 MB\n n_c_s\t\t\t\t\t | 7344 MB\n e_i_n\t\t\t | 5971 MB\n p_e_d_i\t\t\t | 3695 MB\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Wed, 16 Oct 2019 10:37:37 -0700 (MST)",
"msg_from": "dangal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter"
},
{
"msg_contents": "Excuse me, can you tell me how can I achieve this?\n\n\"The question is how that compared to database size, and size of the\nactive set (fraction of the database accessed by the application /\nqueries).\"\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Wed, 16 Oct 2019 12:13:36 -0700 (MST)",
"msg_from": "dangal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter"
},
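One possible way to get the numbers Tomas refers to, as a sketch rather than a definitive recipe: the first query compares the size of the current database with the configured shared_buffers, and the second uses the pg_buffercache extension (already used earlier in this thread) to show how much of the cache holds frequently reused pages. The true active set can only be approximated this way.

-- database size versus configured cache
SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size,
       current_setting('shared_buffers') AS shared_buffers;

-- distribution of buffer usage counts as a rough proxy for the active set
SELECT usagecount, count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS size
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;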
{
"msg_contents": "On Mon, Oct 14, 2019 at 1:25 PM dangal <[email protected]> wrote:\n\n> Do you think it should increase bgwriter_lru_maxpages due to the value of\n> maxwritten_clean?\n>\n\nI find the background writer to be pretty unimportant these days. If the\nkernel is freely accepting writes without blocking, the backends can\nprobably write their own buffers without it being a meaningful bottleneck.\nOn the other hand, if the kernel is constipated, no tweaking of the\nbackground writer parameters is going to insulate the backends from that\nfact. That said, I would increase bgwriter_lru_maxpages (or decrease\nbgwriter_delay) anyway. It probably won't make much difference, but if it\ndoes it is more likely to help than to hurt.\n\n\n> Do you think it should increase bgwriter_lru_maxpages,\n> bgwriter_lru_multiplier, and decrease bgwriter_delay due to the value of\n> buffers_backend compared to buffers_alloc?\n>\n\nI don't think that that comparison is meaningful, so wouldn't make changes\nbased on it.\n\n\n> Do you think a modification is necessary?\n> What values would you recommend?\n> thank you\n>\n\nIf you are experiencing a problem, this is probably not the right way to\ninvestigate it. If a particular query is slow, try EXPLAIN (ANALYZE,\nBUFFERS). If lots of user-facing things are slow, try sampling \"select\nwait_event, wait_event_type from pg_stat_activity where\nbackend_type='client backend';\"\n\nOn Mon, Oct 14, 2019 at 1:25 PM dangal <[email protected]> wrote:Do you think it should increase bgwriter_lru_maxpages due to the value of\nmaxwritten_clean?I find the background writer to be pretty unimportant these days. If the kernel is freely accepting writes without blocking, the backends can probably write their own buffers without it being a meaningful bottleneck. On the other hand, if the kernel is constipated, no tweaking of the background writer parameters is going to insulate the backends from that fact. That said, I would increase bgwriter_lru_maxpages (or decrease \n\nbgwriter_delay) anyway. It probably won't make much difference, but if it does it is more likely to help than to hurt. \nDo you think it should increase bgwriter_lru_maxpages,\nbgwriter_lru_multiplier, and decrease bgwriter_delay due to the value of\nbuffers_backend compared to buffers_alloc?I don't think that that comparison is meaningful, so wouldn't make changes based on it. \nDo you think a modification is necessary?\nWhat values would you recommend?\nthank youIf you are experiencing a problem, this is probably not the right way to investigate it. If a particular query is slow, try EXPLAIN (ANALYZE, BUFFERS). If lots of user-facing things are slow, try sampling \"select wait_event, wait_event_type from pg_stat_activity where backend_type='client backend';\"",
"msg_date": "Thu, 17 Oct 2019 19:47:10 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter"
},
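If the bgwriter settings are changed along the lines Jeff suggests, they can be applied without a restart, since both parameters are reloadable. The concrete values below are placeholders chosen only to illustrate the mechanism, not recommendations from the thread.

ALTER SYSTEM SET bgwriter_lru_maxpages = 400;  -- placeholder value
ALTER SYSTEM SET bgwriter_delay = '100ms';     -- placeholder value
SELECT pg_reload_conf();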
{
"msg_contents": "thank you very much jeff I'll see with the team that manages the operating\nsystem to see if they can help me with this data that you have given me\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Thu, 17 Oct 2019 17:50:44 -0700 (MST)",
"msg_from": "dangal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter"
},
{
"msg_contents": "thank you very much justin, i am seeing install the product you recommended\nme!\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Thu, 17 Oct 2019 17:55:40 -0700 (MST)",
"msg_from": "dangal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter"
}
] |
[
{
"msg_contents": "Hi folks -\n\nDoes anyone know if there's been a change in the way values for CTEs are \ndisplayed in query plans?\n\nI think that it used to be the case that, for keys that include the \nvalues of child nodes values (eg \"Shared Hit Blocks\", or \"Actual Total \nTime\"), CTE scans included the CTE itself, even if it wasn't included as \none of its children in the plan. If you didn't subtract the CTE scan, \nyou would see surprising things, like sort operations reading table \ndata, or the total time of the nodes in a single-threaded query plan \nadding up to significantly more than 100% of the total query time.\n\nNow (I think since v11, but I'm not sure), it looks like these values \nonly include the children listed in the plan. For example, I've seen CTE \nscans that have smaller times and buffers values than the CTE itself, \nwhich couldn't be true if the CTE was included in the scan.\n\nI'm much less sure, but I *think* the same is also true of other \nInitPlan nodes - for example, if a node includes the filter \"value > \n$1\", its time and buffers used to (but no longer does) include the total \nfor the InitPlan node which returned the value \"$1\".\n\nAm I way off base with this, or did this change happen, and if so, am I \nright in thinking that it was changed in v11?\n\nThanks in advance\n\nDave\n\n\n\n\n\n",
"msg_date": "Tue, 15 Oct 2019 12:28:19 +0100",
"msg_from": "David Conlin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Change in CTE treatment in query plans?"
},
{
"msg_contents": "David Conlin <[email protected]> writes:\n> Does anyone know if there's been a change in the way values for CTEs are \n> displayed in query plans?\n\nOffhand I don't recall any such changes, nor does a cursory look\nthrough explain.c find anything promising.\n\nIf you're concerned with a multiply-referenced CTE, one possibility\nfor funny results is that the blame for its execution cost could be\nspread across the multiple call sites. The same can happen with\ninitplans/subplans. But I'm just guessing; you didn't show any\nconcrete examples so it's hard to be definite.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Oct 2019 11:04:32 +0200",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change in CTE treatment in query plans?"
},
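A tiny illustration of the situation Tom describes, with big_table as a placeholder name: the CTE below is referenced from two call sites, so in EXPLAIN ANALYZE its one-time execution cost can show up divided between the two CTE Scan nodes rather than being charged in full to each of them.

WITH t AS (SELECT * FROM big_table)   -- big_table is a placeholder
SELECT count(*) FROM t
UNION ALL
SELECT count(*) FROM t;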
{
"msg_contents": "Hi Tom -\n\nThanks so much for getting back to me.\n\nI didn't realise that the costs of init/sub plans would be spread across \nthe call sites - I had (stupidly) assumed that each call site would \ninclude the full cost.\n\nHaving taken a couple of days to go back over the problems I was seeing, \nyou were absolutely right - it was all to do with multiple call sites - \nthe postgres version was just a red herring.\n\nThanks for your help & all the best,\n\nDave\n\nOn 17/10/2019 10:04, Tom Lane wrote:\n> David Conlin <[email protected]> writes:\n>> Does anyone know if there's been a change in the way values for CTEs are\n>> displayed in query plans?\n> Offhand I don't recall any such changes, nor does a cursory look\n> through explain.c find anything promising.\n>\n> If you're concerned with a multiply-referenced CTE, one possibility\n> for funny results is that the blame for its execution cost could be\n> spread across the multiple call sites. The same can happen with\n> initplans/subplans. But I'm just guessing; you didn't show any\n> concrete examples so it's hard to be definite.\n>\n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Oct 2019 14:22:01 +0100",
"msg_from": "David Conlin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change in CTE treatment in query plans?"
}
] |
[
{
"msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count \n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how set this prefetch attribute in following lines. Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n \nRegards\nTarkeshwar",
"msg_date": "Thu, 17 Oct 2019 11:16:29 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can you please tell us how set this prefetch attribute in following\n lines."
},
{
"msg_contents": "On Thu, 2019-10-17 at 11:16 +0000, M Tarkeshwar Rao wrote:\r\n> [EXTERNAL SOURCE]\r\n> \r\n> \r\n> \r\n> Hi all,\r\n> \r\n> How to fetch certain number of tuples from a postgres table.\r\n> \r\n> Same I am doing in oracle using following lines by setting prefetch attribute.\r\n> \r\n> For oracle\r\n> // Prepare query\r\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\r\n> // Get statement type\r\n> OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\r\n> // Set prefetch count \r\n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \r\n> // Execute query\r\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\r\n> \r\n> \r\n> For Postgres\r\n> \r\n> Can you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\r\n> \r\n> mySqlResultsPG = PQexec(connection, aSqlStatement);\r\n> if((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\r\n> if ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\r\n> {\r\n> myNumColumns = PQnfields(mySqlResultsPG);\r\n> myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\r\n> myCurrentRowNum = 0 ;\r\n> }\r\n> \r\n> \r\n> Regards\r\n> Tarkeshwar\r\n> \r\n\r\ndeclare a cursor and fetch\r\n\r\nhttps://books.google.com/books?id=Nc5ZT2X5mOcC&pg=PA405&lpg=PA405&dq=pqexec+fetch&source=bl&ots=8P8w5JemcL&sig=ACfU3U0POGGSP0tYTrs5oxykJdOeffaspA&hl=en&sa=X&ved=2ahUKEwjevbmA2KPlAhXukOAKHaBIBcoQ6AEwCnoECDEQAQ#v=onepage&q=pqexec%20fetch&f=false\r\n\r\n\r\n",
"msg_date": "Thu, 17 Oct 2019 16:18:42 +0000",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can you please tell us how set this prefetch attribute in\n following lines."
},
{
"msg_contents": "On Thu, 2019-10-17 at 11:16 +0000, M Tarkeshwar Rao wrote:\n> How to fetch certain number of tuples from a postgres table.\n> \n> Same I am doing in oracle using following lines by setting prefetch attribute.\n> \n> For oracle\n> // Prepare query\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n> // Get statement type\n> OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n> // Set prefetch count \n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \n> // Execute query\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n> \n> For Postgres\n> \n> Can you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n> \n> mySqlResultsPG = PQexec(connection, aSqlStatement);\n> \n> if((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\n> if ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n> {\n> myNumColumns = PQnfields(mySqlResultsPG);\n> myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n> myCurrentRowNum = 0 ;\n> }\n\nThe C API doesn't offer anything like Oracle prefetch to force prefetching of a certain\nnumber of result rows.\n\nIn the PostgreSQL code you show above, the whole result set will be fetched in one go\nand cached in client RAM, so in a way you have \"prefetch all\".\n\nThe alternative thet the C API gives you is PQsetSingleRowMode(), which, when called,\nwill return the result rows one by one, as they arrive from the server.\nThat disables prefetching.\n\nIf you want to prefetch only a certain number of rows, you can use the DECLARE and\nFETCH SQL statements to create a cursor in SQL and fetch it in batches.\n\nThis workaround has the down side that the current query shown in \"pg_stat_activity\"\nor \"pg_stat_statements\" is always something like \"FETCH 32\", and you are left to guess\nwhich statement actually caused the problem.\n\n\nIf you are willing to bypass the C API and directly speak the network protocol with\nthe server, you can do better. This is documented in\nhttps://www.postgresql.org/docs/current/protocol.html\n\nThe \"Execute\" ('E') message allows you to send an integer with the maximum number of\nrows to return (0 means everything), so that does exactly what you want.\n\nThe backend will send a \"PortalSuspended\" ('s') to indicate that there is more to come,\nand you keep sending \"Execute\" until you get a \"CommandComplete\" ('C').\n\nI you feel hacky you could write C API support for that...\n\n\nIf you use that or a cursor, PostgreSQL will know that you are executing a cursor\nand will plan its queries differently: it will assume that only \"cursor_tuple_fraction\"\n(default 0.1) of your result set is actually fetched and prefer fast startup plans.\nIf you don't want that, because you are fetching batches as fast as you can without\nlengthy intermediate client processing, you might want to set the parameter to 1.0.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 17 Oct 2019 19:05:57 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can you please tell us how set this prefetch attribute in\n following lines."
},
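A sketch of the DECLARE/FETCH batching Laurenz describes, with mycursor and mytable as placeholder names. From libpq, each of these statements would be sent with PQexec(), and the rows of each FETCH batch read with PQntuples()/PQgetvalue(), so only one batch is held in client memory at a time.

BEGIN;
DECLARE mycursor CURSOR FOR SELECT * FROM mytable;  -- mytable is a placeholder
FETCH 1000 FROM mycursor;  -- repeat the FETCH until it returns fewer rows than requested
FETCH 1000 FROM mycursor;
CLOSE mycursor;
COMMIT;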
{
"msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count \n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how set this prefetch attribute in following lines. Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n \nRegards\nTarkeshwar",
"msg_date": "Fri, 18 Oct 2019 03:43:38 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can you please tell us how set this prefetch attribute in following\n lines."
},
{
"msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count \n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how set this prefetch attribute in following lines. Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n \nRegards\nTarkeshwar",
"msg_date": "Fri, 18 Oct 2019 03:47:13 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Can you please tell us how set this prefetch attribute in\n following lines."
},
{
"msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count \n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how set this prefetch attribute in following lines. Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n \nRegards\nTarkeshwar",
"msg_date": "Fri, 18 Oct 2019 03:47:49 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can you please tell us how set this prefetch attribute in following\n lines."
},
{
"msg_contents": "On Fri, Oct 18, 2019 at 03:47:49AM +0000, M Tarkeshwar Rao wrote:\n> How to fetch certain number of tuples from a postgres table.\n> \n> Same I am doing in oracle using following lines by setting prefetch attribute.\n> \n> For oracle\n> // Prepare query\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n> // Get statement type\n> OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n> // Set prefetch count\n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n> // Execute query\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n> \n> For Postgres\n> Can you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n\nYes, PQexec reads everything at once into a buffer on the library.\nhttps://www.postgresql.org/docs/current/libpq-exec.html\n\nI think you want this:\nhttps://www.postgresql.org/docs/current/libpq-async.html\n|Another frequently-desired feature that can be obtained with PQsendQuery and PQgetResult is retrieving large query results a row at a time. This is discussed in Section 33.5.\nhttps://www.postgresql.org/docs/current/libpq-single-row-mode.html\n\nNote this does not naively send \"get one row\" requests to the server on each\ncall. Rather, I believe it behaves at a protocol layer exactly the same as\nPQexec(), but each library call returns only a single row. When it runs out of\nrows, it requests from the server another packet full of rows, which are saved\nfor future iterations.\n\nThe effect is constant memory use for arbitrarily large result set with same\nnumber of network roundtrips as PQexec(). You'd do something like:\n\nPQsendQuery(conn)\nPQsetSingleRowMode(conn)\nwhile(res = PQgetResult(conn)) {\n\t...\n\tPQclear(res)\n}\n\nJustin\n\n\n",
"msg_date": "Fri, 18 Oct 2019 11:15:02 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can you please tell us how set this prefetch attribute in\n following lines."
},
{
"msg_contents": "Thanks Thompson. Your inputs are very valuable and we successfully implemented it and results are very good. \r\n\r\nBut I am getting following error message. Can you please suggest why this is coming and what is the remedy for this.\r\n\r\nError Details\r\n-----------------\r\nFailed to execute the sql command close: \r\nmycursor_4047439616_1571970686004430275FATAL: terminating connection due to conflict with recovery\r\nDETAIL: User query might have needed to see row versions that must be removed.\r\nHINT: In a moment you should be able to reconnect to the database and repeat your command.\r\n\r\nRegards\r\nTarkeshwar\r\n\r\n-----Original Message-----\r\nFrom: Reid Thompson <[email protected]> \r\nSent: Thursday, October 17, 2019 9:49 PM\r\nTo: [email protected]\r\nCc: Reid Thompson <[email protected]>\r\nSubject: Re: Can you please tell us how set this prefetch attribute in following lines.\r\n\r\nOn Thu, 2019-10-17 at 11:16 +0000, M Tarkeshwar Rao wrote:\r\n> [EXTERNAL SOURCE]\r\n> \r\n> \r\n> \r\n> Hi all,\r\n> \r\n> How to fetch certain number of tuples from a postgres table.\r\n> \r\n> Same I am doing in oracle using following lines by setting prefetch attribute.\r\n> \r\n> For oracle\r\n> // Prepare query\r\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text \r\n> *)aSqlStatement, // Get statement type OCIAttrGet( (void \r\n> *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\r\n> // Set prefetch count \r\n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \r\n> // Execute query\r\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, \r\n> iters, 0, NULL, NULL, OCI_DEFAULT );\r\n> \r\n> \r\n> For Postgres\r\n> \r\n> Can you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\r\n> \r\n> mySqlResultsPG = PQexec(connection, aSqlStatement);\r\n> if((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || \r\n> (PQstatus(connection) != CONNECTION_OK)){} if ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\r\n> {\r\n> myNumColumns = PQnfields(mySqlResultsPG);\r\n> myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\r\n> myCurrentRowNum = 0 ;\r\n> }\r\n> \r\n> \r\n> Regards\r\n> Tarkeshwar\r\n> \r\n\r\ndeclare a cursor and fetch\r\n\r\nhttps://protect2.fireeye.com/v1/url?k=d75a6ab6-8b8e60bf-d75a2a2d-86740465fc08-fa8f74c15b35a3fd&q=1&e=7b7df498-f187-408a-a07c-07b1c5f6f868&u=https%3A%2F%2Fbooks.google.com%2Fbooks%3Fid%3DNc5ZT2X5mOcC%26pg%3DPA405%26lpg%3DPA405%26dq%3Dpqexec%2Bfetch%26source%3Dbl%26ots%3D8P8w5JemcL%26sig%3DACfU3U0POGGSP0tYTrs5oxykJdOeffaspA%26hl%3Den%26sa%3DX%26ved%3D2ahUKEwjevbmA2KPlAhXukOAKHaBIBcoQ6AEwCnoECDEQAQ%23v%3Donepage%26q%3Dpqexec%2520fetch%26f%3Dfalse\r\n\r\n\r\n",
"msg_date": "Wed, 30 Oct 2019 16:47:27 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Can you please tell us how set this prefetch attribute in\n following lines."
}
] |
[
{
"msg_contents": "https://explain.depesz.com/s/Caa5\n\nI am looking at this explain analyze output and seeing a nested loop\ntowards the lowest levels with pretty bad estimate vs actual (2.3k vs 99k),\nbut the things that feed that nested loop seem like the estimates are\nrather close (index scans with 11 estimated vs 30 actual and 3350 vs\n3320)... why does the higher node have such a different estimate vs actual\nratio?\n\n\n*Michael Lewis | Software Engineer*\n*Entrata*\n\nhttps://explain.depesz.com/s/Caa5I am looking at this explain analyze output and seeing a nested loop towards the lowest levels with pretty bad estimate vs actual (2.3k vs 99k), but the things that feed that nested loop seem like the estimates are rather close (index scans with 11 estimated vs 30 actual and 3350 vs 3320)... why does the higher node have such a different estimate vs actual ratio? Michael Lewis | Software EngineerEntrata",
"msg_date": "Thu, 17 Oct 2019 15:15:21 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reading explain plans- row estimates/actuals on lower nodes vs next\n level up"
},
{
"msg_contents": "On Thu, Oct 17, 2019 at 03:15:21PM -0600, Michael Lewis wrote:\n>https://explain.depesz.com/s/Caa5\n>\n>I am looking at this explain analyze output and seeing a nested loop\n>towards the lowest levels with pretty bad estimate vs actual (2.3k vs 99k),\n>but the things that feed that nested loop seem like the estimates are\n>rather close (index scans with 11 estimated vs 30 actual and 3350 vs\n>3320)... why does the higher node have such a different estimate vs actual\n>ratio?\n>\n\nThis is usually a sign of non-uniform distribution for the join columns,\nand/or correlation between sides of the join.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 18 Oct 2019 02:13:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading explain plans- row estimates/actuals on lower nodes vs\n next level up"
}
] |
[
{
"msg_contents": "Hi,\nI am not sure if here is the best place to post this.\nI would like to know why max_connections parameter can't be changed\nwithout a restart. I know that it is a postmaster's context parameter.\nWhich PostgreSQL's subsystems, structures and OS kernel parameters should\nbe affected to support this change??\n\nThanks!\n\nHi, I am not sure if here is the best place to post this.I would like to know why max_connections parameter can't be changed without a restart. I know that it is a postmaster's context parameter. Which PostgreSQL's subsystems, structures and OS kernel parameters should be affected to support this change??Thanks!",
"msg_date": "Tue, 22 Oct 2019 14:28:31 +0200",
"msg_from": "Joao Junior <[email protected]>",
"msg_from_op": true,
"msg_subject": "max_connections"
},
{
"msg_contents": "Hi\n\nút 22. 10. 2019 v 14:28 odesílatel Joao Junior <[email protected]> napsal:\n\n> Hi,\n> I am not sure if here is the best place to post this.\n> I would like to know why max_connections parameter can't be changed\n> without a restart. I know that it is a postmaster's context parameter.\n> Which PostgreSQL's subsystems, structures and OS kernel parameters\n> should be affected to support this change??\n>\n\nThe max_connections is used more time inside PostgreSQL source code -\ntypically for dimensions arrays where one field is per session.\n\nYou can search in source code the variable \"MaxConnections\". It's used for\ninformations about sessions (processes), about transactions, locks, ..\n\nUnfortunately, lot of these arrays are allocated in shared memory. For\nshared memory has Postgres special coding standards - almost all memory is\nallocated in startup time, and newer released. It is very simple to manage\nmemory with these rules. It is fast, and there are not any risk of memory\nleaks. But related limits requires restart. Usually it is not a problem. So\nnecessity of restart after change of max_connection is related to simple\nbut very robust share memory management.\n\nRegards\n\nPavel\n\n\n> Thanks!\n>\n\nHiút 22. 10. 2019 v 14:28 odesílatel Joao Junior <[email protected]> napsal:Hi, I am not sure if here is the best place to post this.I would like to know why max_connections parameter can't be changed without a restart. I know that it is a postmaster's context parameter. Which PostgreSQL's subsystems, structures and OS kernel parameters should be affected to support this change??The max_connections is used more time inside PostgreSQL source code - typically for dimensions arrays where one field is per session. You can search in source code the variable \"MaxConnections\". It's used for informations about sessions (processes), about transactions, locks, ..Unfortunately, lot of these arrays are allocated in shared memory. For shared memory has Postgres special coding standards - almost all memory is allocated in startup time, and newer released. It is very simple to manage memory with these rules. It is fast, and there are not any risk of memory leaks. But related limits requires restart. Usually it is not a problem. So necessity of restart after change of max_connection is related to simple but very robust share memory management.RegardsPavelThanks!",
"msg_date": "Tue, 22 Oct 2019 14:39:19 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_connections"
}
] |
[
{
"msg_contents": "Hi there!\n\nI guess we stumbled upon a performance issue with notifications sent within triggers (using PostgreSQL version 11.5)\nand I'd like your opinion about this.\n\nWe want our app to maintain a data cache, so each instance of the app listens to some channels (one per table).\nThere are update triggers set up on the tables so each update yelds a notification to the appropriate channel.\n\nIt works fine and we love the feature, but it seems to come with a performance cost.\nSince we set them up, we get query timeouts in our app (set to 200ms in the app).\n\nTo try and understand this, we set deadlock_timeout to 100ms and enabled log_lock_waits to get the following warnings in the log: process XXXXX still waiting for AccessExclusiveLock on object 0 of class 1262 of database 0 after YYY.YYY ms\nA row update transaction on table A is waiting for another row update transaction on table B. Tables are only tied by an FK, the updated fields are not the ID or FK fields.\n\nA quick google + source code search showed thePreCommit_Notify <https://doxygen.postgresql.org/async_8c.html#a90945c51e67f5618a2003d672f1880cb> function is trying to acquire this lock.\nMy educated guess of what happens during a COMMIT is the following :\n- pre-commit actions are taken, the \"notification lock\" is taken\n- commit actions are performed (can take some time)\n- post-commit actions are taken, the notification is enqueued and \"notification lock\" is released\n\nAm I correct ?\n\nOther transactions involving a notification are stuck waiting for previous transactions to finish, this can be a performance issue.\n\nI understand the need for lock to be taken pre-commit to ensure notification order matches transaction order, but it in my case I don't really care about the order and the performance penalty is high.\n\nWe could think of several options there :\n- different locks for different channels (implies different notification queues I guess)\n- an argument to NOTIFY query not to guarantee notifications order (and thus take and release the lock in post-commit actions)\n\nI believe the notify-in-trigger should be a pretty common usage pattern and so this finding may impact quite a few systems.\n\nWhat do you think about this ?\n\nRegards,\n\n-- \nGrégoire de Turckheim\n\n\n\n\n\n\n\nHi there!\n\nI guess we stumbled upon a performance issue with notifications sent within triggers (using PostgreSQL version 11.5)\nand I'd like your opinion about this.\n\nWe want our app to maintain a data cache, so each instance of the app listens to some channels (one per table).\nThere are update triggers set up on the tables so each update yelds a notification to the appropriate channel.\n\nIt works fine and we love the feature, but it seems to come with a performance cost.\nSince we set them up, we get query timeouts in our app (set to 200ms in the app).\n\nTo try and understand this, we set deadlock_timeout to 100ms and enabled log_lock_waits to get the following warnings in the log: process XXXXX still waiting for AccessExclusiveLock on object 0 of class 1262 of database 0 after YYY.YYY ms\nA row update transaction on table A is waiting for another row update transaction on table B. 
Tables are only tied by an FK, the updated fields are not the ID or FK fields.\n\nA quick google + source code search showed the PreCommit_Notify function is trying to acquire this lock.\nMy educated guess of what happens during a COMMIT is the following :\n- pre-commit actions are taken, the \"notification lock\" is taken\n- commit actions are performed (can take some time)\n- post-commit actions are taken, the notification is enqueued and \"notification lock\" is released\n\nAm I correct ?\n\nOther transactions involving a notification are stuck waiting for previous transactions to finish, this can be a performance issue.\n\nI understand the need for lock to be taken pre-commit to ensure notification order matches transaction order, but it in my case I don't really care about the order and the performance penalty is high.\n\nWe could think of several options there :\n- different locks for different channels (implies different notification queues I guess)\n- an argument to NOTIFY query not to guarantee notifications order (and thus take and release the lock in post-commit actions)\n\nI believe the notify-in-trigger should be a pretty common usage pattern and so this finding may impact quite a few systems.\n\nWhat do you think about this ?\n\nRegards,\n\n-- \nGrégoire de Turckheim",
"msg_date": "Mon, 28 Oct 2019 14:42:03 +0100",
"msg_from": "=?UTF-8?Q?Gr=c3=a9goire_de_Turckheim?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Notifications within triggers seem to compromise performance"
},
{
"msg_contents": "=?UTF-8?Q?Gr=c3=a9goire_de_Turckheim?= <[email protected]> writes:\n> I guess we stumbled upon a performance issue with notifications sent within triggers (using PostgreSQL version 11.5)\n> and I'd like your opinion about this.\n\nWe made some performance improvements for NOTIFY just a couple months\nago, cf commits b10f40bf0, bb5ae8f6c, bca6e6435, 51004c717. It would\nbe interesting to know how much those changes helped your use-case.\n\nI'm quite disinclined to reduce the correctness guarantees around\nNOTIFY for performance's sake. That's the sort of thing that sounds\nlike a good idea until you find out that it subtly breaks your\napplication, and then you've got nothing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Oct 2019 10:22:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Notifications within triggers seem to compromise performance"
},
{
"msg_contents": "Le 28/10/2019 à 15:22, Tom Lane a écrit :\n> =?UTF-8?Q?Gr=c3=a9goire_de_Turckheim?= <[email protected]> writes:\n>> I guess we stumbled upon a performance issue with notifications sent within triggers (using PostgreSQL version 11.5)\n>> and I'd like your opinion about this.\n> We made some performance improvements for NOTIFY just a couple months\n> ago, cf commits b10f40bf0, bb5ae8f6c, bca6e6435, 51004c717. It would\n> be interesting to know how much those changes helped your use-case.\nThanks for your quick reply!\n\nIf my understanding of the problem is correct, there is no performance \nissue with the notification itself.\nThe problem is the following: a system-wide lock is taken pre-commit, so \nany other transaction with a NOTIFY will have to wait for other \ntransactions to complete before it can leave its own pre-commit stage.\nIs this wording better to clarify my explanation attempt ? :)\n\nIn my case, ~90% of the data is in tables with triggered notifications, \nall of this data updates become \"single threaded\", whatever the table it \nis in.\n>\n> I'm quite disinclined to reduce the correctness guarantees around\n> NOTIFY for performance's sake. That's the sort of thing that sounds\n> like a good idea until you find out that it subtly breaks your\n> application, and then you've got nothing.\n100% agreed, this is why my suggestion was to make it an option. From a \nuser perspective, it seems very complex to understand if this option is \nto be used or not. I really don't know how to present such an option to \nthe user.\n\nThere also may be better ways to do it, I suggested different queues \n(and thus locks) for different channels but I have no idea about the \ncost of it.\n\nRegards,\n\n-- \nGrégoire de Turckheim\n\n\n\n",
"msg_date": "Mon, 28 Oct 2019 16:39:30 +0100",
"msg_from": "=?UTF-8?Q?Gr=c3=a9goire_de_Turckheim?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Notifications within triggers seem to compromise performance"
},
{
"msg_contents": "=?UTF-8?Q?Gr=c3=a9goire_de_Turckheim?= <[email protected]> writes:\n> Le 28/10/2019 à 15:22, Tom Lane a écrit :\n>> We made some performance improvements for NOTIFY just a couple months\n>> ago, cf commits b10f40bf0, bb5ae8f6c, bca6e6435, 51004c717. It would\n>> be interesting to know how much those changes helped your use-case.\n\n> If my understanding of the problem is correct, there is no performance \n> issue with the notification itself.\n> The problem is the following: a system-wide lock is taken pre-commit, so \n> any other transaction with a NOTIFY will have to wait for other \n> transactions to complete before it can leave its own pre-commit stage.\n\nRight, but all commits are single-threaded at some granularity.\nThe big problem with NOTIFY is that it sits for a long time holding\nthat lock, if you have a lot of notify traffic. The commits I mentioned\nshould improve that.\n\nAnyway, as I said, it would be good to find out whether the already\nfinished fixes are enough to solve your problem, before we debate\nwhether more needs to be done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Oct 2019 12:25:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Notifications within triggers seem to compromise performance"
},
{
"msg_contents": "Le 28/10/2019 à 17:25, Tom Lane a écrit :\n> =?UTF-8?Q?Gr=c3=a9goire_de_Turckheim?= <[email protected]> writes:\n>> Le 28/10/2019 à 15:22, Tom Lane a écrit :\n>>> We made some performance improvements for NOTIFY just a couple months\n>>> ago, cf commits b10f40bf0, bb5ae8f6c, bca6e6435, 51004c717. It would\n>>> be interesting to know how much those changes helped your use-case.\n>> If my understanding of the problem is correct, there is no performance\n>> issue with the notification itself.\n>> The problem is the following: a system-wide lock is taken pre-commit, so\n>> any other transaction with a NOTIFY will have to wait for other\n>> transactions to complete before it can leave its own pre-commit stage.\n> Right, but all commits are single-threaded at some granularity.\n> The big problem with NOTIFY is that it sits for a long time holding\n> that lock, if you have a lot of notify traffic. The commits I mentioned\n> should improve that.\n>\n> Anyway, as I said, it would be good to find out whether the already\n> finished fixes are enough to solve your problem, before we debate\n> whether more needs to be done.\nLet's do it this way, I'll give it a try (might be long, this isn't \nsomething we can easily upgrade) and get back to you.\n\nThanks!\n\n-- \nGrégoire de Turckheim\n\n\n\n",
"msg_date": "Mon, 28 Oct 2019 17:35:22 +0100",
"msg_from": "=?UTF-8?Q?Gr=c3=a9goire_de_Turckheim?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Notifications within triggers seem to compromise performance"
}
] |
[
{
"msg_contents": "On Postgres 9.6 (config below), I have a case I don't understand: three\ntables that can be separately queried in milliseconds, but when put\ntogether into one view using UNION, take 150 seconds to query. Here's the\nrough idea (actual details below):\n\ncreate view thesaurus as\n (select id, name from A)\n union (select id, name from B)\n union (select id, name from C);\n\n\ncreate table h(i integer);\ninsert into h values(12345);\nselect * from thesaurus join h on (thesaurus.id = h.id);\n\n\nOn the other hand, if you do this, it's a millisecond plan:\n\nselect * from thesaurus where id in (12345);\n\n\nNotice that it's effectively the same query since h above contains just\nthis one value.\n\nHere are the actual details. The view being queried:\n\ncreate view thesaurus2 as\n\nselect\n rt.thesaurus_id,\n rt.version_id,\n rt.normalized,\n rt.identifier,\n rt.typecode\n from local_sample s\n join thesaurus_master rt using (sample_id)\nunion\nselect c.id as thesaurus_id,\n c.id as version_id,\n c.cas_number as normalized,\n c.cas_number as identifier,\n 3 as typecode\n from cas_number c\n join sample s on c.id = s.version_id\nunion\nselect m.id as thesaurus_id,\n m.id as version_id,\n lower(m.mfcd) as normalized,\n m.mfcd as identifier,\n 4 as typecode\n from mfcd m\n join sample s on m.id = s.version_id;\n\n\nThe bad sort (147 seconds to execute). Note that the \"hitlist\" table\ncontains exactly one row.\n\n explain analyze select c.version_id\n from thesaurus2 c\n join hitlist_rows_103710241 h on (c.thesaurus_id = h.objectid);\n\nhttps://explain.depesz.com/s/5oRC\n\nIf I instead just query directly for that value, the answer is almost\ninstant (1.2 msec):\n\nexplain analyze select c.version_id\nfrom thesaurus2 c\nwhere c.version_id in (1324511991);\nhttps://explain.depesz.com/s/EktF\n\n\nNow if I take any one of the three tables in the UNION view, the query is\nreally fast on each one. For example:\n\nselect distinct c.version_id\n\nfrom (\nselect distinct c.id as thesaurus_id,\n c.id as version_id,\n c.cas_number as normalized,\n c.cas_number as identifier,\n 3 as typecode\n from cas_number c\n join sample s on c.id = s.version_id\n) c\njoin hitlist_rows_103710241 h on (c.thesaurus_id = h.objectid);\n\nhttps://explain.depesz.com/s/KJUZ\n\n\nThe other two subqueries are similarly fast.\n\nThis is Postgres9.6 running on Ubuntu 16.04, 64GB memory 16 CPUs.\nNon-default config values:\n\nmax_connections = 2000\nshared_buffers = 12073MB\nwork_mem = 256MB\nmaintenance_work_mem = 512MB\nsynchronous_commit = off\neffective_cache_size = 32GB\nwal_level = logical\nwal_keep_segments = 1000\nmax_wal_senders = 10\nhot_standby = on\narchive_mode = on\narchive_command = '/bin/true'\n\n\nThanks!\nCraig\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n3430 Carmel Mountain Road, Suite 250\nSan Diego, CA 92121\n---------------------------------\n\nOn Postgres 9.6 (config below), I have a case I don't understand: three tables that can be separately queried in milliseconds, but when put together into one view using UNION, take 150 seconds to query. 
Here's the rough idea (actual details below):create view thesaurus as (select id, name from A) union (select id, name from B) union (select id, name from C);create table h(i integer);insert into h values(12345);select * from thesaurus join h on (thesaurus.id = h.id);On the other hand, if you do this, it's a millisecond plan:select * from thesaurus where id in (12345);Notice that it's effectively the same query since h above contains just this one value.Here are the actual details. The view being queried:create view thesaurus2 asselect rt.thesaurus_id, rt.version_id, rt.normalized, rt.identifier, rt.typecode from local_sample s join thesaurus_master rt using (sample_id)unionselect c.id as thesaurus_id, c.id as version_id, c.cas_number as normalized, c.cas_number as identifier, 3 as typecode from cas_number c join sample s on c.id = s.version_idunionselect m.id as thesaurus_id, m.id as version_id, lower(m.mfcd) as normalized, m.mfcd as identifier, 4 as typecode from mfcd m join sample s on m.id = s.version_id;The bad sort (147 seconds to execute). Note that the \"hitlist\" table contains exactly one row. explain analyze select c.version_id from thesaurus2 c join hitlist_rows_103710241 h on (c.thesaurus_id = h.objectid);https://explain.depesz.com/s/5oRCIf I instead just query directly for that value, the answer is almost instant (1.2 msec):explain analyze select c.version_idfrom thesaurus2 cwhere c.version_id in (1324511991);https://explain.depesz.com/s/EktFNow if I take any one of the three tables in the UNION view, the query is really fast on each one. For example:select distinct c.version_idfrom (select distinct c.id as thesaurus_id, c.id as version_id, c.cas_number as normalized, c.cas_number as identifier, 3 as typecode from cas_number c join sample s on c.id = s.version_id) cjoin hitlist_rows_103710241 h on (c.thesaurus_id = h.objectid);https://explain.depesz.com/s/KJUZThe other two subqueries are similarly fast.This is Postgres9.6 running on Ubuntu 16.04, 64GB memory 16 CPUs. Non-default config values:max_connections = 2000shared_buffers = 12073MBwork_mem = 256MBmaintenance_work_mem = 512MBsynchronous_commit = offeffective_cache_size = 32GBwal_level = logicalwal_keep_segments = 1000max_wal_senders = 10hot_standby = onarchive_mode = onarchive_command = '/bin/true'Thanks!Craig-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.3430 Carmel Mountain Road, Suite 250San Diego, CA 92121---------------------------------",
"msg_date": "Mon, 28 Oct 2019 15:40:58 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNION causes horrible plan on JOIN"
},
{
"msg_contents": "On Mon, Oct 28, 2019 at 03:40:58PM -0700, Craig James wrote:\n> On Postgres 9.6 (config below), I have a case I don't understand: three\n> tables that can be separately queried in milliseconds, but when put\n> together into one view using UNION, take 150 seconds to query. Here's the\n> rough idea (actual details below):\n\nDo you want UNION ALL ?\n\nUNION without ALL distintifies the output.\nhttps://www.postgresql.org/docs/current/sql-select.html#SQL-UNION\n\nJustin\n\n\n",
"msg_date": "Mon, 28 Oct 2019 17:45:53 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNION causes horrible plan on JOIN"
},
{
"msg_contents": "On Mon, Oct 28, 2019 at 3:45 PM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Oct 28, 2019 at 03:40:58PM -0700, Craig James wrote:\n> > On Postgres 9.6 (config below), I have a case I don't understand: three\n> > tables that can be separately queried in milliseconds, but when put\n> > together into one view using UNION, take 150 seconds to query. Here's the\n> > rough idea (actual details below):\n>\n> Do you want UNION ALL ?\n>\n> UNION without ALL distintifies the output.\n> https://www.postgresql.org/docs/current/sql-select.html#SQL-UNION\n\n\nInteresting idea, thanks. But it makes no difference. Tried it and got the\nsame bad performance.\n\nCraig\n\n\n>\n>\n> Justin\n>\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n3430 Carmel Mountain Road, Suite 250\nSan Diego, CA 92121\n---------------------------------\n\nOn Mon, Oct 28, 2019 at 3:45 PM Justin Pryzby <[email protected]> wrote:On Mon, Oct 28, 2019 at 03:40:58PM -0700, Craig James wrote:\n> On Postgres 9.6 (config below), I have a case I don't understand: three\n> tables that can be separately queried in milliseconds, but when put\n> together into one view using UNION, take 150 seconds to query. Here's the\n> rough idea (actual details below):\n\nDo you want UNION ALL ?\n\nUNION without ALL distintifies the output.\nhttps://www.postgresql.org/docs/current/sql-select.html#SQL-UNIONInteresting idea, thanks. But it makes no difference. Tried it and got the same bad performance.Craig \n\nJustin\n-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.3430 Carmel Mountain Road, Suite 250San Diego, CA 92121---------------------------------",
"msg_date": "Mon, 28 Oct 2019 16:30:24 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UNION causes horrible plan on JOIN"
},
{
"msg_contents": "On Mon, Oct 28, 2019 at 4:31 PM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Oct 28, 2019 at 04:30:24PM -0700, Craig James wrote:\n> > On Mon, Oct 28, 2019 at 3:45 PM Justin Pryzby <[email protected]>\n> wrote:\n> >\n> > > On Mon, Oct 28, 2019 at 03:40:58PM -0700, Craig James wrote:\n> > > > On Postgres 9.6 (config below), I have a case I don't understand:\n> three\n> > > > tables that can be separately queried in milliseconds, but when put\n> > > > together into one view using UNION, take 150 seconds to query.\n> Here's the\n> > > > rough idea (actual details below):\n> > >\n> > > Do you want UNION ALL ?\n> > >\n> > > UNION without ALL distintifies the output.\n> > > https://www.postgresql.org/docs/current/sql-select.html#SQL-UNION\n> >\n> >\n> > Interesting idea, thanks. But it makes no difference. Tried it and got\n> the\n> > same bad performance.\n>\n> Could you mail the list the plan with union ALL ?\n>\n\nHere it is. It is indeed different, but takes 104 seconds instead of 140\nseconds.\nhttps://explain.depesz.com/s/zW6I\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n3430 Carmel Mountain Road, Suite 250\nSan Diego, CA 92121\n---------------------------------\n\nOn Mon, Oct 28, 2019 at 4:31 PM Justin Pryzby <[email protected]> wrote:On Mon, Oct 28, 2019 at 04:30:24PM -0700, Craig James wrote:\n> On Mon, Oct 28, 2019 at 3:45 PM Justin Pryzby <[email protected]> wrote:\n> \n> > On Mon, Oct 28, 2019 at 03:40:58PM -0700, Craig James wrote:\n> > > On Postgres 9.6 (config below), I have a case I don't understand: three\n> > > tables that can be separately queried in milliseconds, but when put\n> > > together into one view using UNION, take 150 seconds to query. Here's the\n> > > rough idea (actual details below):\n> >\n> > Do you want UNION ALL ?\n> >\n> > UNION without ALL distintifies the output.\n> > https://www.postgresql.org/docs/current/sql-select.html#SQL-UNION\n> \n> \n> Interesting idea, thanks. But it makes no difference. Tried it and got the\n> same bad performance.\n\nCould you mail the list the plan with union ALL ?Here it is. It is indeed different, but takes 104 seconds instead of 140 seconds.https://explain.depesz.com/s/zW6I -- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.3430 Carmel Mountain Road, Suite 250San Diego, CA 92121---------------------------------",
"msg_date": "Mon, 28 Oct 2019 16:37:23 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UNION causes horrible plan on JOIN"
},
{
"msg_contents": "At Mon, 28 Oct 2019 16:30:24 -0700, Craig James <[email protected]> wrote in \n> On Mon, Oct 28, 2019 at 3:45 PM Justin Pryzby <[email protected]> wrote:\n> \n> > On Mon, Oct 28, 2019 at 03:40:58PM -0700, Craig James wrote:\n> > > On Postgres 9.6 (config below), I have a case I don't understand: three\n> > > tables that can be separately queried in milliseconds, but when put\n> > > together into one view using UNION, take 150 seconds to query. Here's the\n> > > rough idea (actual details below):\n> >\n> > Do you want UNION ALL ?\n> >\n> > UNION without ALL distintifies the output.\n> > https://www.postgresql.org/docs/current/sql-select.html#SQL-UNION\n> \n> \n> Interesting idea, thanks. But it makes no difference. Tried it and got the\n> same bad performance.\n\nThe join clauses in the view also prevent the query from getting\nfaster plans. So if you somehow could move the join clauses out of the\nUNION leafs in the view in addtion to using UNION ALL, you would get\nbetter performance.\n\nOr if hitlist_rows is known to highly narrow the result from the\nelement tables, using a function instead of the view might work.\n\ncreate or replace function the_view(int)\nreturns table(thesaurus_id int, version_id int, normalized int,\n identifier int, typecode int) as $$\nselect\n rt.thesaurus_id,\n rt.version_id,\n rt.normalized,\n rt.identifier,\n rt.typecode\n from local_sample s\n join thesaurus_master rt using (sample_id)\n where rt.thesaurus_id = $1\nunion\n...\n$$ language sql;\n\nexplain analyze select c.version_id\nfrom hitlist_rows_103710241 h,\nlateral the_view(h.objectid) as c;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 29 Oct 2019 12:24:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNION causes horrible plan on JOIN"
}
] |
[
{
"msg_contents": "I have created a GIN index using jsonb_path_ops over some JSONB\ncolumns. Initially, while the index was small, the query planner would\nselect a Bitmap Index Scan strategy to execute queries leveraging the\nappropriate JSONB operator (@>). Now that the table has grown to\nalmost 200k rows and the index has grown to 1.6M rows, the query\nplanner prefers running Seq Scans.\n\ndb=> SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE\nrelname IN ('_object', 'idx_object') and relnamespace = 43315;\n relname | relkind | reltuples | relpages\n------------+---------+------------+----------\n _object | r | 185618 | 39030\n idx_object | i | 1.6583e+06 | 512\n(2 rows)\n\ndb=> explain analyze ...;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on x (cost=41814.28..41814.30 rows=1 width=25) (actual\ntime=3954.742..3954.820 rows=5 loops=1)\n Filter: ((x._data IS NOT NULL) OR (x.metadata IS NOT NULL))\n Rows Removed by Filter: 3\n -> Unique (cost=41814.28..41814.29 rows=1 width=1562) (actual\ntime=3954.740..3954.814 rows=8 loops=1)\n -> Sort (cost=41814.28..41814.28 rows=1 width=1562) (actual\ntime=3954.738..3954.754 rows=77 loops=1)\n Sort Key: ...\n Sort Method: quicksort Memory: 63kB\n -> Seq Scan on _object (cost=0.00..41814.27 rows=1\nwidth=1562) (actual time=84.980..3954.330 rows=77 loops=1)\n Filter: ((... @> ...::jsonb) AND (... @> ...::jsonb))\n Rows Removed by Filter: 185529\n Planning time: 0.261 ms\n Execution time: 3954.860 ms\n(12 rows)\n\nDisabling seq scans shows that the execution time of the Bitmap Index\nScan is considerably lower, but the cost estimate is completely off.\n\ndb=> explain analyze ....;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on x (cost=75584.03..75584.05 rows=1 width=25) (actual\ntime=24.591..24.664 rows=5 loops=1)\n Filter: ((x._data IS NOT NULL) OR (x.metadata IS NOT NULL))\n Rows Removed by Filter: 3\n -> Unique (cost=75584.03..75584.04 rows=1 width=1562) (actual\ntime=24.589..24.659 rows=8 loops=1)\n -> Sort (cost=75584.03..75584.03 rows=1 width=1562) (actual\ntime=24.588..24.603 rows=77 loops=1)\n Sort Key:...\n Sort Method: quicksort Memory: 63kB\n -> Bitmap Heap Scan on _object\n(cost=75580.00..75584.02 rows=1 width=1562) (actual\ntime=24.120..24.284 rows=77 loops=1)\n Recheck Cond: ((... @> ...::jsonb) AND (... @> ...::jsonb))\n Heap Blocks: exact=73\n -> Bitmap Index Scan on idx_object\n(cost=0.00..75580.00 rows=1 width=0) (actual time=24.094..24.094\nrows=77 loops=1)\n Index Cond: ((... @> ...::jsonb) AND (...\n@> ...::jsonb))\n Planning time: 0.301 ms\n Execution time: 24.723 ms\n(14 rows)\n\nIt would seem that this miscalculation of the cost of the index scan\nis due to the query planner lacking detailed statistics about the\nrelevant JSONB column. 
Here are the statistics collected by postgresql\non the relevant columns.\n\ndb=> \\x\ndb=> select attname, null_frac, n_distinct,\narray_length(most_common_vals,1) mcv,\narray_length(most_common_freqs,1) mcf,\narray_length(most_common_elems,1) mce,\narray_length(most_common_elem_freqs,1) mcef, elem_count_histogram from\npg_stats where schemaname = 'zero2' and tablename = '_object';\n-[ RECORD 1 ]--------+----------\nattname | doctype\nnull_frac | 0\nn_distinct | 56\nmcv | 4\nmcf | 4\nmce |\nmcef |\nelem_count_histogram |\n-[ RECORD 2 ]--------+----------\nattname | app_id\nnull_frac | 0\nn_distinct | 2374\nmcv | 100\nmcf | 100\nmce |\nmcef |\nelem_count_histogram |\n-[ RECORD 3 ]--------+----------\nattname | status\nnull_frac | 0\nn_distinct | 2\nmcv | 2\nmcf | 2\nmce |\nmcef |\nelem_count_histogram |\n-[ RECORD 4 ]--------+----------\nattname | metadata\nnull_frac | 0.0924333\nn_distinct | 985\nmcv | 100\nmcf | 100\nmce |\nmcef |\nelem_count_histogram |\n\n\nAs you can see, most_common_elems and most_common_elems_freqs are\nempty. This prevents the query planner from making any kind of\nmeaningful estimate of the number of index rows that it would need to\naccess as a function of the query terms.\n\nThe workaround I found so far is to set a low value of\nrandom_page_cost, but this could result in the query planner using\nindex scans for other tables and other queries, where a seq scan would\nactually be more appropriate.\n\nThe only real solution would seem to be for the postgresql engine to\nsupport the most_common_elems and most_common_elems_freqs for the\nJSONB data type. Are there any plans to implement this?\n\nThanks!\n\n-- Alex\n\n\n",
"msg_date": "Wed, 30 Oct 2019 09:24:36 -0700",
"msg_from": "Alessandro Baretta <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIN index on JSONB not used due to lack of nested statistics"
},
{
"msg_contents": "On Wed, Oct 30, 2019 at 12:25 PM Alessandro Baretta <[email protected]>\nwrote:\n\n\n> -> Bitmap Index Scan on idx_object\n> (cost=0.00..75580.00 rows=1 width=0) (actual time=24.094..24.094\n> rows=77 loops=1)\n> Index Cond: ((... @> ...::jsonb) AND (...\n> @> ...::jsonb))\n> Planning time: 0.301 ms\n> Execution time: 24.723 ms\n> (14 rows)\n>\n> It would seem that this miscalculation of the cost of the index scan\n> is due to the query planner lacking detailed statistics about the\n> relevant JSONB column.\n\n\nSince it expected 1 row but actually found 77, I think that if it had\naccurate statistics it would have overestimated the costs by even more.\n\nCan you repeat the executions with \"EXPLAIN (ANALYZE, BUFFERS)\"?\n\nHow does the cost estimate change if you make effective_cache_size much\nlarger or much smaller? (No need for ANALYZE, just the cost estimate)\n\nWhat kind of performance do you get if you turn enable_seqscan and then\nrepeat the query from a cold start (restart PostgreSQL, then run sudo sh -c\n \"echo 3 > /proc/sys/vm/drop_caches\"). If the performance is very fast\nafter a cold start, then something is wrong with the planner estimate. If\nit is slow from a cold start, then the planner has at least a plausible\nbasis for charging as much as it does.\n\nIf you run the query with just one branch of your AND at a time, what is\nthe expected and actual number of rows?\n\n\n> The workaround I found so far is to set a low value of\n> random_page_cost, but this could result in the query planner using\n> index scans for other tables and other queries, where a seq scan would\n> actually be more appropriate.\n>\n\nBased on what you know about your IO system, and the cacheability of your\ndata, what is the appropriate setting of random_page_cost from first\nprinciples? Maybe it is those other queries which have the problem, not\nthis one.\n\nIf you can up with a random object generator which creates data structured\nsimilar to yours, and shows the same issue when run with disclosable\nqueries, that would help us look into it.\n\nAlso, what version are you running?\n\nCheers,\n\nJeff\n\nOn Wed, Oct 30, 2019 at 12:25 PM Alessandro Baretta <[email protected]> wrote: -> Bitmap Index Scan on idx_object\n(cost=0.00..75580.00 rows=1 width=0) (actual time=24.094..24.094\nrows=77 loops=1)\n Index Cond: ((... @> ...::jsonb) AND (...\n@> ...::jsonb))\n Planning time: 0.301 ms\n Execution time: 24.723 ms\n(14 rows)\n\nIt would seem that this miscalculation of the cost of the index scan\nis due to the query planner lacking detailed statistics about the\nrelevant JSONB column. Since it expected 1 row but actually found 77, I think that if it had accurate statistics it would have overestimated the costs by even more.Can you repeat the executions with \"EXPLAIN (ANALYZE, BUFFERS)\"?How does the cost estimate change if you make effective_cache_size much larger or much smaller? (No need for ANALYZE, just the cost estimate) What kind of performance do you get if you turn enable_seqscan and then repeat the query from a cold start (restart PostgreSQL, then run sudo sh -c \"echo 3 > /proc/sys/vm/drop_caches\"). If the performance is very fast after a cold start, then something is wrong with the planner estimate. 
If it is slow from a cold start, then the planner has at least a plausible basis for charging as much as it does.If you run the query with just one branch of your AND at a time, what is the expected and actual number of rows?\nThe workaround I found so far is to set a low value of\nrandom_page_cost, but this could result in the query planner using\nindex scans for other tables and other queries, where a seq scan would\nactually be more appropriate.Based on what you know about your IO system, and the cacheability of your data, what is the appropriate setting of random_page_cost from first principles? Maybe it is those other queries which have the problem, not this one. If you can up with a random object generator which creates data structured similar to yours, and shows the same issue when run with disclosable queries, that would help us look into it.Also, what version are you running?Cheers,Jeff",
"msg_date": "Wed, 30 Oct 2019 18:31:44 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index on JSONB not used due to lack of nested statistics"
}
] |
[
{
"msg_contents": "Hi All,\n\nWe have configured postgres 11.2 in streaming replication (primary & Standby) on docker and I am looking to initiate the Postgres backup using barman. As I know there are few options for taking backup using barman.\n\nRSYNC backup\nIncremental Backups\nStreaming Backup with continuous WAL streaming\nCentralized and Catalogued Backups\n\nWhich is the best option for backup using barman? So that we can keep the database safe in case of disaster? I feel the Incremental Backups are most useful to perform the PITR but I want to know the experts suggestions.\n\n\n Thanks,\n\n\n\n\n\n\n\n\n\n\nHi All,\n \nWe have configured postgres 11.2 in streaming replication (primary & Standby) on docker and I am looking to initiate the Postgres backup using barman. As I know there are few options for taking backup using barman.\n \nRSYNC backup\nIncremental Backups \nStreaming Backup with continuous WAL streaming \n\nCentralized and Catalogued Backups\n \nWhich is the best option for backup using barman? So that we can keep the database safe in case of disaster? I feel the Incremental Backups are most useful to perform the PITR but I want to know the experts suggestions.\n \n \n Thanks,",
"msg_date": "Thu, 31 Oct 2019 17:29:34 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Barman"
},
{
"msg_contents": "On Thu, Oct 31, 2019 at 05:29:34PM +0000, Daulat Ram wrote:\n>Hi All,\n>\n>We have configured postgres 11.2 in streaming replication (primary &\n>Standby) on docker and I am looking to initiate the Postgres backup\n>using barman. As I know there are few options for taking backup using\n>barman.\n>\n>RSYNC backup\n>Incremental Backups\n>Streaming Backup with continuous WAL streaming\n>Centralized and Catalogued Backups\n>\n>Which is the best option for backup using barman? So that we can keep\n>the database safe in case of disaster? I feel the Incremental Backups\n>are most useful to perform the PITR but I want to know the experts\n>suggestions.\n>\n\nYou're mixing a number of topics, here. Firstly, all backups done by\nbarman are centralized and catalogued, that's pretty much one of the\nmain purposes of barman.\n\nWhen it comes to backup methods, there are two basic methods. rsync and\npostgres (which means pg_basebackup). This is about creating the initial\nbase backup. Both methods then can replicate WAL by either streaming or\narchive_command.\n\nSo first you need to decide whether to use rsync and pg_basebackup,\nwhere rsync allows advanced features like incremental backup, parallel\nbackup and deduplication.\n\nThen you need to decide whether to use archive_command or streaming\n(i.e. pg_receivexlog).\n\nThe \"right\" backup method very much depends on the size of your\ndatabase, activity, and so on. By default you should probably go with\nthe default option, described as \"scenario 1\" in the barman docs, i.e.\npg_basebackup (backup_method = postgres) and WAL streaming.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 31 Oct 2019 19:56:46 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Barman"
},
{
"msg_contents": "Thanks Tomas for your inputs. Suppose, if we have database in TB's with OLTP applications then what will be suitable backup strategy. \n\n\n-----Original Message-----\nFrom: Tomas Vondra <[email protected]> \nSent: Friday, November 1, 2019 12:27 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]; [email protected]\nSubject: Re: Barman\n\nOn Thu, Oct 31, 2019 at 05:29:34PM +0000, Daulat Ram wrote:\n>Hi All,\n>\n>We have configured postgres 11.2 in streaming replication (primary &\n>Standby) on docker and I am looking to initiate the Postgres backup \n>using barman. As I know there are few options for taking backup using \n>barman.\n>\n>RSYNC backup\n>Incremental Backups\n>Streaming Backup with continuous WAL streaming Centralized and \n>Catalogued Backups\n>\n>Which is the best option for backup using barman? So that we can keep \n>the database safe in case of disaster? I feel the Incremental Backups \n>are most useful to perform the PITR but I want to know the experts \n>suggestions.\n>\n\nYou're mixing a number of topics, here. Firstly, all backups done by barman are centralized and catalogued, that's pretty much one of the main purposes of barman.\n\nWhen it comes to backup methods, there are two basic methods. rsync and postgres (which means pg_basebackup). This is about creating the initial base backup. Both methods then can replicate WAL by either streaming or archive_command.\n\nSo first you need to decide whether to use rsync and pg_basebackup, where rsync allows advanced features like incremental backup, parallel backup and deduplication.\n\nThen you need to decide whether to use archive_command or streaming (i.e. pg_receivexlog).\n\nThe \"right\" backup method very much depends on the size of your database, activity, and so on. By default you should probably go with the default option, described as \"scenario 1\" in the barman docs, i.e.\npg_basebackup (backup_method = postgres) and WAL streaming.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 1 Nov 2019 09:42:50 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Barman"
}
] |
[
{
"msg_contents": "We’re having trouble working out why the planning time for this particular query is slow (~2.5s vs 0.9ms execution time). As you can see below, there are only 3 tables involved so it’s hard to imagine what decisions the planner has to make that take so long. After 5 runs the prepared-statement code kicks in and it becomes quick, but it’s quite infuriating for the first 5 runs given the execution is so much faster.\r\n\r\nAre you able to give any tips what might be taking so long (and how we might improve it)?\r\n\r\nWe read elsewhere that someone had a “catalog stats file leak”, which I’m taking to mean a big pg_statistic table. Ours is 10mb, which doesn’t seem particularly large to me, but I don’t have much context for it. https://www.postgresql.org/message-id/CABWW-d21z_WgawkjXFQQviqm16oAx0KQvR6bLkRxvYQmhdByfg%40mail.gmail.com\r\n\r\nOther queries (with 3 or more tables) in the same db seem to be planning much quicker.\r\n\r\nThe query:\r\n\r\nexplain (analyse) SELECT subscription_binding.subscription_binding,\r\n subscription_binding.tid,\r\n subscription.subscription_uuid,\r\n subscription_binding.subscription_binding_uuid,\r\n binding.binding_uuid,\r\n subscription_binding.start_time,\r\n subscription_binding.end_time,\r\n subscription_binding.timezone,\r\n now() >= subscription_binding.start_time AND (subscription_binding.end_time IS NULL OR now() <= subscription_binding.end_time) AS active\r\n FROM jackpot.binding\r\n JOIN jackpot.subscription_binding USING (tid, binding)\r\n JOIN jackpot.subscription USING (tid, subscription)\r\n where (tid=2082003407) AND (binding_uuid='4f61dcd5-97a0-4098-b9ae-c1546c31b2e6'::uuid) offset 0 limit 1000;\r\n QUERY PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=1.29..25.38 rows=1 width=80) (actual time=0.770..0.771 rows=1 loops=1)\r\n -> Nested Loop (cost=1.29..25.38 rows=1 width=80) (actual time=0.770..0.771 rows=1 loops=1)\r\n -> Nested Loop (cost=0.86..16.91 rows=1 width=76) (actual time=0.697..0.698 rows=1 loops=1)\r\n -> Index Scan using binding_tid_binding_uuid_key on binding (cost=0.43..8.45 rows=1 width=28) (actual time=0.647..0.647 rows=1 loops=1)\r\n Index Cond: ((tid = 2082003407) AND (binding_uuid = '4f61dcd5-97a0-4098-b9ae-c1546c31b2e6'::uuid))\r\n -> Index Scan using subscription_binding_idx on subscription_binding (cost=0.43..8.45 rows=1 width=64) (actual time=0.045..0.046 rows=1 loops=1)\r\n Index Cond: ((tid = 2082003407) AND (binding = binding.binding))\r\n -> Index Scan using subscription_pkey on subscription (cost=0.43..8.45 rows=1 width=28) (actual time=0.068..0.068 rows=1 loops=1)\r\n Index Cond: ((tid = 2082003407) AND (subscription = subscription_binding.subscription))\r\nPlanning time: 2429.682 ms\r\nExecution time: 0.914 ms\r\n(11 rows)\r\n\r\nPostgres version 9.5.19\r\n\r\nEach of the tables has between 3-4 indexes, and all the indexes include tid as first parameter. No partitions, no sign of a stray replication slot / uncommitted transaction / prepared transaction that may be holding up autovac, no sign of bloated indexes.\r\n\r\nTIA!\r\n\r\nBest regards,\r\n\r\nDavid Wheeler\r\nGeneral Manager Bali Office\r\nBali T +62 361 475 2333 M +62 819 3660 9180\r\nE [email protected]<mailto:[email protected]>\r\nJl. Pura Mertasari No. 
7, Sunset Road Abian Base\r\nKuta, Badung – Bali 80361, Indonesia\r\nhttp://www.dgitsystems.com<http://www.dgitsystems.com/>",
"msg_date": "Mon, 4 Nov 2019 03:04:45 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": "On Mon, 2019-11-04 at 03:04 +0000, David Wheeler wrote:\n> We’re having trouble working out why the planning time for this particular query is slow\n> (~2.5s vs 0.9ms execution time). As you can see below, there are only 3 tables involved\n> so it’s hard to imagine what decisions the planner has to make that take so long. After\n> 5 runs the prepared-statement code kicks in and it becomes quick, but it’s quite\n> infuriating for the first 5 runs given the execution is so much faster. \n> \n> Are you able to give any tips what might be taking so long (and how we might improve it)?\n> \n[...]\n> Planning time: 2429.682 ms\n> \n> Execution time: 0.914 ms\n\nStrange.\nAre any of your catalog tables unusually large?\n\nSELECT pg_relation_size(t.oid),\n t.relname\nFROM pg_class AS t\n JOIN pg_namespace AS n ON t.relnamespace = n.oid\nWHERE t.relkind = 'r'\nORDER BY pg_relation_size(t.oid) DESC\nLIMIT 10;\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 04 Nov 2019 05:37:00 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": "I'm not sure what \"unusually large\" is, but they're all < 1mb which is a little larger than some of our other comparable databases (mostly <300kb) but seems reasonable to me. \r\n\r\nRegards, \r\n \r\nDavid\r\n\r\nOn 4/11/19, 3:37 pm, \"Laurenz Albe\" <[email protected]> wrote:\r\n\r\n On Mon, 2019-11-04 at 03:04 +0000, David Wheeler wrote:\r\n > We’re having trouble working out why the planning time for this particular query is slow\r\n > (~2.5s vs 0.9ms execution time). As you can see below, there are only 3 tables involved\r\n > so it’s hard to imagine what decisions the planner has to make that take so long. After\r\n > 5 runs the prepared-statement code kicks in and it becomes quick, but it’s quite\r\n > infuriating for the first 5 runs given the execution is so much faster. \r\n > \r\n > Are you able to give any tips what might be taking so long (and how we might improve it)?\r\n > \r\n [...]\r\n > Planning time: 2429.682 ms\r\n > \r\n > Execution time: 0.914 ms\r\n \r\n Strange.\r\n Are any of your catalog tables unusually large?\r\n \r\n SELECT pg_relation_size(t.oid),\r\n t.relname\r\n FROM pg_class AS t\r\n JOIN pg_namespace AS n ON t.relnamespace = n.oid\r\n WHERE t.relkind = 'r'\r\n ORDER BY pg_relation_size(t.oid) DESC\r\n LIMIT 10;\r\n \r\n Yours,\r\n Laurenz Albe\r\n -- \r\n Cybertec | https://www.cybertec-postgresql.com\r\n \r\n \r\n\r\n",
"msg_date": "Mon, 4 Nov 2019 04:42:00 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": "David Wheeler wrote:\n> I'm not sure what \"unusually large\" is, but they're all < 1mb which is a little larger\n> than some of our other comparable databases (mostly <300kb) but seems reasonable to me. \n\nI forgot the condition \"AND n.nspname = 'pg_catalog'\"...\n\nBut if all your tables are small, catalog bloat is probably not your problem.\n\nDo you have many tables in the database? Partitioning?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 04 Nov 2019 05:46:41 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
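For reference, this is Laurenz's size query from above with the forgotten filter folded in, so that it reports only the system catalogs:

SELECT pg_relation_size(t.oid),
       t.relname
FROM pg_class AS t
   JOIN pg_namespace AS n ON t.relnamespace = n.oid
WHERE t.relkind = 'r'
  AND n.nspname = 'pg_catalog'
ORDER BY pg_relation_size(t.oid) DESC
LIMIT 10;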
{
"msg_contents": "David Wheeler <[email protected]> writes:\n> We’re having trouble working out why the planning time for this\n> particular query is slow (~2.5s vs 0.9ms execution time). As you can see\n> below, there are only 3 tables involved so it’s hard to imagine what\n> decisions the planner has to make that take so long.\n\nI wonder whether this traces to the cost of trying to estimate the\nlargest/smallest value of an indexed column by looking into the index.\nNormally that's pretty cheap, but if you have a lot of recently-inserted\nor recently-deleted values at the end of the index, it can get painful.\nAFAIR this only happens for columns that are equijoin keys, so the fact\nthat your query is a join is significant.\n\nI'm not convinced that this is the problem, because it's a corner case\nthat few people hit. To see this issue, you have to have recently\ninserted or deleted a bunch of extremal values of the indexed join-key\ncolumn. And the problem only persists until those values become known\ncommitted-good, or known dead-to-everybody. (Maybe you've got a\nlong-running transaction somewhere, postponing the dead-to-everybody\ncondition?)\n\n> Postgres version 9.5.19\n\nIf this *is* the cause, v11 and up have a performance improvement that\nyou need:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3ca930fc3\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Nov 2019 00:00:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": "> To see this issue, you have to have recently\r\n> inserted or deleted a bunch of extremal values of the indexed join-key\r\n> column. And the problem only persists until those values become known\r\n> committed-good, or known dead-to-everybody. (Maybe you've got a\r\n> long-running transaction somewhere, postponing the dead-to-everybody\r\n> condition?)\r\n\r\nThere are no long-running transactions that have backend_xmin set in pg_stat_activity, if that's what you mean here. There are also no open prepared transactions or replication slots which I understand have a similar keeping-things-alive issue. \r\n\r\nThese tables are biggish (hundreds of mb), but not changing so frequently that I'd expect large quantities of data to be inserted or deleted before autovac can get in there and clean it up. And certainly not in a single uncommitted transaction. \r\n\r\nI'll try reindexing each of the tables just to make sure it's not strange index imbalance or something causing the issue. \r\n\r\nRegards, \r\n \r\nDavid\r\n\r\nOn 4/11/19, 4:01 pm, \"Tom Lane\" <[email protected]> wrote:\r\n\r\n David Wheeler <[email protected]> writes:\r\n > We’re having trouble working out why the planning time for this\r\n > particular query is slow (~2.5s vs 0.9ms execution time). As you can see\r\n > below, there are only 3 tables involved so it’s hard to imagine what\r\n > decisions the planner has to make that take so long.\r\n \r\n I wonder whether this traces to the cost of trying to estimate the\r\n largest/smallest value of an indexed column by looking into the index.\r\n Normally that's pretty cheap, but if you have a lot of recently-inserted\r\n or recently-deleted values at the end of the index, it can get painful.\r\n AFAIR this only happens for columns that are equijoin keys, so the fact\r\n that your query is a join is significant.\r\n \r\n I'm not convinced that this is the problem, because it's a corner case\r\n that few people hit. To see this issue, you have to have recently\r\n inserted or deleted a bunch of extremal values of the indexed join-key\r\n column. And the problem only persists until those values become known\r\n committed-good, or known dead-to-everybody. (Maybe you've got a\r\n long-running transaction somewhere, postponing the dead-to-everybody\r\n condition?)\r\n \r\n > Postgres version 9.5.19\r\n \r\n If this *is* the cause, v11 and up have a performance improvement that\r\n you need:\r\n \r\n https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3ca930fc3\r\n \r\n \t\t\tregards, tom lane\r\n \r\n\r\n",
"msg_date": "Mon, 4 Nov 2019 05:17:11 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
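The three checks David refers to here can be run roughly as follows (a sketch against the standard system views, not queries taken from the thread):

-- sessions whose snapshot still holds back the xmin horizon
SELECT pid, state, xact_start, backend_xmin
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xact_start;

-- open two-phase (prepared) transactions
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- replication slots, which can also hold back cleanup
SELECT slot_name, active, xmin, catalog_xmin FROM pg_replication_slots;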
{
"msg_contents": "po 4. 11. 2019 v 6:17 odesílatel David Wheeler <[email protected]>\nnapsal:\n\n> > To see this issue, you have to have recently\n> > inserted or deleted a bunch of extremal values of the indexed join-key\n> > column. And the problem only persists until those values become known\n> > committed-good, or known dead-to-everybody. (Maybe you've got a\n> > long-running transaction somewhere, postponing the dead-to-everybody\n> > condition?)\n>\n> There are no long-running transactions that have backend_xmin set in\n> pg_stat_activity, if that's what you mean here. There are also no open\n> prepared transactions or replication slots which I understand have a\n> similar keeping-things-alive issue.\n>\n> These tables are biggish (hundreds of mb), but not changing so frequently\n> that I'd expect large quantities of data to be inserted or deleted before\n> autovac can get in there and clean it up. And certainly not in a single\n> uncommitted transaction.\n>\n> I'll try reindexing each of the tables just to make sure it's not strange\n> index imbalance or something causing the issue.\n>\n\nI seen this issue few time - and reindex helps.\n\nPavel\n\n\n> Regards,\n>\n> David\n>\n> On 4/11/19, 4:01 pm, \"Tom Lane\" <[email protected]> wrote:\n>\n> David Wheeler <[email protected]> writes:\n> > We’re having trouble working out why the planning time for this\n> > particular query is slow (~2.5s vs 0.9ms execution time). As you can\n> see\n> > below, there are only 3 tables involved so it’s hard to imagine what\n> > decisions the planner has to make that take so long.\n>\n> I wonder whether this traces to the cost of trying to estimate the\n> largest/smallest value of an indexed column by looking into the index.\n> Normally that's pretty cheap, but if you have a lot of\n> recently-inserted\n> or recently-deleted values at the end of the index, it can get painful.\n> AFAIR this only happens for columns that are equijoin keys, so the fact\n> that your query is a join is significant.\n>\n> I'm not convinced that this is the problem, because it's a corner case\n> that few people hit. To see this issue, you have to have recently\n> inserted or deleted a bunch of extremal values of the indexed join-key\n> column. And the problem only persists until those values become known\n> committed-good, or known dead-to-everybody. (Maybe you've got a\n> long-running transaction somewhere, postponing the dead-to-everybody\n> condition?)\n>\n> > Postgres version 9.5.19\n>\n> If this *is* the cause, v11 and up have a performance improvement that\n> you need:\n>\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3ca930fc3\n>\n> regards, tom lane\n>\n>\n>\n\npo 4. 11. 2019 v 6:17 odesílatel David Wheeler <[email protected]> napsal:> To see this issue, you have to have recently\n> inserted or deleted a bunch of extremal values of the indexed join-key\n> column. And the problem only persists until those values become known\n> committed-good, or known dead-to-everybody. (Maybe you've got a\n> long-running transaction somewhere, postponing the dead-to-everybody\n> condition?)\n\nThere are no long-running transactions that have backend_xmin set in pg_stat_activity, if that's what you mean here. There are also no open prepared transactions or replication slots which I understand have a similar keeping-things-alive issue. 
\n\nThese tables are biggish (hundreds of mb), but not changing so frequently that I'd expect large quantities of data to be inserted or deleted before autovac can get in there and clean it up. And certainly not in a single uncommitted transaction. \n\nI'll try reindexing each of the tables just to make sure it's not strange index imbalance or something causing the issue. I seen this issue few time - and reindex helps.Pavel\n\nRegards, \n\nDavid\n\nOn 4/11/19, 4:01 pm, \"Tom Lane\" <[email protected]> wrote:\n\n David Wheeler <[email protected]> writes:\n > We’re having trouble working out why the planning time for this\n > particular query is slow (~2.5s vs 0.9ms execution time). As you can see\n > below, there are only 3 tables involved so it’s hard to imagine what\n > decisions the planner has to make that take so long.\n\n I wonder whether this traces to the cost of trying to estimate the\n largest/smallest value of an indexed column by looking into the index.\n Normally that's pretty cheap, but if you have a lot of recently-inserted\n or recently-deleted values at the end of the index, it can get painful.\n AFAIR this only happens for columns that are equijoin keys, so the fact\n that your query is a join is significant.\n\n I'm not convinced that this is the problem, because it's a corner case\n that few people hit. To see this issue, you have to have recently\n inserted or deleted a bunch of extremal values of the indexed join-key\n column. And the problem only persists until those values become known\n committed-good, or known dead-to-everybody. (Maybe you've got a\n long-running transaction somewhere, postponing the dead-to-everybody\n condition?)\n\n > Postgres version 9.5.19\n\n If this *is* the cause, v11 and up have a performance improvement that\n you need:\n\n https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3ca930fc3\n\n regards, tom lane",
"msg_date": "Mon, 4 Nov 2019 06:53:16 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": ">> I'll try reindexing each of the tables just to make sure it's not strange index imbalance or something causing the issue.\r\n> I seen this issue few time - and reindex helps.\r\n\r\nAlas our reindex doesn’t seem to have helped. I’m going to see if we can reproduce this on a non-prod environment so we can muck about a bit more. If we can reproduce it in a safe place, is there a tool we can use to get more info out of the query planner to find what it’s doing to take so long?\r\n\r\nRegards,\r\n\r\nDavid\r\n\r\nFrom: Pavel Stehule <[email protected]>\r\nDate: Monday, 4 November 2019 at 4:53 pm\r\nTo: David Wheeler <[email protected]>\r\nCc: Tom Lane <[email protected]>, \"[email protected]\" <[email protected]>, Cameron Redpath <[email protected]>\r\nSubject: Re: Slow planning, fast execution for particular 3-table query\r\n\r\n\r\n\r\npo 4. 11. 2019 v 6:17 odesílatel David Wheeler <[email protected]<mailto:[email protected]>> napsal:\r\n> To see this issue, you have to have recently\r\n> inserted or deleted a bunch of extremal values of the indexed join-key\r\n> column. And the problem only persists until those values become known\r\n> committed-good, or known dead-to-everybody. (Maybe you've got a\r\n> long-running transaction somewhere, postponing the dead-to-everybody\r\n> condition?)\r\n\r\nThere are no long-running transactions that have backend_xmin set in pg_stat_activity, if that's what you mean here. There are also no open prepared transactions or replication slots which I understand have a similar keeping-things-alive issue.\r\n\r\nThese tables are biggish (hundreds of mb), but not changing so frequently that I'd expect large quantities of data to be inserted or deleted before autovac can get in there and clean it up. And certainly not in a single uncommitted transaction.\r\n\r\nI'll try reindexing each of the tables just to make sure it's not strange index imbalance or something causing the issue.\r\n\r\nI seen this issue few time - and reindex helps.\r\n\r\nPavel\r\n\r\n\r\nRegards,\r\n\r\nDavid\r\n\r\nOn 4/11/19, 4:01 pm, \"Tom Lane\" <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n David Wheeler <[email protected]<mailto:[email protected]>> writes:\r\n > We’re having trouble working out why the planning time for this\r\n > particular query is slow (~2.5s vs 0.9ms execution time). As you can see\r\n > below, there are only 3 tables involved so it’s hard to imagine what\r\n > decisions the planner has to make that take so long.\r\n\r\n I wonder whether this traces to the cost of trying to estimate the\r\n largest/smallest value of an indexed column by looking into the index.\r\n Normally that's pretty cheap, but if you have a lot of recently-inserted\r\n or recently-deleted values at the end of the index, it can get painful.\r\n AFAIR this only happens for columns that are equijoin keys, so the fact\r\n that your query is a join is significant.\r\n\r\n I'm not convinced that this is the problem, because it's a corner case\r\n that few people hit. To see this issue, you have to have recently\r\n inserted or deleted a bunch of extremal values of the indexed join-key\r\n column. And the problem only persists until those values become known\r\n committed-good, or known dead-to-everybody. 
(Maybe you've got a\r\n long-running transaction somewhere, postponing the dead-to-everybody\r\n condition?)\r\n\r\n > Postgres version 9.5.19\r\n\r\n If this *is* the cause, v11 and up have a performance improvement that\r\n you need:\r\n\r\n https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3ca930fc3\r\n\r\n regards, tom lane\r\n\r\n",
"msg_date": "Wed, 6 Nov 2019 22:46:25 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": "Is default_statistics_target set above default 100? I would assume that\nwould reflect in the size of pg_statistic, but wanted to ask since\nincreasing that from 100 to 1000 was the only time I have seen planning\ntime explode. Are other queries slow to plan?\n\nIs default_statistics_target set above default 100? I would assume that would reflect in the size of pg_statistic, but wanted to ask since increasing that from 100 to 1000 was the only time I have seen planning time explode. Are other queries slow to plan?",
"msg_date": "Wed, 6 Nov 2019 15:56:50 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
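A quick way to check both the cluster-wide setting Michael is asking about and any per-column overrides (a sketch, not something posted in the thread; in pg_attribute, attstattarget = -1 means the column simply uses default_statistics_target):

SHOW default_statistics_target;

SELECT attrelid::regclass AS table_name, attname, attstattarget
FROM pg_attribute
WHERE attstattarget > 0
ORDER BY attstattarget DESC;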
{
"msg_contents": "On Mon, Nov 04, 2019 at 03:04:45AM +0000, David Wheeler wrote:\n> Postgres version 9.5.19\n> Each of the tables has between 3-4 indexes, and all the indexes include tid as first parameter.\n\nOn Mon, Nov 04, 2019 at 12:00:59AM -0500, Tom Lane wrote:\n> If this *is* the cause, v11 and up have a performance improvement that\n> you need:\n\nBut note that index definition will be prohibited since:\n\nhttps://www.postgresql.org/docs/9.6/release-9-6.html\n|Disallow creation of indexes on system columns, except for OID columns (David Rowley)\n|Such indexes were never considered supported, and would very possibly misbehave since the system might change the system-column fields of a tuple without updating indexes. However, previously there were no error checks to prevent them from being created.\n\nJustin\n\n\n",
"msg_date": "Wed, 6 Nov 2019 16:59:45 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": "Is default_statistics_target set above default 100? I would assume that would reflect in the size of pg_statistic, but wanted to ask since increasing that from 100 to 1000 was the only time I have seen planning time explode. Are other queries slow to plan?\r\n\r\nLooks like you’ve found it! Someone has set the target to 10k so that’s going to wildly increase planning time.\r\n\r\nThanks for your help! And thanks to the others who chipped in along the way 😊\r\n\r\nRegards,\r\n\r\nDavid\r\n\r\n\n\n\n\n\n\n\n\n\nIs default_statistics_target set above default 100? I would assume that would reflect in the size of pg_statistic, but wanted to ask since increasing that from 100 to 1000 was the only time I have seen planning\r\n time explode. Are other queries slow to plan?\n \nLooks like you’ve found it! Someone has set the target to 10k so that’s going to wildly increase planning time.\r\n\n \nThanks for your help! And thanks to the others who chipped in along the way\r\n😊\n \n\nRegards, \n \n\nDavid",
"msg_date": "Wed, 6 Nov 2019 23:41:57 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
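Assuming the 10k value was set cluster-wide, winding it back is roughly a one-liner plus a re-analyze (a sketch; 100 is just the shipped default, and a per-database or per-role setting would instead be reset with ALTER DATABASE ... RESET or ALTER ROLE ... RESET):

ALTER SYSTEM SET default_statistics_target = 100;
SELECT pg_reload_conf();
ANALYZE;   -- rebuild pg_statistic at the smaller target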
{
"msg_contents": "On Thu, 7 Nov 2019 at 11:59, Justin Pryzby <[email protected]> wrote:\n>\n> On Mon, Nov 04, 2019 at 03:04:45AM +0000, David Wheeler wrote:\n> > Postgres version 9.5.19\n> > Each of the tables has between 3-4 indexes, and all the indexes include tid as first parameter.\n\n\n> But note that index definition will be prohibited since:\n>\n> https://www.postgresql.org/docs/9.6/release-9-6.html\n> |Disallow creation of indexes on system columns, except for OID columns (David Rowley)\n> |Such indexes were never considered supported, and would very possibly misbehave since the system might change the system-column fields of a tuple without updating indexes. However, previously there were no error checks to prevent them from being created.\n\nDavid will have meant the user column named \"tid\" rather than the\nsystem column named \"ctid\".\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 13:15:30 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
},
{
"msg_contents": "On Thu, Nov 07, 2019 at 01:15:30PM +1300, David Rowley wrote:\n> On Thu, 7 Nov 2019 at 11:59, Justin Pryzby <[email protected]> wrote:\n> >\n> > On Mon, Nov 04, 2019 at 03:04:45AM +0000, David Wheeler wrote:\n> > > Postgres version 9.5.19\n> > > Each of the tables has between 3-4 indexes, and all the indexes include tid as first parameter.\n> \n> > But note that index definition will be prohibited since:\n> >\n> > https://www.postgresql.org/docs/9.6/release-9-6.html\n> > |Disallow creation of indexes on system columns, except for OID columns (David Rowley)\n> > |Such indexes were never considered supported, and would very possibly misbehave since the system might change the system-column fields of a tuple without updating indexes. However, previously there were no error checks to prevent them from being created.\n> \n> David will have meant the user column named \"tid\" rather than the\n> system column named \"ctid\".\n\nAh. And David must have meant David W :)\n\nJustin \n\n\n",
"msg_date": "Wed, 6 Nov 2019 18:18:38 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow planning, fast execution for particular 3-table query"
}
] |
[
{
"msg_contents": "Hello all,\r\n\r\nWe are trying to debug some slow performance in our production environment (Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN node that seems… weird. This is a very large query involving a number of joins, but it performs pretty well in our staging environment (which has roughly the same data set as production, with a few tweaks). However, there is one node in the EXPLAIN plan that is wildly different:\r\n\r\nIn the staging environment, we get this:\r\n\r\nIndex Scan using \"programPK\" on public.program prog (cost=0.29..0.35 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=21965)\r\n Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\r\n Index Cond: (prog.id = per.program)\r\n Buffers: shared hit=87860\r\n\r\nIn the production environment, we get this:\r\n\r\nIndex Scan using \"programPK\" on public.program prog (cost=0.29..0.36 rows=1 width=16) (actual time=0.017..4.251 rows=1 loops=21956)\r\n Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\r\n Index Cond: (prog.id = per.program)\r\n Buffers: shared hit=25437716\r\n\r\nThe tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the two environments.\r\n\r\nLastly, if we take out the join to the “program” table, the query performs much faster in production and the timing between staging and production is similar.\r\n\r\nThis has happened one time before, and we did a “REINDEX” on the program table – and that made the problem mostly go away. Now it seems to be back, and I’m not sure what to make of it.\r\n\r\nThanks in advance for any help you can offer!\r\n\r\nScott\r\n\r\nSCOTT RANKIN\r\nVP, Technology\r\nMotus, LLC\r\nTwo Financial Center, 60 South Street, Boston, MA 02111\r\n617.467.1900 (O) | [email protected]<mailto:[email protected]>\r\n\r\nFollow us on LinkedIn<https://www.linkedin.com/company/motus-llc/> | Visit us at motus.com<http://www.motus.com/>\r\n\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n",
"msg_date": "Mon, 4 Nov 2019 19:38:40 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Huge shared hit for small table"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 19:38:40 +0000, Scott Rankin wrote:\n> In the staging environment, we get this:\n> \n> Index Scan using \"programPK\" on public.program prog (cost=0.29..0.35 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=21965)\n> Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\n> Index Cond: (prog.id = per.program)\n> Buffers: shared hit=87860\n> \n> In the production environment, we get this:\n> \n> Index Scan using \"programPK\" on public.program prog (cost=0.29..0.36 rows=1 width=16) (actual time=0.017..4.251 rows=1 loops=21956)\n> Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\n> Index Cond: (prog.id = per.program)\n> Buffers: shared hit=25437716\n> \n> The tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the two environments.\n\nIt'd be worthwhile to look at the index stats using pgstatindex. Also,\ncould you show the definition of those indexes?\n\n\n> This email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\n> \n> Internal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\n\nGNGNGGRR.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Nov 2019 11:46:29 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
},
{
"msg_contents": "On Mon, Nov 04, 2019 at 07:38:40PM +0000, Scott Rankin wrote:\n> In the staging environment, we get this:\n> \n> Index Scan using \"programPK\" on public.program prog (cost=0.29..0.35 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=21965)\n> Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\n> Index Cond: (prog.id = per.program)\n> Buffers: shared hit=87860\n> \n> In the production environment, we get this:\n> \n> Index Scan using \"programPK\" on public.program prog (cost=0.29..0.36 rows=1 width=16) (actual time=0.017..4.251 rows=1 loops=21956)\n> Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\n> Index Cond: (prog.id = per.program)\n> Buffers: shared hit=25437716\n> \n> The tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the two environments.\n\nI think it's because some heap pages are being visited many times, due to the\nindex tuples being badly \"fragmented\". Note, I'm not talking about\nfragmentation of index *pages*, which is what pgstattuple reports (which\nwouldn't have nearly so detrimental effect). I could probably say that the\nindex tuples are badly \"correlated\" with the heap.\n\nI'm guessing there are perhaps 25437716/87860 = 290 index tuples per page, and\nthey rarely point to same heap page as their siblings. \"Hit\" means that this\naffects you even though it's cached (by postgres). So this is apparently slow\ndue to reading each page ~300 times rather than once to get its tuples all at\nonce.\n\n> This has happened one time before, and we did a “REINDEX” on the program table – and that made the problem mostly go away. Now it seems to be back, and I’m not sure what to make of it.\n\n..which is consistent with my hypothesis.\n\nYou can use pg_repack or CREATE INDEX+DROP+RENAME hack (which is what pg_repack\n-i does). In a fresh index, its tuples are sorted by heap TID. You could\nCLUSTER the table itself (or pg_repack -t) on that index column.\n\nIn PG v12 you can use REINDEX CONCURRENTLY (but beware there's a crash\naffecting its progress reporting, fix to be included in v12.1).\n\nJustin\n\n\n",
"msg_date": "Mon, 4 Nov 2019 13:56:49 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
},
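A sketch of the CREATE INDEX + DROP + RENAME swap Justin describes, written against the "programPK" index shown later in the thread; the _new name is made up here, and this only applies if the index is a plain unique index rather than one backing a PRIMARY KEY/UNIQUE constraint (in that case pg_repack, or ALTER TABLE ... ADD CONSTRAINT ... USING INDEX, would be needed instead):

-- build a fresh copy of the index without blocking writes
CREATE UNIQUE INDEX CONCURRENTLY "programPK_new" ON program (id);

-- swap it in; this takes a brief exclusive lock on the table
BEGIN;
DROP INDEX "programPK";
ALTER INDEX "programPK_new" RENAME TO "programPK";
COMMIT;

-- alternatively, rewrite the heap itself in index order (locks the table while it runs)
CLUSTER program USING "programPK";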
{
"msg_contents": "The index is exceedingly simple:\r\n\r\n\r\nCREATE UNIQUE INDEX \"programPK\" ON program(id int8_ops);\r\n\r\nFrom pg_stat_user_indexes:\r\n\r\nStaging:\r\n\r\nidx_scan: 5826745\r\nidx_tup_read: 52715470\r\nidx_tup_fetch: 52644465\r\n\r\nProduction:\r\n\r\nidx_scan : 7277919087\r\nidx_tup_read: 90612605047\r\nidx_tup_fetch: 5207807880\r\n\r\nFrom: Andres Freund <[email protected]>\r\nDate: Monday, November 4, 2019 at 2:46 PM\r\nTo: Scott Rankin <[email protected]>\r\nCc: \"[email protected]\" <[email protected]>\r\nSubject: Re: Huge shared hit for small table\r\n\r\nHi,\r\n\r\nOn 2019-11-04 19:38:40 +0000, Scott Rankin wrote:\r\n> In the staging environment, we get this:\r\n>\r\n> Index Scan using \"programPK\" on public.program prog (cost=0.29..0.35 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=21965)\r\n> Output: prog.id<http://prog.id>, prog.version, prog.active<http://prog.active>, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name<http://prog.name>, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\r\n> Index Cond: (prog.id<http://prog.id> = per.program)\r\n> Buffers: shared hit=87860\r\n>\r\n> In the production environment, we get this:\r\n>\r\n> Index Scan using \"programPK\" on public.program prog (cost=0.29..0.36 rows=1 width=16) (actual time=0.017..4.251 rows=1 loops=21956)\r\n> Output: prog.id<http://prog.id>, prog.version, prog.active<http://prog.active>, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name<http://prog.name>, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type\r\n> Index Cond: (prog.id<http://prog.id> = per.program)\r\n> Buffers: shared hit=25437716\r\n>\r\n> The tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the two environments.\r\n\r\nIt'd be worthwhile to look at the index stats using pgstatindex. Also,\r\ncould you show the definition of those indexes?\r\n\r\n\r\n> This email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n>\r\n> Internal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\r\nGNGNGGRR.\r\n\r\nGreetings,\r\n\r\nAndres Freund\r\n",
"msg_date": "Mon, 4 Nov 2019 19:56:57 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge shared hit for small table"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 11:56 AM Justin Pryzby <[email protected]> wrote:\n> I think it's because some heap pages are being visited many times, due to the\n> index tuples being badly \"fragmented\". Note, I'm not talking about\n> fragmentation of index *pages*, which is what pgstattuple reports (which\n> wouldn't have nearly so detrimental effect). I could probably say that the\n> index tuples are badly \"correlated\" with the heap.\n\nBut this is a unique index, and Scott indicates that the problem seems\nto go away for a while following a REINDEX.\n\n> In PG v12 you can use REINDEX CONCURRENTLY (but beware there's a crash\n> affecting its progress reporting, fix to be included in v12.1).\n\nPG v12 will store B-Tree duplicates in heap TID order, so if that's\nthe problem then upgrading to v12 (and REINDEXing if the upgrade was\nperformed using pg_upgrade) will fix it for good.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Nov 2019 12:07:27 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 19:56:57 +0000, Scott Rankin wrote:\n> The index is exceedingly simple:\n>\n>\n> CREATE UNIQUE INDEX \"programPK\" ON program(id int8_ops);\n>\n> From pg_stat_user_indexes:\n>\n> Staging:\n>\n> idx_scan: 5826745\n> idx_tup_read: 52715470\n> idx_tup_fetch: 52644465\n>\n> Production:\n>\n> idx_scan : 7277919087\n> idx_tup_read: 90612605047\n> idx_tup_fetch: 5207807880\n\nI was basically asking for SELECT * FROM pgstatindex('pgstatindex');\nwith pgstatindex being from the pgstattuple extension\nhttps://www.postgresql.org/docs/current/pgstattuple.html\nnot the pg_stat_user_indexes entry...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Nov 2019 12:17:28 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
},
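For anyone following along, pgstatindex comes from the contrib extension pgstattuple, and its argument is the index to inspect; for this thread's index the call would look roughly like this:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstatindex('"programPK"');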
{
"msg_contents": "Thanks to Justin for the clarification around pgstatindex:\r\n\r\nStaging:\r\n\r\nversion2\r\ntree_level1\r\nindex_size425984\r\nroot_block_no3\r\ninternal_pages1\r\nleaf_pages50\r\nempty_pages0\r\ndeleted_pages0\r\navg_leaf_density70.86\r\nleaf_fragmentation16\r\n\r\nProduction:\r\n\r\nversion2\r\ntree_level1\r\nindex_size360448\r\nroot_block_no3\r\ninternal_pages1\r\nleaf_pages41\r\nempty_pages0\r\ndeleted_pages1\r\navg_leaf_density60.44\r\nleaf_fragmentation39.02\r\n\r\nOn 11/4/19, 3:07 PM, \"Peter Geoghegan\" <[email protected]> wrote:\r\n\r\n On Mon, Nov 4, 2019 at 11:56 AM Justin Pryzby <[email protected]> wrote:\r\n > I think it's because some heap pages are being visited many times, due to the\r\n > index tuples being badly \"fragmented\". Note, I'm not talking about\r\n > fragmentation of index *pages*, which is what pgstattuple reports (which\r\n > wouldn't have nearly so detrimental effect). I could probably say that the\r\n > index tuples are badly \"correlated\" with the heap.\r\n\r\n But this is a unique index, and Scott indicates that the problem seems\r\n to go away for a while following a REINDEX.\r\n\r\n > In PG v12 you can use REINDEX CONCURRENTLY (but beware there's a crash\r\n > affecting its progress reporting, fix to be included in v12.1).\r\n\r\n PG v12 will store B-Tree duplicates in heap TID order, so if that's\r\n the problem then upgrading to v12 (and REINDEXing if the upgrade was\r\n performed using pg_upgrade) will fix it for good.\r\n\r\n --\r\n Peter Geoghegan\r\n\r\n\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n",
"msg_date": "Mon, 4 Nov 2019 20:18:03 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge shared hit for small table"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 2:38 PM Scott Rankin <[email protected]> wrote:\n\n> Hello all,\n>\n>\n>\n> We are trying to debug some slow performance in our production environment\n> (Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN\n> node that seems… weird. This is a very large query involving a number of\n> joins, but it performs pretty well in our staging environment (which has\n> roughly the same data set as production, with a few tweaks). However,\n> there is one node in the EXPLAIN plan that is wildly different:\n>\n\nCould there be a long-open transaction, which is preventing hint-bits from\ngetting on set on the table rows, as well on the index rows?\n\n...\n\n\n> The tables in both environments are about the same size (18MB) and the\n> indexes are about the same size (360kb/410kb) – and the shared hits are\n> pretty much the same on the other nodes of the query between the two\n> environments.\n>\n\nIf this table has more turn-over than those other tables (as measured in\nrows, not in percentage of the table), this would not be inconsistent with\nmy theory.\n\n\n> This has happened one time before, and we did a “REINDEX” on the program\n> table – and that made the problem mostly go away. Now it seems to be back,\n> and I’m not sure what to make of it.\n>\n\n\nA reindex would not by itself fix the problem if it were the long open\ntransaction. But if the long open transaction held a sufficient lock on\nthe table, then the reindex would block until the transaction went away on\nits own, at which point the problem would go away on its own, so it might\n**appear** to have fixed the problem.\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, Nov 4, 2019 at 2:38 PM Scott Rankin <[email protected]> wrote:\n\n\nHello all,\n \nWe are trying to debug some slow performance in our production environment (Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN node that seems… weird. This is a very large query involving\n a number of joins, but it performs pretty well in our staging environment (which has roughly the same data set as production, with a few tweaks). However, there is one node in the EXPLAIN plan that is wildly different:Could there be a long-open transaction, which is preventing hint-bits from getting on set on the table rows, as well on the index rows?... The tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the\n two environments.If this table has more turn-over than those other tables (as measured in rows, not in percentage of the table), this would not be inconsistent with my theory. This has happened one time before, and we did a “REINDEX” on the program table – and that made the problem mostly go away. Now it seems to be back, and I’m not sure what to make of it.A reindex would not by itself fix the problem if it were the long open transaction. But if the long open transaction held a sufficient lock on the table, then the reindex would block until the transaction went away on its own, at which point the problem would go away on its own, so it might **appear** to have fixed the problem. Cheers,Jeff",
"msg_date": "Mon, 4 Nov 2019 15:31:52 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
},
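A quick way to test the long-open-transaction theory is to look at transaction age in pg_stat_activity. A sketch (works on 9.6 and later; adjust the filters and LIMIT as needed):

    SELECT pid, usename, state, xact_start,
           now() - xact_start AS xact_age,
           left(query, 60)    AS current_query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 10;

Anything near the top with a very old xact_start, especially in state 'idle in transaction', is a candidate for the behaviour Jeff describes.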
{
"msg_contents": "On Mon, Nov 4, 2019 at 12:32 PM Jeff Janes <[email protected]> wrote:\n> Could there be a long-open transaction, which is preventing hint-bits from getting on set on the table rows, as well on the index rows?\n\nContention on a small number of rows may also be a factor.\n\n> A reindex would not by itself fix the problem if it were the long open transaction. But if the long open transaction held a sufficient lock on the table, then the reindex would block until the transaction went away on its own, at which point the problem would go away on its own, so it might **appear** to have fixed the problem.\n\nThat seems like the simplest and most likely explanation to me, even\nthough it isn't particularly simple.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Nov 2019 12:34:04 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
},
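If the REINDEX only appeared to help because it sat waiting on a lock held by such a transaction, that wait would have been visible at the time. A hedged sketch for spotting blocked backends, using pg_blocking_pids() (available from 9.6):

    SELECT pid,
           pg_blocking_pids(pid) AS blocked_by,
           wait_event_type, wait_event, state,
           left(query, 60) AS query
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;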
{
"msg_contents": "Definitely no long-running transactions on this table; in fact, this table is pretty infrequently updated – on the order of a few tens of rows updated per day.\r\n\r\nFrom: Jeff Janes <[email protected]>\r\nDate: Monday, November 4, 2019 at 3:32 PM\r\nTo: Scott Rankin <[email protected]>\r\nCc: \"[email protected]\" <[email protected]>\r\nSubject: Re: Huge shared hit for small table\r\n\r\nOn Mon, Nov 4, 2019 at 2:38 PM Scott Rankin <[email protected]<mailto:[email protected]>> wrote:\r\nHello all,\r\n\r\nWe are trying to debug some slow performance in our production environment (Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN node that seems… weird. This is a very large query involving a number of joins, but it performs pretty well in our staging environment (which has roughly the same data set as production, with a few tweaks). However, there is one node in the EXPLAIN plan that is wildly different:\r\n\r\nCould there be a long-open transaction, which is preventing hint-bits from getting on set on the table rows, as well on the index rows?\r\n\r\n...\r\n\r\nThe tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the two environments.\r\n\r\nIf this table has more turn-over than those other tables (as measured in rows, not in percentage of the table), this would not be inconsistent with my theory.\r\n\r\nThis has happened one time before, and we did a “REINDEX” on the program table – and that made the problem mostly go away. Now it seems to be back, and I’m not sure what to make of it.\r\n\r\n\r\nA reindex would not by itself fix the problem if it were the long open transaction. But if the long open transaction held a sufficient lock on the table, then the reindex would block until the transaction went away on its own, at which point the problem would go away on its own, so it might **appear** to have fixed the problem.\r\n\r\nCheers,\r\n\r\nJeff\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n\n\n\n\n\n\n\n\n\nDefinitely no long-running transactions on this table; in fact, this table is pretty infrequently updated – on the order of a few tens of rows updated per day. \r\n\n \n\nFrom: Jeff Janes <[email protected]>\nDate: Monday, November 4, 2019 at 3:32 PM\nTo: Scott Rankin <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSubject: Re: Huge shared hit for small table\n\n\n \n\n\n\nOn Mon, Nov 4, 2019 at 2:38 PM Scott Rankin <[email protected]> wrote:\n\n\n\n\n\n\nHello all,\n \nWe are trying to debug some slow performance in our production environment (Amazon RDS, Postgresql 9.6.11), and we’re looking at a particular EXPLAIN node that seems… weird. This\r\n is a very large query involving a number of joins, but it performs pretty well in our staging environment (which has roughly the same data set as production, with a few tweaks). However, there is one node in the EXPLAIN plan that is wildly different:\n\n\n\n\n \n\n\nCould there be a long-open transaction, which is preventing hint-bits from getting on set on the table rows, as well on the index rows?\n\n\n \n\n\n...\n\n\n \n\n\n\n\nThe tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other\r\n nodes of the query between the two environments.\n\n\n\n\n \n\n\nIf this table has more turn-over than those other tables (as measured in rows, not in percentage of the table), this would not be inconsistent with my theory.\n\n\n \n\n\n\n\nThis has happened one time before, and we did a “REINDEX” on the program table – and that made the problem mostly go away. Now it seems to be back, and I’m not sure what to make\r\n of it.\n\n\n\n\n \n\n\n \n\n\nA reindex would not by itself fix the problem if it were the long open transaction. But if the long open transaction held a sufficient lock on the table, then the reindex would block until the transaction went away on its own, at which\r\n point the problem would go away on its own, so it might **appear** to have fixed the problem. \n\n\n \n\n\nCheers,\n\n\n \n\n\nJeff\n\n\n\n\n\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not\r\n be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the\r\n intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication\r\n in error, please notify sender immediately and destroy the original message.\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not\r\n intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.",
"msg_date": "Mon, 4 Nov 2019 20:38:36 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge shared hit for small table"
},
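The update volume Scott describes can be checked directly from the statistics views rather than estimated. A sketch, assuming the table in question is the program table mentioned earlier in the thread:

    SELECT relname, n_tup_upd, n_tup_hot_upd, n_dead_tup,
           last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'program';

A low n_tup_upd combined with a surprisingly high n_dead_tup would point back at something preventing cleanup.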
{
"msg_contents": "On Mon, Nov 4, 2019 at 12:38 PM Scott Rankin <[email protected]> wrote:\n> Definitely no long-running transactions on this table; in fact, this table is pretty infrequently updated – on the order of a few tens of rows updated per day.\n\nBut a long running transaction will have an impact on all tables --\nnot just the tables that happen to have been accessed so far in the\nlong running transaction. This is necessary because nothing stops the\nlong running transaction from SELECTing data from any table at any\ntime -- we need to pessimistically keep around the data required to\nmake that work.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Nov 2019 12:40:46 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
},
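The horizon Peter is referring to is each backend's xmin; the oldest one holds back cleanup cluster-wide. A sketch for finding the backend pinning the horizon:

    SELECT pid, datname, state, xact_start,
           backend_xmin, age(backend_xmin) AS xmin_age
    FROM pg_stat_activity
    WHERE backend_xmin IS NOT NULL
    ORDER BY age(backend_xmin) DESC
    LIMIT 5;

The session at the top of this list is the one forcing every table, including rarely-updated ones, to keep dead row versions around.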
{
"msg_contents": "I think we have a winner. I looked in and found a process that was 'idle in transaction' for a couple days - and once I killed it, query performance went back to normal.\r\n\r\nThank you all for the very quick responses on this.\r\n\r\nOn 11/4/19, 3:41 PM, \"Peter Geoghegan\" <[email protected]> wrote:\r\n\r\n On Mon, Nov 4, 2019 at 12:38 PM Scott Rankin <[email protected]> wrote:\r\n > Definitely no long-running transactions on this table; in fact, this table is pretty infrequently updated – on the order of a few tens of rows updated per day.\r\n\r\n But a long running transaction will have an impact on all tables --\r\n not just the tables that happen to have been accessed so far in the\r\n long running transaction. This is necessary because nothing stops the\r\n long running transaction from SELECTing data from any table at any\r\n time -- we need to pessimistically keep around the data required to\r\n make that work.\r\n\r\n --\r\n Peter Geoghegan\r\n\r\n\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n",
"msg_date": "Mon, 4 Nov 2019 21:00:15 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge shared hit for small table"
},
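For anyone hitting the same situation, a hedged sketch of the cleanup and prevention steps (the pid and timeout value are illustrative only; on Amazon RDS the setting is changed through the parameter group rather than ALTER SYSTEM):

    -- terminate the offending backend, pid taken from pg_stat_activity
    SELECT pg_terminate_backend(12345);

    -- optionally have the server end sessions that sit idle inside a transaction
    ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
    SELECT pg_reload_conf();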
{
"msg_contents": "On Mon, Nov 4, 2019 at 3:38 PM Scott Rankin <[email protected]> wrote:\n\n> Definitely no long-running transactions on this table;\n>\n\nAny long running transactions at all? The lock on the table is only\nnecessary to explain why the problem would have gone away at the same time\nas the reindex finished. If there is a long running transaction which\ndoesn't touch this table, it would still cause the problem. It is just that\nthe reindinex would not solve the problem (because the\nnot-entirely-dead-yet tuples would have to be copied into the new index),\nand with no lock there is no reason for them to be correlated in time,\nother than sheer dumb luck.\n\nDoes another reindex solve the problem again?\n\n> in fact, this table is pretty infrequently updated – on the order of a\nfew tens of rows updated per day.\n\nThat would seem to argue against this explanations, but all the others ones\ntoo I think. But a few tens of rows per day and a transaction left open\nfor a few tens of days, and you could get enough zombie tuples to add up to\ntrouble. Particularly if there is one row (as defined by prog.id) which is\nseeing both most of those updates, an most of the index-scan activity.\n\nBut now I am curious, if it is a small table and the index scan is going to\nbe invoked 21,956 times in one query, it seems like it should hash it\ninstead. Does it misestimate how often that index scan is going to get\ninvoked? (assuming the index scan is the 2nd child of a nested loop, what\nis the expected and actual row count of the 1st child of that loop?)\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, Nov 4, 2019 at 3:38 PM Scott Rankin <[email protected]> wrote:\n\n\nDefinitely no long-running transactions on this table; Any long running transactions at all? The lock on the table is only necessary to explain why the problem would have gone away at the same time as the reindex finished. If there is a long running transaction which doesn't touch this table, it would still cause the problem. It is just that the reindinex would not solve the problem (because the not-entirely-dead-yet tuples would have to be copied into the new index), and with no lock there is no reason for them to be correlated in time, other than sheer dumb luck.Does another reindex solve the problem again?> in fact, this table is pretty infrequently updated – on the order of a few tens of rows updated per day. That would seem to argue against this explanations, but all the others ones too I think. But a few tens of rows per day and a transaction left open for a few tens of days, and you could get enough zombie tuples to add up to trouble. Particularly if there is one row (as defined by prog.id) which is seeing both most of those updates, an most of the index-scan activity.But now I am curious, if it is a small table and the index scan is going to be invoked 21,956 times in one query, it seems like it should hash it instead. Does it misestimate how often that index scan is going to get invoked? (assuming the index scan is the 2nd child of a nested loop, what is the expected and actual row count of the 1st child of that loop?)Cheers,Jeff",
"msg_date": "Mon, 4 Nov 2019 16:05:12 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge shared hit for small table"
}
] |
[
{
"msg_contents": "The time has come.\n\nFPGA optimization is in the palm of our hands (literally a 2 TB 40 GB/s \nIO PostgreSQL server fits into less than a shoe box), and on Amazon AWS \nF1 instances.\n\nSome demos are beginning to exist: https://github.com/Xilinx/data-analytics.\n<https://github.com/Xilinx/data-analytics>\n\nBut a lot more could be done. How about linear sort performance at O(N)? \nhttps://hackaday.com/2016/01/20/a-linear-time-sorting-algorithm-for-fpgas/. \nAnd how about https://people.csail.mit.edu/wjun/papers/fccm2017.pdf, \nthe following four sorting accelerators are used:\n\n * Tuple Sorter : Sorts an N-tuple using a sorting network.\n * Page Sorter : Sorts an 8KB (a flash page) chunk of sorted N-tuples\n in on-chip memory.\n * Super-Page Sorter : Sorts 16 8K-32MB sorted chunks in DRAM.\n * Storage-to-Storage Sorter: Sorts 16 512MB or larger sorted chunks in\n flash.\n\nOrder of magnitude speed improvements? Better than Hadoop clusters on a \nsingle chip? 40 GB/s I/O throughput massive full table scan, blazing \nfast sort-merge joins? Here it is. Anybody working more on that? Should \nbe an ideal project for a student or a group of students.\n\nIs there a PostgreSQL foundation I could donate to, 501(c)(3) tax \nexempt? I can donate and possibly find some people at Purdue University \nwho might take this on. Interest?\n\nregards,\n-Gunther\n\n\n\n\n\n\n\nThe time has come. \n\nFPGA optimization is in the palm of our hands (literally a 2 TB\n 40 GB/s IO PostgreSQL server fits into less than a shoe box), and\n on Amazon AWS F1 instances.\nSome demos are beginning to exist: https://github.com/Xilinx/data-analytics.\n\nBut a lot more could be done. How about linear sort performance\n at O(N)? https://hackaday.com/2016/01/20/a-linear-time-sorting-algorithm-for-fpgas/.\n And how about https://people.csail.mit.edu/wjun/papers/fccm2017.pdf,\n the following four sorting accelerators are used:\n\n\nTuple Sorter : Sorts an N-tuple using a sorting network.\nPage Sorter : Sorts an 8KB (a flash page) chunk of sorted\n N-tuples in on-chip memory.\nSuper-Page Sorter : Sorts 16 8K-32MB sorted chunks in DRAM.\nStorage-to-Storage Sorter: Sorts 16 512MB or larger sorted\n chunks in flash.\n\nOrder of magnitude speed improvements? Better than Hadoop\n clusters on a single chip? 40 GB/s I/O throughput massive full\n table scan, blazing fast sort-merge joins? Here it is. Anybody\n working more on that? Should be an ideal project for a student or\n a group of students.\nIs there a PostgreSQL foundation I could donate to, 501(c)(3) tax\n exempt? I can donate and possibly find some people at Purdue\n University who might take this on. Interest?\n\nregards,\n -Gunther",
"msg_date": "Mon, 4 Nov 2019 18:33:15 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "FPGA optimization ..."
},
{
"msg_contents": "On Mon, Nov 04, 2019 at 06:33:15PM -0500, Gunther wrote:\n>The time has come.\n>\n>FPGA optimization is in the palm of our hands (literally a 2 TB 40 \n>GB/s IO PostgreSQL server fits into less than a shoe box), and on \n>Amazon AWS F1 instances.\n>\n>Some demos are beginning to exist: https://github.com/Xilinx/data-analytics.\n><https://github.com/Xilinx/data-analytics>\n>\n>But a lot more could be done. How about linear sort performance at \n>O(N)? https://hackaday.com/2016/01/20/a-linear-time-sorting-algorithm-for-fpgas/. \n>And how about https://people.csail.mit.edu/wjun/papers/fccm2017.pdf, \n>the following four sorting accelerators are used:\n>\n> * Tuple Sorter : Sorts an N-tuple using a sorting network.\n> * Page Sorter : Sorts an 8KB (a flash page) chunk of sorted N-tuples\n> in on-chip memory.\n> * Super-Page Sorter : Sorts 16 8K-32MB sorted chunks in DRAM.\n> * Storage-to-Storage Sorter: Sorts 16 512MB or larger sorted chunks in\n> flash.\n>\n>Order of magnitude speed improvements? Better than Hadoop clusters on \n>a single chip? 40 GB/s I/O throughput massive full table scan, blazing \n>fast sort-merge joins? Here it is. Anybody working more on that? \n>Should be an ideal project for a student or a group of students.\n>\n\nFor the record, this is not exactly a new thing. Netezza (a PostgreSQL\nfork started in 1999 IBM) used FPGAs. Now there's swarm64 [1], another\nPostgreSQL fork, also using FPGAs with newer PostgreSQL releases.\n\nThose are proprietary forks, though. The main reason why the community\nitself is not working on this directly (at least not on pgsql-hackers)\nis exactly that it requires specialized hardware, which the devs\nprobably don't have, making development impossible, and the regular\ncustomers are not asking for it either (one of the reasons being limited\navailability of such hardware, especially for customers running in the\ncloud and not being even able to deploy custom appliances).\n\nI don't think this will change, unless the access to systems with FPGAs\nbecomes much easier (e.g. if AWS introduces such instance type).\n\n>Is there a PostgreSQL foundation I could donate to, 501(c)(3) tax \n>exempt? I can donate and possibly find some people at Purdue \n>University who might take this on. Interest?\n>\n\nI don't think there's any such non-profit, managing/funding development.\nAt least I'm not avare of it. There are various non-profits around the\nworld, but those are organizing events and local communities.\n\nI'd say the best way to do something like this is to either talk to one\nof the companies participating in PostgreSQL devopment (pgsql-hackers is\nprobably a good starting point), or - if you absolutely need to go\nthrough a non-profit - approach a university (which does not mean people\nfrom pgsql-hackers can't be involved, of course). I've been involved in\na couple of such research projects in Europe, not sure what exactly is\nthe situation/rules in US.\n\nregards\n\n[1] https://swarm64.com/netezza-replacement/\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 5 Nov 2019 01:19:15 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FPGA optimization ..."
},
{
"msg_contents": "Hi Thomas, you said:\n\n> For the record, this is not exactly a new thing. Netezza (a PostgreSQL\n> fork started in 1999 IBM) used FPGAs. Now there's swarm64 [1], another\n> PostgreSQL fork, also using FPGAs with newer PostgreSQL releases.\n\nyes, I found the swarm thing on Google, and heard about Netezza years \nago from the Indian consulting contractor that had worked on it (their \nprice point was way out of the range that made sense for the academic \nplace where I worked then).\n\nBut there is good news, better than you thought when you wrote:\n\n> Those are proprietary forks, though. The main reason why the community\n> itself is not working on this directly (at least not on pgsql-hackers)\n> is exactly that it requires specialized hardware, which the devs\n> probably don't have, making development impossible, and the regular\n> customers are not asking for it either (one of the reasons being limited\n> availability of such hardware, especially for customers running in the\n> cloud and not being even able to deploy custom appliances).\n>\n> I don't think this will change, unless the access to systems with FPGAs\n> becomes much easier (e.g. if AWS introduces such instance type).\n\nIt already has changed! Amazon F1 instances. And Xilinx has already \npackaged a demo https://aws.amazon.com/marketplace/pp/B07BVSZL51. This \ndemo appears very limited though (only for TPC-H query 6 and 12 or so).\n\nEven the hardware to hold in your hand is now much cheaper. I know a guy \nwho's marketing a board with 40 GB/s throughput. I don't have price but \nI can't imagine the board plus 1 TB disk to be much outside of US$ 2k. I \ncould sponsor that if someone wants to have a serious shot at it.\n\n>> Is there a PostgreSQL foundation I could donate to, 501(c)(3) tax \n>> exempt? I can donate and possibly find some people at Purdue \n>> University who might take this on. Interest?\n>>\n>\n> I don't think there's any such non-profit, managing/funding development.\n> At least I'm not avare of it. There are various non-profits around the\n> world, but those are organizing events and local communities.\n>\n> I'd say the best way to do something like this is to either talk to one\n> of the companies participating in PostgreSQL devopment (pgsql-hackers is\n> probably a good starting point), or - if you absolutely need to go\n> through a non-profit - approach a university (which does not mean people\n> from pgsql-hackers can't be involved, of course). I've been involved in\n> a couple of such research projects in Europe, not sure what exactly is\n> the situation/rules in US.\n\nYes, might work with a University directly. Although I will contact the \nPostgreSQL foundation in the US also.\n\nregards,\n-Gunther\n\n\n\n",
"msg_date": "Mon, 4 Nov 2019 23:52:34 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FPGA optimization ..."
},
{
"msg_contents": " From what I have read and benchmarks seen..\n\nFPGA shines for writes (and up to 3x (as opposed to 10x claim) real world\nfor queries from memory)\n\nGPU shines/outperforms FPGA for reads. There is a very recent and\ninteresting academic paper[1] on High Performance GPU B-Tree (vs lsm) and\nthe incredible performance it gets, but I 'think' it requires NVIDIA (so no\neasy/super epyc+gpu+hbm on-chip combo solution then ;) ).\n\nDoesn't both FPHGA and GPU going to require changes to executor from pull to\npush to get real benefits from them? Isnt that something Andres working on\n(pull to push)?\n\nWhat really is exciting is UPMEM (little 500mhz processors on the memory),\ncost will be little more than memory cost itself, and shows up to 20x\nperformance improvement on things like index search (from memory). C\nlibrary, claim only needs few hundred lines of code to integrate from\nmemory, but not clear to me what use cases it can also be used for than ones\nthey show benchmarks for.\n\n\n[1] https://escholarship.org/content/qt1ph2x5td/qt1ph2x5td.pdf?t=pkvkdm\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n",
"msg_date": "Wed, 6 Nov 2019 11:01:37 -0700 (MST)",
"msg_from": "AJG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FPGA optimization ..."
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 11:01:37AM -0700, AJG wrote:\n>From what I have read and benchmarks seen..\n>\n>FPGA shines for writes (and up to 3x (as opposed to 10x claim) real world\n>for queries from memory)\n>\n>GPU shines/outperforms FPGA for reads. There is a very recent and\n>interesting academic paper[1] on High Performance GPU B-Tree (vs lsm) and\n>the incredible performance it gets, but I 'think' it requires NVIDIA (so no\n>easy/super epyc+gpu+hbm on-chip combo solution then ;) ).\n>\n>Doesn't both FPHGA and GPU going to require changes to executor from pull to\n>push to get real benefits from them? Isnt that something Andres working on\n>(pull to push)?\n>\n\nI think it very much depends on how the FPA/GPU/... is used.\n\nIf we're only talking about FPGA I/O acceleration, essentially FPGA\nbetween the database and storage, it's likely possible to get that\nworking without any extensive executor changes. Essentially create an\nFPGA-aware variant of SeqScan and you're done. Or an FPGA-aware\ntuplesort, or something like that. Neither of this should require\nsignificant planner/executor changes, except for costing.\n\n>What really is exciting is UPMEM (little 500mhz processors on the memory),\n>cost will be little more than memory cost itself, and shows up to 20x\n>performance improvement on things like index search (from memory). C\n>library, claim only needs few hundred lines of code to integrate from\n>memory, but not clear to me what use cases it can also be used for than ones\n>they show benchmarks for.\n>\n\nInteresting, and perhaps interesting for in-memory databases.\n\n>\n>[1] https://escholarship.org/content/qt1ph2x5td/qt1ph2x5td.pdf?t=pkvkdm\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 6 Nov 2019 22:54:48 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FPGA optimization ..."
},
{
"msg_contents": "Hi,\n\nOn 2019-11-06 22:54:48 +0100, Tomas Vondra wrote:\n> If we're only talking about FPGA I/O acceleration, essentially FPGA\n> between the database and storage, it's likely possible to get that\n> working without any extensive executor changes. Essentially create an\n> FPGA-aware variant of SeqScan and you're done. Or an FPGA-aware\n> tuplesort, or something like that. Neither of this should require\n> significant planner/executor changes, except for costing.\n\nI doubt that that is true. For one, you either need to teach the FPGA\nto understand at least enough about the intricacies of postgres storage\nformat, to be able to make enough sense of visibility information to\nknow when it safe to look at a tuple (you can't evaluate qual's before\nvisibility information). It also needs to be fed a lot of information\nabout the layout of the table, involved operators etc. And even if you\ndefine those away somehow, you still need to make sure that the on-disk\nstate is coherent with the in-memory state - which definitely requires\nreaching outside of just a replacement seqscan node.\n\nI've a hard time believing that, even though some storage vendors are\npushing this model heavily, the approach of performing qual evaluation\non the storage level is actually useful for anything close to a general\npurpose database, especially a row store.\n\nIt's more realistic to have a model where the fpga is fed pre-processed\ndata, and it streams out the processed results. That way there are no\nproblems with coherency, one can can transparently handle parts of\nreading the data that the FPGA can't, etc.\n\n\nBut I admit I'm sceptical even the above model is relevant for\npostgres. The potential market seems likely to stay small, and there's\nso much more performance work that's applicable to everyone using PG,\neven without access to special purpose hardware.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Nov 2019 15:15:53 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FPGA optimization ..."
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 03:15:53PM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2019-11-06 22:54:48 +0100, Tomas Vondra wrote:\n>> If we're only talking about FPGA I/O acceleration, essentially FPGA\n>> between the database and storage, it's likely possible to get that\n>> working without any extensive executor changes. Essentially create an\n>> FPGA-aware variant of SeqScan and you're done. Or an FPGA-aware\n>> tuplesort, or something like that. Neither of this should require\n>> significant planner/executor changes, except for costing.\n>\n>I doubt that that is true. For one, you either need to teach the FPGA\n>to understand at least enough about the intricacies of postgres storage\n>format, to be able to make enough sense of visibility information to\n>know when it safe to look at a tuple (you can't evaluate qual's before\n>visibility information). It also needs to be fed a lot of information\n>about the layout of the table, involved operators etc. And even if you\n>define those away somehow, you still need to make sure that the on-disk\n>state is coherent with the in-memory state - which definitely requires\n>reaching outside of just a replacement seqscan node.\n>\n\nThat's true, of course - the new node would have to know a lot of\ndetails about the on-disk format, meaning of operators, etc. Not\ntrivial, that's for sure. (I think PGStrom does this)\n\nWhat I had in mind were extensive changes to how the executor works in\ngeneral, because the OP mentioned changing the executor from pull to\npush, or abandoning the iterative executor design. And I think that\nwould not be necessary ...\n\n>I've a hard time believing that, even though some storage vendors are\n>pushing this model heavily, the approach of performing qual evaluation\n>on the storage level is actually useful for anything close to a general\n>purpose database, especially a row store.\n>\n\nI agree with this too - it's unlikely to be a huge win for \"regular\"\nworkloads, it's usually aimed at (some) analytical workloads.\n\nAnd yes, row store is not the most efficient format for this type of\naccelerators (I don't have much experience with FPGA, but for GPUs it's\nvery inefficient).\n\n>It's more realistic to have a model where the fpga is fed pre-processed\n>data, and it streams out the processed results. That way there are no\n>problems with coherency, one can can transparently handle parts of\n>reading the data that the FPGA can't, etc.\n>\n\nWell, the whole idea is that the FPGA does a lot of \"simple\" filtering\nbefore the data even get into RAM / CPU, etc. So I don't think this\nmodel would perform well - I assume the \"processing\" necessary could\neasily be more expensive than the gains.\n\n>\n>But I admit I'm sceptical even the above model is relevant for\n>postgres. The potential market seems likely to stay small, and there's\n>so much more performance work that's applicable to everyone using PG,\n>even without access to special purpose hardware.\n>\n\nNot sure. It certainly is irrelevant for everyone who does not have\naccess to systems with FPGAs, and useful only for some workloads. How\nlarge the market is, I don't know.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 01:45:51 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FPGA optimization ..."
}
] |
[
{
"msg_contents": "I have a large table with millions of rows. Each row has an array field\n\"tags\". I also have the proper GIN index on tags.\n\nCounting the rows that have a tag is fast (~7s):\nSELECT COUNT(*) FROM \"subscriptions\" WHERE (tags @> ARRAY['t1']::varchar[]);\n\nHowever counting the rows that don't have a tag is extremely slow (~70s):\nSELECT COUNT(*) FROM \"subscriptions\" WHERE NOT (tags @>\nARRAY['t1']::varchar[]);\n\nI have also tried other variants, but with the same results (~70s):\nSELECT COUNT(*) FROM \"subscriptions\" WHERE NOT ('t1' = ANY (tags));\n\nHow can I make the \"not in array\" operation fast?\n\nAny help would be appreciated, thank you!\nMarco Colli\n\nPostgreSQL 11 on Ubuntu 18LTS\n\nI have a large table with millions of rows. Each row has an array field \"tags\". I also have the proper GIN index on tags.Counting the rows that have a tag is fast (~7s):SELECT COUNT(*) FROM \"subscriptions\" WHERE (tags @> ARRAY['t1']::varchar[]);However counting the rows that don't have a tag is extremely slow (~70s):SELECT COUNT(*) FROM \"subscriptions\" WHERE NOT (tags @> ARRAY['t1']::varchar[]);I have also tried other variants, but with the same results (~70s):SELECT COUNT(*) FROM \"subscriptions\" WHERE NOT ('t1' = ANY (tags));How can I make the \"not in array\" operation fast?Any help would be appreciated, thank you!Marco ColliPostgreSQL 11 on Ubuntu 18LTS",
"msg_date": "Tue, 12 Nov 2019 18:29:31 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow \"not in array\" operation"
},
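For questions like this one, the plans of both the fast and the slow variant are usually needed; a sketch of how they are typically gathered, reusing the query text from the post:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT COUNT(*) FROM subscriptions
    WHERE tags @> ARRAY['t1']::varchar[];

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT COUNT(*) FROM subscriptions
    WHERE NOT (tags @> ARRAY['t1']::varchar[]);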
{
"msg_contents": "What's the plan for the slow one? What's the time to just count all rows?\n\n>\n\nWhat's the plan for the slow one? What's the time to just count all rows?",
"msg_date": "Tue, 12 Nov 2019 11:39:46 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
{
"msg_contents": "To be honest, I have simplified the question above. In order to show you\nthe plan, I must show you the actual query, which is this:\n\n=== QUERY ===\n\nSELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n123 AND \"subscriptions\".\"trashed_at\" IS NULL AND NOT (tags @>\nARRAY['en']::varchar[]);\n\n\n=== QUERY PLAN ===\n\n QUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Finalize Aggregate (cost=2152593.04..2152593.05 rows=1 width=8) (actual\ntime=70555.561..70555.561 rows=1 loops=1)\n\n -> Gather (cost=2152592.31..2152593.02 rows=7 width=8) (actual\ntime=70540.641..70702.365 rows=8 loops=1)\n\n Workers Planned: 7\n\n Workers Launched: 7\n\n -> Partial Aggregate (cost=2151592.31..2151592.32 rows=1\nwidth=8) (actual time=70537.376..70537.377 rows=1 loops=8)\n\n -> Parallel Seq Scan on subscriptions\n(cost=0.00..2149490.49 rows=840731 width=0) (actual time=0.742..70479.359\nrows=611828 loops=8)\n\n Filter: ((trashed_at IS NULL) AND (NOT (tags @>\n'{en}'::character varying[])) AND (project_id = 123))\n\n Rows Removed by Filter: 4572769\n\n Planning Time: 1.304 ms\n\n Execution Time: 70702.463 ms\n\n(10 rows)\n\n\n=== INDEXES ===\n\n\nIndexes:\n\n \"subscriptions_pkey\" PRIMARY KEY, btree (id)\n\n \"index_subscriptions_on_project_id_and_created_at\" btree (project_id,\ncreated_at DESC)\n\n \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags)\nWHERE trashed_at IS NULL\n\n \"index_subscriptions_on_project_id_and_trashed_at\" btree (project_id,\ntrashed_at DESC)\n\n=== NOTES ===\n\nRunning the query without the last filter on tags takes only 500ms.\nUnfortunately I cannot make strict assumptions on data or tags: for example\nI also have to count subscriptions in a project that don't have tag A and\ndon't have tag B, etc. This means that I cannot simply calculate the total\nand then make a subtraction.\n\nOn Tue, Nov 12, 2019 at 7:40 PM Michael Lewis <[email protected]> wrote:\n\n> What's the plan for the slow one? What's the time to just count all rows?\n>\n>>\n\nTo be honest, I have simplified the question above. 
In order to show you the plan, I must show you the actual query, which is this:=== QUERY ===SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" = 123 AND \"subscriptions\".\"trashed_at\" IS NULL AND NOT (tags @> ARRAY['en']::varchar[]);=== QUERY PLAN === QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=2152593.04..2152593.05 rows=1 width=8) (actual time=70555.561..70555.561 rows=1 loops=1)\n -> Gather (cost=2152592.31..2152593.02 rows=7 width=8) (actual time=70540.641..70702.365 rows=8 loops=1)\n Workers Planned: 7\n Workers Launched: 7\n -> Partial Aggregate (cost=2151592.31..2151592.32 rows=1 width=8) (actual time=70537.376..70537.377 rows=1 loops=8)\n -> Parallel Seq Scan on subscriptions (cost=0.00..2149490.49 rows=840731 width=0) (actual time=0.742..70479.359 rows=611828 loops=8)\n Filter: ((trashed_at IS NULL) AND (NOT (tags @> '{en}'::character varying[])) AND (project_id = 123))\n Rows Removed by Filter: 4572769\n Planning Time: 1.304 ms\n Execution Time: 70702.463 ms\n(10 rows)=== INDEXES ===Indexes: \"subscriptions_pkey\" PRIMARY KEY, btree (id) \"index_subscriptions_on_project_id_and_created_at\" btree (project_id, created_at DESC) \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags) WHERE trashed_at IS NULL \"index_subscriptions_on_project_id_and_trashed_at\" btree (project_id, trashed_at DESC)=== NOTES ===Running the query without the last filter on tags takes only 500ms. Unfortunately I cannot make strict assumptions on data or tags: for example I also have to count subscriptions in a project that don't have tag A and don't have tag B, etc. This means that I cannot simply calculate the total and then make a subtraction.On Tue, Nov 12, 2019 at 7:40 PM Michael Lewis <[email protected]> wrote:What's the plan for the slow one? What's the time to just count all rows?",
"msg_date": "Tue, 12 Nov 2019 20:04:15 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"not in array\" operation"
},
{
"msg_contents": "It is very interesting to me that the optimizer chose a parallel sequential\nscan rather than an index scan on either of your indexes that start\nwith project_id that also reference trashed_at.\n\n1) Are you running on SSD type storage? Has random_page_cost been lowered\nto 1-1.5 or so (close to 1 assumes good cache hits)?\n2) It seems you have increased parallel workers. Have you also changed the\nstartup or other cost configs related to how inclined the system is to use\nsequential scans?\n3) If you disable sequential scan, what does the plan look like for this\nquery? (SET ENABLE_SEQSCAN TO OFF;)\n\n>\n\nIt is very interesting to me that the optimizer chose a parallel sequential scan rather than an index scan on either of your indexes that start with project_id that also reference trashed_at.1) Are you running on SSD type storage? Has random_page_cost been lowered to 1-1.5 or so (close to 1 assumes good cache hits)?2) It seems you have increased parallel workers. Have you also changed the startup or other cost configs related to how inclined the system is to use sequential scans?3) If you disable sequential scan, what does the plan look like for this query? (SET ENABLE_SEQSCAN TO OFF;)",
"msg_date": "Tue, 12 Nov 2019 12:20:10 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
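Michael's three checks map onto a handful of session-level statements; a sketch with illustrative values (not tuning recommendations):

    SHOW random_page_cost;
    SET random_page_cost = 1.1;   -- closer to 1 when storage is SSD / mostly cached
    SET enable_seqscan TO off;    -- diagnostic only, never leave this on in production
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT COUNT(*) FROM subscriptions
    WHERE project_id = 123 AND trashed_at IS NULL
      AND NOT (tags @> ARRAY['en']::varchar[]);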
{
"msg_contents": "On Tue, Nov 12, 2019 at 12:20:10PM -0700, Michael Lewis wrote:\n> It is very interesting to me that the optimizer chose a parallel sequential\n> scan rather than an index scan on either of your indexes that start\n> with project_id that also reference trashed_at.\n\nMaybe because of low correlation on any of those columns?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE tablename='subscriptions' AND attname IN ('project_id','tags') ORDER BY 1 DESC; \n\nMaybe clustering the table on project_id (and ANALYZEing) would help, but that\nmight need to be done consistently.\n\nMichael previously suggested partitioning which, if done on project_id,\nwould then no longer need to be specially CLUSTERed.\n\nIs the plan for the fast query the same as in August ?\nhttps://www.postgresql.org/message-id/CAFvCgN4UijKTYiOF61Tyd%2BgHvF_oqnMabatS9%2BDcX%2B_PK2SHRw%40mail.gmail.com\n\nJustin\n\n\n",
"msg_date": "Tue, 12 Nov 2019 13:53:34 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
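A sketch of the clustering idea, using the existing btree index that starts with project_id; note that CLUSTER takes an exclusive lock and is a one-off physical reordering, so it has to be repeated (or replaced by partitioning on project_id) to stay effective:

    CLUSTER subscriptions
        USING index_subscriptions_on_project_id_and_created_at;
    ANALYZE subscriptions;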
{
"msg_contents": "1) It is running on a DigitalOcean CPU-optimized droplet with dedicated\nhyperthreads (16 cores) and SSD.\nSHOW random_page_cost; => 2\n\n2) What config names should I check exactly? I used some suggestions from\nthe online PGTune, when I first configured the db some months ago:\nmax_worker_processes = 16\nmax_parallel_workers_per_gather = 8\nmax_parallel_workers = 16\n\n3) Here's the query plan that I get after disabling the seq scan:\n\n\n QUERY PLAN\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual\ntime=94972.253..94972.254 rows=1 loops=1)\n\n -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual\ntime=94952.895..95132.626 rows=8 loops=1)\n\n Workers Planned: 7\n\n Workers Launched: 7\n\n -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1\nwidth=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n\n -> Parallel Bitmap Heap Scan on subscriptions\n(cost=50294.50..2180801.47 rows=854677 width=0) (actual\ntime=1831.342..94895.208 rows=611828 loops=8)\n\n Recheck Cond: ((project_id = 123) AND (trashed_at IS\nNULL))\n\n Rows Removed by Index Recheck: 2217924\n\n Filter: (NOT (tags @> '{en}'::character varying[]))\n\n Rows Removed by Filter: 288545\n\n Heap Blocks: exact=120301 lossy=134269\n\n -> Bitmap Index Scan on\nindex_subscriptions_on_project_id_and_tags (cost=0.00..48798.81\nrows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n\n Index Cond: (project_id = 123)\n\n Planning Time: 1.273 ms\n\n Execution Time: 95132.766 ms\n\n(15 rows)\n\n\nOn Tue, Nov 12, 2019 at 8:20 PM Michael Lewis <[email protected]> wrote:\n\n> It is very interesting to me that the optimizer chose a parallel\n> sequential scan rather than an index scan on either of your indexes that\n> start with project_id that also reference trashed_at.\n>\n> 1) Are you running on SSD type storage? Has random_page_cost been lowered\n> to 1-1.5 or so (close to 1 assumes good cache hits)?\n> 2) It seems you have increased parallel workers. Have you also changed the\n> startup or other cost configs related to how inclined the system is to use\n> sequential scans?\n> 3) If you disable sequential scan, what does the plan look like for this\n> query? (SET ENABLE_SEQSCAN TO OFF;)\n>\n>>\n\n1) It is running on a DigitalOcean CPU-optimized droplet with dedicated hyperthreads (16 cores) and SSD.SHOW random_page_cost; => 22) What config names should I check exactly? 
I used some suggestions from the online PGTune, when I first configured the db some months ago:max_worker_processes = 16max_parallel_workers_per_gather = 8max_parallel_workers = 163) Here's the query plan that I get after disabling the seq scan: QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual time=94972.253..94972.254 rows=1 loops=1)\n -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual time=94952.895..95132.626 rows=8 loops=1)\n Workers Planned: 7\n Workers Launched: 7\n -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1 width=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n -> Parallel Bitmap Heap Scan on subscriptions (cost=50294.50..2180801.47 rows=854677 width=0) (actual time=1831.342..94895.208 rows=611828 loops=8)\n Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n Rows Removed by Index Recheck: 2217924\n Filter: (NOT (tags @> '{en}'::character varying[]))\n Rows Removed by Filter: 288545\n Heap Blocks: exact=120301 lossy=134269\n -> Bitmap Index Scan on index_subscriptions_on_project_id_and_tags (cost=0.00..48798.81 rows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n Index Cond: (project_id = 123)\n Planning Time: 1.273 ms\n Execution Time: 95132.766 ms\n(15 rows)On Tue, Nov 12, 2019 at 8:20 PM Michael Lewis <[email protected]> wrote:It is very interesting to me that the optimizer chose a parallel sequential scan rather than an index scan on either of your indexes that start with project_id that also reference trashed_at.1) Are you running on SSD type storage? Has random_page_cost been lowered to 1-1.5 or so (close to 1 assumes good cache hits)?2) It seems you have increased parallel workers. Have you also changed the startup or other cost configs related to how inclined the system is to use sequential scans?3) If you disable sequential scan, what does the plan look like for this query? (SET ENABLE_SEQSCAN TO OFF;)",
"msg_date": "Tue, 12 Nov 2019 21:06:34 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"not in array\" operation"
},
{
"msg_contents": "Odd index choice by the optimizer given what is available. The bitmap being\nlossy means more work_mem is needed if I remember properly.\n\nIt is interesting that skipping the where condition on the array is only\nhalf a second. Is the array being toasted or is it small and being stored\nin the same file as primary table?\n\nWhat is the result for this count query? Is it roughly 4 million?\n\n\nOn Tue, Nov 12, 2019, 1:06 PM Marco Colli <[email protected]> wrote:\n\n> 1) It is running on a DigitalOcean CPU-optimized droplet with dedicated\n> hyperthreads (16 cores) and SSD.\n> SHOW random_page_cost; => 2\n>\n> 2) What config names should I check exactly? I used some suggestions from\n> the online PGTune, when I first configured the db some months ago:\n> max_worker_processes = 16\n> max_parallel_workers_per_gather = 8\n> max_parallel_workers = 16\n>\n> 3) Here's the query plan that I get after disabling the seq scan:\n>\n>\n> QUERY PLAN\n>\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual\n> time=94972.253..94972.254 rows=1 loops=1)\n>\n> -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual\n> time=94952.895..95132.626 rows=8 loops=1)\n>\n> Workers Planned: 7\n>\n> Workers Launched: 7\n>\n> -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1\n> width=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n>\n> -> Parallel Bitmap Heap Scan on subscriptions\n> (cost=50294.50..2180801.47 rows=854677 width=0) (actual\n> time=1831.342..94895.208 rows=611828 loops=8)\n>\n> Recheck Cond: ((project_id = 123) AND (trashed_at IS\n> NULL))\n>\n> Rows Removed by Index Recheck: 2217924\n>\n> Filter: (NOT (tags @> '{en}'::character varying[]))\n>\n> Rows Removed by Filter: 288545\n>\n> Heap Blocks: exact=120301 lossy=134269\n>\n> -> Bitmap Index Scan on\n> index_subscriptions_on_project_id_and_tags (cost=0.00..48798.81\n> rows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n>\n> Index Cond: (project_id = 123)\n>\n> Planning Time: 1.273 ms\n>\n> Execution Time: 95132.766 ms\n>\n> (15 rows)\n>\n>\n> On Tue, Nov 12, 2019 at 8:20 PM Michael Lewis <[email protected]> wrote:\n>\n>> It is very interesting to me that the optimizer chose a parallel\n>> sequential scan rather than an index scan on either of your indexes that\n>> start with project_id that also reference trashed_at.\n>>\n>> 1) Are you running on SSD type storage? Has random_page_cost been lowered\n>> to 1-1.5 or so (close to 1 assumes good cache hits)?\n>> 2) It seems you have increased parallel workers. Have you also changed\n>> the startup or other cost configs related to how inclined the system is to\n>> use sequential scans?\n>> 3) If you disable sequential scan, what does the plan look like for this\n>> query? (SET ENABLE_SEQSCAN TO OFF;)\n>>\n>>>\n\nOdd index choice by the optimizer given what is available. The bitmap being lossy means more work_mem is needed if I remember properly.It is interesting that skipping the where condition on the array is only half a second. Is the array being toasted or is it small and being stored in the same file as primary table?What is the result for this count query? 
Is it roughly 4 million?On Tue, Nov 12, 2019, 1:06 PM Marco Colli <[email protected]> wrote:1) It is running on a DigitalOcean CPU-optimized droplet with dedicated hyperthreads (16 cores) and SSD.SHOW random_page_cost; => 22) What config names should I check exactly? I used some suggestions from the online PGTune, when I first configured the db some months ago:max_worker_processes = 16max_parallel_workers_per_gather = 8max_parallel_workers = 163) Here's the query plan that I get after disabling the seq scan: QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual time=94972.253..94972.254 rows=1 loops=1)\n -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual time=94952.895..95132.626 rows=8 loops=1)\n Workers Planned: 7\n Workers Launched: 7\n -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1 width=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n -> Parallel Bitmap Heap Scan on subscriptions (cost=50294.50..2180801.47 rows=854677 width=0) (actual time=1831.342..94895.208 rows=611828 loops=8)\n Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n Rows Removed by Index Recheck: 2217924\n Filter: (NOT (tags @> '{en}'::character varying[]))\n Rows Removed by Filter: 288545\n Heap Blocks: exact=120301 lossy=134269\n -> Bitmap Index Scan on index_subscriptions_on_project_id_and_tags (cost=0.00..48798.81 rows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n Index Cond: (project_id = 123)\n Planning Time: 1.273 ms\n Execution Time: 95132.766 ms\n(15 rows)On Tue, Nov 12, 2019 at 8:20 PM Michael Lewis <[email protected]> wrote:It is very interesting to me that the optimizer chose a parallel sequential scan rather than an index scan on either of your indexes that start with project_id that also reference trashed_at.1) Are you running on SSD type storage? Has random_page_cost been lowered to 1-1.5 or so (close to 1 assumes good cache hits)?2) It seems you have increased parallel workers. Have you also changed the startup or other cost configs related to how inclined the system is to use sequential scans?3) If you disable sequential scan, what does the plan look like for this query? (SET ENABLE_SEQSCAN TO OFF;)",
"msg_date": "Tue, 12 Nov 2019 13:31:06 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
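The lossy heap blocks in the plan above mean the bitmap no longer fit in work_mem and degraded to per-page granularity, which forces the recheck work. A hedged sketch of retesting with a larger session-level setting (the value is arbitrary):

    SET work_mem = '256MB';   -- session only; pick a value the host can afford
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT COUNT(*) FROM subscriptions
    WHERE project_id = 123 AND trashed_at IS NULL
      AND NOT (tags @> ARRAY['en']::varchar[]);

If the plan then reports only "Heap Blocks: exact=...", the bitmap stayed lossless.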
{
"msg_contents": "Marco Colli <[email protected]> writes:\n> 3) Here's the query plan that I get after disabling the seq scan:\n\n> Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual\n> time=94972.253..94972.254 rows=1 loops=1)\n\nSo, this is slower than the seqscan, which means the planner made the\nright choice.\n\nYou seem to be imagining that there's some way the index could be used\nwith the NOT clause, but there isn't. Indexable clauses are of the form\n\tindexed_column indexable_operator constant\nand there's no provision for a NOT in that. If we had a \"not contained\nin\" array operator, the NOT could be folded to be of this syntactic form,\nbut it's highly unlikely that any index operator class would consider such\nan operator to be a supported indexable operator. It doesn't lend itself\nto searching an index.\n\nSo the planner is doing the best it can, which in this case is a\nfull-table scan.\n\nA conceivable solution, if the tags array is a lot smaller than\nthe table as a whole and the table is fairly static, is that you could\nmake a btree index on the tags array and let the planner fall back\nto an index-only scan that is just using the index as a cheaper\nsource of the array data. (This doesn't work for your existing GIST\nindex because GIST can't reconstruct the original arrays on-demand.)\nI suspect though that this wouldn't win much, even if you disregard\nthe maintenance costs for the extra index. The really fundamental\nproblem here is that a large fraction of the table satisfies the\nNOT-in condition, and no index is going to beat a seqscan by all that\nmuch when that's true. Indexes are good at retrieving small portions\nof tables.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Nov 2019 15:50:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
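A sketch of the fallback Tom outlines, adapted to this schema (the column list and partial predicate are assumptions, and it only helps if the tags arrays stay well under btree's index-row size limit):

    CREATE INDEX CONCURRENTLY index_subscriptions_project_tags_btree
        ON subscriptions (project_id, tags)
        WHERE trashed_at IS NULL;

    -- keep the visibility map current so an index-only scan can skip the heap
    VACUUM subscriptions;

Even then, as Tom notes, a predicate matching a large fraction of the rows is unlikely to beat the sequential scan by much.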
{
"msg_contents": "I am not a PostgreSQL expert, however I think that the following\nalgorithm should be possible and fast:\n\n1. find the bitmap of all subscriptions in a project that are not trashed\n(it can use the index and takes only ~500ms)\n2. find the bitmap of all subscriptions that match the above condition and\nHAVE the tag (~7s)\n3. calculate [bitmap 1] - [bitmap 2] to find the subscriptions of the\nproject that DON'T HAVE the tag\n\n\n\nOn Tue, Nov 12, 2019 at 9:50 PM Tom Lane <[email protected]> wrote:\n\n> Marco Colli <[email protected]> writes:\n> > 3) Here's the query plan that I get after disabling the seq scan:\n>\n> > Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual\n> > time=94972.253..94972.254 rows=1 loops=1)\n>\n> So, this is slower than the seqscan, which means the planner made the\n> right choice.\n>\n> You seem to be imagining that there's some way the index could be used\n> with the NOT clause, but there isn't. Indexable clauses are of the form\n> indexed_column indexable_operator constant\n> and there's no provision for a NOT in that. If we had a \"not contained\n> in\" array operator, the NOT could be folded to be of this syntactic form,\n> but it's highly unlikely that any index operator class would consider such\n> an operator to be a supported indexable operator. It doesn't lend itself\n> to searching an index.\n>\n> So the planner is doing the best it can, which in this case is a\n> full-table scan.\n>\n> A conceivable solution, if the tags array is a lot smaller than\n> the table as a whole and the table is fairly static, is that you could\n> make a btree index on the tags array and let the planner fall back\n> to an index-only scan that is just using the index as a cheaper\n> source of the array data. (This doesn't work for your existing GIST\n> index because GIST can't reconstruct the original arrays on-demand.)\n> I suspect though that this wouldn't win much, even if you disregard\n> the maintenance costs for the extra index. The really fundamental\n> problem here is that a large fraction of the table satisfies the\n> NOT-in condition, and no index is going to beat a seqscan by all that\n> much when that's true. Indexes are good at retrieving small portions\n> of tables.\n>\n> regards, tom lane\n>\n\nI am not a PostgreSQL expert, however I think that the following algorithm should be possible and fast:1. find the bitmap of all subscriptions in a project that are not trashed (it can use the index and takes only ~500ms)2. find the bitmap of all subscriptions that match the above condition and HAVE the tag (~7s)3. calculate [bitmap 1] - [bitmap 2] to find the subscriptions of the project that DON'T HAVE the tagOn Tue, Nov 12, 2019 at 9:50 PM Tom Lane <[email protected]> wrote:Marco Colli <[email protected]> writes:\n> 3) Here's the query plan that I get after disabling the seq scan:\n\n> Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual\n> time=94972.253..94972.254 rows=1 loops=1)\n\nSo, this is slower than the seqscan, which means the planner made the\nright choice.\n\nYou seem to be imagining that there's some way the index could be used\nwith the NOT clause, but there isn't. Indexable clauses are of the form\n indexed_column indexable_operator constant\nand there's no provision for a NOT in that. 
If we had a \"not contained\nin\" array operator, the NOT could be folded to be of this syntactic form,\nbut it's highly unlikely that any index operator class would consider such\nan operator to be a supported indexable operator. It doesn't lend itself\nto searching an index.\n\nSo the planner is doing the best it can, which in this case is a\nfull-table scan.\n\nA conceivable solution, if the tags array is a lot smaller than\nthe table as a whole and the table is fairly static, is that you could\nmake a btree index on the tags array and let the planner fall back\nto an index-only scan that is just using the index as a cheaper\nsource of the array data. (This doesn't work for your existing GIST\nindex because GIST can't reconstruct the original arrays on-demand.)\nI suspect though that this wouldn't win much, even if you disregard\nthe maintenance costs for the extra index. The really fundamental\nproblem here is that a large fraction of the table satisfies the\nNOT-in condition, and no index is going to beat a seqscan by all that\nmuch when that's true. Indexes are good at retrieving small portions\nof tables.\n\n regards, tom lane",
"msg_date": "Tue, 12 Nov 2019 22:40:07 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"not in array\" operation"
},
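Marco's three-step idea can be written by hand as a set difference. This is only a sketch: it assumes a primary-key column (here called id), reuses the other names from the thread, and it still has to enumerate every non-matching row, which is exactly the cost Tom describes.

```sql
-- "bitmap 1" minus "bitmap 2", expressed as a set difference
SELECT count(*) FROM (
    SELECT id FROM subscriptions
    WHERE project_id = 123 AND trashed_at IS NULL             -- bitmap 1
    EXCEPT ALL
    SELECT id FROM subscriptions
    WHERE project_id = 123 AND trashed_at IS NULL
      AND tags @> ARRAY['en']::varchar[]                       -- bitmap 2
) AS not_tagged;
```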
{
"msg_contents": ">\n>\n> 3) Here's the query plan that I get after disabling the seq scan:\n>\n>\n> QUERY PLAN\n>\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual\n> time=94972.253..94972.254 rows=1 loops=1)\n>\n> -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual\n> time=94952.895..95132.626 rows=8 loops=1)\n>\n> Workers Planned: 7\n>\n> Workers Launched: 7\n>\n> -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1\n> width=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n>\n> -> Parallel Bitmap Heap Scan on subscriptions\n> (cost=50294.50..2180801.47 rows=854677 width=0) (actual\n> time=1831.342..94895.208 rows=611828 loops=8)\n>\n> Recheck Cond: ((project_id = 123) AND (trashed_at IS\n> NULL))\n>\n> Rows Removed by Index Recheck: 2217924\n>\n> Filter: (NOT (tags @> '{en}'::character varying[]))\n>\n> Rows Removed by Filter: 288545\n>\n> Heap Blocks: exact=120301 lossy=134269\n>\n> -> Bitmap Index Scan on\n> index_subscriptions_on_project_id_and_tags (cost=0.00..48798.81\n> rows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n>\n> Index Cond: (project_id = 123)\n>\n> Planning Time: 1.273 ms\n>\n> Execution Time: 95132.766 ms\n>\n> (15 rows)\n>\n\nWhat was the plan for the one that took 500ms? I don't see how it is\npossible that this one is 180 times slower than that one. Maybe a hot\ncache versus cold cache? Also, it seems weird to me that \"trashed_at IS\nNULL\" shows up in the recheck but not in the original Index Cond.\nIncreasing work_mem can also help, but since the Bitmap Index Scan itself\ntook half the time there is only so much it can do.\n\nCheers,\n\nJeff\n\n3) Here's the query plan that I get after disabling the seq scan: QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual time=94972.253..94972.254 rows=1 loops=1)\n -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual time=94952.895..95132.626 rows=8 loops=1)\n Workers Planned: 7\n Workers Launched: 7\n -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1 width=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n -> Parallel Bitmap Heap Scan on subscriptions (cost=50294.50..2180801.47 rows=854677 width=0) (actual time=1831.342..94895.208 rows=611828 loops=8)\n Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n Rows Removed by Index Recheck: 2217924\n Filter: (NOT (tags @> '{en}'::character varying[]))\n Rows Removed by Filter: 288545\n Heap Blocks: exact=120301 lossy=134269\n -> Bitmap Index Scan on index_subscriptions_on_project_id_and_tags (cost=0.00..48798.81 rows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n Index Cond: (project_id = 123)\n Planning Time: 1.273 ms\n Execution Time: 95132.766 ms\n(15 rows)What was the plan for the one that took 500ms? I don't see how it is possible that this one is 180 times slower than that one. Maybe a hot cache versus cold cache? Also, it seems weird to me that \"trashed_at IS NULL\" shows up in the recheck but not in the original Index Cond. 
Increasing work_mem can also help, but since the \n\nBitmap Index Scan itself took half the time there is only so much it can do.Cheers,Jeff",
"msg_date": "Tue, 12 Nov 2019 18:33:28 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
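The "Heap Blocks: exact=120301 lossy=134269" line is what Jeff's work_mem remark refers to: the bitmap overflowed work_mem and fell back to lossy, per-page entries, which is what forces the 2.2M-row recheck. A session-level experiment could look like the following; the 256MB value is only an example, not a recommendation:

```sql
SET work_mem = '256MB';   -- example value; the thread's setting is 64MB
EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(*) FROM subscriptions
WHERE project_id = 123
  AND trashed_at IS NULL
  AND NOT (tags @> ARRAY['en']::varchar[]);
RESET work_mem;
```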
{
"msg_contents": "Replying to the previous questions:\n- work_mem = 64MB (there are hundreds of connections)\n- the project 123 has more than 7M records, and those that don't have the\ntag 'en' are 4.8M\n\n\n> What was the plan for the one that took 500ms?\n\n\nThis is the query / plan without the filter on tags:\n\nSELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n123 AND \"subscriptions\".\"trashed_at\" IS NULL;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=291342.67..291342.68 rows=1 width=8) (actual\ntime=354.556..354.556 rows=1 loops=1)\n -> Gather (cost=291342.05..291342.66 rows=6 width=8) (actual\ntime=354.495..374.305 rows=7 loops=1)\n Workers Planned: 6\n Workers Launched: 6\n -> Partial Aggregate (cost=290342.05..290342.06 rows=1 width=8)\n(actual time=349.799..349.799 rows=1 loops=7)\n -> Parallel Index Only Scan using\nindex_subscriptions_on_project_id_and_uid on subscriptions\n (cost=0.56..287610.27 rows=1092713 width=0) (actual time=0.083..273.018\nrows=1030593 loops=7)\n Index Cond: (project_id = 123)\n Heap Fetches: 280849\n Planning Time: 0.753 ms\n Execution Time: 374.483 ms\n(10 rows)\n\nThen if I simply add the exclusion of a single tag, it goes from a few\nmilliseconds to 70s...\n\n\n\nOn Wed, Nov 13, 2019 at 12:33 AM Jeff Janes <[email protected]> wrote:\n\n>\n>> 3) Here's the query plan that I get after disabling the seq scan:\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>>\n>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual\n>> time=94972.253..94972.254 rows=1 loops=1)\n>>\n>> -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual\n>> time=94952.895..95132.626 rows=8 loops=1)\n>>\n>> Workers Planned: 7\n>>\n>> Workers Launched: 7\n>>\n>> -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1\n>> width=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n>>\n>> -> Parallel Bitmap Heap Scan on subscriptions\n>> (cost=50294.50..2180801.47 rows=854677 width=0) (actual\n>> time=1831.342..94895.208 rows=611828 loops=8)\n>>\n>> Recheck Cond: ((project_id = 123) AND (trashed_at IS\n>> NULL))\n>>\n>> Rows Removed by Index Recheck: 2217924\n>>\n>> Filter: (NOT (tags @> '{en}'::character varying[]))\n>>\n>> Rows Removed by Filter: 288545\n>>\n>> Heap Blocks: exact=120301 lossy=134269\n>>\n>> -> Bitmap Index Scan on\n>> index_subscriptions_on_project_id_and_tags (cost=0.00..48798.81\n>> rows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n>>\n>> Index Cond: (project_id = 123)\n>>\n>> Planning Time: 1.273 ms\n>>\n>> Execution Time: 95132.766 ms\n>>\n>> (15 rows)\n>>\n>\n> What was the plan for the one that took 500ms? I don't see how it is\n> possible that this one is 180 times slower than that one. Maybe a hot\n> cache versus cold cache? 
Also, it seems weird to me that \"trashed_at IS\n> NULL\" shows up in the recheck but not in the original Index Cond.\n> Increasing work_mem can also help, but since the Bitmap Index Scan itself\n> took half the time there is only so much it can do.\n>\n> Cheers,\n>\n> Jeff\n>\n\nReplying to the previous questions:- work_mem = 64MB (there are hundreds of connections)- the project 123 has more than 7M records, and those that don't have the tag 'en' are 4.8M What was the plan for the one that took 500ms?This is the query / plan without the filter on tags:SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" = 123 AND \"subscriptions\".\"trashed_at\" IS NULL; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Finalize Aggregate (cost=291342.67..291342.68 rows=1 width=8) (actual time=354.556..354.556 rows=1 loops=1) -> Gather (cost=291342.05..291342.66 rows=6 width=8) (actual time=354.495..374.305 rows=7 loops=1) Workers Planned: 6 Workers Launched: 6 -> Partial Aggregate (cost=290342.05..290342.06 rows=1 width=8) (actual time=349.799..349.799 rows=1 loops=7) -> Parallel Index Only Scan using index_subscriptions_on_project_id_and_uid on subscriptions (cost=0.56..287610.27 rows=1092713 width=0) (actual time=0.083..273.018 rows=1030593 loops=7) Index Cond: (project_id = 123) Heap Fetches: 280849 Planning Time: 0.753 ms Execution Time: 374.483 ms(10 rows) Then if I simply add the exclusion of a single tag, it goes from a few milliseconds to 70s...On Wed, Nov 13, 2019 at 12:33 AM Jeff Janes <[email protected]> wrote:3) Here's the query plan that I get after disabling the seq scan: QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=2183938.89..2183938.90 rows=1 width=8) (actual time=94972.253..94972.254 rows=1 loops=1)\n -> Gather (cost=2183938.16..2183938.87 rows=7 width=8) (actual time=94952.895..95132.626 rows=8 loops=1)\n Workers Planned: 7\n Workers Launched: 7\n -> Partial Aggregate (cost=2182938.16..2182938.17 rows=1 width=8) (actual time=94950.958..94950.958 rows=1 loops=8)\n -> Parallel Bitmap Heap Scan on subscriptions (cost=50294.50..2180801.47 rows=854677 width=0) (actual time=1831.342..94895.208 rows=611828 loops=8)\n Recheck Cond: ((project_id = 123) AND (trashed_at IS NULL))\n Rows Removed by Index Recheck: 2217924\n Filter: (NOT (tags @> '{en}'::character varying[]))\n Rows Removed by Filter: 288545\n Heap Blocks: exact=120301 lossy=134269\n -> Bitmap Index Scan on index_subscriptions_on_project_id_and_tags (cost=0.00..48798.81 rows=6518094 width=0) (actual time=1493.823..1493.823 rows=7203173 loops=1)\n Index Cond: (project_id = 123)\n Planning Time: 1.273 ms\n Execution Time: 95132.766 ms\n(15 rows)What was the plan for the one that took 500ms? I don't see how it is possible that this one is 180 times slower than that one. Maybe a hot cache versus cold cache? Also, it seems weird to me that \"trashed_at IS NULL\" shows up in the recheck but not in the original Index Cond. Increasing work_mem can also help, but since the \n\nBitmap Index Scan itself took half the time there is only so much it can do.Cheers,Jeff",
"msg_date": "Wed, 13 Nov 2019 10:20:02 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"not in array\" operation"
},
{
"msg_contents": "Disclaimer: Out over my skis again.\n\n From what you say here, and over on SO, it sounds like you've got two\nproblems:\n\n* Matching on *huge *numbers of records because of common tags.\n\n* A dynamic collection of tags as they're customer driven/configured.\n\nAn \"ideal\" solution might look like a bit-index for each tag+tuple, but\nPostgres does not have such a structure. The closest I've seen are Bloom\nfilter based indexes. That's likely not going to work here as you don't\nknow the collection of tags at any one time. If, however, you create your\nown frequency count estimates for tags, you may well find that there are a\nsmall number of common tags, and a large number of rare tags. That would be\ngood to find out. If you do have some super common (non selective) tags,\nthen perhaps a Bloom index based on that collection could be effective. Or\nexpression indexes on the very common tags. In your SaaS setup, you might\nneed counts/indexes tied to some kind of customer/tenancy distinction ID,\nunderstood. But, for simplicity, I'm just saying a single set of frequency\ncounts, etc.\n\nHere's a recent article on Bloom filter based indexes in Postgres that\nlooks decent:\nhttps://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/\n\n>\n\nDisclaimer: Out over my skis again.From what you say here, and over on SO, it sounds like you've got two problems:* Matching on huge numbers of records because of common tags.* A dynamic collection of tags as they're customer driven/configured.An \"ideal\" solution might look like a bit-index for each tag+tuple, but Postgres does not have such a structure. The closest I've seen are Bloom filter based indexes. That's likely not going to work here as you don't know the collection of tags at any one time. If, however, you create your own frequency count estimates for tags, you may well find that there are a small number of common tags, and a large number of rare tags. That would be good to find out. If you do have some super common (non selective) tags, then perhaps a Bloom index based on that collection could be effective. Or expression indexes on the very common tags. In your SaaS setup, you might need counts/indexes tied to some kind of customer/tenancy distinction ID, understood. But, for simplicity, I'm just saying a single set of frequency counts, etc.Here's a recent article on Bloom filter based indexes in Postgres that looks decent:https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/",
"msg_date": "Wed, 13 Nov 2019 21:46:10 +1100",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
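A quick way to build the per-tag frequency counts Morris suggests, assuming the subscriptions/tags schema used elsewhere in the thread:

```sql
-- Tag distribution within one project: a few very common tags plus a long
-- tail would support the "treat the common tags specially" idea.
SELECT tag, count(*) AS n
FROM subscriptions
CROSS JOIN LATERAL unnest(tags) AS tag
WHERE project_id = 123
GROUP BY tag
ORDER BY n DESC
LIMIT 20;
```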
{
"msg_contents": "On Wed, Nov 13, 2019 at 5:47 AM Morris de Oryx <[email protected]>\nwrote:\n\n> Disclaimer: Out over my skis again.\n>\n> From what you say here, and over on SO, it sounds like you've got two\n> problems:\n>\n> * Matching on *huge *numbers of records because of common tags.\n>\n> * A dynamic collection of tags as they're customer driven/configured.\n>\n> An \"ideal\" solution might look like a bit-index for each tag+tuple, but\n> Postgres does not have such a structure. The closest I've seen are Bloom\n> filter based indexes. That's likely not going to work here as you don't\n> know the collection of tags at any one time. If, however, you create your\n> own frequency count estimates for tags, you may well find that there are a\n> small number of common tags, and a large number of rare tags. That would be\n> good to find out. If you do have some super common (non selective) tags,\n> then perhaps a Bloom index based on that collection could be effective. Or\n> expression indexes on the very common tags. In your SaaS setup, you might\n> need counts/indexes tied to some kind of customer/tenancy distinction ID,\n> understood. But, for simplicity, I'm just saying a single set of frequency\n> counts, etc.\n>\n> Here's a recent article on Bloom filter based indexes in Postgres that\n> looks decent:\n> https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/\n>\n\nOne other question might be whether you are always querying for a specific\ntag or small set of tags, or if your queries are for relatively random\ntags. ie, if you are always looking for the same 2 or 3 tags, then maybe\nyou could use a functional index or trigger-populate a new column on\ninsert/update that indicates whether those tags are present.\n\nIt is possible that you want a Graph model for this data instead of a\nRelational model. ie, if you are finding a bunch of users with common\nfeatures, you may find traversing a graph (such as Neo4j - or if you _have_\nto stay with a PG backend, something like Cayley.io) to be much more\nefficient and flexible.\n\nOn Wed, Nov 13, 2019 at 5:47 AM Morris de Oryx <[email protected]> wrote:Disclaimer: Out over my skis again.From what you say here, and over on SO, it sounds like you've got two problems:* Matching on huge numbers of records because of common tags.* A dynamic collection of tags as they're customer driven/configured.An \"ideal\" solution might look like a bit-index for each tag+tuple, but Postgres does not have such a structure. The closest I've seen are Bloom filter based indexes. That's likely not going to work here as you don't know the collection of tags at any one time. If, however, you create your own frequency count estimates for tags, you may well find that there are a small number of common tags, and a large number of rare tags. That would be good to find out. If you do have some super common (non selective) tags, then perhaps a Bloom index based on that collection could be effective. Or expression indexes on the very common tags. In your SaaS setup, you might need counts/indexes tied to some kind of customer/tenancy distinction ID, understood. But, for simplicity, I'm just saying a single set of frequency counts, etc.Here's a recent article on Bloom filter based indexes in Postgres that looks decent:https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/One other question might be whether you are always querying for a specific tag or small set of tags, or if your queries are for relatively random tags. 
ie, if you are always looking for the same 2 or 3 tags, then maybe you could use a functional index or trigger-populate a new column on insert/update that indicates whether those tags are present.It is possible that you want a Graph model for this data instead of a Relational model. ie, if you are finding a bunch of users with common features, you may find traversing a graph (such as Neo4j - or if you _have_ to stay with a PG backend, something like Cayley.io) to be much more efficient and flexible.",
"msg_date": "Wed, 13 Nov 2019 06:18:16 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
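One way to realize Rick's "precompute whether the hot tags are present" idea, sketched with the names used in the thread; whether a partial index like this or a trigger-maintained boolean column fits better depends on how static the list of hot tags is:

```sql
-- Partial index covering exactly the rows that do NOT carry the hot tag;
-- a query repeating the same predicate can then be answered from it.
CREATE INDEX index_subscriptions_without_en
    ON subscriptions (project_id)
    WHERE trashed_at IS NULL
      AND NOT (tags @> ARRAY['en']::varchar[]);
```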
{
"msg_contents": "On Wed, Nov 13, 2019 at 4:20 AM Marco Colli <[email protected]> wrote:\n\n> Replying to the previous questions:\n> - work_mem = 64MB (there are hundreds of connections)\n> - the project 123 has more than 7M records, and those that don't have the\n> tag 'en' are 4.8M\n>\n>\n>> What was the plan for the one that took 500ms?\n>\n>\n> This is the query / plan without the filter on tags:\n>\n> SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n> 123 AND \"subscriptions\".\"trashed_at\" IS NULL;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Finalize Aggregate (cost=291342.67..291342.68 rows=1 width=8) (actual\n> time=354.556..354.556 rows=1 loops=1)\n> -> Gather (cost=291342.05..291342.66 rows=6 width=8) (actual\n> time=354.495..374.305 rows=7 loops=1)\n> Workers Planned: 6\n> Workers Launched: 6\n> -> Partial Aggregate (cost=290342.05..290342.06 rows=1 width=8)\n> (actual time=349.799..349.799 rows=1 loops=7)\n> -> Parallel Index Only Scan using\n> index_subscriptions_on_project_id_and_uid on subscriptions\n> (cost=0.56..287610.27 rows=1092713 width=0) (actual time=0.083..273.018\n> rows=1030593 loops=7)\n> Index Cond: (project_id = 123)\n> Heap Fetches: 280849\n> Planning Time: 0.753 ms\n> Execution Time: 374.483 ms\n> (10 rows)\n>\n\nMy previous comment about the bitmap index scan taking half the time was a\nslip of the eye, I was comparing *cost* of the bitmap index scan to the\n*time* of the overall plan. But then the question is, why isn't it doing\nan index-only scan on \"index_subscriptions_on_project_id_and_tags\"? And\nthe answer is that is because it is a GIN index. Make the same index only\nas btree, and you should get good performance as it can filter the tags\nwithin a given project without visiting the table.\n\nCheers,\n\nJeff\n\n>\n\nOn Wed, Nov 13, 2019 at 4:20 AM Marco Colli <[email protected]> wrote:Replying to the previous questions:- work_mem = 64MB (there are hundreds of connections)- the project 123 has more than 7M records, and those that don't have the tag 'en' are 4.8M What was the plan for the one that took 500ms?This is the query / plan without the filter on tags:SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" = 123 AND \"subscriptions\".\"trashed_at\" IS NULL; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Finalize Aggregate (cost=291342.67..291342.68 rows=1 width=8) (actual time=354.556..354.556 rows=1 loops=1) -> Gather (cost=291342.05..291342.66 rows=6 width=8) (actual time=354.495..374.305 rows=7 loops=1) Workers Planned: 6 Workers Launched: 6 -> Partial Aggregate (cost=290342.05..290342.06 rows=1 width=8) (actual time=349.799..349.799 rows=1 loops=7) -> Parallel Index Only Scan using index_subscriptions_on_project_id_and_uid on subscriptions (cost=0.56..287610.27 rows=1092713 width=0) (actual time=0.083..273.018 rows=1030593 loops=7) Index Cond: (project_id = 123) Heap Fetches: 280849 Planning Time: 0.753 ms Execution Time: 374.483 ms(10 rows)My previous comment about the bitmap index scan taking half the time was a slip of the eye, I was comparing *cost* of the bitmap index scan to the *time* of the overall plan. 
But then the question is, why isn't it doing an index-only scan on \"index_subscriptions_on_project_id_and_tags\"? And the answer is that is because it is a GIN index. Make the same index only as btree, and you should get good performance as it can filter the tags within a given project without visiting the table.Cheers,Jeff",
"msg_date": "Wed, 13 Nov 2019 06:30:10 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
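Jeff's suggestion, sketched against the index definition quoted later in the thread: PostgreSQL can btree-index an array column (default array_ops), and with an index-only scan the tags filter is evaluated inside the index without touching the heap. The index name is made up for illustration.

```sql
CREATE INDEX index_subscriptions_on_project_id_and_tags_btree
    ON subscriptions (project_id, tags)
    WHERE trashed_at IS NULL;

-- Keep the visibility map current so the scan can stay index-only:
VACUUM (ANALYZE) subscriptions;
```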
{
"msg_contents": "> the answer is that is because it is a GIN index. Make the same index only\nas btree, and you should get good performance as it can filter the tags\nwithin a given project without visiting the table.\n\nCurrently I have this GIN index:\n \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags)\nWHERE trashed_at IS NULL\n\nIt uses the btree_gin extension and works perfectly for tag search, except\nfor the \"NOT\" operator. I don't understand why it doesn't use the GIN index\nalso for the \"NOT\" operator.\nThe problem is that I cannot create the same index with BTree, because PG\ndoesn't support BTree on array :(\n\nOn Wed, Nov 13, 2019 at 12:30 PM Jeff Janes <[email protected]> wrote:\n\n> On Wed, Nov 13, 2019 at 4:20 AM Marco Colli <[email protected]>\n> wrote:\n>\n>> Replying to the previous questions:\n>> - work_mem = 64MB (there are hundreds of connections)\n>> - the project 123 has more than 7M records, and those that don't have the\n>> tag 'en' are 4.8M\n>>\n>>\n>>> What was the plan for the one that took 500ms?\n>>\n>>\n>> This is the query / plan without the filter on tags:\n>>\n>> SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" =\n>> 123 AND \"subscriptions\".\"trashed_at\" IS NULL;\n>>\n>> QUERY PLAN\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Finalize Aggregate (cost=291342.67..291342.68 rows=1 width=8) (actual\n>> time=354.556..354.556 rows=1 loops=1)\n>> -> Gather (cost=291342.05..291342.66 rows=6 width=8) (actual\n>> time=354.495..374.305 rows=7 loops=1)\n>> Workers Planned: 6\n>> Workers Launched: 6\n>> -> Partial Aggregate (cost=290342.05..290342.06 rows=1\n>> width=8) (actual time=349.799..349.799 rows=1 loops=7)\n>> -> Parallel Index Only Scan using\n>> index_subscriptions_on_project_id_and_uid on subscriptions\n>> (cost=0.56..287610.27 rows=1092713 width=0) (actual time=0.083..273.018\n>> rows=1030593 loops=7)\n>> Index Cond: (project_id = 123)\n>> Heap Fetches: 280849\n>> Planning Time: 0.753 ms\n>> Execution Time: 374.483 ms\n>> (10 rows)\n>>\n>\n> My previous comment about the bitmap index scan taking half the time was a\n> slip of the eye, I was comparing *cost* of the bitmap index scan to the\n> *time* of the overall plan. But then the question is, why isn't it doing\n> an index-only scan on \"index_subscriptions_on_project_id_and_tags\"? And\n> the answer is that is because it is a GIN index. Make the same index only\n> as btree, and you should get good performance as it can filter the tags\n> within a given project without visiting the table.\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\n> the answer is that is because it is a GIN index. Make the same index only as btree, and you should get good performance as it can filter the tags within a given project without visiting the table.Currently I have this GIN index: \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags) WHERE trashed_at IS NULLIt uses the btree_gin extension and works perfectly for tag search, except for the \"NOT\" operator. 
I don't understand why it doesn't use the GIN index also for the \"NOT\" operator.The problem is that I cannot create the same index with BTree, because PG doesn't support BTree on array :( On Wed, Nov 13, 2019 at 12:30 PM Jeff Janes <[email protected]> wrote:On Wed, Nov 13, 2019 at 4:20 AM Marco Colli <[email protected]> wrote:Replying to the previous questions:- work_mem = 64MB (there are hundreds of connections)- the project 123 has more than 7M records, and those that don't have the tag 'en' are 4.8M What was the plan for the one that took 500ms?This is the query / plan without the filter on tags:SELECT COUNT(*) FROM \"subscriptions\" WHERE \"subscriptions\".\"project_id\" = 123 AND \"subscriptions\".\"trashed_at\" IS NULL; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Finalize Aggregate (cost=291342.67..291342.68 rows=1 width=8) (actual time=354.556..354.556 rows=1 loops=1) -> Gather (cost=291342.05..291342.66 rows=6 width=8) (actual time=354.495..374.305 rows=7 loops=1) Workers Planned: 6 Workers Launched: 6 -> Partial Aggregate (cost=290342.05..290342.06 rows=1 width=8) (actual time=349.799..349.799 rows=1 loops=7) -> Parallel Index Only Scan using index_subscriptions_on_project_id_and_uid on subscriptions (cost=0.56..287610.27 rows=1092713 width=0) (actual time=0.083..273.018 rows=1030593 loops=7) Index Cond: (project_id = 123) Heap Fetches: 280849 Planning Time: 0.753 ms Execution Time: 374.483 ms(10 rows)My previous comment about the bitmap index scan taking half the time was a slip of the eye, I was comparing *cost* of the bitmap index scan to the *time* of the overall plan. But then the question is, why isn't it doing an index-only scan on \"index_subscriptions_on_project_id_and_tags\"? And the answer is that is because it is a GIN index. Make the same index only as btree, and you should get good performance as it can filter the tags within a given project without visiting the table.Cheers,Jeff",
"msg_date": "Wed, 13 Nov 2019 12:56:22 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"not in array\" operation"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 6:56 AM Marco Colli <[email protected]> wrote:\n\n> > the answer is that is because it is a GIN index. Make the same index\n> only as btree, and you should get good performance as it can filter the\n> tags within a given project without visiting the table.\n>\n> Currently I have this GIN index:\n> \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags)\n> WHERE trashed_at IS NULL\n>\n>\nMulticolumn GIN indexes are nearly worthless IMO when one column is a\nscalar. You can use this index, but it won't be better than one just on\n\"GIN (tags) trashed_at IS NULL\". An N-column GIN index is mostly the same\nthing as N single column GIN indexes.\n\n\n> It uses the btree_gin extension and works perfectly for tag search, except\n> for the \"NOT\" operator. I don't understand why it doesn't use the GIN index\n> also for the \"NOT\" operator.\n>\n\nBecause it can't. Tom already did a good job of describing that. Can you\ndescribe what steps you think an index should take to jump to the specific\nrows which fail to exist in an inverted index?\n\n\nThe problem is that I cannot create the same index with BTree, because PG\n> doesn't support BTree on array :(\n>\n\nSure it does. It can't jump to specific parts of the index based on the\narray containment operators, but it can use them for in-index filtering\n(but only if you can do an index-only scan). And really, that is probably\nall you need to get > 100x improvement.\n\nAre you getting an error when you try to build it? If so, what is the\nerror?\n\nCheers,\n\nJeff\n\n>\n\nOn Wed, Nov 13, 2019 at 6:56 AM Marco Colli <[email protected]> wrote:> the answer is that is because it is a GIN index. Make the same index only as btree, and you should get good performance as it can filter the tags within a given project without visiting the table.Currently I have this GIN index: \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags) WHERE trashed_at IS NULLMulticolumn GIN indexes are nearly worthless IMO when one column is a scalar. You can use this index, but it won't be better than one just on \"GIN (tags) \n\ntrashed_at IS NULL\". An N-column GIN index is mostly the same thing as N single column GIN indexes. It uses the btree_gin extension and works perfectly for tag search, except for the \"NOT\" operator. I don't understand why it doesn't use the GIN index also for the \"NOT\" operator.Because it can't. Tom already did a good job of describing that. Can you describe what steps you think an index should take to jump to the specific rows which fail to exist in an inverted index?The problem is that I cannot create the same index with BTree, because PG doesn't support BTree on array :( Sure it does. It can't jump to specific parts of the index based on the array containment operators, but it can use them for in-index filtering (but only if you can do an index-only scan). And really, that is probably all you need to get > 100x improvement.Are you getting an error when you try to build it? If so, what is the error?Cheers,Jeff",
"msg_date": "Wed, 13 Nov 2019 07:18:21 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"not in array\" operation"
},
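Following Jeff's point that a multicolumn GIN behaves much like separate single-column GINs, the existing btree_gin index could be slimmed down to tags alone for the positive @> searches, keeping the new btree for the NOT case. This is a sketch, not a drop-in migration plan:

```sql
-- Equivalent coverage for tag-containment searches, without btree_gin:
CREATE INDEX index_subscriptions_on_tags
    ON subscriptions USING gin (tags)
    WHERE trashed_at IS NULL;
```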
{
"msg_contents": "Wow! Thank you very much Jeff!! I am really grateful.\n\nThanks to the btree (instead of gin) the query now takes about 500ms\ninstead of 70s.\n\nIl Mer 13 Nov 2019, 13:18 Jeff Janes <[email protected]> ha scritto:\n\n> On Wed, Nov 13, 2019 at 6:56 AM Marco Colli <[email protected]>\n> wrote:\n>\n>> > the answer is that is because it is a GIN index. Make the same index\n>> only as btree, and you should get good performance as it can filter the\n>> tags within a given project without visiting the table.\n>>\n>> Currently I have this GIN index:\n>> \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags)\n>> WHERE trashed_at IS NULL\n>>\n>>\n> Multicolumn GIN indexes are nearly worthless IMO when one column is a\n> scalar. You can use this index, but it won't be better than one just on\n> \"GIN (tags) trashed_at IS NULL\". An N-column GIN index is mostly the same\n> thing as N single column GIN indexes.\n>\n>\n>> It uses the btree_gin extension and works perfectly for tag search,\n>> except for the \"NOT\" operator. I don't understand why it doesn't use the\n>> GIN index also for the \"NOT\" operator.\n>>\n>\n> Because it can't. Tom already did a good job of describing that. Can you\n> describe what steps you think an index should take to jump to the specific\n> rows which fail to exist in an inverted index?\n>\n>\n> The problem is that I cannot create the same index with BTree, because PG\n>> doesn't support BTree on array :(\n>>\n>\n> Sure it does. It can't jump to specific parts of the index based on the\n> array containment operators, but it can use them for in-index filtering\n> (but only if you can do an index-only scan). And really, that is probably\n> all you need to get > 100x improvement.\n>\n> Are you getting an error when you try to build it? If so, what is the\n> error?\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\nWow! Thank you very much Jeff!! I am really grateful.Thanks to the btree (instead of gin) the query now takes about 500ms instead of 70s.Il Mer 13 Nov 2019, 13:18 Jeff Janes <[email protected]> ha scritto:On Wed, Nov 13, 2019 at 6:56 AM Marco Colli <[email protected]> wrote:> the answer is that is because it is a GIN index. Make the same index only as btree, and you should get good performance as it can filter the tags within a given project without visiting the table.Currently I have this GIN index: \"index_subscriptions_on_project_id_and_tags\" gin (project_id, tags) WHERE trashed_at IS NULLMulticolumn GIN indexes are nearly worthless IMO when one column is a scalar. You can use this index, but it won't be better than one just on \"GIN (tags) \n\ntrashed_at IS NULL\". An N-column GIN index is mostly the same thing as N single column GIN indexes. It uses the btree_gin extension and works perfectly for tag search, except for the \"NOT\" operator. I don't understand why it doesn't use the GIN index also for the \"NOT\" operator.Because it can't. Tom already did a good job of describing that. Can you describe what steps you think an index should take to jump to the specific rows which fail to exist in an inverted index?The problem is that I cannot create the same index with BTree, because PG doesn't support BTree on array :( Sure it does. It can't jump to specific parts of the index based on the array containment operators, but it can use them for in-index filtering (but only if you can do an index-only scan). And really, that is probably all you need to get > 100x improvement.Are you getting an error when you try to build it? 
If so, what is the error?Cheers,Jeff",
"msg_date": "Wed, 13 Nov 2019 17:15:43 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"not in array\" operation"
}
] |
[
{
"msg_contents": "Hi!\n\nIs there a reason query 3 can't use parallel workers? Using q1 and q2 \nthey seem very similar but can use up to 4 workers to run faster:\n\nq1: https://pastebin.com/ufkbSmfB\nq2: https://pastebin.com/Yt32zRNX\nq3: https://pastebin.com/dqh7yKPb\n\nThe sort node on q3 takes almost 12 seconds, making the query run on 68 \nif I had set enough work_mem to make it all in memory.\n\nRunning version 10.10.\n\n\n",
"msg_date": "Wed, 13 Nov 2019 17:16:44 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel Query"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 3:11 PM Luís Roberto Weck <\[email protected]> wrote:\n\n> Hi!\n>\n> Is there a reason query 3 can't use parallel workers? Using q1 and q2\n> they seem very similar but can use up to 4 workers to run faster:\n>\n> q1: https://pastebin.com/ufkbSmfB\n> q2: https://pastebin.com/Yt32zRNX\n> q3: https://pastebin.com/dqh7yKPb\n>\n> The sort node on q3 takes almost 12 seconds, making the query run on 68\n> if I had set enough work_mem to make it all in memory.\n>\n\nThe third one thinks it is going find 3454539 output rows. If it run in\nparallel, it thinks it will be passing lots of rows up from the parallel\nworkers, and charges a high price (parallel_tuple_cost = 0.1) for doing\nso. So you can try lowering parallel_tuple_cost, or figuring out why the\nestimate is so bad.\n\nCheers,\n\nJeff\n\nOn Wed, Nov 13, 2019 at 3:11 PM Luís Roberto Weck <[email protected]> wrote:Hi!\n\nIs there a reason query 3 can't use parallel workers? Using q1 and q2 \nthey seem very similar but can use up to 4 workers to run faster:\n\nq1: https://pastebin.com/ufkbSmfB\nq2: https://pastebin.com/Yt32zRNX\nq3: https://pastebin.com/dqh7yKPb\n\nThe sort node on q3 takes almost 12 seconds, making the query run on 68 \nif I had set enough work_mem to make it all in memory.The third one thinks it is going find 3454539 output rows. If it run in parallel, it thinks it will be passing lots of rows up from the parallel workers, and charges a high price (parallel_tuple_cost = 0.1) for doing so. So you can try lowering \n\nparallel_tuple_cost, or figuring out why the estimate is so bad.Cheers,Jeff",
"msg_date": "Wed, 13 Nov 2019 15:40:17 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Query"
},
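Jeff's experiment written out as session-local settings (defaults noted in the comments). This only changes the cost model's bookkeeping; it does not make transferring millions of rows from workers any cheaper:

```sql
SET parallel_tuple_cost = 0;   -- default 0.1: the per-row charge for
                               -- passing tuples up from parallel workers
-- re-run EXPLAIN (ANALYZE) on q3 here to see whether a parallel plan wins
RESET parallel_tuple_cost;
```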
{
"msg_contents": "On Wed, Nov 13, 2019 at 05:16:44PM -0300, Lu�s Roberto Weck wrote:\n>Hi!\n>\n>Is there a reason query 3 can't use parallel workers? Using q1 and q2 \n>they seem very similar but can use up to 4 workers to run faster:\n>\n>q1: https://pastebin.com/ufkbSmfB\n>q2: https://pastebin.com/Yt32zRNX\n>q3: https://pastebin.com/dqh7yKPb\n>\n>The sort node on q3 takes almost 12 seconds, making the query run on \n>68� if I had set enough work_mem to make it all in memory.\n>\n\nMost likely because it'd be actually slower. The trouble is the\naggregation does not actually reduce the cardinality, or at least the\nplanner does not expect that - the Sort and GroupAggregate are expected\nto produce 3454539 rows. The last step of the aggregation has to receive\nand merge data from all workers, which is not exactly free, and if there\nis no reduction of cardinality it's likely cheaper to just do everything\nin a single process serially.\n\nHow does the explain analyze output look like without the HAVING clause?\n\nTry setting parallel_setup_cost and parallel_tuple_cost to 0. That might\ntrigger parallel query.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 13 Nov 2019 21:47:37 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Query"
},
{
"msg_contents": "****\nEm 13/11/2019 17:47, Tomas Vondra escreveu:\n> On Wed, Nov 13, 2019 at 05:16:44PM -0300, Luís Roberto Weck wrote:\n>> Hi!\n>>\n>> Is there a reason query 3 can't use parallel workers? Using q1 and q2 \n>> they seem very similar but can use up to 4 workers to run faster:\n>>\n>> q1: https://pastebin.com/ufkbSmfB\n>> q2: https://pastebin.com/Yt32zRNX\n>> q3: https://pastebin.com/dqh7yKPb\n>>\n>> The sort node on q3 takes almost 12 seconds, making the query run on \n>> 68 if I had set enough work_mem to make it all in memory.\n>>\n>\n> Most likely because it'd be actually slower. The trouble is the\n> aggregation does not actually reduce the cardinality, or at least the\n> planner does not expect that - the Sort and GroupAggregate are expected\n> to produce 3454539 rows. The last step of the aggregation has to receive\n> and merge data from all workers, which is not exactly free, and if there\n> is no reduction of cardinality it's likely cheaper to just do everything\n> in a single process serially.\n>\n> How does the explain analyze output look like without the HAVING clause?\n>\n> Try setting parallel_setup_cost and parallel_tuple_cost to 0. That might\n> trigger parallel query.\n>\n> regards\n>\nTomas,\n\nEXPLAIN:\nGroup (cost=1245130.37..1279676.46 rows=3454609 width=14)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n -> Sort (cost=1245130.37..1253766.89 rows=3454609 width=14)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n -> Hash Join (cost=34366.64..869958.26 rows=3454609 width=14)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp (cost=0.00..804245.73 \nrows=11941273 width=14)\n -> Hash (cost=23436.55..23436.55 rows=874407 width=8)\n -> Index Only Scan using contrato_iu0004 on \ncontrato c (cost=0.43..23436.55 rows=874407 width=8)\n Index Cond: (carcod = 100)\n\nEXPLAIN ANALYZE:\n\nGroup (cost=1245132.29..1279678.44 rows=3454615 width=14) (actual \ntime=61860.985..64852.579 rows=6787445 loops=1)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n -> Sort (cost=1245132.29..1253768.83 rows=3454615 width=14) (actual \ntime=61860.980..63128.557 rows=6787531 loops=1)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n Sort Method: external merge Disk: 172688kB\n -> Hash Join (cost=34366.64..869959.48 rows=3454615 width=14) \n(actual time=876.428..52675.140 rows=6787531 loops=1)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp (cost=0.00..804246.91 \nrows=11941291 width=14) (actual time=0.010..44860.242 rows=11962505 loops=1)\n -> Hash (cost=23436.55..23436.55 rows=874407 width=8) \n(actual time=874.791..874.791 rows=879841 loops=1)\n Buckets: 1048576 Batches: 1 Memory Usage: 42561kB\n -> Index Only Scan using contrato_iu0004 on \ncontrato c (cost=0.43..23436.55 rows=874407 width=8) (actual \ntime=0.036..535.897 rows=879841 loops=1)\n Index Cond: (carcod = 100)\n Heap Fetches: 144438\nPlanning time: 1.252 ms\nExecution time: 65214.007 ms\n\n\nIndeed, reducing the costs made the query run in parallel, but the \nimprovement in speed was not worth the cost (CPU).\n\n\n\n\n\n\n\n\nEm 13/11/2019 17:47, Tomas Vondra\n escreveu:\n\nOn Wed, Nov\n 13, 2019 at 05:16:44PM -0300, Luís Roberto Weck wrote:\n \nHi!\n \n\n Is there a reason query 3 can't use parallel workers? 
Using q1\n and q2 they seem very similar but can use up to 4 workers to run\n faster:\n \n\n q1: https://pastebin.com/ufkbSmfB\n\n q2: https://pastebin.com/Yt32zRNX\n\n q3: https://pastebin.com/dqh7yKPb\n\n\n The sort node on q3 takes almost 12 seconds, making the query\n run on 68 if I had set enough work_mem to make it all in\n memory.\n \n\n\n\n Most likely because it'd be actually slower. The trouble is the\n \n aggregation does not actually reduce the cardinality, or at least\n the\n \n planner does not expect that - the Sort and GroupAggregate are\n expected\n \n to produce 3454539 rows. The last step of the aggregation has to\n receive\n \n and merge data from all workers, which is not exactly free, and if\n there\n \n is no reduction of cardinality it's likely cheaper to just do\n everything\n \n in a single process serially.\n \n\n How does the explain analyze output look like without the HAVING\n clause?\n \n\n Try setting parallel_setup_cost and parallel_tuple_cost to 0. That\n might\n \n trigger parallel query.\n \n\n regards\n \n\n\n Tomas, \n\n EXPLAIN:\nGroup (cost=1245130.37..1279676.46 rows=3454609 width=14)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n -> Sort (cost=1245130.37..1253766.89 rows=3454609\n width=14)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n -> Hash Join (cost=34366.64..869958.26\n rows=3454609 width=14)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp \n (cost=0.00..804245.73 rows=11941273 width=14)\n -> Hash (cost=23436.55..23436.55\n rows=874407 width=8)\n -> Index Only Scan using\n contrato_iu0004 on contrato c (cost=0.43..23436.55 rows=874407\n width=8)\n Index Cond: (carcod = 100)\n\nEXPLAIN ANALYZE:\n\n Group (cost=1245132.29..1279678.44 rows=3454615 width=14) (actual\n time=61860.985..64852.579 rows=6787445 loops=1)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n -> Sort (cost=1245132.29..1253768.83 rows=3454615 width=14)\n (actual time=61860.980..63128.557 rows=6787531 loops=1)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n Sort Method: external merge Disk: 172688kB\n -> Hash Join (cost=34366.64..869959.48 rows=3454615\n width=14) (actual time=876.428..52675.140 rows=6787531 loops=1)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp \n (cost=0.00..804246.91 rows=11941291 width=14) (actual\n time=0.010..44860.242 rows=11962505 loops=1)\n -> Hash (cost=23436.55..23436.55 rows=874407\n width=8) (actual time=874.791..874.791 rows=879841 loops=1)\n Buckets: 1048576 Batches: 1 Memory Usage:\n 42561kB\n -> Index Only Scan using contrato_iu0004\n on contrato c (cost=0.43..23436.55 rows=874407 width=8) (actual\n time=0.036..535.897 rows=879841 loops=1)\n Index Cond: (carcod = 100)\n Heap Fetches: 144438\n Planning time: 1.252 ms\n Execution time: 65214.007 ms\n\n\nIndeed, reducing the costs made the query run in parallel, but\n the improvement in speed was not worth the cost (CPU).",
"msg_date": "Wed, 13 Nov 2019 18:04:42 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Query"
},
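Independent of parallelism, the serial plan above loses a large chunk of its 65 s in "Sort Method: external merge  Disk: 172688kB". A session-level experiment to keep the ~6.8M-row sort in memory might look like this; the value is only an example sized against the reported spill:

```sql
SET work_mem = '256MB';   -- example value, chosen to comfortably cover the
                          -- ~170MB that currently spills to disk
-- re-run the EXPLAIN (ANALYZE) of q3 here
RESET work_mem;
```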
{
"msg_contents": "****\nEm 13/11/2019 17:40, Jeff Janes escreveu:\n> On Wed, Nov 13, 2019 at 3:11 PM Luís Roberto Weck \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi!\n>\n> Is there a reason query 3 can't use parallel workers? Using q1 and q2\n> they seem very similar but can use up to 4 workers to run faster:\n>\n> q1: https://pastebin.com/ufkbSmfB\n> q2: https://pastebin.com/Yt32zRNX\n> q3: https://pastebin.com/dqh7yKPb\n>\n> The sort node on q3 takes almost 12 seconds, making the query run\n> on 68\n> if I had set enough work_mem to make it all in memory.\n>\n>\n> The third one thinks it is going find 3454539 output rows. If it run \n> in parallel, it thinks it will be passing lots of rows up from the \n> parallel workers, and charges a high price (parallel_tuple_cost = 0.1) \n> for doing so. So you can try lowering parallel_tuple_cost, or \n> figuring out why the estimate is so bad.\n>\n> Cheers,\n>\n> Jeff\n Hi Jeff,\n\nI don't think the \"HAVING\" clause is havin any effect on the estimates:\n\nWITHOUT \"HAVING\":\nGroup (cost=1245134.08..1279680.28 rows=3454620 width=14)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n -> Sort (cost=1245134.08..1253770.63 rows=3454620 width=14)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n -> Hash Join (cost=34366.64..869960.70 rows=3454620 width=14)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp (cost=0.00..804248.08 \nrows=11941308 width=14)\n -> Hash (cost=23436.55..23436.55 rows=874407 width=8)\n -> Index Only Scan using contrato_iu0004 on \ncontrato c (cost=0.43..23436.55 rows=874407 width=8)\n Index Cond: (carcod = 100)\n\nWITH \"HAVING\":\nGroupAggregate (cost=1245144.88..1322874.51 rows=3454650 width=14)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n Filter: (count(*) > 1)\n -> Sort (cost=1245144.88..1253781.51 rows=3454650 width=14)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n -> Hash Join (cost=34366.64..869968.02 rows=3454650 width=14)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp (cost=0.00..804255.13 \nrows=11941413 width=14)\n -> Hash (cost=23436.55..23436.55 rows=874407 width=8)\n -> Index Only Scan using contrato_iu0004 on \ncontrato c (cost=0.43..23436.55 rows=874407 width=8)\n Index Cond: (carcod = 100)\n\nMaybe PostgreSQL can't find a way to calculate having estimates?\n\n\n\n\n\n\n\n\nEm 13/11/2019 17:40, Jeff Janes\n escreveu:\n\n\n\n\nOn Wed, Nov 13, 2019 at 3:11 PM Luís Roberto Weck\n <[email protected]>\n wrote:\n\n\nHi!\n\n Is there a reason query 3 can't use parallel workers? Using\n q1 and q2 \n they seem very similar but can use up to 4 workers to run\n faster:\n\n q1: https://pastebin.com/ufkbSmfB\n q2: https://pastebin.com/Yt32zRNX\n q3: https://pastebin.com/dqh7yKPb\n\n The sort node on q3 takes almost 12 seconds, making the\n query run on 68 \n if I had set enough work_mem to make it all in memory.\n\n\n\nThe third one thinks it is going find 3454539 output\n rows. If it run in parallel, it thinks it will be passing\n lots of rows up from the parallel workers, and charges a\n high price (parallel_tuple_cost = 0.1) for doing so. 
So you\n can try lowering \n parallel_tuple_cost, or figuring out why the estimate is so\n bad.\n\n\nCheers,\n\n\nJeff\n\n\n\n Hi Jeff,\n\n I don't think the \"HAVING\" clause is havin any effect on the\n estimates:\n\n WITHOUT \"HAVING\":\nGroup (cost=1245134.08..1279680.28 rows=3454620 width=14)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n -> Sort (cost=1245134.08..1253770.63 rows=3454620\n width=14)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n -> Hash Join (cost=34366.64..869960.70\n rows=3454620 width=14)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp \n (cost=0.00..804248.08 rows=11941308 width=14)\n -> Hash (cost=23436.55..23436.55\n rows=874407 width=8)\n -> Index Only Scan using\n contrato_iu0004 on contrato c (cost=0.43..23436.55 rows=874407\n width=8)\n Index Cond: (carcod = 100)\n\n WITH \"HAVING\":\nGroupAggregate (cost=1245144.88..1322874.51 rows=3454650\n width=14)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n Filter: (count(*) > 1)\n -> Sort (cost=1245144.88..1253781.51 rows=3454650\n width=14)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n -> Hash Join (cost=34366.64..869968.02\n rows=3454650 width=14)\n Hash Cond: (cp.concod = c.concod)\n -> Seq Scan on contrato_parcela cp \n (cost=0.00..804255.13 rows=11941413 width=14)\n -> Hash (cost=23436.55..23436.55\n rows=874407 width=8)\n -> Index Only Scan using\n contrato_iu0004 on contrato c (cost=0.43..23436.55 rows=874407\n width=8)\n Index Cond: (carcod = 100)\n\nMaybe PostgreSQL can't find a way to calculate having\n estimates?",
"msg_date": "Wed, 13 Nov 2019 18:07:16 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Query"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 3:59 PM Luís Roberto Weck <\[email protected]> wrote:\n\n>\n>\n> Indeed, reducing the costs made the query run in parallel, but the\n> improvement in speed was not worth the cost (CPU).\n>\n\nCould you show the plan for that?\n\nOn Wed, Nov 13, 2019 at 3:59 PM Luís Roberto Weck <[email protected]> wrote:\n\n\nIndeed, reducing the costs made the query run in parallel, but\n the improvement in speed was not worth the cost (CPU).Could you show the plan for that?",
"msg_date": "Wed, 13 Nov 2019 17:08:15 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Query"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 4:01 PM Luís Roberto Weck <\[email protected]> wrote:\n\n>\n> Maybe PostgreSQL can't find a way to calculate having estimates?\n>\n\nI wasn't even thinking of the HAVING estimates I was thinking of just the\nraw aggregates. It can't implement the HAVING until has the raw aggregate\nin hand. But, what is the actual row count without the HAVING? Well, I\nnotice now this line:\n\nRows Removed by Filter: 6787359\n\nSo the row count of rows=86 is mostly due to the HAVING, not due to the raw\naggregation, a point I overlooked initially. So the planner is not\nmistaken in thinking that a huge number of rows need to be passed up--it is\ncorrect in thinking that.\n\nCheers,\n\nJeff\n\nOn Wed, Nov 13, 2019 at 4:01 PM Luís Roberto Weck <[email protected]> wrote:\n\nMaybe PostgreSQL can't find a way to calculate having\n estimates?I wasn't even thinking of the HAVING estimates I was thinking of just the raw aggregates. It can't implement the HAVING until has the raw aggregate in hand. But, what is the actual row count without the HAVING? Well, I notice now this line:Rows Removed by Filter: 6787359So the row count of rows=86 is mostly due to the HAVING, not due to the raw aggregation, a point I overlooked initially. So the planner is not mistaken in thinking that a huge number of rows need to be passed up--it is correct in thinking that. Cheers,Jeff",
"msg_date": "Wed, 13 Nov 2019 17:21:26 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Query"
},
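For reference, the shape of q3 can be reconstructed approximately from the plan nodes; the real query is only in the pastebin, so the column list and spelling below are assumptions. It is a duplicate-finding aggregate, which is why HAVING count(*) > 1 discards almost all of the 6.8M groups after they have been computed:

```sql
SELECT c.concod, cp.conparnum, cp.conpardatven
FROM contrato c
JOIN contrato_parcela cp ON cp.concod = c.concod
WHERE c.carcod = 100
GROUP BY c.concod, cp.conparnum, cp.conpardatven
HAVING count(*) > 1;
```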
{
"msg_contents": "Em 13/11/2019 19:08, Jeff Janes escreveu:\n> On Wed, Nov 13, 2019 at 3:59 PM Luís Roberto Weck \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n>\n>\n> Indeed, reducing the costs made the query run in parallel, but the\n> improvement in speed was not worth the cost (CPU).\n>\n>\n> Could you show the plan for that?\nSure:\n\nFinalize GroupAggregate (cost=842675.56..1017018.29 rows=3470567 \nwidth=14) (actual time=61419.510..65635.188 rows=86 loops=1)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n Filter: (count(*) > 1)\n Rows Removed by Filter: 6787359\n -> Gather Merge (cost=842675.56..947606.94 rows=3470568 width=22) \n(actual time=51620.609..60648.085 rows=6787506 loops=1)\n Workers Planned: 4\n Workers Launched: 4\n -> Partial GroupAggregate (cost=842575.50..862097.45 \nrows=867642 width=22) (actual time=51585.526..53477.065 rows=1357501 \nloops=5)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n -> Sort (cost=842575.50..844744.61 rows=867642 \nwidth=14) (actual time=51585.514..51951.984 rows=1357506 loops=5)\n Sort Key: c.concod, cp.conparnum, cp.conpardatven\n Sort Method: quicksort Memory: 112999kB\n -> Hash Join (cost=34390.13..756996.76 rows=867642 \nwidth=14) (actual time=1087.591..49744.673 rows=1357506 loops=5)\n Hash Cond: (cp.concod = c.concod)\n -> Parallel Seq Scan on contrato_parcela cp \n(cost=0.00..714762.89 rows=2988089 width=14) (actual \ntime=0.077..46674.986 rows=2392501 loops=5)\n -> Hash (cost=23462.75..23462.75 rows=874190 \nwidth=8) (actual time=1080.189..1080.189 rows=879841 loops=5)\n Buckets: 1048576 Batches: 1 Memory \nUsage: 42561kB\n -> Index Only Scan using \ncontrato_iu0004 on contrato c (cost=0.43..23462.75 rows=874190 width=8) \n(actual time=0.141..663.108 rows=879841 loops=5)\n Index Cond: (carcod = 100)\n Heap Fetches: 35197\nPlanning time: 1.045 ms\nExecution time: 65734.134 ms\n\n\n\n\n\n\n Em 13/11/2019 19:08, Jeff Janes escreveu:\n\n\n\nOn Wed, Nov 13, 2019 at 3:59 PM Luís Roberto Weck\n <[email protected]>\n wrote:\n\n\n\n\n\nIndeed, reducing the costs made the query run in\n parallel, but the improvement in speed was not worth the\n cost (CPU).\n\n\n\n\nCould you show the plan for that? 
\n\n\n\n Sure:\n\nFinalize GroupAggregate (cost=842675.56..1017018.29\n rows=3470567 width=14) (actual time=61419.510..65635.188 rows=86\n loops=1)\n Group Key: c.concod, cp.conparnum, cp.conpardatven\n Filter: (count(*) > 1)\n Rows Removed by Filter: 6787359\n -> Gather Merge (cost=842675.56..947606.94\n rows=3470568 width=22) (actual time=51620.609..60648.085\n rows=6787506 loops=1)\n Workers Planned: 4\n Workers Launched: 4\n -> Partial GroupAggregate \n (cost=842575.50..862097.45 rows=867642 width=22) (actual\n time=51585.526..53477.065 rows=1357501 loops=5)\n Group Key: c.concod, cp.conparnum,\n cp.conpardatven\n -> Sort (cost=842575.50..844744.61\n rows=867642 width=14) (actual time=51585.514..51951.984\n rows=1357506 loops=5)\n Sort Key: c.concod, cp.conparnum,\n cp.conpardatven\n Sort Method: quicksort Memory:\n 112999kB\n -> Hash Join \n (cost=34390.13..756996.76 rows=867642 width=14) (actual\n time=1087.591..49744.673 rows=1357506 loops=5)\n Hash Cond: (cp.concod = c.concod)\n -> Parallel Seq Scan on\n contrato_parcela cp (cost=0.00..714762.89 rows=2988089 width=14)\n (actual time=0.077..46674.986 rows=2392501 loops=5)\n -> Hash \n (cost=23462.75..23462.75 rows=874190 width=8) (actual\n time=1080.189..1080.189 rows=879841 loops=5)\n Buckets: 1048576 Batches:\n 1 Memory Usage: 42561kB\n -> Index Only Scan\n using contrato_iu0004 on contrato c (cost=0.43..23462.75\n rows=874190 width=8) (actual time=0.141..663.108 rows=879841\n loops=5)\n Index Cond: (carcod =\n 100)\n Heap Fetches: 35197\nPlanning time: 1.045 ms\nExecution time: 65734.134 ms",
"msg_date": "Thu, 14 Nov 2019 08:14:25 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Query"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have a table which has a jsonb column in it. Each row contains a lot \nof data in that column, so TOASTed.\n\nWe have to extract data from that column at different levels, so an \nexample query could look like\n\nselect\n col1,\n col2, \njsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5.val1.\"text()\"') \nas val1,\njsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5.val2.\"text()\"') \nas val2,\njsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5.val3.\"text()\"') \nas val3\nfrom tbl\nwhere\n id = 1;\n\nI tried to rewrite it to\n\nWITH foo AS (select\n id,\n col1,\n col2,\njsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5') as jsondata,\n from tbl )\nselect\n col1,\n col2,\n jsondata->val1->'text()' as val1,\n jsondata->val2->'text()' as val2,\n jsondata->val3->'text()' as val3\nfrom foo\nwhere\n id = 1;\n\nHowever, WITH has the same run-time profile - most of the time is spent \nin pglz_decompress. Using the -> notation has the same profile.\n\nThe more data I extract from the JSON object the slower the query gets.\n\nOf course, if I change the column to EXTERNAL we see a ~3.5 x speedup in \nthe queries but disk space requirements goes up by too much.\n\n(We need to use a jsonb column as the data is unstructured, and may \ndiffer in structure between rows. Yes, yes, I know...)\n\nPostgreSQL 12.x on RHEL.\n\nIf anybody has some good ideas it would be appreciated.\n\nThanks in advance !\n\nBest regards,\n Jesper\n\n\n\n",
"msg_date": "Thu, 14 Nov 2019 08:55:28 -0500",
"msg_from": "Jesper Pedersen <[email protected]>",
"msg_from_op": true,
"msg_subject": "JSON path"
},
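For reference, a sketch of the storage change mentioned above ("change the column to EXTERNAL"), using the table and column names from the example query; adjust to the real schema. EXTERNAL keeps the jsonb out of line but uncompressed, trading disk for cheaper access, and it only affects newly written rows unless the table is rewritten.

ALTER TABLE tbl ALTER COLUMN data SET STORAGE EXTERNAL;
-- pg_column_size() reports the stored (possibly compressed) size per row,
-- which helps estimate how much extra disk the change would cost:
SELECT id, pg_column_size(data) AS stored_bytes FROM tbl WHERE id = 1;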
{
"msg_contents": "Jesper Pedersen <[email protected]> writes:\n> We have a table which has a jsonb column in it. Each row contains a lot \n> of data in that column, so TOASTed.\n> We have to extract data from that column at different levels, so an \n> example query could look like\n\n> select\n> col1,\n> col2, \n> jsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5.val1.\"text()\"') \n> as val1,\n> jsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5.val2.\"text()\"') \n> as val2,\n> jsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5.val3.\"text()\"') \n> as val3\n> from tbl\n> where\n> id = 1;\n\nRight ...\n\n> I tried to rewrite it to\n\n> WITH foo AS (select\n> id,\n> col1,\n> col2,\n> jsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5') as jsondata,\n> from tbl )\n> select\n> col1,\n> col2,\n> jsondata->val1->'text()' as val1,\n> jsondata->val2->'text()' as val2,\n> jsondata->val3->'text()' as val3\n> from foo\n> where\n> id = 1;\n\nThis has got syntax errors, but I get the point.\n\n> However, WITH has the same run-time profile - most of the time is spent \n> in pglz_decompress. Using the -> notation has the same profile.\n\nAs of v12, that WITH will get flattened, so that you still end up\nwith three invocations of jsonb_path_query_first, as EXPLAIN VERBOSE\nwill show you. You could write \"WITH foo AS MATERIALIZED ...\" to\nprevent that, but then you'll need to stick the WHERE clause inside\nthe WITH or you'll end up running jsonb_path_query_first for every\nrow of tbl.\n\nWith\n\nexplain verbose WITH foo AS materialized (select\n id,\n col1,\n col2,\njsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5') as jsondata\n from tbl where id = 1 )\nselect\n col1,\n col2,\n jsondata->'val1'->'text()' as val1,\n jsondata->'val2'->'text()' as val2,\n jsondata->'val3'->'text()' as val3\nfrom foo;\n\nI get a plan that does what you're looking for:\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on foo (cost=24.14..24.35 rows=6 width=104)\n Output: foo.col1, foo.col2, ((foo.jsondata -> 'val1'::text) -> 'text()'::text), ((foo.jsondata -> 'val2'::text) -> 'text()'::text), ((foo.jsondata -> 'val3'::text) -> 'text()'::text)\n CTE foo\n -> Seq Scan on public.tbl (cost=0.00..24.14 rows=6 width=44)\n Output: tbl.id, tbl.col1, tbl.col2, jsonb_path_query_first(tbl.data, '$.\"lvl1\".\"lvl2\".\"lvl3\".\"lvl4\".\"lvl5\"'::jsonpath, '{}'::jsonb, false)\n Filter: (tbl.id = 1)\n(6 rows)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 13:04:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON path"
},
{
"msg_contents": "Hi,\n\nOn 11/14/19 1:04 PM, Tom Lane wrote:\n> As of v12, that WITH will get flattened, so that you still end up\n> with three invocations of jsonb_path_query_first, as EXPLAIN VERBOSE\n> will show you. You could write \"WITH foo AS MATERIALIZED ...\" to\n> prevent that, but then you'll need to stick the WHERE clause inside\n> the WITH or you'll end up running jsonb_path_query_first for every\n> row of tbl.\n> \n> With\n> \n> explain verbose WITH foo AS materialized (select\n> id,\n> col1,\n> col2,\n> jsonb_path_query_first(data,'$.lvl1.lvl2.lvl3.lvl4.lvl5') as jsondata\n> from tbl where id = 1 )\n> select\n> col1,\n> col2,\n> jsondata->'val1'->'text()' as val1,\n> jsondata->'val2'->'text()' as val2,\n> jsondata->'val3'->'text()' as val3\n> from foo;\n> \n\nThanks Tom ! This works :)\n\nI owe you one.\n\nBest regards,\n Jesper\n\n\n\n",
"msg_date": "Thu, 14 Nov 2019 15:46:08 -0500",
"msg_from": "Jesper Pedersen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON path"
}
] |
[
{
"msg_contents": "I'm completely baffled by this problem: I'm doing a delete that joins three\nmodest-sized tables, and it gets completely stuck: 100% CPU use forever.\nHere's the query:\n\nexplain analyze\n select count(1) from registry.categories\n where category_id = 15 and id in\n (select c.id from registry.categories c\n left join registry.category_staging_15 st on (c.id = st.id) where\nc.category_id = 15 and st.id is null);\n\nIf I leave out the \"analyze\", here's what I get (note that the\ncategories_staging_N table's name changes every time; it's\ncreated on demand as \"create table categories_staging_n(id integer)\").\n\nAggregate (cost=193.54..193.55 rows=1 width=8)\n -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\n Join Filter: (categories.id = c.id)\n -> Index Scan using i_categories_category_id on categories\n (cost=0.42..2.44 rows=1 width=4)\n Index Cond: (category_id = 23)\n -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\n Join Filter: (c.id = st.id)\n -> Index Scan using i_categories_category_id on categories c\n (cost=0.42..2.44 rows=1 width=4)\n Index Cond: (category_id = 23)\n -> Seq Scan on category_staging_23 st (cost=0.00..99.40\nrows=7140 width=4)\n\nThe tables are small. From a debugging printout:\n\n 7997 items in table registry.category_staging_15\n228292 items in table registry.categories\n309398 items in table registry.smiles\n 7997 items in joined registry.category_staging_15 / registry.categories\n\n\nWhat on Earth could be causing this simple query to be running 100% CPU for\nhours?\n\nPostgres: 10.10\nUbuntu 16.04\nThis is a VirtualBox virtual machine running on a Mac host.\n\nEverything else seems to work as expected; just this one query does this.\n\nThanks,\nCraig\n\nI'm completely baffled by this problem: I'm doing a delete that joins three modest-sized tables, and it gets completely stuck: 100% CPU use forever. Here's the query:explain analyze select count(1) from registry.categories where category_id = 15 and id in (select c.id from registry.categories c left join registry.category_staging_15 st on (c.id = st.id) where c.category_id = 15 and st.id is null);If I leave out the \"analyze\", here's what I get (note that the categories_staging_N table's name changes every time; it'screated on demand as \"create table categories_staging_n(id integer)\").Aggregate (cost=193.54..193.55 rows=1 width=8) -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0) Join Filter: (categories.id = c.id) -> Index Scan using i_categories_category_id on categories (cost=0.42..2.44 rows=1 width=4) Index Cond: (category_id = 23) -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4) Join Filter: (c.id = st.id) -> Index Scan using i_categories_category_id on categories c (cost=0.42..2.44 rows=1 width=4) Index Cond: (category_id = 23) -> Seq Scan on category_staging_23 st (cost=0.00..99.40 rows=7140 width=4)The tables are small. From a debugging printout: 7997 items in table registry.category_staging_15228292 items in table registry.categories309398 items in table registry.smiles 7997 items in joined registry.category_staging_15 / registry.categoriesWhat on Earth could be causing this simple query to be running 100% CPU for hours?Postgres: 10.10Ubuntu 16.04This is a VirtualBox virtual machine running on a Mac host.Everything else seems to work as expected; just this one query does this.Thanks,Craig",
"msg_date": "Thu, 14 Nov 2019 14:19:51 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple DELETE on modest-size table runs 100% CPU forever"
},
{
"msg_contents": "On 2019-Nov-14, Craig James wrote:\n\n> I'm completely baffled by this problem: I'm doing a delete that joins three\n> modest-sized tables, and it gets completely stuck: 100% CPU use forever.\n\nDo you have any FKs there? If any delete is cascading, and you don't\nhave an index on the other side, it'd do tons of seqscans.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 14 Nov 2019 19:22:58 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
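One way to check Alvaro's point, assuming the table being deleted from is registry.categories as in the query above: list the foreign keys that reference it and verify that each referencing column has an index.

SELECT conrelid::regclass AS referencing_table,
       conname,
       pg_get_constraintdef(oid) AS fk_definition
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'registry.categories'::regclass;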
{
"msg_contents": "> If I leave out the \"analyze\", here's what I get (note that the\n> categories_staging_N table's name changes every time; it's\n> created on demand as \"create table categories_staging_n(id integer)\").\n>\n\nHow/when are they created? In the same statement? After create, are you\nanalyzing these tables? If not, the optimizer is blind and may be choosing\na bad plan by chance.\n\nIf I leave out the \"analyze\", here's what I get (note that the categories_staging_N table's name changes every time; it'screated on demand as \"create table categories_staging_n(id integer)\").How/when are they created? In the same statement? After create, are you analyzing these tables? If not, the optimizer is blind and may be choosing a bad plan by chance.",
"msg_date": "Thu, 14 Nov 2019 15:24:47 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
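Concretely, Michael's suggestion would look something like this for the on-demand staging tables (names follow the query in the thread; the load step is elided):

CREATE TABLE registry.category_staging_15 (id integer);
-- ... populate the staging table ...
ANALYZE registry.category_staging_15;
-- and, since the surrounding job also modifies the main table,
-- the same applies there before the delete runs:
ANALYZE registry.categories;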
{
"msg_contents": "On Thu, Nov 14, 2019 at 02:19:51PM -0800, Craig James wrote:\n> I'm completely baffled by this problem: I'm doing a delete that joins three\n> modest-sized tables, and it gets completely stuck: 100% CPU use forever.\n> Here's the query:\n> \n> explain analyze\n> select count(1) from registry.categories\n> where category_id = 15 and id in\n> (select c.id from registry.categories c\n> left join registry.category_staging_15 st on (c.id = st.id) where\n> c.category_id = 15 and st.id is null);\n> \n> If I leave out the \"analyze\", here's what I get (note that the\n\nDo you mean that you're doing DELETE..USING, and that's an explain for SELECT\nCOUNT() with same join conditions ? Can you show explain for the DELETE, and\n\\d for the tables ? \n\nIf there's FKs, do the other tables have indices on their referencING columns ?\n\nhttps://www.postgresql.org/docs/devel/static/ddl-constraints.html#DDL-CONSTRAINTS-FK\n\"Since a DELETE of a row from the referenced table [...] will require a scan of\nthe referencing table for rows matching the old value, it is often a good idea\nto index the referencing columns too.\"\n\nJustin\n\n\n",
"msg_date": "Thu, 14 Nov 2019 16:28:45 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
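As Justin notes, EXPLAIN works on DML too, so the DELETE itself can be examined directly. A sketch, assuming the delete has the "id IN (SELECT ...)" shape implied by the query above; EXPLAIN ANALYZE actually executes the DELETE, hence the transaction wrapper.

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM registry.categories
WHERE category_id = 15
  AND id IN (SELECT c.id
             FROM registry.categories c
             LEFT JOIN registry.category_staging_15 st ON c.id = st.id
             WHERE c.category_id = 15 AND st.id IS NULL);
ROLLBACK;  -- keep the rows while diagnosing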
{
"msg_contents": "Hi,\n\nOn 2019-11-14 14:19:51 -0800, Craig James wrote:\n> I'm completely baffled by this problem: I'm doing a delete that joins three\n> modest-sized tables, and it gets completely stuck: 100% CPU use forever.\n> Here's the query:\n\nI assume this is intended to be an equivalent SELECT? Because you did\nmention DELETE, but I'm not seeing one here? Could you actually show\nthat query - surely that didn't include a count() etc... You can\nEPLAIN DELETEs too.\n\n\n\n> explain analyze\n> select count(1) from registry.categories\n> where category_id = 15 and id in\n> (select c.id from registry.categories c\n> left join registry.category_staging_15 st on (c.id = st.id) where\n> c.category_id = 15 and st.id is null);\n> \n> If I leave out the \"analyze\", here's what I get (note that the\n> categories_staging_N table's name changes every time; it's\n> created on demand as \"create table categories_staging_n(id integer)\").\n\n> Aggregate (cost=193.54..193.55 rows=1 width=8)\n> -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\n> Join Filter: (categories.id = c.id)\n> -> Index Scan using i_categories_category_id on categories\n> (cost=0.42..2.44 rows=1 width=4)\n> Index Cond: (category_id = 23)\n> -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\n> Join Filter: (c.id = st.id)\n> -> Index Scan using i_categories_category_id on categories c\n> (cost=0.42..2.44 rows=1 width=4)\n> Index Cond: (category_id = 23)\n> -> Seq Scan on category_staging_23 st (cost=0.00..99.40\n> rows=7140 width=4)\n> \n> The tables are small. From a debugging printout:\n\n\nIs categories.category_id unique? Does the plan change if you ANALYZE\nthe tables?\n\n\nThis plan doesn't look like it'd actually take long, if the estimates\nare correct.\n\n\n> What on Earth could be causing this simple query to be running 100% CPU for\n> hours?\n\nIs the DELETE actually taking that long, or the query you showed the\nexplain for, or both?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Nov 2019 14:29:29 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 2:29 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2019-11-14 14:19:51 -0800, Craig James wrote:\n> > I'm completely baffled by this problem: I'm doing a delete that joins\n> three\n> > modest-sized tables, and it gets completely stuck: 100% CPU use forever.\n> > Here's the query:\n>\n> I assume this is intended to be an equivalent SELECT? Because you did\n> mention DELETE, but I'm not seeing one here? Could you actually show\n> that query - surely that didn't include a count() etc... You can\n> EPLAIN DELETEs too.\n>\n\nSorry, my explanation was misleading. It is a \"delete ... where id in\n(select ...)\". But I discovered that the select part itself never\ncompletes, whether you include it in the delete or not. So I only showed\nthe select, which I converted to a \"select count(1) ...\" for simplicity.\n\n\n> > explain analyze\n> > select count(1) from registry.categories\n> > where category_id = 15 and id in\n> > (select c.id from registry.categories c\n> > left join registry.category_staging_15 st on (c.id = st.id) where\n> > c.category_id = 15 and st.id is null);\n> >\n> > If I leave out the \"analyze\", here's what I get (note that the\n> > categories_staging_N table's name changes every time; it's\n> > created on demand as \"create table categories_staging_n(id integer)\").\n>\n> > Aggregate (cost=193.54..193.55 rows=1 width=8)\n> > -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\n> > Join Filter: (categories.id = c.id)\n> > -> Index Scan using i_categories_category_id on categories\n> > (cost=0.42..2.44 rows=1 width=4)\n> > Index Cond: (category_id = 23)\n> > -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\n> > Join Filter: (c.id = st.id)\n> > -> Index Scan using i_categories_category_id on\n> categories c\n> > (cost=0.42..2.44 rows=1 width=4)\n> > Index Cond: (category_id = 23)\n> > -> Seq Scan on category_staging_23 st (cost=0.00..99.40\n> > rows=7140 width=4)\n> >\n> > The tables are small. From a debugging printout:\n>\n>\n> Is categories.category_id unique?\n\n\nNo, categories.category_id is not unique. It has a b-tree index.\n\n\n> Does the plan change if you ANALYZE\n> the tables?\n>\n\nNo. No difference.\n\nBut interestingly, it changes as the process goes forward. And it's\ninconsistent. Here's an example: it's going through several \"categories\" to\nupdate each. The first plan works, and it typically uses this plan a few\ntimes. 
But when selects the second plan, it gets stuck.\n\n----------------\n15994 items in table registry.category_staging_15\n245598 items in table registry.categories\n309398 items in table registry.smiles\n15994 items in joined registry.category_staging_15 / registry.categories\n0 items to be inserted\ninserted: 0E0\nEXPLAIN: Aggregate (cost=3464.82..3464.83 rows=1 width=8)\nEXPLAIN: -> Hash Semi Join (cost=2029.16..3464.05 rows=311 width=0)\nEXPLAIN: Hash Cond: (categories.id = c.id)\nEXPLAIN: -> Index Scan using i_categories_category_id on\ncategories (cost=0.42..1405.28 rows=7900 width=4)\nEXPLAIN: Index Cond: (category_id = 15)\nEXPLAIN: -> Hash (cost=1933.44..1933.44 rows=7624 width=4)\nEXPLAIN: -> Hash Anti Join (cost=431.28..1933.44 rows=7624\nwidth=4)\nEXPLAIN: Hash Cond: (c.id = st.id)\nEXPLAIN: -> Index Scan using i_categories_category_id\non categories c (cost=0.42..1405.28 rows=7900 width=4)\nEXPLAIN: Index Cond: (category_id = 15)\nEXPLAIN: -> Hash (cost=230.94..230.94 rows=15994\nwidth=4)\nEXPLAIN: -> Seq Scan on category_staging_15 st\n (cost=0.00..230.94 rows=15994 width=4)\n0 items deleted\n7997 items inserted\n----------------\n6250 items in table registry.category_staging_25\n245598 items in table registry.categories\n309398 items in table registry.smiles\n6250 items in joined registry.category_staging_25 / registry.categories\n6250 items to be inserted\ninserted: 3125\nEXPLAIN: Aggregate (cost=173.51..173.52 rows=1 width=8)\nEXPLAIN: -> Nested Loop Semi Join (cost=0.84..173.51 rows=1 width=0)\nEXPLAIN: Join Filter: (categories.id = c.id)\nEXPLAIN: -> Index Scan using i_categories_category_id on\ncategories (cost=0.42..2.44 rows=1 width=4)\nEXPLAIN: Index Cond: (category_id = 25)\nEXPLAIN: -> Nested Loop Anti Join (cost=0.42..171.06 rows=1\nwidth=4)\nEXPLAIN: Join Filter: (c.id = st.id)\nEXPLAIN: -> Index Scan using i_categories_category_id on\ncategories c (cost=0.42..2.44 rows=1 width=4)\nEXPLAIN: Index Cond: (category_id = 25)\nEXPLAIN: -> Seq Scan on category_staging_25 st\n (cost=0.00..90.50 rows=6250 width=4)\n\nThis plan doesn't look like it'd actually take long, if the estimates\n> are correct.\n>\n\nAnother data point: during this query, Postgres is burning 100% CPU and\ndoing no I/O. Pretty much for hours if I let it go.\n\nThanks for your help,\nCraig\n\nOn Thu, Nov 14, 2019 at 2:29 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2019-11-14 14:19:51 -0800, Craig James wrote:\n> I'm completely baffled by this problem: I'm doing a delete that joins three\n> modest-sized tables, and it gets completely stuck: 100% CPU use forever.\n> Here's the query:\n\nI assume this is intended to be an equivalent SELECT? Because you did\nmention DELETE, but I'm not seeing one here? Could you actually show\nthat query - surely that didn't include a count() etc... You can\nEPLAIN DELETEs too.Sorry, my explanation was misleading. It is a \"delete ... where id in (select ...)\". But I discovered that the select part itself never completes, whether you include it in the delete or not. So I only showed the select, which I converted to a \"select count(1) ...\" for simplicity. 
> explain analyze\n> select count(1) from registry.categories\n> where category_id = 15 and id in\n> (select c.id from registry.categories c\n> left join registry.category_staging_15 st on (c.id = st.id) where\n> c.category_id = 15 and st.id is null);\n> \n> If I leave out the \"analyze\", here's what I get (note that the\n> categories_staging_N table's name changes every time; it's\n> created on demand as \"create table categories_staging_n(id integer)\").\n\n> Aggregate (cost=193.54..193.55 rows=1 width=8)\n> -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\n> Join Filter: (categories.id = c.id)\n> -> Index Scan using i_categories_category_id on categories\n> (cost=0.42..2.44 rows=1 width=4)\n> Index Cond: (category_id = 23)\n> -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\n> Join Filter: (c.id = st.id)\n> -> Index Scan using i_categories_category_id on categories c\n> (cost=0.42..2.44 rows=1 width=4)\n> Index Cond: (category_id = 23)\n> -> Seq Scan on category_staging_23 st (cost=0.00..99.40\n> rows=7140 width=4)\n> \n> The tables are small. From a debugging printout:\n\n\nIs categories.category_id unique?No, categories.category_id is not unique. It has a b-tree index. Does the plan change if you ANALYZE\nthe tables?No. No difference.But interestingly, it changes as the process goes forward. And it's inconsistent. Here's an example: it's going through several \"categories\" to update each. The first plan works, and it typically uses this plan a few times. But when selects the second plan, it gets stuck.----------------15994 items in table registry.category_staging_15245598 items in table registry.categories309398 items in table registry.smiles15994 items in joined registry.category_staging_15 / registry.categories0 items to be insertedinserted: 0E0EXPLAIN: Aggregate (cost=3464.82..3464.83 rows=1 width=8)EXPLAIN: -> Hash Semi Join (cost=2029.16..3464.05 rows=311 width=0)EXPLAIN: Hash Cond: (categories.id = c.id)EXPLAIN: -> Index Scan using i_categories_category_id on categories (cost=0.42..1405.28 rows=7900 width=4)EXPLAIN: Index Cond: (category_id = 15)EXPLAIN: -> Hash (cost=1933.44..1933.44 rows=7624 width=4)EXPLAIN: -> Hash Anti Join (cost=431.28..1933.44 rows=7624 width=4)EXPLAIN: Hash Cond: (c.id = st.id)EXPLAIN: -> Index Scan using i_categories_category_id on categories c (cost=0.42..1405.28 rows=7900 width=4)EXPLAIN: Index Cond: (category_id = 15)EXPLAIN: -> Hash (cost=230.94..230.94 rows=15994 width=4)EXPLAIN: -> Seq Scan on category_staging_15 st (cost=0.00..230.94 rows=15994 width=4)0 items deleted7997 items inserted----------------6250 items in table registry.category_staging_25245598 items in table registry.categories309398 items in table registry.smiles6250 items in joined registry.category_staging_25 / registry.categories6250 items to be insertedinserted: 3125EXPLAIN: Aggregate (cost=173.51..173.52 rows=1 width=8)EXPLAIN: -> Nested Loop Semi Join (cost=0.84..173.51 rows=1 width=0)EXPLAIN: Join Filter: (categories.id = c.id)EXPLAIN: -> Index Scan using i_categories_category_id on categories (cost=0.42..2.44 rows=1 width=4)EXPLAIN: Index Cond: (category_id = 25)EXPLAIN: -> Nested Loop Anti Join (cost=0.42..171.06 rows=1 width=4)EXPLAIN: Join Filter: (c.id = st.id)EXPLAIN: -> Index Scan using i_categories_category_id on categories c (cost=0.42..2.44 rows=1 width=4)EXPLAIN: Index Cond: (category_id = 25)EXPLAIN: -> Seq Scan on category_staging_25 st (cost=0.00..90.50 rows=6250 width=4)This plan doesn't look like it'd actually take long, if the 
estimates\nare correct.Another data point: during this query, Postgres is burning 100% CPU and doing no I/O. Pretty much for hours if I let it go. Thanks for your help,Craig",
"msg_date": "Fri, 15 Nov 2019 13:06:42 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
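A session-local diagnostic sketch (not a fix, and not from the thread): if disabling nested loops makes the query finish quickly, the problem is the rows=1 estimate on the category_id index scans rather than anything about the data itself.

BEGIN;
SET LOCAL enable_nestloop = off;
EXPLAIN ANALYZE
SELECT count(1) FROM registry.categories
WHERE category_id = 25
  AND id IN (SELECT c.id
             FROM registry.categories c
             LEFT JOIN registry.category_staging_25 st ON c.id = st.id
             WHERE c.category_id = 25 AND st.id IS NULL);
ROLLBACK;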
{
"msg_contents": "Hey James,\r\n\r\nLooking at your select query, if that’s taking forever STILL. Could it be because of the IN clause? If so, try using EXISTS instead of IN.. should give much better results.\r\n\r\nFrom: Craig James <[email protected]>\r\nSent: Friday, November 15, 2019 1:07 PM\r\nTo: Andres Freund <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: Simple DELETE on modest-size table runs 100% CPU forever\r\n\r\nOn Thu, Nov 14, 2019 at 2:29 PM Andres Freund <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\n\r\nOn 2019-11-14 14:19:51 -0800, Craig James wrote:\r\n> I'm completely baffled by this problem: I'm doing a delete that joins three\r\n> modest-sized tables, and it gets completely stuck: 100% CPU use forever.\r\n> Here's the query:\r\n\r\nI assume this is intended to be an equivalent SELECT? Because you did\r\nmention DELETE, but I'm not seeing one here? Could you actually show\r\nthat query - surely that didn't include a count() etc... You can\r\nEPLAIN DELETEs too.\r\n\r\nSorry, my explanation was misleading. It is a \"delete ... where id in (select ...)\". But I discovered that the select part itself never completes, whether you include it in the delete or not. So I only showed the select, which I converted to a \"select count(1) ...\" for simplicity.\r\n\r\n> explain analyze\r\n> select count(1) from registry.categories\r\n> where category_id = 15 and id in\r\n> (select c.id<http://c.id> from registry.categories c\r\n> left join registry.category_staging_15 st on (c.id<http://c.id> = st.id<http://st.id>) where\r\n> c.category_id = 15 and st.id<http://st.id> is null);\r\n>\r\n> If I leave out the \"analyze\", here's what I get (note that the\r\n> categories_staging_N table's name changes every time; it's\r\n> created on demand as \"create table categories_staging_n(id integer)\").\r\n\r\n> Aggregate (cost=193.54..193.55 rows=1 width=8)\r\n> -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\r\n> Join Filter: (categories.id<http://categories.id> = c.id<http://c.id>)\r\n> -> Index Scan using i_categories_category_id on categories\r\n> (cost=0.42..2.44 rows=1 width=4)\r\n> Index Cond: (category_id = 23)\r\n> -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\r\n> Join Filter: (c.id<http://c.id> = st.id<http://st.id>)\r\n> -> Index Scan using i_categories_category_id on categories c\r\n> (cost=0.42..2.44 rows=1 width=4)\r\n> Index Cond: (category_id = 23)\r\n> -> Seq Scan on category_staging_23 st (cost=0.00..99.40\r\n> rows=7140 width=4)\r\n>\r\n> The tables are small. From a debugging printout:\r\n\r\n\r\nIs categories.category_id unique?\r\n\r\nNo, categories.category_id is not unique. It has a b-tree index.\r\n\r\nDoes the plan change if you ANALYZE\r\nthe tables?\r\n\r\nNo. No difference.\r\n\r\nBut interestingly, it changes as the process goes forward. And it's inconsistent. Here's an example: it's going through several \"categories\" to update each. The first plan works, and it typically uses this plan a few times. 
But when selects the second plan, it gets stuck.\r\n\r\n----------------\r\n15994 items in table registry.category_staging_15\r\n245598 items in table registry.categories\r\n309398 items in table registry.smiles\r\n15994 items in joined registry.category_staging_15 / registry.categories\r\n0 items to be inserted\r\ninserted: 0E0\r\nEXPLAIN: Aggregate (cost=3464.82..3464.83 rows=1 width=8)\r\nEXPLAIN: -> Hash Semi Join (cost=2029.16..3464.05 rows=311 width=0)\r\nEXPLAIN: Hash Cond: (categories.id<http://categories.id> = c.id<http://c.id>)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories (cost=0.42..1405.28 rows=7900 width=4)\r\nEXPLAIN: Index Cond: (category_id = 15)\r\nEXPLAIN: -> Hash (cost=1933.44..1933.44 rows=7624 width=4)\r\nEXPLAIN: -> Hash Anti Join (cost=431.28..1933.44 rows=7624 width=4)\r\nEXPLAIN: Hash Cond: (c.id<http://c.id> = st.id<http://st.id>)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories c (cost=0.42..1405.28 rows=7900 width=4)\r\nEXPLAIN: Index Cond: (category_id = 15)\r\nEXPLAIN: -> Hash (cost=230.94..230.94 rows=15994 width=4)\r\nEXPLAIN: -> Seq Scan on category_staging_15 st (cost=0.00..230.94 rows=15994 width=4)\r\n0 items deleted\r\n7997 items inserted\r\n----------------\r\n6250 items in table registry.category_staging_25\r\n245598 items in table registry.categories\r\n309398 items in table registry.smiles\r\n6250 items in joined registry.category_staging_25 / registry.categories\r\n6250 items to be inserted\r\ninserted: 3125\r\nEXPLAIN: Aggregate (cost=173.51..173.52 rows=1 width=8)\r\nEXPLAIN: -> Nested Loop Semi Join (cost=0.84..173.51 rows=1 width=0)\r\nEXPLAIN: Join Filter: (categories.id<http://categories.id> = c.id<http://c.id>)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories (cost=0.42..2.44 rows=1 width=4)\r\nEXPLAIN: Index Cond: (category_id = 25)\r\nEXPLAIN: -> Nested Loop Anti Join (cost=0.42..171.06 rows=1 width=4)\r\nEXPLAIN: Join Filter: (c.id<http://c.id> = st.id<http://st.id>)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories c (cost=0.42..2.44 rows=1 width=4)\r\nEXPLAIN: Index Cond: (category_id = 25)\r\nEXPLAIN: -> Seq Scan on category_staging_25 st (cost=0.00..90.50 rows=6250 width=4)\r\n\r\nThis plan doesn't look like it'd actually take long, if the estimates\r\nare correct.\r\n\r\nAnother data point: during this query, Postgres is burning 100% CPU and doing no I/O. Pretty much for hours if I let it go.\r\n\r\nThanks for your help,\r\nCraig\r\n\r\n\n\n\n\n\n\n\n\n\nHey James,\n \nLooking at your select query, if that’s taking forever STILL. Could it be because of the IN clause? If so, try using EXISTS instead of IN.. should give much better results.\n \nFrom: Craig James <[email protected]>\r\n\nSent: Friday, November 15, 2019 1:07 PM\nTo: Andres Freund <[email protected]>\nCc: [email protected]\nSubject: Re: Simple DELETE on modest-size table runs 100% CPU forever\n \n\n\nOn Thu, Nov 14, 2019 at 2:29 PM Andres Freund <[email protected]> wrote:\n\n\n\nHi,\n\r\nOn 2019-11-14 14:19:51 -0800, Craig James wrote:\r\n> I'm completely baffled by this problem: I'm doing a delete that joins three\r\n> modest-sized tables, and it gets completely stuck: 100% CPU use forever.\r\n> Here's the query:\n\r\nI assume this is intended to be an equivalent SELECT? Because you did\r\nmention DELETE, but I'm not seeing one here? Could you actually show\r\nthat query - surely that didn't include a count() etc... 
You can\r\nEPLAIN DELETEs too.\n\n\n \n\n\nSorry, my explanation was misleading. It is a \"delete ... where id in (select ...)\". But I discovered that the select part itself never completes, whether you include it in the delete or not. So I only showed the select, which I converted\r\n to a \"select count(1) ...\" for simplicity.\n\n\n \n\n\n> explain analyze\r\n> select count(1) from registry.categories\r\n> where category_id = 15 and id in\r\n> (select c.id from registry.categories c\r\n> left join registry.category_staging_15 st on (c.id =\r\nst.id) where\r\n> c.category_id = 15 and st.id is null);\r\n> \r\n> If I leave out the \"analyze\", here's what I get (note that the\r\n> categories_staging_N table's name changes every time; it's\r\n> created on demand as \"create table categories_staging_n(id integer)\").\n\r\n> Aggregate (cost=193.54..193.55 rows=1 width=8)\r\n> -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\r\n> Join Filter: (categories.id =\r\nc.id)\r\n> -> Index Scan using i_categories_category_id on categories\r\n> (cost=0.42..2.44 rows=1 width=4)\r\n> Index Cond: (category_id = 23)\r\n> -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\r\n> Join Filter: (c.id = \r\nst.id)\r\n> -> Index Scan using i_categories_category_id on categories c\r\n> (cost=0.42..2.44 rows=1 width=4)\r\n> Index Cond: (category_id = 23)\r\n> -> Seq Scan on category_staging_23 st (cost=0.00..99.40\r\n> rows=7140 width=4)\r\n> \r\n> The tables are small. From a debugging printout:\n\n\r\nIs categories.category_id unique?\n\n\n \n\n\nNo, categories.category_id is not unique. It has a b-tree index.\n\n\n \n\n\nDoes the plan change if you ANALYZE\r\nthe tables?\n\n\n \n\n\nNo. No difference.\n\n\n \n\n\nBut interestingly, it changes as the process goes forward. And it's inconsistent. Here's an example: it's going through several \"categories\" to update each. The first plan works, and it typically uses this plan a few times. 
But when selects\r\n the second plan, it gets stuck.\n\n\n \n\n\n----------------\r\n15994 items in table registry.category_staging_15\r\n245598 items in table registry.categories\r\n309398 items in table registry.smiles\r\n15994 items in joined registry.category_staging_15 / registry.categories\r\n0 items to be inserted\r\ninserted: 0E0\r\nEXPLAIN: Aggregate (cost=3464.82..3464.83 rows=1 width=8)\r\nEXPLAIN: -> Hash Semi Join (cost=2029.16..3464.05 rows=311 width=0)\r\nEXPLAIN: Hash Cond: (categories.id = \r\nc.id)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories (cost=0.42..1405.28 rows=7900 width=4)\r\nEXPLAIN: Index Cond: (category_id = 15)\r\nEXPLAIN: -> Hash (cost=1933.44..1933.44 rows=7624 width=4)\r\nEXPLAIN: -> Hash Anti Join (cost=431.28..1933.44 rows=7624 width=4)\r\nEXPLAIN: Hash Cond: (c.id = \r\nst.id)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories c (cost=0.42..1405.28 rows=7900 width=4)\r\nEXPLAIN: Index Cond: (category_id = 15)\r\nEXPLAIN: -> Hash (cost=230.94..230.94 rows=15994 width=4)\r\nEXPLAIN: -> Seq Scan on category_staging_15 st (cost=0.00..230.94 rows=15994 width=4)\r\n0 items deleted\r\n7997 items inserted\r\n----------------\r\n6250 items in table registry.category_staging_25\r\n245598 items in table registry.categories\r\n309398 items in table registry.smiles\r\n6250 items in joined registry.category_staging_25 / registry.categories\r\n6250 items to be inserted\r\ninserted: 3125\r\nEXPLAIN: Aggregate (cost=173.51..173.52 rows=1 width=8)\r\nEXPLAIN: -> Nested Loop Semi Join (cost=0.84..173.51 rows=1 width=0)\r\nEXPLAIN: Join Filter: (categories.id =\r\nc.id)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories (cost=0.42..2.44 rows=1 width=4)\r\nEXPLAIN: Index Cond: (category_id = 25)\r\nEXPLAIN: -> Nested Loop Anti Join (cost=0.42..171.06 rows=1 width=4)\r\nEXPLAIN: Join Filter: (c.id = \r\nst.id)\r\nEXPLAIN: -> Index Scan using i_categories_category_id on categories c (cost=0.42..2.44 rows=1 width=4)\r\nEXPLAIN: Index Cond: (category_id = 25)\r\nEXPLAIN: -> Seq Scan on category_staging_25 st (cost=0.00..90.50 rows=6250 width=4)\n\n\n \n\n\nThis plan doesn't look like it'd actually take long, if the estimates\r\nare correct.\n\n\n \n\n\nAnother data point: during this query, Postgres is burning 100% CPU and doing no I/O. Pretty much for hours if I let it go. \n\n\n \n\n\nThanks for your help,\n\n\nCraig",
"msg_date": "Fri, 15 Nov 2019 21:11:41 +0000",
"msg_from": "Ravi Rai <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Simple DELETE on modest-size table runs 100% CPU forever"
},
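For completeness, this is roughly what Ravi's EXISTS form of the same query would look like (an equivalent rewrite of the IN (...) / LEFT JOIN ... IS NULL shape, assuming id is never NULL). Note that the plans earlier in the thread already show Semi Join nodes, i.e. PostgreSQL is already treating the IN (SELECT ...) as a semi-join, so the rewrite is mostly a readability change here.

SELECT count(1)
FROM registry.categories cat
WHERE cat.category_id = 15
  AND EXISTS (SELECT 1
              FROM registry.categories c
              WHERE c.category_id = 15
                AND c.id = cat.id
                AND NOT EXISTS (SELECT 1
                                FROM registry.category_staging_15 st
                                WHERE st.id = c.id));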
{
"msg_contents": "On Thu, Nov 14, 2019 at 5:20 PM Craig James <[email protected]> wrote:\n\n> I'm completely baffled by this problem: I'm doing a delete that joins\n> three modest-sized tables, and it gets completely stuck: 100% CPU use\n> forever. Here's the query:\n>\n>\n> Aggregate (cost=193.54..193.55 rows=1 width=8)\n> -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\n> Join Filter: (categories.id = c.id)\n> -> Index Scan using i_categories_category_id on categories\n> (cost=0.42..2.44 rows=1 width=4)\n> Index Cond: (category_id = 23)\n> -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\n> Join Filter: (c.id = st.id)\n> -> Index Scan using i_categories_category_id on categories\n> c (cost=0.42..2.44 rows=1 width=4)\n> Index Cond: (category_id = 23)\n> -> Seq Scan on category_staging_23 st (cost=0.00..99.40\n> rows=7140 width=4)\n>\n\n\nIf the estimates were correct, this shouldn't be slow. But how can it\nscrew up the estimate for this by much, when the conditions are so simple?\nHow many rows are there actually in categories where category_id=23?\n\nWhat do you see in `select * from pg_stats where tablename='categories' and\nattname='category_id' \\x\\g\\x`?\n\nSince it thinks the seq scan of category_staging_23 is only going to\nhappen once (at the bottom of two nested loops, but each executing just\nonce) it sees no benefit in hashing that table. Of course it is actually\nhappening a lot more than once.\n\nCheers,\n\nJeff\n\nOn Thu, Nov 14, 2019 at 5:20 PM Craig James <[email protected]> wrote:I'm completely baffled by this problem: I'm doing a delete that joins three modest-sized tables, and it gets completely stuck: 100% CPU use forever. Here's the query:Aggregate (cost=193.54..193.55 rows=1 width=8) -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0) Join Filter: (categories.id = c.id) -> Index Scan using i_categories_category_id on categories (cost=0.42..2.44 rows=1 width=4) Index Cond: (category_id = 23) -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4) Join Filter: (c.id = st.id) -> Index Scan using i_categories_category_id on categories c (cost=0.42..2.44 rows=1 width=4) Index Cond: (category_id = 23) -> Seq Scan on category_staging_23 st (cost=0.00..99.40 rows=7140 width=4)If the estimates were correct, this shouldn't be slow. But how can it screw up the estimate for this by much, when the conditions are so simple? How many rows are there actually in categories where category_id=23?What do you see in `select * from pg_stats where tablename='categories' and attname='category_id' \\x\\g\\x`?Since it thinks the seq scan of \n\ncategory_staging_23 is only going to happen once (at the bottom of two nested loops, but each executing just once) it sees no benefit in hashing that table. Of course it is actually happening a lot more than once.Cheers,Jeff",
"msg_date": "Fri, 15 Nov 2019 17:44:48 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 2:45 PM Jeff Janes <[email protected]> wrote:\n\n> On Thu, Nov 14, 2019 at 5:20 PM Craig James <[email protected]> wrote:\n>\n>> I'm completely baffled by this problem: I'm doing a delete that joins\n>> three modest-sized tables, and it gets completely stuck: 100% CPU use\n>> forever. Here's the query:\n>>\n>>\n>> Aggregate (cost=193.54..193.55 rows=1 width=8)\n>> -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0)\n>> Join Filter: (categories.id = c.id)\n>> -> Index Scan using i_categories_category_id on categories\n>> (cost=0.42..2.44 rows=1 width=4)\n>> Index Cond: (category_id = 23)\n>> -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4)\n>> Join Filter: (c.id = st.id)\n>> -> Index Scan using i_categories_category_id on categories\n>> c (cost=0.42..2.44 rows=1 width=4)\n>> Index Cond: (category_id = 23)\n>> -> Seq Scan on category_staging_23 st (cost=0.00..99.40\n>> rows=7140 width=4)\n>>\n>\n>\n> If the estimates were correct, this shouldn't be slow. But how can it\n> screw up the estimate for this by much, when the conditions are so simple?\n> How many rows are there actually in categories where category_id=23?\n>\n\nI actually waited long enough for this to finish an \"explain analyze\". Here\nit is, preceded by stats about the table that I added to the program:\n\n10000 items in table registry.category_staging_8\n274602 items in table registry.categories\n309398 items in table registry.smiles\n10000 items in joined registry.category_staging_8 / registry.categories\nAggregate (cost=274.90..274.91 rows=1 width=8) (actual\ntime=7666916.832..7666916.832 rows=1 loops=1)\n -> Nested Loop Semi Join (cost=0.84..274.89 rows=1 width=0) (actual\ntime=7666916.829..7666916.829 rows=0 loops=1)\n Join Filter: (categories.id = c.id)\n -> Index Scan using i_categories_category_id on categories\n (cost=0.42..2.44 rows=1 width=4) (actual time=0.015..6.192 rows=5000\nloops=1)\n Index Cond: (category_id = 8)\n -> Nested Loop Anti Join (cost=0.42..272.44 rows=1 width=4)\n(actual time=1533.380..1533.380 rows=0 loops=5000)\n Join Filter: (c.id = st.id)\n Rows Removed by Join Filter: 12497500\n -> Index Scan using i_categories_category_id on categories c\n (cost=0.42..2.44 rows=1 width=4) (actual time=0.017..1.927 rows=5000\nloops=5000)\n Index Cond: (category_id = 8)\n -> Seq Scan on category_staging_8 st (cost=0.00..145.00\nrows=10000 width=4) (actual time=0.003..0.153 rows=2500 loops=25000000)\nPlanning time: 0.311 ms\nExecution time: 7666916.865 ms\n\nBTW, I'll note at this point that \"analyze category_staging_8\" prior to\nthis query made no difference.\n\n\n> What do you see in `select * from pg_stats where tablename='categories'\n> and attname='category_id' \\x\\g\\x`?\n>\n\ndb=> select * from pg_stats where tablename='categories' and\nattname='category_id' \\x\\g\\x;\nExpanded display is on.\n-[ RECORD 1\n]----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nschemaname | registry\ntablename | categories\nattname | category_id\ninherited | f\nnull_frac | 0\navg_width | 4\nn_distinct | 21\nmost_common_vals |\n{4,3,2,10,11,13,12,16,9,6,7,5,15,23,14,25,24,1,26,28,27}\nmost_common_freqs |\n{0.2397,0.159933,0.0926667,0.0556,0.0555667,0.0546333,0.0525333,0.0439,0.0426667,0.0346333,0.0331,0.0302333,0.0288333,0.0240667,0.0224,0.0122333,0.011,0.0035,0.00233333,0.000366667,0.0001}\nhistogram_bounds 
|\ncorrelation | -0.0200765\nmost_common_elems |\nmost_common_elem_freqs |\nelem_count_histogram |\n-[ RECORD 2\n]----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nschemaname | test\ntablename | categories\nattname | category_id\ninherited | f\nnull_frac | 0\navg_width | 4\nn_distinct | 11\nmost_common_vals | {10,30,50,1,2,0,3,9,6,7,5}\nmost_common_freqs |\n{0.132051,0.132051,0.132051,0.10641,0.0935897,0.0807692,0.0807692,0.0807692,0.0769231,0.0551282,0.0294872}\nhistogram_bounds |\ncorrelation | -0.435298\nmost_common_elems |\nmost_common_elem_freqs |\n\n\n>\n> Since it thinks the seq scan of category_staging_23 is only going to\n> happen once (at the bottom of two nested loops, but each executing just\n> once) it sees no benefit in hashing that table. Of course it is actually\n> happening a lot more than once.\n>\n\nYeah, 25 million times to be exact.\n\n\n>\n> Cheers,\n>\n> Jeff\n>\n\nThanks,\nCraig\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n3430 Carmel Mountain Road, Suite 250\nSan Diego, CA 92121\n---------------------------------\n\nOn Fri, Nov 15, 2019 at 2:45 PM Jeff Janes <[email protected]> wrote:On Thu, Nov 14, 2019 at 5:20 PM Craig James <[email protected]> wrote:I'm completely baffled by this problem: I'm doing a delete that joins three modest-sized tables, and it gets completely stuck: 100% CPU use forever. Here's the query:Aggregate (cost=193.54..193.55 rows=1 width=8) -> Nested Loop Semi Join (cost=0.84..193.54 rows=1 width=0) Join Filter: (categories.id = c.id) -> Index Scan using i_categories_category_id on categories (cost=0.42..2.44 rows=1 width=4) Index Cond: (category_id = 23) -> Nested Loop Anti Join (cost=0.42..191.09 rows=1 width=4) Join Filter: (c.id = st.id) -> Index Scan using i_categories_category_id on categories c (cost=0.42..2.44 rows=1 width=4) Index Cond: (category_id = 23) -> Seq Scan on category_staging_23 st (cost=0.00..99.40 rows=7140 width=4)If the estimates were correct, this shouldn't be slow. But how can it screw up the estimate for this by much, when the conditions are so simple? How many rows are there actually in categories where category_id=23?I actually waited long enough for this to finish an \"explain analyze\". 
Here it is, preceded by stats about the table that I added to the program:10000 items in table registry.category_staging_8274602 items in table registry.categories309398 items in table registry.smiles10000 items in joined registry.category_staging_8 / registry.categoriesAggregate (cost=274.90..274.91 rows=1 width=8) (actual time=7666916.832..7666916.832 rows=1 loops=1) -> Nested Loop Semi Join (cost=0.84..274.89 rows=1 width=0) (actual time=7666916.829..7666916.829 rows=0 loops=1) Join Filter: (categories.id = c.id) -> Index Scan using i_categories_category_id on categories (cost=0.42..2.44 rows=1 width=4) (actual time=0.015..6.192 rows=5000 loops=1) Index Cond: (category_id = 8) -> Nested Loop Anti Join (cost=0.42..272.44 rows=1 width=4) (actual time=1533.380..1533.380 rows=0 loops=5000) Join Filter: (c.id = st.id) Rows Removed by Join Filter: 12497500 -> Index Scan using i_categories_category_id on categories c (cost=0.42..2.44 rows=1 width=4) (actual time=0.017..1.927 rows=5000 loops=5000) Index Cond: (category_id = 8) -> Seq Scan on category_staging_8 st (cost=0.00..145.00 rows=10000 width=4) (actual time=0.003..0.153 rows=2500 loops=25000000)Planning time: 0.311 msExecution time: 7666916.865 msBTW, I'll note at this point that \"analyze category_staging_8\" prior to this query made no difference. What do you see in `select * from pg_stats where tablename='categories' and attname='category_id' \\x\\g\\x`?db=> select * from pg_stats where tablename='categories' and attname='category_id' \\x\\g\\x;Expanded display is on.-[ RECORD 1 ]----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------schemaname | registrytablename | categoriesattname | category_idinherited | fnull_frac | 0avg_width | 4n_distinct | 21most_common_vals | {4,3,2,10,11,13,12,16,9,6,7,5,15,23,14,25,24,1,26,28,27}most_common_freqs | {0.2397,0.159933,0.0926667,0.0556,0.0555667,0.0546333,0.0525333,0.0439,0.0426667,0.0346333,0.0331,0.0302333,0.0288333,0.0240667,0.0224,0.0122333,0.011,0.0035,0.00233333,0.000366667,0.0001}histogram_bounds | correlation | -0.0200765most_common_elems | most_common_elem_freqs | elem_count_histogram | -[ RECORD 2 ]----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------schemaname | testtablename | categoriesattname | category_idinherited | fnull_frac | 0avg_width | 4n_distinct | 11most_common_vals | {10,30,50,1,2,0,3,9,6,7,5}most_common_freqs | {0.132051,0.132051,0.132051,0.10641,0.0935897,0.0807692,0.0807692,0.0807692,0.0769231,0.0551282,0.0294872}histogram_bounds | correlation | -0.435298most_common_elems | most_common_elem_freqs | Since it thinks the seq scan of \n\ncategory_staging_23 is only going to happen once (at the bottom of two nested loops, but each executing just once) it sees no benefit in hashing that table. Of course it is actually happening a lot more than once.Yeah, 25 million times to be exact. Cheers,Jeff\nThanks,Craig\n-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.3430 Carmel Mountain Road, Suite 250San Diego, CA 92121---------------------------------",
"msg_date": "Fri, 15 Nov 2019 16:26:55 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
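Comparing the planner's view of the column with reality makes the problem visible: the explain analyze above finds 5000 rows with category_id = 8, yet 8 does not appear in most_common_vals for the registry schema at all, which is what drives the rows=1 estimates. A quick check:

SELECT category_id, count(*)
FROM registry.categories
GROUP BY category_id
ORDER BY count(*) DESC;
-- If a value with thousands of rows (here category_id = 8) is missing from
-- most_common_vals, the statistics predate those rows.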
{
"msg_contents": "On Fri, Nov 15, 2019 at 7:27 PM Craig James <[email protected]> wrote:\n\n> On Fri, Nov 15, 2019 at 2:45 PM Jeff Janes <[email protected]> wrote:\n> BTW, I'll note at this point that \"analyze category_staging_8\" prior to\n> this query made no difference.\n>\n\nIsn't that the wrong table to have analyzed? The offender here is\n\"categories\", not \"category_staging_8\". Is this some sort of inheritance\nsituation?\n\n\n>\n>> What do you see in `select * from pg_stats where tablename='categories'\n>> and attname='category_id' \\x\\g\\x`?\n>>\n>\n> db=> select * from pg_stats where tablename='categories' and\n> attname='category_id' \\x\\g\\x;\n> Expanded display is on.\n>\n\n\n> ...\n>\nn_distinct | 21\n> most_common_vals |\n> {4,3,2,10,11,13,12,16,9,6,7,5,15,23,14,25,24,1,26,28,27}\n> most_common_freqs |\n> {0.2397,0.159933,0.0926667,0.0556,0.0555667,0.0546333,0.0525333,0.0439,0.0426667,0.0346333,0.0331,0.0302333,0.0288333,0.0240667,0.0224,0.0122333,0.011,0.0035,0.00233333,0.000366667,0.0001}\n>\n\nThere is a path in the analyze code where if the least-seen value in the\nsample was seen more than once (i.e. no value was seen exactly once) then\nit assumes that the seen values are all the values that exist. I think the\nlogic behind that is dubious. I think it is pretty clear that that is\nkicking in here. But why? I think the simple answer is that you analyzed\nthe wrong table, and the statistics shown here might be accurate for some\ntime in the past but are no longer accurate. It is hard to see how a value\npresent 5000 times in a table of 274602 rows could have evaded sampling if\nthey were present at the time the sample was done.\n\nCheers,\n\nJeff\n\nOn Fri, Nov 15, 2019 at 7:27 PM Craig James <[email protected]> wrote:On Fri, Nov 15, 2019 at 2:45 PM Jeff Janes <[email protected]> wrote:BTW, I'll note at this point that \"analyze category_staging_8\" prior to this query made no difference.Isn't that the wrong table to have analyzed? The offender here is \"categories\", not \"category_staging_8\". Is this some sort of inheritance situation? What do you see in `select * from pg_stats where tablename='categories' and attname='category_id' \\x\\g\\x`?db=> select * from pg_stats where tablename='categories' and attname='category_id' \\x\\g\\x;Expanded display is on. ... n_distinct | 21most_common_vals | {4,3,2,10,11,13,12,16,9,6,7,5,15,23,14,25,24,1,26,28,27}most_common_freqs | {0.2397,0.159933,0.0926667,0.0556,0.0555667,0.0546333,0.0525333,0.0439,0.0426667,0.0346333,0.0331,0.0302333,0.0288333,0.0240667,0.0224,0.0122333,0.011,0.0035,0.00233333,0.000366667,0.0001}There is a path in the analyze code where if the least-seen value in the sample was seen more than once (i.e. no value was seen exactly once) then it assumes that the seen values are all the values that exist. I think the logic behind that is dubious. I think it is pretty clear that that is kicking in here. But why? I think the simple answer is that you analyzed the wrong table, and the statistics shown here might be accurate for some time in the past but are no longer accurate. It is hard to see how a value present 5000 times in a table of 274602 rows could have evaded sampling if they were present at the time the sample was done.Cheers,Jeff",
"msg_date": "Sat, 16 Nov 2019 10:16:02 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
},
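A quick way to test Jeff's "stale statistics" theory is to compare when the table was last analyzed with how much it has been modified since (columns as provided by pg_stat_user_tables):

SELECT relname, last_analyze, last_autoanalyze,
       n_live_tup, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE schemaname = 'registry' AND relname = 'categories';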
{
"msg_contents": "Problem solved ... see below. Thanks everyone for your suggestions and\ninsights!\n\nOn Sat, Nov 16, 2019 at 7:16 AM Jeff Janes <[email protected]> wrote:\n\n> On Fri, Nov 15, 2019 at 7:27 PM Craig James <[email protected]> wrote:\n>\n>> On Fri, Nov 15, 2019 at 2:45 PM Jeff Janes <[email protected]> wrote:\n>> BTW, I'll note at this point that \"analyze category_staging_8\" prior to\n>> this query made no difference.\n>>\n>\n> Isn't that the wrong table to have analyzed? The offender here is\n> \"categories\", not \"category_staging_8\". Is this some sort of inheritance\n> situation?\n>\n>\n>>\n>>> What do you see in `select * from pg_stats where tablename='categories'\n>>> and attname='category_id' \\x\\g\\x`?\n>>>\n>>\n>> db=> select * from pg_stats where tablename='categories' and\n>> attname='category_id' \\x\\g\\x;\n>> Expanded display is on.\n>>\n>\n>\n>> ...\n>>\n> n_distinct | 21\n>> most_common_vals |\n>> {4,3,2,10,11,13,12,16,9,6,7,5,15,23,14,25,24,1,26,28,27}\n>> most_common_freqs |\n>> {0.2397,0.159933,0.0926667,0.0556,0.0555667,0.0546333,0.0525333,0.0439,0.0426667,0.0346333,0.0331,0.0302333,0.0288333,0.0240667,0.0224,0.0122333,0.011,0.0035,0.00233333,0.000366667,0.0001}\n>>\n>\n> There is a path in the analyze code where if the least-seen value in the\n> sample was seen more than once (i.e. no value was seen exactly once) then\n> it assumes that the seen values are all the values that exist. I think the\n> logic behind that is dubious. I think it is pretty clear that that is\n> kicking in here. But why? I think the simple answer is that you analyzed\n> the wrong table, and the statistics shown here might be accurate for some\n> time in the past but are no longer accurate. It is hard to see how a value\n> present 5000 times in a table of 274602 rows could have evaded sampling if\n> they were present at the time the sample was done.\n>\n\nAs I mentioned in a reply to Andreas, I also added an \"analyze ...\" to the\nother two tables as an experiment. It made no difference. However ...\n\nYour comment about missing 5000 values solved the problem: those values\nwere only inserted in the previous SQL statement, inside of a transaction.\nThe code is reconciling two collections across two different servers: First\nit inserts all new values, then it deletes obsolete values. So the \"select\n...\" in question is including the very 5000 rows that were just inserted.\n\nI added an \"analyze\" between the insert and the delete. Instant fix.\n\nIt also solves one other mystery: This query only caused problems on the\nsmall test system, and has been working well on a production database with\nabout 100x more data. In production, each \"category\" is already populated\nwith a significant amount of data. The production system already has good\nstatistics, so this one insert/delete doesn't change the statistics.\n\n\n> Cheers,\n>\n> Jeff\n>\n\nThanks!\nCraig\n\nProblem solved ... see below. Thanks everyone for your suggestions and insights!On Sat, Nov 16, 2019 at 7:16 AM Jeff Janes <[email protected]> wrote:On Fri, Nov 15, 2019 at 7:27 PM Craig James <[email protected]> wrote:On Fri, Nov 15, 2019 at 2:45 PM Jeff Janes <[email protected]> wrote:BTW, I'll note at this point that \"analyze category_staging_8\" prior to this query made no difference.Isn't that the wrong table to have analyzed? The offender here is \"categories\", not \"category_staging_8\". Is this some sort of inheritance situation? 
What do you see in `select * from pg_stats where tablename='categories' and attname='category_id' \\x\\g\\x`?db=> select * from pg_stats where tablename='categories' and attname='category_id' \\x\\g\\x;Expanded display is on. ... n_distinct | 21most_common_vals | {4,3,2,10,11,13,12,16,9,6,7,5,15,23,14,25,24,1,26,28,27}most_common_freqs | {0.2397,0.159933,0.0926667,0.0556,0.0555667,0.0546333,0.0525333,0.0439,0.0426667,0.0346333,0.0331,0.0302333,0.0288333,0.0240667,0.0224,0.0122333,0.011,0.0035,0.00233333,0.000366667,0.0001}There is a path in the analyze code where if the least-seen value in the sample was seen more than once (i.e. no value was seen exactly once) then it assumes that the seen values are all the values that exist. I think the logic behind that is dubious. I think it is pretty clear that that is kicking in here. But why? I think the simple answer is that you analyzed the wrong table, and the statistics shown here might be accurate for some time in the past but are no longer accurate. It is hard to see how a value present 5000 times in a table of 274602 rows could have evaded sampling if they were present at the time the sample was done.As I mentioned in a reply to Andreas, I also added an \"analyze ...\" to the other two tables as an experiment. It made no difference. However ...Your comment about missing 5000 values solved the problem: those values were only inserted in the previous SQL statement, inside of a transaction. The code is reconciling two collections across two different servers: First it inserts all new values, then it deletes obsolete values. So the \"select ...\" in question is including the very 5000 rows that were just inserted.I added an \"analyze\" between the insert and the delete. Instant fix.It also solves one other mystery: This query only caused problems on the small test system, and has been working well on a production database with about 100x more data. In production, each \"category\" is already populated with a significant amount of data. The production system already has good statistics, so this one insert/delete doesn't change the statistics. Cheers,Jeff\nThanks!Craig",
"msg_date": "Sat, 16 Nov 2019 09:17:47 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple DELETE on modest-size table runs 100% CPU forever"
}
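The fix described above, sketched as a pattern; the exact insert and delete statements are assumptions based on the shapes quoted in the thread. ANALYZE, unlike VACUUM, is allowed inside a transaction block, so the statistics can be refreshed between the two steps of the reconciliation job.

BEGIN;
-- (1) the job's inserts of new rows for this category into registry.categories go here
ANALYZE registry.categories;   -- refresh stats so the delete plans with the new rows
DELETE FROM registry.categories
WHERE category_id = 8
  AND id IN (SELECT c.id
             FROM registry.categories c
             LEFT JOIN registry.category_staging_8 st ON c.id = st.id
             WHERE c.category_id = 8 AND st.id IS NULL);
COMMIT;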
] |
[
{
"msg_contents": "Hi all,\n\nThis morning I was checking postgres servers logs, looking for errors \n(we've recently upgraded them and changed default config) and long \nrunning queries when I found one of the servers had really big logs \nsince yesterday. It was giving the error of this mail's subject: out of \nmemory, failed on request of size XXX on automatic vacuum of table YYY. \nA quick search revealed me some postgresql-lists messages talking about \nwork_mem and shared_buffers configuration options, some kernel config \noptions too. Although all of them were messages from several years ago, \nI decided to cut my shared_buffers configured value and restart server: \nnow it looks like error is gone. But I'd like to understand what's \nbeyond the logged error (it's really long and refers to things about \ninner functionalities that I'm missing), how to detect what config \noptions are possibly conflicting and, most important, I want to know if \nI've solved it right.\n\n-- Logged error (one of them) ==> Original log is in spanish and I've \ntranslated myself. I've replaced my database objects names\n\nTopMemoryContext: 77568 total in 5 blocks; 15612 free (5 chunks); 61956 used\n TopTransactionContext: 8192 total in 1 blocks; 7604 free (40 chunks); \n588 used\n TOAST to main relid map: 24576 total in 2 blocks; 13608 free (3 \nchunks); 10968 used\n AV worker: 8192 total in 1 blocks; 3228 free (6 chunks); 4964 used\n* Autovacuum Portal: 0 total in 0 blocks; 0 free (0 chunks); 0 used \n**=> Is this line the one that points to the task that has run of of \nmemory?*\n Vacuum: 8192 total in 1 blocks; 8132 free (0 chunks); 60 used\n Operator class cache: 8192 total in 1 blocks; 4420 free (0 chunks); \n3772 used\n smgr relation table: 8192 total in 1 blocks; 316 free (0 chunks); \n7876 used\n TransactionAbortContext: 32768 total in 1 blocks; 32748 free (0 \nchunks); 20 used\n Portal hash: 8192 total in 1 blocks; 3460 free (0 chunks); 4732 used\n PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used\n Relcache by OID: 8192 total in 1 blocks; 1884 free (0 chunks); 6308 used\n CacheMemoryContext: 516096 total in 6 blocks; 234136 free (5 chunks); \n281960 used\n <index>: 1024 total in 1 blocks; 476 free (0 chunks); 548 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 
308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n <index>: 1024 total in 1 blocks; 716 free (0 chunks); 308 used\n pg_index_indrelid_index: 1024 total in 1 blocks; 676 free (0 \nchunks); 348 used\n pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 604 free (0 \nchunks); 420 used\n pg_db_role_setting_databaseid_rol_index: 1024 total in 1 blocks; \n644 free (0 chunks); 380 used\n pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 476 free (0 \nchunks); 548 used\n pg_foreign_data_wrapper_name_index: 1024 total in 1 blocks; 716 \nfree (0 chunks); 308 used\n pg_enum_oid_index: 1024 total in 1 blocks; 716 free (0 chunks); 308 \nused\n pg_class_relname_nsp_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_foreign_server_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_statistic_relid_att_inh_index: 1024 total in 1 blocks; 476 free \n(0 chunks); 548 used\n pg_cast_source_target_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_language_name_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_transform_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_collation_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_amop_fam_strat_index: 1024 total in 1 blocks; 476 free (0 \nchunks); 548 used\n pg_index_indexrelid_index: 1024 total in 1 blocks; 676 free (0 \nchunks); 348 used\n pg_ts_template_tmplname_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_ts_config_map_index: 1024 total in 1 blocks; 476 free (0 \nchunks); 548 used\n pg_opclass_oid_index: 1024 total in 1 blocks; 676 free (0 chunks); \n348 used\n pg_foreign_data_wrapper_oid_index: 1024 total in 1 blocks; 716 free \n(0 chunks); 308 used\n pg_event_trigger_evtname_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_ts_dict_oid_index: 1024 total in 1 blocks; 716 free (0 chunks); \n308 used\n pg_event_trigger_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_conversion_default_index: 1024 total in 1 blocks; 476 free (0 \nchunks); 548 used\n pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 476 free \n(0 chunks); 548 used\n pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 644 free \n(0 chunks); 380 used\n pg_enum_typid_label_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_ts_config_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_user_mapping_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_opfamily_am_name_nsp_index: 1024 total in 1 blocks; 476 free (0 \nchunks); 548 used\n pg_foreign_table_relid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_type_oid_index: 1024 total in 1 blocks; 716 free (0 chunks); 308 \nused\n pg_aggregate_fnoid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_constraint_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_ts_parser_prsname_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_ts_config_cfgname_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_ts_parser_oid_index: 1024 total in 1 blocks; 716 
free (0 \nchunks); 308 used\n pg_operator_oid_index: 1024 total in 1 blocks; 716 free (0 chunks); \n308 used\n pg_namespace_nspname_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_ts_template_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_amop_opr_fam_index: 1024 total in 1 blocks; 476 free (0 chunks); \n548 used\n pg_default_acl_role_nsp_obj_index: 1024 total in 1 blocks; 476 free \n(0 chunks); 548 used\n pg_collation_name_enc_nsp_index: 1024 total in 1 blocks; 476 free \n(0 chunks); 548 used\n pg_range_rngtypid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_ts_dict_dictname_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_type_typname_nsp_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_opfamily_oid_index: 1024 total in 1 blocks; 716 free (0 chunks); \n308 used\n pg_class_oid_index: 1024 total in 1 blocks; 676 free (0 chunks); \n348 used\n pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 476 free (0 \nchunks); 548 used\n pg_transform_type_lang_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 604 free \n(0 chunks); 420 used\n pg_proc_oid_index: 1024 total in 1 blocks; 716 free (0 chunks); 308 \nused\n pg_language_oid_index: 1024 total in 1 blocks; 716 free (0 chunks); \n308 used\n pg_namespace_oid_index: 1024 total in 1 blocks; 676 free (0 \nchunks); 348 used\n pg_amproc_fam_proc_index: 1024 total in 1 blocks; 436 free (0 \nchunks); 588 used\n pg_foreign_server_name_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 644 free \n(0 chunks); 380 used\n pg_conversion_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_user_mapping_user_server_index: 1024 total in 1 blocks; 644 free \n(0 chunks); 380 used\n pg_conversion_name_nsp_index: 1024 total in 1 blocks; 644 free (0 \nchunks); 380 used\n pg_authid_oid_index: 1024 total in 1 blocks; 676 free (0 chunks); \n348 used\n pg_auth_members_member_role_index: 1024 total in 1 blocks; 644 free \n(0 chunks); 380 used\n pg_tablespace_oid_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n pg_shseclabel_object_index: 1024 total in 1 blocks; 476 free (0 \nchunks); 548 used\n pg_replication_origin_roname_index: 1024 total in 1 blocks; 716 \nfree (0 chunks); 308 used\n pg_database_datname_index: 1024 total in 1 blocks; 676 free (0 \nchunks); 348 used\n pg_replication_origin_roiident_index: 1024 total in 1 blocks; 716 \nfree (0 chunks); 308 used\n pg_auth_members_role_member_index: 1024 total in 1 blocks; 644 free \n(0 chunks); 380 used\n pg_database_oid_index: 1024 total in 1 blocks; 676 free (0 chunks); \n348 used\n pg_authid_rolname_index: 1024 total in 1 blocks; 716 free (0 \nchunks); 308 used\n WAL record construction: 49520 total in 2 blocks; 6876 free (0 \nchunks); 42644 used\n PrivateRefCount: 8192 total in 1 blocks; 5516 free (0 chunks); 2676 used\n MdSmgr: 8192 total in 1 blocks; 8004 free (0 chunks); 188 used\n LOCALLOCK hash: 8192 total in 1 blocks; 2428 free (0 chunks); 5764 used\n Timezones: 104028 total in 2 blocks; 5516 free (0 chunks); 98512 used\n Postmaster: 8192 total in 1 blocks; 7796 free (8 chunks); 396 used\n ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used\n hba parser context: 7168 total in 3 blocks; 3148 free (1 chunks); \n4020 used\n ErrorContext: 8192 total in 1 blocks; 8172 free (4 chunks); 20 used\nGrand total: 1009356 
bytes in 130 blocks; 436888 free (72 chunks); \n572468 used\n[2019-11-17 09:27:37.425 CET] – 11899 ERROR: out of memory\n[2019-11-17 09:27:37.425 CET] – 11899 DETAIL: Failed on request of size \n700378218.\n[2019-11-17 09:27:37.425 CET] – 11899 CONTEXT: automatic vacuum on \ntable «my_db.public.my_table»\n\n\n-- System specs:\n\nCentos 6.5 *i686. *\n\n12 GB RAM.\n\n[root@myserver pg_log]# lscpu\n*Architecture: i686*\nCPU op-mode(s): 32-bit, 64-bit\nCPU(s): 4\nOn-line CPU(s) list: 0-3\nThread(s) per core: 1\nCore(s) per socket: 4\nSocket(s): 1\nVendor ID: GenuineIntel\n\n[root@myserver pg_log]# psql postgres -c \"select version();\"\nversion\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 9.6.15 on i686-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 \n20120313 (Red Hat 4.4.7-23), 32-bit\n(1 row)\n\n-- Postgres configured values\n\n[root@myserver data]# cat postgresql.conf | grep CHANGED\nlisten_addresses = '*' # what IP address(es) to listen on; - CHANGED\nport = 5432 # (change requires restart) - CHANGED\n*shared_buffers = 1024MB* # min 128kB - Default 128MB - \nCHANGED ==> This value was 2048 when I was getting out of memory error.\nwork_mem = 64MB # min 64kB - Default 4MB - CHANGED\nmaintenance_work_mem = 704MB # min 1MB - Default 64MB - CHANGED\neffective_io_concurrency = 2 # 1-1000; 0 disables prefetching - \nDf 1 - CHANGED\nwal_buffers = 16MB # min 32kB, -1 sets based on \nshared_buffers - Df -1 - CHANGED\nmax_wal_size = 1GB # Default 1GB - CHANGED\nmin_wal_size = 512MB # Default 80MB - CHANGED\ncheckpoint_completion_target = 0.8 # checkpoint target duration, 0.0 \n- 1.0 - Df 0.5 - CHANGED\neffective_cache_size = 8448MB #Default 4GB - CHANGED\nlog_min_duration_statement = 30000 # -1 is disabled, 0 logs all \nstatements - Df -1 - CHANGED\nlog_line_prefix = '[%m] – %p %q- %u@%h:%d – %a ' # special values: - \nCHANGED\n\nI've now realized that work_mem is only 64MB, isn't it a bit low?\n\nPlease note that server has a 32bit OS installed (don't know why). I'm \naware of some limitations with memory configuration on 32bit systems [1]\n\nSummary: how to interprete the log message, and is the error controlled \nby the change made to shared_buffers value (from 2GB to 1GB).\n\nKind regards,\n\nEkaterina\n\n\n[1] this post written by Robert Haas: \nhttp://rhaas.blogspot.com/2011/05/sharedbuffers-on-32-bit-systems.html",
"msg_date": "Mon, 18 Nov 2019 12:41:27 +0100",
"msg_from": "Ekaterina Amez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Out of memory error on automatic vacuum"
},
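One detail in the report above is worth calling out: the failed request of 700378218 bytes (~668 MB) is in the same range as the configured maintenance_work_mem = 704MB. (Auto)vacuum allocates its dead-tuple array up front, sized by autovacuum_work_mem (falling back to maintenance_work_mem) and capped by the table size, and on a 32-bit build that also maps 1-2 GB of shared_buffers a single contiguous allocation of that size can plausibly fail. A minimal sketch of how that allocation could be capped; the 128MB value is an illustrative assumption, not something recommended in the thread:

    -- Sketch (assumes superuser; 128MB is only an illustrative cap).
    ALTER SYSTEM SET autovacuum_work_mem = '128MB';
    SELECT pg_reload_conf();

    -- Check the values actually in effect afterwards.
    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('autovacuum_work_mem', 'maintenance_work_mem', 'shared_buffers');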
{
"msg_contents": "On Mon, Nov 18, 2019 at 12:41:27PM +0100, Ekaterina Amez wrote:\n>Hi all,\n>\n>This morning I was checking postgres servers logs, looking for errors \n>(we've recently upgraded them and changed default config) and long \n>running queries when I found one of the servers had really big logs \n>since yesterday.� It was giving the error of this mail's subject: out \n>of memory, failed on request of size XXX on automatic vacuum of table \n>YYY. A quick search revealed me some postgresql-lists messages talking \n>about work_mem and shared_buffers configuration options, some kernel \n>config options too. Although all of them were messages from several \n>years ago, I decided to cut my shared_buffers configured value and \n>restart server: now it looks like error is gone. But I'd like to \n>understand what's beyond the logged error (it's really long and refers \n>to things about inner functionalities that I'm missing), how to detect \n>what config options are possibly conflicting and, most important, I \n>want to know if I've solved it right.\n>\n\nUnfortunately that's hard to say, without further data. The \"out of\nmemory\" errors simply mean we called malloc() and it returned NULL,\nbecause the kernel was unable to provide the memory.\n\nThere probably were other processes using all the available RAM (the\nlimit depends on various config values, e.g. overcommit). What were\nthese processes doing we don't know :-(\n\nFor example, there might be multiple complex queries, allocating\nseveral work_mem each, using quite a bit of memory. Or there might be a\nrunaway query doing HashAgg allocating much more memory than predicted.\nOr maybe there was running a completely separate job (say, backup)\nallocating a lot of memory or dirtying data in page cache.\n\nThere are countless options what might have happened. The memory context\nstats are nice, but it's just a snapshot from one particular process,\nand it does not seem very interesting (the total is just ~1MB, so\nnothing extreme). We still don't know what else was running.\n\nLowering shared_buffers certainly does reduce the memory pressure in\ngeneral, i.e. there is 1GB of work for use by processes. It may be\nsufficient, hard to guess.\n\nI don't know if work_mem 64MB is too low, becuase it depends on what\nqueries you're running etc. But you probably don't wat to increase that,\nas it allows processes to use more memory when executing queries, i.e.\nit increases memory pressure and makes OOM more likely.\n\nSo you need to watch system monitoring, see how much memory is being\nused (excluding page cache) and consider reducing work_mem and/or\nmax_connections if it's too close.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Nov 2019 13:25:28 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of memory error on automatic vacuum"
},
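A small companion to the advice above: the settings that drive overall memory pressure can be read straight from pg_settings, which makes it easy to capture them alongside the system monitoring data. This sketch only reads the catalog view and changes nothing:

    SELECT name, setting, unit, source
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                    'autovacuum_work_mem', 'max_connections', 'autovacuum_max_workers');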
{
"msg_contents": "El 18/11/19 a las 13:25, Tomas Vondra escribió:\n> On Mon, Nov 18, 2019 at 12:41:27PM +0100, Ekaterina Amez wrote:\n>> Hi all,\n>>\n>> This morning I was checking postgres servers logs, looking for errors \n>> (we've recently upgraded them and changed default config) and long \n>> running queries when I found one of the servers had really big logs \n>> since yesterday. It was giving the error of this mail's subject: out \n>> of memory, failed on request of size XXX on automatic vacuum of table \n>> YYY. A quick search revealed me some postgresql-lists messages \n>> talking about work_mem and shared_buffers configuration options, some \n>> kernel config options too. Although all of them were messages from \n>> several years ago, I decided to cut my shared_buffers configured \n>> value and restart server: now it looks like error is gone. But I'd \n>> like to understand what's beyond the logged error (it's really long \n>> and refers to things about inner functionalities that I'm missing), \n>> how to detect what config options are possibly conflicting and, most \n>> important, I want to know if I've solved it right.\n>>\n>\n> Unfortunately that's hard to say, without further data. The \"out of\n> memory\" errors simply mean we called malloc() and it returned NULL,\n> because the kernel was unable to provide the memory.\nThis (kernel unable to provide memory) was because no more RAM was \navailable to allocate? It was because PG process did not have more \nmemory assigned ready to use? Or is something unknown because it depends \non the situations where the error is thrown?\n>\n> There probably were other processes using all the available RAM (the\n> limit depends on various config values, e.g. overcommit). What were\n> these processes doing we don't know :-(\n>\n> For example, there might be multiple complex queries, allocating\n> several work_mem each, using quite a bit of memory. Or there might be a\n> runaway query doing HashAgg allocating much more memory than predicted.\n> Or maybe there was running a completely separate job (say, backup)\n> allocating a lot of memory or dirtying data in page cache.\n\nI've looked at cron and I've seen a scheduled process that finished a \nbit before the error began to log (o couple of minutes or so). Errors \nbegan on Sunday morning and this machine doesn't have much workload on \nwork days, and less on weekend. I'll keep an eye on this log and if the \nproblem appears again I'll try to track database activity and machine \nactivity.\n\n>\n> There are countless options what might have happened. The memory context\n> stats are nice, but it's just a snapshot from one particular process,\n> and it does not seem very interesting (the total is just ~1MB, so\n> nothing extreme). We still don't know what else was running.\n\nWhen you talk about ~1MB are you getting this size from log lines like this?\n\n<index>: *1024* total in 1 blocks; 476 free (0 chunks); 548 used\n\n\n>\n> Lowering shared_buffers certainly does reduce the memory pressure in\n> general, i.e. there is 1GB of work for use by processes. It may be\n> sufficient, hard to guess.\n>\n> I don't know if work_mem 64MB is too low, becuase it depends on what\n> queries you're running etc. 
But you probably don't wat to increase that,\n> as it allows processes to use more memory when executing queries, i.e.\n> it increases memory pressure and makes OOM more likely.\n>\n> So you need to watch system monitoring, see how much memory is being\n> used (excluding page cache) and consider reducing work_mem and/or\n> max_connections if it's too close.\n\nI'll do, thanks for your suggestions.\n\n\n>\n> regards\n>\nRegards,\n\nEkaterina",
"msg_date": "Mon, 18 Nov 2019 15:02:16 +0100",
"msg_from": "Ekaterina Amez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of memory error on automatic vacuum"
},
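Since the plan here is to watch the log and correlate it with database and cron activity, one option is to have every autovacuum run logged with its timings. A sketch using the standard log_autovacuum_min_duration setting (0 logs every run; a configuration reload is enough, no restart needed):

    ALTER SYSTEM SET log_autovacuum_min_duration = 0;  -- log all autovacuum/autoanalyze actions
    SELECT pg_reload_conf();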
{
"msg_contents": "On Mon, Nov 18, 2019 at 03:02:16PM +0100, Ekaterina Amez wrote:\n>\n>El 18/11/19 a las 13:25, Tomas Vondra escribi�:\n>>On Mon, Nov 18, 2019 at 12:41:27PM +0100, Ekaterina Amez wrote:\n>>>Hi all,\n>>>\n>>>This morning I was checking postgres servers logs, looking for \n>>>errors (we've recently upgraded them and changed default config) \n>>>and long running queries when I found one of the servers had \n>>>really big logs since yesterday.� It was giving the error of this \n>>>mail's subject: out of memory, failed on request of size XXX on \n>>>automatic vacuum of table YYY. A quick search revealed me some \n>>>postgresql-lists messages talking about work_mem and \n>>>shared_buffers configuration options, some kernel config options \n>>>too. Although all of them were messages from several years ago, I \n>>>decided to cut my shared_buffers configured value and restart \n>>>server: now it looks like error is gone. But I'd like to \n>>>understand what's beyond the logged error (it's really long and \n>>>refers to things about inner functionalities that I'm missing), \n>>>how to detect what config options are possibly conflicting and, \n>>>most important, I want to know if I've solved it right.\n>>>\n>>\n>>Unfortunately that's hard to say, without further data. The \"out of\n>>memory\" errors simply mean we called malloc() and it returned NULL,\n>>because the kernel was unable to provide the memory.\n>This (kernel unable to provide memory) was because no more RAM was \n>available to allocate? It was because PG process did not have more \n>memory assigned ready to use? Or is something unknown because it \n>depends on the situations where the error is thrown?\n\nNot sure I understand. Whenever PostgreSQL process needs memory it\nrequests it from the kernel by calling malloc(), and the amount of\navailabe RAM is limited. So when kernel can't provide more memory,\nit returns NULL.\n\n>>\n>>There probably were other processes using all the available RAM (the\n>>limit depends on various config values, e.g. overcommit). What were\n>>these processes doing we don't know :-(\n>>\n>>For example, there might be multiple complex queries, allocating\n>>several work_mem each, using quite a bit of memory. Or there might be a\n>>runaway query doing HashAgg allocating much more memory than predicted.\n>>Or maybe there was running a completely separate job (say, backup)\n>>allocating a lot of memory or dirtying data in page cache.\n>\n>I've looked at cron and I've seen a scheduled process that finished a \n>bit before the error began to log (o couple of minutes or so). Errors \n>began on Sunday morning and this machine doesn't have much workload on \n>work days, and less on weekend. I'll keep an eye on this log and if \n>the problem appears again I'll try to track database activity and \n>machine activity.\n>\n\nIf it finished a couple of minutes before, it's unlikely to be the\nrelated. But hard to say, without knowing the details.\n\n>>\n>>There are countless options what might have happened. The memory context\n>>stats are nice, but it's just a snapshot from one particular process,\n>>and it does not seem very interesting (the total is just ~1MB, so\n>>nothing extreme). We still don't know what else was running.\n>\n>When you talk about ~1MB are you getting this size from log lines like this?\n>\n><index>: *1024* total in 1 blocks; 476 free (0 chunks); 548 used\n>\n\nNo, that's just one of the memory contexts (they form a tree), using\nonly 1kB of memory. 
What matters is the \"grand total\"\n\nGrand total: 1009356 bytes in 130 blocks; 436888 free (72 chunks);\n572468 used\n\nwhich is ~1MB.\n\n>\n>>\n>>Lowering shared_buffers certainly does reduce the memory pressure in\n>>general, i.e. there is 1GB of work for use by processes. It may be\n>>sufficient, hard to guess.\n>>\n>>I don't know if work_mem 64MB is too low, becuase it depends on what\n>>queries you're running etc. But you probably don't wat to increase that,\n>>as it allows processes to use more memory when executing queries, i.e.\n>>it increases memory pressure and makes OOM more likely.\n>>\n>>So you need to watch system monitoring, see how much memory is being\n>>used (excluding page cache) and consider reducing work_mem and/or\n>>max_connections if it's too close.\n>\n>I'll do, thanks for your suggestions.\n>\n\nAnother thing you might do is adding a swap (if you don't have one\nalready), as a safety.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Nov 2019 15:16:05 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of memory error on automatic vacuum"
},
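To put rough numbers on the "reduce work_mem and/or max_connections" suggestion, the pessimistic case where every allowed connection holds one work_mem-sized allocation can be estimated from pg_settings. This is only a sketch of an upper bound; real usage is normally much lower, and a single complex query can also use several work_mem allocations at once:

    -- work_mem is reported here in kB; max_connections is a plain count.
    SELECT mc.setting::numeric * wm.setting::numeric / 1024 / 1024 AS worst_case_work_mem_gb
      FROM pg_settings mc, pg_settings wm
     WHERE mc.name = 'max_connections'
       AND wm.name = 'work_mem';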
{
"msg_contents": "\nEl 18/11/19 a las 15:16, Tomas Vondra escribió:\n>\n> Not sure I understand. Whenever PostgreSQL process needs memory it\n> requests it from the kernel by calling malloc(), and the amount of\n> availabe RAM is limited. So when kernel can't provide more memory,\n> it returns NULL.\n>\nUnderstood.\n\n>\n> If it finished a couple of minutes before, it's unlikely to be the\n> related. But hard to say, without knowing the details.\n\nYeah, I thought the same but for me is too much coincidence and is \nsuspicious (or at least a thing to have in mind).\n\n\n> No, that's just one of the memory contexts (they form a tree), using\n> only 1kB of memory. What matters is the \"grand total\"\n>\n> Grand total: 1009356 bytes in 130 blocks; 436888 free (72 chunks);\n> 572468 used\n>\n> which is ~1MB.\n>\nOK, in my lack of knowledge I was understanding \"memory context\" as the \nwhole message.\n\n\n> Another thing you might do is adding a swap (if you don't have one\n> already), as a safety.\n>\n>\nExcuse my ignorance but... swap? You mean some mechanism that prevents \nserver to be unavailable by having a second instance running but not \naccesible and changing from one to the other when the main fails? (It's \nthe best way I find to describe it, as I don't usually speak/write english).\n\nThanks for your time.\n\n\n\n\n",
"msg_date": "Mon, 18 Nov 2019 15:46:03 +0100",
"msg_from": "Ekaterina Amez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of memory error on automatic vacuum"
},
{
"msg_contents": "On Mon, Nov 18, 2019 at 03:46:03PM +0100, Ekaterina Amez wrote:\n>\n>El 18/11/19 a las 15:16, Tomas Vondra escribi�:\n>>\n>>Not sure I understand. Whenever PostgreSQL process needs memory it\n>>requests it from the kernel by calling malloc(), and the amount of\n>>availabe RAM is limited.� So when kernel can't provide more memory,\n>>it returns NULL.\n>>\n>Understood.\n>\n>>\n>>If it finished a couple of minutes before, it's unlikely to be the\n>>related. But hard to say, without knowing the details.\n>\n>Yeah, I thought the same but for me is too much coincidence and is \n>suspicious (or at least a thing to have in mind).\n>\n>\n>>No, that's just one of the memory contexts (they form a tree), using\n>>only 1kB of memory. What matters is the \"grand total\"\n>>\n>>Grand total: 1009356 bytes in 130 blocks; 436888 free (72 chunks);\n>>572468 used\n>>\n>>which is ~1MB.\n>>\n>OK, in my lack of knowledge I was understanding \"memory context\" as \n>the whole message.\n>\n>\n>>Another thing you might do is adding a swap (if you don't have one\n>>already), as a safety.\n>>\n>>\n>Excuse my ignorance but... swap? You mean some mechanism that prevents \n>server to be unavailable by having a second instance running but not \n>accesible and changing from one to the other when the main fails? \n>(It's the best way I find to describe it, as I don't usually \n>speak/write english).\n>\n\nswap = space on disk, so that OS can page out unused data from RAM when\nthere's memory pressure\n\nIt's a basic sysadmin knowledge, I think. Search for mkswap.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Nov 2019 16:10:33 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of memory error on automatic vacuum"
}
] |
[
{
"msg_contents": "Hello colleagues -\n\n \n\nThe problem description:\n\nWe're moving from 9.6 to 11.5. There is a SQL code that never ends in 11.5\nbut works fine in 9.6. The main cause is the optimizer considers of using NL\nAnti join instead of Merge in 9.6. And the root cause - wrong estimation\nwhile self-joining.\n\n \n\nSystem environment:\n\nCentOS Linux 3.10.0-1062.4.1.el7.x86_64 x86_64\n\nMemTotal: 16266644 kB\n\nIntel(R) Xeon(R) CPU E7-8867 v3 @ 2.50GHz\n\nHDD - unknown\n\n \n\nPostgreSQL:\n\nPostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623\n(Red Hat 4.8.5-36), 64-bit\n\nshared_buffers = 1GB\n\nhuge_pages = on\n\ntemp_buffers = 1GB\n\nmax_prepared_transactions = 128\n\nmax_connections = 500\n\nwork_mem = 256MB\n\nmaintenance_work_mem = 1024MB\n\nautovacuum_work_mem = 512MB\n\nmax_worker_processes = 100\n\nmax_parallel_workers_per_gather = 0 # changing this value to any others\ntakes no effect for issue resolving\n\nmax_parallel_workers = 8\n\ncheckpoint_timeout = 30min\n\nmax_wal_size = 32GB\n\nmin_wal_size = 8GB\n\ncheckpoint_completion_target = 0.9\n\nenable_nestloop = on # off value fixes the issue but this is\nwrong way\n\nrandom_page_cost = 4.0\n\neffective_cache_size = 4GB\n\ndefault_statistics_target = 2000\n\n \n\nMain script:\n\n \n\n -- preparation\n\n -- this table is reverted tree with tree keys position_uuid,\nparent_position_uuid\n\n create temporary table tmp_nsi_klp on commit drop as\n\n select\n\n k1.gid,\n\n k1.smnn_gid,\n\n k1.position_uuid,\n\n p.parent_position_uuid,\n\n k1.order_number,\n\n k1.date_start,\n\n k1.date_end,\n\n k1.is_active,\n\n coalesce(p.is_fake_series, false) as is_fake_series\n\n from\n\n nsi_klp k1\n\n left join (select gid, unnest(parent_position_uuid) as\nparent_position_uuid, coalesce(array_length(parent_position_uuid, 1),0) > 1\nas is_fake_series from nsi_klp where version_esklp = '2.0') p using (gid)\n\n where\n\n k1.version_esklp = '2.0'\n\n ;\n\n \n\n create unique index tmp_nsi_klp_ui on tmp_nsi_klp(gid,\nparent_position_uuid);\n\n \n\n analyze tmp_nsi_klp;\n\n \n\n -- working set (!!This SQL never ends in 11.5 now)\n\n create temporary table tmp_klp_replace on commit drop as\n\n select distinct on (klp_gid)\n\n *\n\n from (\n\n select\n\n k2.gid as klp_gid,\n\n k2.smnn_gid as klp_smnn_gid,\n\n k2.position_uuid as klp_position_uuid,\n\n k2.order_number as klp_order_number,\n\n k2.is_active as klp_is_active,\n\n k1.gid as klp_child_gid,\n\n k1.smnn_gid as klp_child_smnn_gid,\n\n k1.position_uuid as klp_child_position_uuid,\n\n k1.order_number as klp_child_order_number,\n\n k1.is_active as klp_child_is_active\n\n from\n\n tmp_nsi_klp k1\n\n join tmp_nsi_klp k2 on (k2.position_uuid =\nk1.parent_position_uuid)\n\n union all\n\n select\n\n k1.gid as klp_gid,\n\n k1.smnn_gid as klp_smnn_gid,\n\n k1.position_uuid as klp_position_uuid,\n\n k1.order_number as klp_order_number,\n\n k1.is_active as klp_is_active,\n\n null as klp_child_gid,\n\n null as klp_child_smnn_gid,\n\n null as klp_child_position_uuid,\n\n null as klp_child_order_number,\n\n null as klp_child_is_active\n\n from\n\n tmp_nsi_klp k1\n\n left join tmp_nsi_klp k2 on (k1.position_uuid =\nk2.parent_position_uuid)\n\n left join (select position_uuid from tmp_nsi_klp where not\nis_fake_series group by position_uuid having count(1) > 1) klp_series on\n(klp_series.position_uuid = k1.position_uuid)\n\n where\n\n -- not exists(select 1 from tmp_nsi_klp k2 where k1.position_uuid =\nk2.parent_position_uuid)\n\n k2.gid is null -- none referenced\n\n and 
klp_series.position_uuid is null -- klp series with the same\nposition_uuid\n\n ) a\n\n order by\n\n klp_gid,\n\n klp_order_number desc\n\n ;\n\n \n\nCharacteristics of source table - tmp_nsi_klp:\n\n \n\ncreate table tmp_nsi_klp (\n\n gid uuid NULL, -- not null by the fact\n\n smnn_gid uuid NULL, -- not null by the fact\n\n position_uuid uuid NULL, -- not null by the fact\n\n parent_position_uuid uuid NULL, \n\n order_number int8 NULL,\n\n date_start timestamp NULL, -- not null by the fact\n\n date_end timestamp NULL,\n\n is_active bool NULL, -- not null by the fact\n\n is_fake_series bool NULL -- not null by the fact\n\n);\n\n \n\nRows: 237279\n\n \n\nCols stats:\n\nhttps://docs.google.com/spreadsheets/d/1Ocbult13kZ64vK9nHt-_BV3EENK_ZSHFTAmR\nZLISUIE/edit?usp=sharing\n\n \n\nExecution plans for problematic query - working set \"create temporary table\ntmp_klp_replace on commit drop as\":\n\n \n\nOn 11.5 (option nestloop enabled):\n\nhttps://explain.depesz.com/s/pIzd\n\nExec time: never finished\n\n \n\nOn 9.6 (PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-11), 64-bit):\n\nhttps://explain.depesz.com/s/sO0G\n\nExec time: ~1 sec\n\n \n\nOn 11.5 (option nestloop disabled):\n\nhttps://explain.depesz.com/s/eYzk\n\nExec time: ~1,5 sec\n\n \n\nConstruction \"not exists(select 1 from tmp_nsi_klp k2 where k1.position_uuid\n= k2.parent_position_uuid)\" works perfectly but there are lots of similar\nconstructions in a code made for checking inclusion of data. Thus no chances\nto change existing code to another using not exists construction. Are there\nany options to bring initial statement to life and keep the server option\nnestloop enable? \n\nGive me a clue, pls.\n\nThanks in advance.\n\nAndrew.",
"msg_date": "Mon, 18 Nov 2019 20:35:29 +0300",
"msg_from": "\"Andrew Zakharov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong estimations and NL Anti join poor performance"
}
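The commented-out predicate in the message above already contains the usual workaround: writing the "none referenced" check as NOT EXISTS gives the planner a proper anti-join instead of the LEFT JOIN ... IS NULL pattern whose estimate goes wrong here. A reduced sketch of only the second branch of the UNION ALL, with the trailing NULL columns abbreviated; whether this is usable depends on the constraint stated above that the generated code cannot easily be changed:

    select
        k1.gid           as klp_gid,
        k1.smnn_gid      as klp_smnn_gid,
        k1.position_uuid as klp_position_uuid,
        k1.order_number  as klp_order_number,
        k1.is_active     as klp_is_active,
        null             as klp_child_gid   -- the remaining child columns stay NULL as in the original
    from
        tmp_nsi_klp k1
        left join (select position_uuid
                     from tmp_nsi_klp
                    where not is_fake_series
                    group by position_uuid
                   having count(1) > 1) klp_series
               on (klp_series.position_uuid = k1.position_uuid)
    where
        not exists (select 1
                      from tmp_nsi_klp k2
                     where k2.parent_position_uuid = k1.position_uuid)  -- replaces the "k2.gid is null" filter
        and klp_series.position_uuid is null;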
] |
[
{
"msg_contents": "Hello\n\nI'm on a PostgreSQL 12.1 and I just restored a database from a backup.\nWhen I run a query I get a big execution time: 5.482 ms\nAfter running EXPLAIN ANALYZE I can see that the \"Planning Time: \n5165.742 ms\" and the \"Execution Time: 6.244 ms\".\nThe database is new(no need to vacuum) and i'm the only one connected to \nit. I use a single partition on the harddrive.\nI also tried this on a postgresql 9.5 and the result was the same.\nI'm not sure what to do to improve this situation.\nThe query and the explain is attached.\n\nThank you",
"msg_date": "Fri, 22 Nov 2019 11:21:03 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql planning time too high"
},
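When planning time dwarfs execution time like this, one mitigation that is independent of finding the root cause is to pay the planning cost once per session and reuse the plan through a prepared statement; on PostgreSQL 12 the cached generic plan can also be forced. This is only a sketch: t_example, its column and the parameter value are placeholders, since the real query is in the attachment and not reproduced here.

    -- Placeholder query standing in for the attached one.
    PREPARE report_q(bigint) AS
        SELECT count(*) FROM t_example WHERE parent_id = $1;

    EXECUTE report_q(42);      -- planned once, re-executed cheaply afterwards

    -- PostgreSQL 12+: skip the custom-plan attempts and always reuse the generic plan.
    SET plan_cache_mode = force_generic_plan;

Note that this only helps when the same session (or a pooled connection) runs the query repeatedly.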
{
"msg_contents": "Hello Sterpu,\n\n\n\nFirst, please run vaccum for your Postgresql DB.\n\n\n\nNo rows returned from your query. Could you double check your query\ncriteria.\n\n\n\nAfter that could you send explain analyze again .\n\n\n\nRegards,\n\n\n\n*FIRAT GÜLEÇ*\nInfrastructure & Database Operations Manager\[email protected]\n\n\n\n*M:* 0 532 210 57 18\nİnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n\n[image: image.png]\n\n\n\n\n\n\n\n*From:* Sterpu Victor <[email protected]>\n*Sent:* Friday, November 22, 2019 2:21 PM\n*To:* [email protected]\n*Subject:* Postgresql planning time too high\n\n\n\nHello\n\n\n\nI'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n\nWhen I run a query I get a big execution time: 5.482 ms\n\nAfter running EXPLAIN ANALYZE I can see that the \"Planning Time: 5165.742\nms\" and the \"Execution Time: 6.244 ms\".\n\nThe database is new(no need to vacuum) and i'm the only one connected to\nit. I use a single partition on the harddrive.\n\nI also tried this on a postgresql 9.5 and the result was the same.\n\nI'm not sure what to do to improve this situation.\n\nThe query and the explain is attached.\n\n\n\nThank you",
"msg_date": "Fri, 22 Nov 2019 14:35:15 +0300",
"msg_from": "=?UTF-8?B?RsSxcmF0IEfDvGxlw6c=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Postgresql planning time too high"
},
{
"msg_contents": "No rows should be returned, DB is empty.\nI'm testing now on a empty DB trying to find out how to improve this.\n\nIn this query I have 3 joins like this:\n\nSELECT t1.id, t2.valid_from\nFROM t1\nJOIN t2 ON (t1.id_t1 = t1.id)\nLEFT JOIN t3 ON (t3.id_t1 = t1.id AND t3.valid_from<t2.valid_from)\nWHERE t3.id IS NULL\n\nIf I delete these 3 joins than the planning time goes down from 5.482 ms \nto 754.708 ms but I'm not sure why this context is so demanding on the \nplanner.\nI'm tryng now to make a materialized view that will allow me to stop \nusing the syntax above.\n\nI reattached the same files, they should be fine like this.\n\n\n\n\n------ Original Message ------\nFrom: \"Fırat Güleç\" <[email protected]>\nTo: \"Sterpu Victor\" <[email protected]>\nCc: [email protected]\nSent: 2019-11-22 1:35:15 PM\nSubject: RE: Postgresql planning time too high\n\n>Hello Sterpu,\n>\n>\n>\n>First, please run vaccum for your Postgresql DB.\n>\n>\n>\n>No rows returned from your query. Could you double check your query \n>criteria.\n>\n>\n>\n>After that could you send explain analyze again .\n>\n>\n>\n>Regards,\n>\n>\n>\n>FIRAT GÜLEÇ\n>Infrastructure & Database Operations Manager\n>[email protected]\n>\n>\n>\n>M: 0 532 210 57 18\n>İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>\n>\n>\n>\n>\n>\n>\n>\n>From: Sterpu Victor <[email protected]>\n>Sent: Friday, November 22, 2019 2:21 PM\n>To:[email protected]\n>Subject: Postgresql planning time too high\n>\n>\n>\n>Hello\n>\n>\n>\n>I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>\n>When I run a query I get a big execution time: 5.482 ms\n>\n>After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>\n>The database is new(no need to vacuum) and i'm the only one connected \n>to it. I use a single partition on the harddrive.\n>\n>I also tried this on a postgresql 9.5 and the result was the same.\n>\n>I'm not sure what to do to improve this situation.\n>\n>The query and the explain is attached.\n>\n>\n>\n>Thank you\n>\n>\n>",
"msg_date": "Fri, 22 Nov 2019 11:44:51 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re[2]: Postgresql planning time too high"
},
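The jump from ~0.75 s to ~5.5 s of planning when the three LEFT JOIN ... IS NULL branches are added fits the general pattern that planning cost grows quickly with the number of join-order choices. Apart from the materialized-view idea above, the standard knobs that bound how much reordering the planner attempts are join_collapse_limit, from_collapse_limit and geqo_threshold; lowering the collapse limits trades possible plan quality for planning time. A sketch for one session only, with 4 as an illustrative value:

    SHOW join_collapse_limit;   -- default 8
    SHOW from_collapse_limit;   -- default 8
    SHOW geqo_threshold;        -- default 12

    SET join_collapse_limit = 4;  -- illustrative; re-check planning time with EXPLAIN ANALYZE
    SET from_collapse_limit = 4;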
{
"msg_contents": "I did runned \"VACCUM FULL\" followed by \"VACUUM\" but no difference.\n\n------ Original Message ------\nFrom: \"Fırat Güleç\" <[email protected]>\nTo: \"Sterpu Victor\" <[email protected]>\nCc: [email protected]\nSent: 2019-11-22 1:35:15 PM\nSubject: RE: Postgresql planning time too high\n\n>Hello Sterpu,\n>\n>\n>\n>First, please run vaccum for your Postgresql DB.\n>\n>\n>\n>No rows returned from your query. Could you double check your query \n>criteria.\n>\n>\n>\n>After that could you send explain analyze again .\n>\n>\n>\n>Regards,\n>\n>\n>\n>FIRAT GÜLEÇ\n>Infrastructure & Database Operations Manager\n>[email protected]\n>\n>\n>\n>M: 0 532 210 57 18\n>İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>\n>\n>\n>\n>\n>\n>\n>\n>From: Sterpu Victor <[email protected]>\n>Sent: Friday, November 22, 2019 2:21 PM\n>To:[email protected]\n>Subject: Postgresql planning time too high\n>\n>\n>\n>Hello\n>\n>\n>\n>I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>\n>When I run a query I get a big execution time: 5.482 ms\n>\n>After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>\n>The database is new(no need to vacuum) and i'm the only one connected \n>to it. I use a single partition on the harddrive.\n>\n>I also tried this on a postgresql 9.5 and the result was the same.\n>\n>I'm not sure what to do to improve this situation.\n>\n>The query and the explain is attached.\n>\n>\n>\n>Thank you\n>\n>\n>",
"msg_date": "Fri, 22 Nov 2019 11:46:05 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re[2]: Postgresql planning time too high"
},
{
"msg_contents": "Could you run VACCUM ANALYZE.\n\n\n\n*From:* Sterpu Victor <[email protected]>\n*Sent:* Friday, November 22, 2019 2:46 PM\n*To:* Fırat Güleç <[email protected]>\n*Cc:* [email protected]\n*Subject:* Re[2]: Postgresql planning time too high\n\n\n\nI did runned \"VACCUM FULL\" followed by \"VACUUM\" but no difference.\n\n\n\n------ Original Message ------\n\nFrom: \"Fırat Güleç\" <[email protected]>\n\nTo: \"Sterpu Victor\" <[email protected]>\n\nCc: [email protected]\n\nSent: 2019-11-22 1:35:15 PM\n\nSubject: RE: Postgresql planning time too high\n\n\n\nHello Sterpu,\n\n\n\nFirst, please run vaccum for your Postgresql DB.\n\n\n\nNo rows returned from your query. Could you double check your query\ncriteria.\n\n\n\nAfter that could you send explain analyze again .\n\n\n\nRegards,\n\n\n\n*FIRAT GÜLEÇ*\nInfrastructure & Database Operations Manager\[email protected]\n\n\n\n*M:* 0 532 210 57 18\nİnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n\n[image: image.png]\n\n\n\n\n\n\n\n*From:* Sterpu Victor <[email protected]>\n*Sent:* Friday, November 22, 2019 2:21 PM\n*To:* [email protected]\n*Subject:* Postgresql planning time too high\n\n\n\nHello\n\n\n\nI'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n\nWhen I run a query I get a big execution time: 5.482 ms\n\nAfter running EXPLAIN ANALYZE I can see that the \"Planning Time: 5165.742\nms\" and the \"Execution Time: 6.244 ms\".\n\nThe database is new(no need to vacuum) and i'm the only one connected to\nit. I use a single partition on the harddrive.\n\nI also tried this on a postgresql 9.5 and the result was the same.\n\nI'm not sure what to do to improve this situation.\n\nThe query and the explain is attached.\n\n\n\nThank you",
"msg_date": "Fri, 22 Nov 2019 15:05:44 +0300",
"msg_from": "=?UTF-8?B?RsSxcmF0IEfDvGxlw6c=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Re[2]: Postgresql planning time too high"
},
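For reference, a minimal sequence to refresh the optimizer statistics and then confirm that they were actually collected (plain stock PostgreSQL, no extensions assumed):

VACUUM (ANALYZE, VERBOSE);                 -- whole database; the ANALYZE part is what refreshes statistics
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;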
{
"msg_contents": "Em 22/11/2019 08:46, Sterpu Victor escreveu:\n> I did runned \"VACCUM FULL\" followed by \"VACUUM\" but no difference.\n>\n> ------ Original Message ------\n> From: \"Fırat Güleç\" <[email protected] \n> <mailto:[email protected]>>\n> To: \"Sterpu Victor\" <[email protected] <mailto:[email protected]>>\n> Cc: [email protected] \n> <mailto:[email protected]>\n> Sent: 2019-11-22 1:35:15 PM\n> Subject: RE: Postgresql planning time too high\n>\n>> Hello Sterpu,\n>>\n>> First, please run vaccum for your Postgresql DB.\n>>\n>> No rows returned from your query. Could you double check your query \n>> criteria.\n>>\n>> After that could you send explain analyze again .\n>>\n>> Regards,\n>>\n>> *FIRAT GÜLEÇ***\n>> Infrastructure & Database Operations Manager\n>> [email protected] <mailto:[email protected]>\n>>\n>> *M:*0 532 210 57 18\n>> İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>>\n>> image.png\n>>\n>> *From:*Sterpu Victor <[email protected] <mailto:[email protected]>>\n>> *Sent:* Friday, November 22, 2019 2:21 PM\n>> *To:* [email protected] \n>> <mailto:[email protected]>\n>> *Subject:* Postgresql planning time too high\n>>\n>> Hello\n>>\n>> I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>>\n>> When I run a query I get a big execution time: 5.482 ms\n>>\n>> After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>> 5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>>\n>> The database is new(no need to vacuum) and i'm the only one connected \n>> to it. I use a single partition on the harddrive.\n>>\n>> I also tried this on a postgresql 9.5 and the result was the same.\n>>\n>> I'm not sure what to do to improve this situation.\n>>\n>> The query and the explain is attached.\n>>\n>> Thank you\n>>\nHave you run the ANALYZE command to update your DB statistics?",
"msg_date": "Fri, 22 Nov 2019 09:22:16 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql planning time too high"
},
{
"msg_contents": "This is interesting because \"VACUUM ANALYZE\" solved the problem on \npostgresql 12.1, the planning time was cut down from 5165.742 ms to 517 \nms.\nThis is great but I didn't think to do this on postgresql 12.1 because I \ndid the same thing on the production server(postgresql 9.5) and the \nproblem was not solved there by this command on the 9.5.\nI guess I should update the production server.\nIs there another way?\n\nThank you\n\n------ Original Message ------\nFrom: \"Fırat Güleç\" <[email protected]>\nTo: \"Sterpu Victor\" <[email protected]>\nCc: [email protected]\nSent: 2019-11-22 2:05:44 PM\nSubject: RE: Re[2]: Postgresql planning time too high\n\n>Could you run VACCUM ANALYZE.\n>\n>\n>\n>From: Sterpu Victor <[email protected]>\n>Sent: Friday, November 22, 2019 2:46 PM\n>To: Fırat Güleç <[email protected]>\n>Cc:[email protected]\n>Subject: Re[2]: Postgresql planning time too high\n>\n>\n>\n>I did runned \"VACCUM FULL\" followed by \"VACUUM\" but no difference.\n>\n>\n>\n>------ Original Message ------\n>\n>From: \"Fırat Güleç\" <[email protected]>\n>\n>To: \"Sterpu Victor\" <[email protected]>\n>\n>Cc: [email protected]\n>\n>Sent: 2019-11-22 1:35:15 PM\n>\n>Subject: RE: Postgresql planning time too high\n>\n>\n>\n>>Hello Sterpu,\n>>\n>>\n>>\n>>First, please run vaccum for your Postgresql DB.\n>>\n>>\n>>\n>>No rows returned from your query. Could you double check your query \n>>criteria.\n>>\n>>\n>>\n>>After that could you send explain analyze again .\n>>\n>>\n>>\n>>Regards,\n>>\n>>\n>>\n>>FIRAT GÜLEÇ\n>>Infrastructure & Database Operations Manager\n>>[email protected]\n>>\n>>\n>>\n>>M: 0 532 210 57 18\n>>İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>From: Sterpu Victor <[email protected]>\n>>Sent: Friday, November 22, 2019 2:21 PM\n>>To:[email protected]\n>>Subject: Postgresql planning time too high\n>>\n>>\n>>\n>>Hello\n>>\n>>\n>>\n>>I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>>\n>>When I run a query I get a big execution time: 5.482 ms\n>>\n>>After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>>5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>>\n>>The database is new(no need to vacuum) and i'm the only one connected \n>>to it. I use a single partition on the harddrive.\n>>\n>>I also tried this on a postgresql 9.5 and the result was the same.\n>>\n>>I'm not sure what to do to improve this situation.\n>>\n>>The query and the explain is attached.\n>>\n>>\n>>\n>>Thank you\n>>\n>>\n>>",
"msg_date": "Fri, 22 Nov 2019 12:25:10 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re[4]: Postgresql planning time too high"
},
{
"msg_contents": "I'm sorry, I messed up between a lot of queries ..... there is no \ndifference after running \"VACUUM ANALYZE\".\nI guess this was to be expected as the database was just restored from \nbackup.\n\n------ Original Message ------\nFrom: \"Fırat Güleç\" <[email protected]>\nTo: \"Sterpu Victor\" <[email protected]>\nCc: [email protected]\nSent: 2019-11-22 2:05:44 PM\nSubject: RE: Re[2]: Postgresql planning time too high\n\n>Could you run VACCUM ANALYZE.\n>\n>\n>\n>From: Sterpu Victor <[email protected]>\n>Sent: Friday, November 22, 2019 2:46 PM\n>To: Fırat Güleç <[email protected]>\n>Cc:[email protected]\n>Subject: Re[2]: Postgresql planning time too high\n>\n>\n>\n>I did runned \"VACCUM FULL\" followed by \"VACUUM\" but no difference.\n>\n>\n>\n>------ Original Message ------\n>\n>From: \"Fırat Güleç\" <[email protected]>\n>\n>To: \"Sterpu Victor\" <[email protected]>\n>\n>Cc: [email protected]\n>\n>Sent: 2019-11-22 1:35:15 PM\n>\n>Subject: RE: Postgresql planning time too high\n>\n>\n>\n>>Hello Sterpu,\n>>\n>>\n>>\n>>First, please run vaccum for your Postgresql DB.\n>>\n>>\n>>\n>>No rows returned from your query. Could you double check your query \n>>criteria.\n>>\n>>\n>>\n>>After that could you send explain analyze again .\n>>\n>>\n>>\n>>Regards,\n>>\n>>\n>>\n>>FIRAT GÜLEÇ\n>>Infrastructure & Database Operations Manager\n>>[email protected]\n>>\n>>\n>>\n>>M: 0 532 210 57 18\n>>İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>From: Sterpu Victor <[email protected]>\n>>Sent: Friday, November 22, 2019 2:21 PM\n>>To:[email protected]\n>>Subject: Postgresql planning time too high\n>>\n>>\n>>\n>>Hello\n>>\n>>\n>>\n>>I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>>\n>>When I run a query I get a big execution time: 5.482 ms\n>>\n>>After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>>5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>>\n>>The database is new(no need to vacuum) and i'm the only one connected \n>>to it. I use a single partition on the harddrive.\n>>\n>>I also tried this on a postgresql 9.5 and the result was the same.\n>>\n>>I'm not sure what to do to improve this situation.\n>>\n>>The query and the explain is attached.\n>>\n>>\n>>\n>>Thank you\n>>\n>>\n>>",
"msg_date": "Fri, 22 Nov 2019 12:44:27 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re[4]: Postgresql planning time too high"
},
{
"msg_contents": "I finnished testing with the matterialized view and the result is much \nimproved, planning time goes down from 5.482 ms to 1507.741 ms.\nThis is much better but I still don't understand why postgres is \nplanning so much time as long the main table is empty(there are no \nrecords in table focg).\n\n\n------ Original Message ------\nFrom: \"Sterpu Victor\" <[email protected]>\nTo: \"Fırat Güleç\" <[email protected]>\nCc: [email protected]\nSent: 2019-11-22 1:44:51 PM\nSubject: Re[2]: Postgresql planning time too high\n\n>No rows should be returned, DB is empty.\n>I'm testing now on a empty DB trying to find out how to improve this.\n>\n>In this query I have 3 joins like this:\n>\n>SELECT t1.id, t2.valid_from\n>FROM t1\n>JOIN t2 ON (t1.id_t1 = t1.id)\n>LEFT JOIN t3 ON (t3.id_t1 = t1.id AND t3.valid_from<t2.valid_from)\n>WHERE t3.id IS NULL\n>\n>If I delete these 3 joins than the planning time goes down from 5.482 \n>ms to 754.708 ms but I'm not sure why this context is so demanding on \n>the planner.\n>I'm tryng now to make a materialized view that will allow me to stop \n>using the syntax above.\n>\n>I reattached the same files, they should be fine like this.\n>\n>\n>\n>\n>------ Original Message ------\n>From: \"Fırat Güleç\" <[email protected]>\n>To: \"Sterpu Victor\" <[email protected]>\n>Cc: [email protected]\n>Sent: 2019-11-22 1:35:15 PM\n>Subject: RE: Postgresql planning time too high\n>\n>>Hello Sterpu,\n>>\n>>\n>>\n>>First, please run vaccum for your Postgresql DB.\n>>\n>>\n>>\n>>No rows returned from your query. Could you double check your query \n>>criteria.\n>>\n>>\n>>\n>>After that could you send explain analyze again .\n>>\n>>\n>>\n>>Regards,\n>>\n>>\n>>\n>>FIRAT GÜLEÇ\n>>Infrastructure & Database Operations Manager\n>>[email protected]\n>>\n>>\n>>\n>>M: 0 532 210 57 18\n>>İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>From: Sterpu Victor <[email protected]>\n>>Sent: Friday, November 22, 2019 2:21 PM\n>>To:[email protected]\n>>Subject: Postgresql planning time too high\n>>\n>>\n>>\n>>Hello\n>>\n>>\n>>\n>>I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>>\n>>When I run a query I get a big execution time: 5.482 ms\n>>\n>>After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>>5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>>\n>>The database is new(no need to vacuum) and i'm the only one connected \n>>to it. I use a single partition on the harddrive.\n>>\n>>I also tried this on a postgresql 9.5 and the result was the same.\n>>\n>>I'm not sure what to do to improve this situation.\n>>\n>>The query and the explain is attached.\n>>\n>>\n>>\n>>Thank you\n>>\n>>\n>>",
"msg_date": "Fri, 22 Nov 2019 12:50:05 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re[3]: Postgresql planning time too high"
},
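A minimal sketch of the kind of materialized view mentioned above, precomputing the anti-join so the outer query no longer has to carry those joins; t1/t2/t3 are the placeholder names from the earlier message and mv_t2_valid is a hypothetical name, not the real schema:

CREATE MATERIALIZED VIEW mv_t2_valid AS
SELECT t1.id AS id_t1, t2.valid_from
FROM t1
JOIN t2 ON t2.id_t1 = t1.id
LEFT JOIN t3 ON t3.id_t1 = t1.id AND t3.valid_from < t2.valid_from
WHERE t3.id IS NULL;

CREATE INDEX ON mv_t2_valid (id_t1);

REFRESH MATERIALIZED VIEW mv_t2_valid;     -- rerun after the underlying tables change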
{
"msg_contents": "pá 22. 11. 2019 v 12:46 odesílatel Sterpu Victor <[email protected]> napsal:\n\n> No rows should be returned, DB is empty.\n> I'm testing now on a empty DB trying to find out how to improve this.\n>\n> In this query I have 3 joins like this:\n>\n> SELECT t1.id, t2.valid_from\n> FROM t1\n> JOIN t2 ON (t1.id_t1 = t1.id)\n> LEFT JOIN t3 ON (t3.id_t1 = t1.id AND t3.valid_from<t2.valid_from)\n> WHERE t3.id IS NULL\n>\n> If I delete these 3 joins than the planning time goes down from 5.482 ms to\n> 754.708 ms but I'm not sure why this context is so demanding on the planner.\n> I'm tryng now to make a materialized view that will allow me to stop using\n> the syntax above.\n>\n\nThis query is little bit crazy - it has more than 40 joins - but 700ms for\nplanning is looks too much. Maybe your comp has slow CPU.\n\nPostgres has two planners - deterministic and genetic\n\n https://www.postgresql.org/docs/9.1/geqo-pg-intro.html\n\nProbably slow plan is related to deterministic planner.\n\n\n\n> I reattached the same files, they should be fine like this.\n>\n>\n>\n>\n> ------ Original Message ------\n> From: \"Fırat Güleç\" <[email protected]>\n> To: \"Sterpu Victor\" <[email protected]>\n> Cc: [email protected]\n> Sent: 2019-11-22 1:35:15 PM\n> Subject: RE: Postgresql planning time too high\n>\n> Hello Sterpu,\n>\n>\n>\n> First, please run vaccum for your Postgresql DB.\n>\n>\n>\n> No rows returned from your query. Could you double check your query\n> criteria.\n>\n>\n>\n> After that could you send explain analyze again .\n>\n>\n>\n> Regards,\n>\n>\n>\n> *FIRAT GÜLEÇ*\n> Infrastructure & Database Operations Manager\n> [email protected]\n>\n>\n>\n> *M:* 0 532 210 57 18\n> İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>\n> [image: image.png]\n>\n>\n>\n>\n>\n>\n>\n> *From:* Sterpu Victor <[email protected]>\n> *Sent:* Friday, November 22, 2019 2:21 PM\n> *To:* [email protected]\n> *Subject:* Postgresql planning time too high\n>\n>\n>\n> Hello\n>\n>\n>\n> I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>\n> When I run a query I get a big execution time: 5.482 ms\n>\n> After running EXPLAIN ANALYZE I can see that the \"Planning Time: 5165.742\n> ms\" and the \"Execution Time: 6.244 ms\".\n>\n> The database is new(no need to vacuum) and i'm the only one connected to\n> it. I use a single partition on the harddrive.\n>\n> I also tried this on a postgresql 9.5 and the result was the same.\n>\n> I'm not sure what to do to improve this situation.\n>\n> The query and the explain is attached.\n>\n>\n>\n> Thank you\n>\n>\n>\n>",
"msg_date": "Fri, 22 Nov 2019 13:59:11 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: Postgresql planning time too high"
},
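The two planners Pavel mentions are controlled by the geqo settings; a quick session-level check, assuming nothing beyond stock PostgreSQL:

SHOW geqo;              -- is the genetic planner enabled?
SHOW geqo_threshold;    -- number of FROM items at which it takes over (default 12)
SET geqo = off;         -- session only: always use the deterministic (exhaustive) planner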
{
"msg_contents": "I did some testing and the results are surprising.\nI did 3 tests:\n\nTest 1\nTest 2\nTest 3\nTest conditions\nSHOW geqo: \"on\"\nSHOW geqo_threshold: \"5\"\nSHOW geqo: \"on\"\nSHOW geqo_threshold: \"12\"\nSHOW geqo: \"off\"\nPlanning Time\n43691.910 ms\n5114.959 ms\n7305.504 ms\nExecution Time\n4.002 ms\n3.987 ms\n5.034 ms\nThis are things that are way over my knowledge, I can only speculate \nabout this: in the documentation from here \n<https://www.postgresql.org/docs/9.6/runtime-config-query.html#GUC-GEQO-THRESHOLD> \ngeqo_threshold is defined as the number of joins after wich postgres \nwill start to use the generic planner.\nOn my query there are about 50 joins so test 1 and test 2 should both \nbe done with the generic planner but the planning time of these tests \nsugest that this is not the case.\nSo I think test 1 is generic and test 2 and 3 are deterministic(test 3 \ncan be only deterministic as as setted this way the postgres server).\nAnyway, in the end the deterministic planner is much more effective at \nplanning this query that the generic one(test 3 is with generic planner \nturned off).\n\n\n\n\n\n------ Original Message ------\nFrom: \"Pavel Stehule\" <[email protected]>\nTo: \"Sterpu Victor\" <[email protected]>\nCc: \"Fırat Güleç\" <[email protected]>; \"Pgsql Performance\" \n<[email protected]>\nSent: 2019-11-22 2:59:11 PM\nSubject: Re: Re[2]: Postgresql planning time too high\n\n>\n>\n>pá 22. 11. 2019 v 12:46 odesílatel Sterpu Victor <[email protected]> \n>napsal:\n>>No rows should be returned, DB is empty.\n>>I'm testing now on a empty DB trying to find out how to improve this.\n>>\n>>In this query I have 3 joins like this:\n>>\n>>SELECT t1.id, t2.valid_from\n>>FROM t1\n>>JOIN t2 ON (t1.id_t1 = t1.id)\n>>LEFT JOIN t3 ON (t3.id_t1 = t1.id AND t3.valid_from<t2.valid_from)\n>>WHERE t3.id IS NULL\n>>\n>>If I delete these 3 joins than the planning time goes down from 5.482 \n>>ms to 754.708 ms but I'm not sure why this context is so demanding on \n>>the planner.\n>>I'm tryng now to make a materialized view that will allow me to stop \n>>using the syntax above.\n>\n>This query is little bit crazy - it has more than 40 joins - but 700ms \n>for planning is looks too much. Maybe your comp has slow CPU.\n>\n>Postgres has two planners - deterministic and genetic\n>\n> https://www.postgresql.org/docs/9.1/geqo-pg-intro.html\n>\n>Probably slow plan is related to deterministic planner.\n>\n>\n>>\n>>I reattached the same files, they should be fine like this.\n>>\n>>\n>>\n>>\n>>------ Original Message ------\n>>From: \"Fırat Güleç\" <[email protected]>\n>>To: \"Sterpu Victor\" <[email protected]>\n>>Cc: [email protected]\n>>Sent: 2019-11-22 1:35:15 PM\n>>Subject: RE: Postgresql planning time too high\n>>\n>>>Hello Sterpu,\n>>>\n>>>\n>>>\n>>>First, please run vaccum for your Postgresql DB.\n>>>\n>>>\n>>>\n>>>No rows returned from your query. Could you double check your query \n>>>criteria.\n>>>\n>>>\n>>>\n>>>After that could you send explain analyze again .\n>>>\n>>>\n>>>\n>>>Regards,\n>>>\n>>>\n>>>\n>>>FIRAT GÜLEÇ\n>>>Infrastructure & Database Operations Manager\n>>>[email protected]\n>>>\n>>>\n>>>\n>>>M: 0 532 210 57 18\n>>>İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. 
GEBZE / KOCAELİ\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>From: Sterpu Victor <[email protected]>\n>>>Sent: Friday, November 22, 2019 2:21 PM\n>>>To:[email protected]\n>>>Subject: Postgresql planning time too high\n>>>\n>>>\n>>>\n>>>Hello\n>>>\n>>>\n>>>\n>>>I'm on a PostgreSQL 12.1 and I just restored a database from a \n>>>backup.\n>>>\n>>>When I run a query I get a big execution time: 5.482 ms\n>>>\n>>>After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>>>5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>>>\n>>>The database is new(no need to vacuum) and i'm the only one connected \n>>>to it. I use a single partition on the harddrive.\n>>>\n>>>I also tried this on a postgresql 9.5 and the result was the same.\n>>>\n>>>I'm not sure what to do to improve this situation.\n>>>\n>>>The query and the explain is attached.\n>>>\n>>>\n>>>\n>>>Thank you\n>>>\n>>>\n>>>\n>>>",
"msg_date": "Fri, 22 Nov 2019 13:45:20 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re[4]: Postgresql planning time too high"
},
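Restating the three test configurations above as session settings, so they can be replayed; the planning and execution times in the comments are the ones reported in this message:

-- Test 1: geqo = on,  geqo_threshold = 5   -> Planning 43691.910 ms, Execution 4.002 ms
-- Test 2: geqo = on,  geqo_threshold = 12  -> Planning  5114.959 ms, Execution 3.987 ms
-- Test 3: geqo = off                       -> Planning  7305.504 ms, Execution 5.034 ms
SET geqo = on;
SET geqo_threshold = 12;   -- test 2; use 5 for test 1
-- SET geqo = off;         -- test 3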
{
"msg_contents": "The CPU is at about 7% when I run the query and 5% are occupied by \npostgresql.\nCPU is Xeon E3 1240 v6 3.7Gh - not very good, but postgres is not \noverloading it.\n\nTests are done on windows 2016 server so the next step was to try and \nchange the priority of all the postgresql procesess to realtime.\nThis setting had some effect as the planning time went down from \n5114.959 ms to 2999.542 ms\n\nAnd then I changed a single line and the planning time went from \n2999.542 ms to 175.509 ms: I deleted the line \"LIMIT 20 OFFSET 0\"\nChanging this line in the final query is not an option, can I do \nsomething else to fix this?\n\nThank you.\n\n\n------ Original Message ------\nFrom: \"Pavel Stehule\" <[email protected]>\nTo: \"Sterpu Victor\" <[email protected]>\nCc: \"Fırat Güleç\" <[email protected]>; \"Pgsql Performance\" \n<[email protected]>\nSent: 2019-11-22 2:59:11 PM\nSubject: Re: Re[2]: Postgresql planning time too high\n\n>\n>\n>pá 22. 11. 2019 v 12:46 odesílatel Sterpu Victor <[email protected]> \n>napsal:\n>>No rows should be returned, DB is empty.\n>>I'm testing now on a empty DB trying to find out how to improve this.\n>>\n>>In this query I have 3 joins like this:\n>>\n>>SELECT t1.id, t2.valid_from\n>>FROM t1\n>>JOIN t2 ON (t1.id_t1 = t1.id)\n>>LEFT JOIN t3 ON (t3.id_t1 = t1.id AND t3.valid_from<t2.valid_from)\n>>WHERE t3.id IS NULL\n>>\n>>If I delete these 3 joins than the planning time goes down from 5.482 \n>>ms to 754.708 ms but I'm not sure why this context is so demanding on \n>>the planner.\n>>I'm tryng now to make a materialized view that will allow me to stop \n>>using the syntax above.\n>\n>This query is little bit crazy - it has more than 40 joins - but 700ms \n>for planning is looks too much. Maybe your comp has slow CPU.\n>\n>Postgres has two planners - deterministic and genetic\n>\n> https://www.postgresql.org/docs/9.1/geqo-pg-intro.html\n>\n>Probably slow plan is related to deterministic planner.\n>\n>\n>>\n>>I reattached the same files, they should be fine like this.\n>>\n>>\n>>\n>>\n>>------ Original Message ------\n>>From: \"Fırat Güleç\" <[email protected]>\n>>To: \"Sterpu Victor\" <[email protected]>\n>>Cc: [email protected]\n>>Sent: 2019-11-22 1:35:15 PM\n>>Subject: RE: Postgresql planning time too high\n>>\n>>>Hello Sterpu,\n>>>\n>>>\n>>>\n>>>First, please run vaccum for your Postgresql DB.\n>>>\n>>>\n>>>\n>>>No rows returned from your query. Could you double check your query \n>>>criteria.\n>>>\n>>>\n>>>\n>>>After that could you send explain analyze again .\n>>>\n>>>\n>>>\n>>>Regards,\n>>>\n>>>\n>>>\n>>>FIRAT GÜLEÇ\n>>>Infrastructure & Database Operations Manager\n>>>[email protected]\n>>>\n>>>\n>>>\n>>>M: 0 532 210 57 18\n>>>İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. GEBZE / KOCAELİ\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>From: Sterpu Victor <[email protected]>\n>>>Sent: Friday, November 22, 2019 2:21 PM\n>>>To:[email protected]\n>>>Subject: Postgresql planning time too high\n>>>\n>>>\n>>>\n>>>Hello\n>>>\n>>>\n>>>\n>>>I'm on a PostgreSQL 12.1 and I just restored a database from a \n>>>backup.\n>>>\n>>>When I run a query I get a big execution time: 5.482 ms\n>>>\n>>>After running EXPLAIN ANALYZE I can see that the \"Planning Time: \n>>>5165.742 ms\" and the \"Execution Time: 6.244 ms\".\n>>>\n>>>The database is new(no need to vacuum) and i'm the only one connected \n>>>to it. 
I use a single partition on the harddrive.\n>>>\n>>>I also tried this on a postgresql 9.5 and the result was the same.\n>>>\n>>>I'm not sure what to do to improve this situation.\n>>>\n>>>The query and the explain is attached.\n>>>\n>>>\n>>>\n>>>Thank you\n>>>\n>>>\n>>>\n>>>",
"msg_date": "Fri, 22 Nov 2019 14:06:13 +0000",
"msg_from": "\"Sterpu Victor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re[4]: Postgresql planning time too high"
},
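If the generated query text cannot be changed, one general way to pay the planning cost less often is a prepared statement with a cached generic plan; a sketch under the assumption that the application (or a pooling layer) can prepare statements - the trivial SELECT and the name slow_query merely stand in for the real generated query:

SET plan_cache_mode = force_generic_plan;   -- PostgreSQL 12+: plan once, reuse the generic plan afterwards
PREPARE slow_query (int) AS
    SELECT $1 AS val
    LIMIT 20 OFFSET 0;
EXECUTE slow_query(1);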
{
"msg_contents": "pá 22. 11. 2019 v 15:06 odesílatel Sterpu Victor <[email protected]> napsal:\n\n> The CPU is at about 7% when I run the query and 5% are occupied by\n> postgresql.\n> CPU is Xeon E3 1240 v6 3.7Gh - not very good, but postgres is not\n> overloading it.\n>\n> Tests are done on windows 2016 server so the next step was to try and\n> change the priority of all the postgresql procesess to realtime.\n> This setting had some effect as the planning time went down from 5114.959\n> ms to 2999.542 ms\n>\n> And then I changed a single line and the planning time went from 2999.542\n> ms to 175.509 ms: I deleted the line \"LIMIT 20 OFFSET 0\"\n> Changing this line in the final query is not an option, can I do something\n> else to fix this?\n>\n\nit looks like planner bug. It's strange so LIMIT OFFSET 0 can increase 10x\nplanning time\n\nPavel\n\n\n\n\n\n> Thank you.\n>\n>\n> ------ Original Message ------\n> From: \"Pavel Stehule\" <[email protected]>\n> To: \"Sterpu Victor\" <[email protected]>\n> Cc: \"Fırat Güleç\" <[email protected]>; \"Pgsql Performance\" <\n> [email protected]>\n> Sent: 2019-11-22 2:59:11 PM\n> Subject: Re: Re[2]: Postgresql planning time too high\n>\n>\n>\n> pá 22. 11. 2019 v 12:46 odesílatel Sterpu Victor <[email protected]> napsal:\n>\n>> No rows should be returned, DB is empty.\n>> I'm testing now on a empty DB trying to find out how to improve this.\n>>\n>> In this query I have 3 joins like this:\n>>\n>> SELECT t1.id, t2.valid_from\n>> FROM t1\n>> JOIN t2 ON (t1.id_t1 = t1.id)\n>> LEFT JOIN t3 ON (t3.id_t1 = t1.id AND t3.valid_from<t2.valid_from)\n>> WHERE t3.id IS NULL\n>>\n>> If I delete these 3 joins than the planning time goes down from 5.482 ms to\n>> 754.708 ms but I'm not sure why this context is so demanding on the planner.\n>> I'm tryng now to make a materialized view that will allow me to stop\n>> using the syntax above.\n>>\n>\n> This query is little bit crazy - it has more than 40 joins - but 700ms for\n> planning is looks too much. Maybe your comp has slow CPU.\n>\n> Postgres has two planners - deterministic and genetic\n>\n> https://www.postgresql.org/docs/9.1/geqo-pg-intro.html\n>\n> Probably slow plan is related to deterministic planner.\n>\n>\n>\n>> I reattached the same files, they should be fine like this.\n>>\n>>\n>>\n>>\n>> ------ Original Message ------\n>> From: \"Fırat Güleç\" <[email protected]>\n>> To: \"Sterpu Victor\" <[email protected]>\n>> Cc: [email protected]\n>> Sent: 2019-11-22 1:35:15 PM\n>> Subject: RE: Postgresql planning time too high\n>>\n>> Hello Sterpu,\n>>\n>>\n>>\n>> First, please run vaccum for your Postgresql DB.\n>>\n>>\n>>\n>> No rows returned from your query. Could you double check your query\n>> criteria.\n>>\n>>\n>>\n>> After that could you send explain analyze again .\n>>\n>>\n>>\n>> Regards,\n>>\n>>\n>>\n>> *FIRAT GÜLEÇ*\n>> Infrastructure & Database Operations Manager\n>> [email protected]\n>>\n>>\n>>\n>> *M:* 0 532 210 57 18\n>> İnönü Mh. Mimar Sinan Cd. No:3 Güzeller Org.San.Bölg. 
GEBZE / KOCAELİ\n>>\n>> [image: image.png]\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> *From:* Sterpu Victor <[email protected]>\n>> *Sent:* Friday, November 22, 2019 2:21 PM\n>> *To:* [email protected]\n>> *Subject:* Postgresql planning time too high\n>>\n>>\n>>\n>> Hello\n>>\n>>\n>>\n>> I'm on a PostgreSQL 12.1 and I just restored a database from a backup.\n>>\n>> When I run a query I get a big execution time: 5.482 ms\n>>\n>> After running EXPLAIN ANALYZE I can see that the \"Planning Time: 5165.742\n>> ms\" and the \"Execution Time: 6.244 ms\".\n>>\n>> The database is new(no need to vacuum) and i'm the only one connected to\n>> it. I use a single partition on the harddrive.\n>>\n>> I also tried this on a postgresql 9.5 and the result was the same.\n>>\n>> I'm not sure what to do to improve this situation.\n>>\n>> The query and the explain is attached.\n>>\n>>\n>>\n>> Thank you\n>>\n>>\n>>\n>>",
"msg_date": "Fri, 22 Nov 2019 15:29:14 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[4]: Postgresql planning time too high"
},
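To quantify the effect being discussed, planning time can be measured without executing the query at all; the two trivial statements below stand in for the real query without and with the LIMIT clause (the SUMMARY option requires PostgreSQL 10 or later):

EXPLAIN (SUMMARY ON) SELECT 1;                     -- prints "Planning Time" only
EXPLAIN (SUMMARY ON) SELECT 1 LIMIT 20 OFFSET 0;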
{
"msg_contents": "As a matter of habit, I put all inner joins that may limit the result set\nas the first joins, then the left joins that have where conditions on them.\nI am not sure whether the optimizer sees that only those tables are needed\nto determine which rows will be in the end result and automatically\nprioritizes them as far as joins. With 40+ joins, I would want if this\nre-ordering of the declared joins may be significant.\n\nIf that doesn't help, then I would put all of those in a sub-query to break\nup the problem for the optimizer (OFFSET 0 being an optimization fence,\nthough if this is an example of \"simple\" pagination then I assume but am\nnot sure that OFFSET 20 would also be an optimization fence). Else, put all\nthat in a CTE with MATERIALIZED keyword when on v12 and without on 9.5\nsince it did not exist yet and was default behavior then.\n\nWith an empty database, there are no statistics so perhaps the optimizer\nhas too many plans that are very close in expected costs. I'd be curious if\nthe planning time gets shorter once you have data, assuming\ndefault_statistics_target is left at the standard 100, or is not increased\ntoo hugely.\n\n>",
"msg_date": "Fri, 22 Nov 2019 11:13:30 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[4]: Postgresql planning time too high"
},
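A minimal sketch of the CTE variant suggested here, again using the placeholder tables t1/t2/t3 from earlier in the thread; the MATERIALIZED keyword exists only on v12 (on 9.5 CTEs are always materialized, so the keyword is simply omitted there):

WITH base AS MATERIALIZED (
    SELECT t1.id, t2.valid_from
    FROM t1
    JOIN t2 ON t2.id_t1 = t1.id
)
SELECT b.id, b.valid_from
FROM base b
LEFT JOIN t3 ON t3.id_t1 = b.id AND t3.valid_from < b.valid_from
WHERE t3.id IS NULL
LIMIT 20 OFFSET 0;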
{
"msg_contents": "On Fri, Nov 22, 2019 at 11:44:51AM +0000, Sterpu Victor wrote:\n>No rows should be returned, DB is empty.\n>I'm testing now on a empty DB trying to find out how to improve this.\n>\n\nI'm a bit puzzled why you're doinf tests on an empty database, when in\nproduction it'll certainly contain data. I guess you're assuming that\nthis way you isolate planning time, which should remain about the same\neven with data loaded, but I'm not entirely sure that's true - all this\nplanning is done with no statistics (histograms, MCV lists, ...) and\nmaybe it's forcing the planner to do more work? I wouldn't be surprised\nif having those stats would allow the planner to take some shortcuts,\ncutting the plannnig time down.\n\nNot to mention that we don't know if the plan is actually any good, for\nall what we know it might take 10 years on real data, making the\nplanning duration irrelevant.\n\n\nLet's put that aside, though. Let's assume it's because of expensive\njoin order planning. I don't think you have a lot of options, here,\nunfortunately.\n\nOne option is to try reducing the planner options that determine how\nmuch effort should be spent on join planning, e.g. join_collapse_limit\nand geqo_threshold. If this is the root cause, you might even rewrite\nthe query to use optimal join order and set join_collapse_limit=1.\nYou'll have to play with it.\n\nThe other option is using CTEs with materialization, with the same\neffect, i.e. prevention of optimization across CTEs, reducing the\ntotal effort.\n\n>In this query I have 3 joins like this:\n>\n>SELECT t1.id, t2.valid_from\n>FROM t1\n>JOIN t2 ON (t1.id_t1 = t1.id)\n>LEFT JOIN t3 ON (t3.id_t1 = t1.id AND t3.valid_from<t2.valid_from)\n>WHERE t3.id IS NULL\n>\n>If I delete these 3 joins than the planning time goes down from 5.482 \n>ms to 754.708 ms but I'm not sure why this context is so demanding on \n>the planner.\n>I'm tryng now to make a materialized view that will allow me to stop \n>using the syntax above.\n>\n>I reattached the same files, they should be fine like this.\n>\n\nIt'd be useful to have something others can use to reproduce the issue,\nand investigate locally. SQL script that creates the whole schema and\nruns the query, for example.\n\nWhat I'd like to see is a perf profile from the planning, so that we can\nsee where exactly is the bottleneck. Maybe there actually is a bug that\nmakes it muych more expensive than it should be, in some corner case?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 20:36:02 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql planning time too high"
}
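The planner knobs mentioned in this reply can be tried per session before touching the query itself; note that with join_collapse_limit = 1 the joins are executed exactly in the order they are written, so the query would first have to be rewritten in a good order by hand:

SET join_collapse_limit = 1;   -- do not reorder explicit JOINs
SET from_collapse_limit = 1;   -- same idea for subqueries flattened into the FROM list
SET geqo_threshold = 12;       -- or lower, to let the genetic planner take over sooner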
] |
[
{
"msg_contents": "Hey,\n\nI'm trying to figure out why Postgres is choosing a Hash Join over a \nNested Loop in this query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ, T2.CarAti, \nT1.CarCod, T1.EmpCod,\n T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE( T3.PesDatAnt, DATE \n'00010101') AS PesDatAnt\n FROM ((public.Pessoa T1\n INNER JOIN public.Carteira T2 ON T2.EmpCod = T1.EmpCod AND \nT2.CarCod = T1.CarCod)\n LEFT JOIN (SELECT MIN(COALESCE( T5.ConVenAnt, DATE \n'00010101')) AS PesDatAnt, T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod\n FROM (public.Contrato T4\n LEFT JOIN (SELECT MIN(ConParDatVen) \nAS ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n FROM \npublic.ContratoParcela T5\n WHERE ConParAti = true\n AND ConParValSal > 0\nGROUP BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod = \nT4.EmpCod AND\n T5.CarCod = T4.CarCod AND\n T5.ConPesCod = T4.ConPesCod AND\n T5.ConSeq = T4.ConSeq)\n WHERE T4.ConAti = TRUE\nGROUP BY T4.EmpCod, T4.CarCod, T4.ConPesCod ) T3 ON t3.EmpCod = \nT1.EmpCod AND\n t3.CarCod = T1.CarCod AND\n t3.ConPesCod = T1.PesCod)\n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like UPPER('%MARIA%'))\n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nHere the Hash Join[1] plan takes ~700ms, and if I change the first LEFT \nJOIN to a LEFT JOIN LATERAL, forcing a nested loop, the query[2] runs in \n3ms.\n\n[1] https://explain.depesz.com/s/8IL3\n[2] https://explain.depesz.com/s/f8Q9****\n\n\n\n\n\n\nHey, \n\n I'm trying to figure out why Postgres is choosing a Hash Join over\n a Nested Loop in this query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ,\n T2.CarAti, T1.CarCod, T1.EmpCod, \n T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE(\n T3.PesDatAnt, DATE '00010101') AS PesDatAnt \n FROM ((public.Pessoa T1 \n INNER JOIN public.Carteira T2 ON T2.EmpCod =\n T1.EmpCod AND T2.CarCod = T1.CarCod) \n LEFT JOIN (SELECT MIN(COALESCE(\n T5.ConVenAnt, DATE '00010101')) AS PesDatAnt, T4.EmpCod,\n T4.CarCod, T4.ConPesCod AS ConPesCod \n FROM (public.Contrato T4 \n LEFT JOIN (SELECT\n MIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod,\n ConSeq \n FROM\n public.ContratoParcela T5\n WHERE\n ConParAti = true \n AND\n ConParValSal > 0 \n GROUP\n BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod =\n T4.EmpCod AND \n \n T5.CarCod = T4.CarCod AND\n \n \n T5.ConPesCod = T4.ConPesCod AND\n \n \n T5.ConSeq = T4.ConSeq) \n WHERE T4.ConAti = TRUE\n GROUP BY T4.EmpCod,\n T4.CarCod, T4.ConPesCod ) T3 ON t3.EmpCod = T1.EmpCod AND \n \n t3.CarCod = T1.CarCod AND \n \n t3.ConPesCod = T1.PesCod) \n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like UPPER('%MARIA%')) \n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nHere the Hash Join[1] plan takes ~700ms, and if I change the\n first LEFT JOIN to a LEFT JOIN LATERAL, forcing a nested loop, the\n query[2] runs in 3ms.\n\n[1] https://explain.depesz.com/s/8IL3\n [2] https://explain.depesz.com/s/f8Q9",
"msg_date": "Fri, 22 Nov 2019 14:33:29 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hash Join over Nested Loop"
},
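One way to see why the planner prefers the hash join is to take it away for a single transaction and compare the estimates; a session-only experiment, not a setting to keep in production:

BEGIN;
SET LOCAL enable_hashjoin = off;
-- rerun the query here under EXPLAIN (ANALYZE, BUFFERS) and compare the
-- estimated vs. actual row counts feeding the join that becomes a nested loop
ROLLBACK;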
{
"msg_contents": "> Hey,\n>\n> I'm trying to figure out why Postgres is choosing a Hash Join over a \n> Nested Loop in this query:\n>\n> SELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ, T2.CarAti, \n> T1.CarCod, T1.EmpCod,\n> T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE( T3.PesDatAnt, \n> DATE '00010101') AS PesDatAnt\n> FROM ((public.Pessoa T1\n> INNER JOIN public.Carteira T2 ON T2.EmpCod = T1.EmpCod AND \n> T2.CarCod = T1.CarCod)\n> LEFT JOIN (SELECT MIN(COALESCE( T5.ConVenAnt, DATE \n> '00010101')) AS PesDatAnt, T4.EmpCod, T4.CarCod, T4.ConPesCod AS \n> ConPesCod\n> FROM (public.Contrato T4\n> LEFT JOIN (SELECT MIN(ConParDatVen) \n> AS ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n> FROM \n> public.ContratoParcela T5\n> WHERE ConParAti = true\n> AND ConParValSal > 0\n> GROUP BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod = \n> T4.EmpCod AND\n> T5.CarCod = T4.CarCod AND\n> T5.ConPesCod = T4.ConPesCod AND\n> T5.ConSeq = T4.ConSeq)\n> WHERE T4.ConAti = TRUE\n> GROUP BY T4.EmpCod, T4.CarCod, T4.ConPesCod ) T3 ON t3.EmpCod = \n> T1.EmpCod AND\n> t3.CarCod = T1.CarCod AND\n> t3.ConPesCod = T1.PesCod)\n> WHERE (T2.CarAti = true)\n> AND (T1.EmpCod = 112)\n> and (UPPER(T1.PesNom) like UPPER('%MARIA%'))\n> ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n>\n> Here the Hash Join[1] plan takes ~700ms, and if I change the first \n> LEFT JOIN to a LEFT JOIN LATERAL, forcing a nested loop, the query[2] \n> runs in 3ms.\n>\n> [1] https://explain.depesz.com/s/8IL3\n> [2] https://explain.depesz.com/s/f8Q9 \n\nPostgreSQL version is 11.5, I have run analyze on all the tables.\n\nPG settings:\n\nname |setting |unit|\n-------------------------------|---------|----|\nautovacuum |on | |\ndefault_statistics_target |250 | |\neffective_cache_size |983040 |8kB |\neffective_io_concurrency |200 | |\nmax_parallel_workers |6 | |\nmax_parallel_workers_per_gather|3 | |\nrandom_page_cost |1.1 | |\nwork_mem |51200 |kB |\n\n\n\n\n\n\nHey,\n \n\n I'm trying to figure out why Postgres is choosing a Hash Join\n over a Nested Loop in this query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal,\n T1.PesCPFCNPJ, T2.CarAti, T1.CarCod, T1.EmpCod, \n T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE(\n T3.PesDatAnt, DATE '00010101') AS PesDatAnt \n FROM ((public.Pessoa T1 \n INNER JOIN public.Carteira T2 ON T2.EmpCod\n = T1.EmpCod AND T2.CarCod = T1.CarCod) \n LEFT JOIN (SELECT MIN(COALESCE(\n T5.ConVenAnt, DATE '00010101')) AS PesDatAnt, T4.EmpCod,\n T4.CarCod, T4.ConPesCod AS ConPesCod \n FROM (public.Contrato T4 \n LEFT JOIN (SELECT\n MIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod,\n ConSeq \n FROM\n public.ContratoParcela T5\n WHERE\n ConParAti = true \n AND\n ConParValSal > 0 \n GROUP\n BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod =\n T4.EmpCod AND \n \n T5.CarCod = T4.CarCod\n AND \n \n T5.ConPesCod =\n T4.ConPesCod AND \n \n T5.ConSeq = T4.ConSeq)\n \n WHERE T4.ConAti = TRUE\n GROUP BY T4.EmpCod,\n T4.CarCod, T4.ConPesCod ) T3 ON t3.EmpCod = T1.EmpCod AND \n \n t3.CarCod = T1.CarCod AND \n \n t3.ConPesCod = T1.PesCod) \n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like UPPER('%MARIA%')) \n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nHere the Hash Join[1] plan takes ~700ms, and if I change\n the first LEFT JOIN to a LEFT JOIN LATERAL, forcing a nested\n loop, the query[2] runs in 3ms.\n\n[1] https://explain.depesz.com/s/8IL3\n [2] https://explain.depesz.com/s/f8Q9\n\n\n PostgreSQL version is 11.5, I have run analyze on all the tables.\n\n PG settings:\n\nname |setting 
|unit|\n-------------------------------|---------|----|\nautovacuum |on | |\ndefault_statistics_target |250 | |\neffective_cache_size |983040 |8kB |\neffective_io_concurrency |200 | |\nmax_parallel_workers |6 | |\nmax_parallel_workers_per_gather|3 | |\nrandom_page_cost |1.1 | |\nwork_mem |51200 |kB |",
"msg_date": "Fri, 22 Nov 2019 14:43:04 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Join over Nested Loop"
},
{
"msg_contents": "pá 22. 11. 2019 v 18:37 odesílatel Luís Roberto Weck <\[email protected]> napsal:\n\n> Hey,\n>\n> I'm trying to figure out why Postgres is choosing a Hash Join over a\n> Nested Loop in this query:\n>\n> SELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ, T2.CarAti,\n> T1.CarCod, T1.EmpCod,\n> T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE( T3.PesDatAnt, DATE\n> '00010101') AS PesDatAnt\n> FROM ((public.Pessoa T1\n> INNER JOIN public.Carteira T2 ON T2.EmpCod = T1.EmpCod AND\n> T2.CarCod = T1.CarCod)\n> LEFT JOIN (SELECT MIN(COALESCE( T5.ConVenAnt, DATE\n> '00010101')) AS PesDatAnt, T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod\n> FROM (public.Contrato T4\n> LEFT JOIN (SELECT MIN(ConParDatVen) AS\n> ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n> FROM public.ContratoParcela\n> T5\n> WHERE ConParAti = true\n> AND ConParValSal > 0\n> GROUP BY EmpCod, CarCod,\n> ConPesCod, ConSeq ) T5 ON T5.EmpCod = T4.EmpCod AND\n>\n> T5.CarCod = T4.CarCod AND\n>\n> T5.ConPesCod = T4.ConPesCod AND\n>\n> T5.ConSeq = T4.ConSeq)\n> WHERE T4.ConAti = TRUE\n> GROUP BY T4.EmpCod, T4.CarCod, T4.ConPesCod )\n> T3 ON t3.EmpCod = T1.EmpCod AND\n>\n> t3.CarCod = T1.CarCod AND\n>\n> t3.ConPesCod = T1.PesCod)\n> WHERE (T2.CarAti = true)\n> AND (T1.EmpCod = 112)\n> and (UPPER(T1.PesNom) like UPPER('%MARIA%'))\n> ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n>\n> Here the Hash Join[1] plan takes ~700ms, and if I change the first LEFT\n> JOIN to a LEFT JOIN LATERAL, forcing a nested loop, the query[2] runs in\n> 3ms.\n>\n> [1] https://explain.depesz.com/s/8IL3\n> [2] https://explain.depesz.com/s/f8Q9\n>\n>\nMaybe I am wrong, but probably you have to do more than just change LEFT\nJOIN to LATERAL JOIN. Lateral join is based on correlated subquery - so you\nhad to push some predicates to subquery - and then the query can be much\nmore effective.\n\nRegards\n\nPavel\n\n\n\n\n\n> PostgreSQL version is 11.5, I have run analyze on all the tables.\n>\n> PG settings:\n>\n> name |setting |unit|\n> -------------------------------|---------|----|\n> autovacuum |on | |\n> default_statistics_target |250 | |\n> effective_cache_size |983040 |8kB |\n> effective_io_concurrency |200 | |\n> max_parallel_workers |6 | |\n> max_parallel_workers_per_gather|3 | |\n> random_page_cost |1.1 | |\n> work_mem |51200 |kB |\n>\n\npá 22. 11. 
2019 v 18:37 odesílatel Luís Roberto Weck <[email protected]> napsal:\n\nHey,\r\n \n\r\n I'm trying to figure out why Postgres is choosing a Hash Join\r\n over a Nested Loop in this query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal,\r\n T1.PesCPFCNPJ, T2.CarAti, T1.CarCod, T1.EmpCod, \r\n T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE(\r\n T3.PesDatAnt, DATE '00010101') AS PesDatAnt \n FROM ((public.Pessoa T1 \n INNER JOIN public.Carteira T2 ON T2.EmpCod\r\n = T1.EmpCod AND T2.CarCod = T1.CarCod) \n LEFT JOIN (SELECT MIN(COALESCE(\r\n T5.ConVenAnt, DATE '00010101')) AS PesDatAnt, T4.EmpCod,\r\n T4.CarCod, T4.ConPesCod AS ConPesCod \n FROM (public.Contrato T4 \n LEFT JOIN (SELECT\r\n MIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod,\r\n ConSeq \n FROM\r\n public.ContratoParcela T5\n WHERE\r\n ConParAti = true \n AND\r\n ConParValSal > 0 \n GROUP\r\n BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod =\r\n T4.EmpCod AND \r\n \r\n T5.CarCod = T4.CarCod\r\n AND \r\n \r\n T5.ConPesCod =\r\n T4.ConPesCod AND \r\n \r\n T5.ConSeq = T4.ConSeq)\r\n \n WHERE T4.ConAti = TRUE\n GROUP BY T4.EmpCod,\r\n T4.CarCod, T4.ConPesCod ) T3 ON t3.EmpCod = T1.EmpCod AND \r\n \r\n t3.CarCod = T1.CarCod AND \r\n \r\n t3.ConPesCod = T1.PesCod) \n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like UPPER('%MARIA%')) \n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nHere the Hash Join[1] plan takes ~700ms, and if I change\r\n the first LEFT JOIN to a LEFT JOIN LATERAL, forcing a nested\r\n loop, the query[2] runs in 3ms.\n\n[1] https://explain.depesz.com/s/8IL3\r\n [2] https://explain.depesz.com/s/f8Q9Maybe I am wrong, but probably you have to do more than just change LEFT JOIN to LATERAL JOIN. Lateral join is based on correlated subquery - so you had to push some predicates to subquery - and then the query can be much more effective.RegardsPavel \n\n\r\n PostgreSQL version is 11.5, I have run analyze on all the tables.\n\r\n PG settings:\n\nname |setting |unit|\n-------------------------------|---------|----|\nautovacuum |on | |\ndefault_statistics_target |250 | |\neffective_cache_size |983040 |8kB |\neffective_io_concurrency |200 | |\nmax_parallel_workers |6 | |\nmax_parallel_workers_per_gather|3 | |\nrandom_page_cost |1.1 | |\nwork_mem |51200 |kB |",
"msg_date": "Fri, 22 Nov 2019 18:55:34 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Join over Nested Loop"
},
{
"msg_contents": "****\nEm 22/11/2019 14:55, Pavel Stehule escreveu:\n>\n>\n> pá 22. 11. 2019 v 18:37 odesílatel Luís Roberto Weck \n> <[email protected] <mailto:[email protected]>> napsal:\n>\n>> Hey,\n>>\n>> I'm trying to figure out why Postgres is choosing a Hash Join\n>> over a Nested Loop in this query:\n>>\n>> SELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ,\n>> T2.CarAti, T1.CarCod, T1.EmpCod,\n>> T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE(\n>> T3.PesDatAnt, DATE '00010101') AS PesDatAnt\n>> FROM ((public.Pessoa T1\n>> INNER JOIN public.Carteira T2 ON T2.EmpCod =\n>> T1.EmpCod AND T2.CarCod = T1.CarCod)\n>> LEFT JOIN (SELECT MIN(COALESCE( T5.ConVenAnt, DATE\n>> '00010101')) AS PesDatAnt, T4.EmpCod, T4.CarCod, T4.ConPesCod AS\n>> ConPesCod\n>> FROM (public.Contrato T4\n>> LEFT JOIN (SELECT\n>> MIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n>> FROM public.ContratoParcela T5\n>> WHERE ConParAti = true\n>> AND ConParValSal > 0\n>> GROUP BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod =\n>> T4.EmpCod AND\n>> T5.CarCod = T4.CarCod AND\n>> T5.ConPesCod = T4.ConPesCod AND\n>> T5.ConSeq = T4.ConSeq)\n>> WHERE T4.ConAti = TRUE\n>> GROUP BY T4.EmpCod, T4.CarCod, T4.ConPesCod ) T3 ON t3.EmpCod \n>> = T1.EmpCod AND\n>> t3.CarCod = T1.CarCod AND\n>> t3.ConPesCod = T1.PesCod)\n>> WHERE (T2.CarAti = true)\n>> AND (T1.EmpCod = 112)\n>> and (UPPER(T1.PesNom) like UPPER('%MARIA%'))\n>> ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n>>\n>> Here the Hash Join[1] plan takes ~700ms, and if I change the\n>> first LEFT JOIN to a LEFT JOIN LATERAL, forcing a nested loop,\n>> the query[2] runs in 3ms.\n>>\n>> [1] https://explain.depesz.com/s/8IL3\n>> [2] https://explain.depesz.com/s/f8Q9\n>\n>\n> Maybe I am wrong, but probably you have to do more than just change \n> LEFT JOIN to LATERAL JOIN. 
Lateral join is based on correlated \n> subquery - so you had to push some predicates to subquery - and then \n> the query can be much more effective.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n> PostgreSQL version is 11.5, I have run analyze on all the tables.\n>\n> PG settings:\n>\n> name |setting |unit|\n> -------------------------------|---------|----|\n> autovacuum |on | |\n> default_statistics_target |250 | |\n> effective_cache_size |983040 |8kB |\n> effective_io_concurrency |200 | |\n> max_parallel_workers |6 | |\n> max_parallel_workers_per_gather|3 | |\n> random_page_cost |1.1 | |\n> work_mem |51200 |kB |\n>\n\nI'm sorry, I am not sure I understood.\n\nThis is the altered query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ, T2.CarAti, \nT1.CarCod, T1.EmpCod, T2.CarFan, T1.PesDatAge, T1.PesCod,\n COALESCE( T3.PesDatAnt, DATE '00010101') AS PesDatAnt\n FROM ((public.Pessoa T1\n INNER JOIN public.Carteira T2 ON T2.EmpCod = T1.EmpCod AND \nT2.CarCod = T1.CarCod)\n LEFT JOIN *LATERAL *(SELECT MIN(COALESCE( T5.ConVenAnt, \nDATE '00010101')) AS PesDatAnt, T4.EmpCod, T4.CarCod, T4.ConPesCod AS \nConPesCod\n FROM (public.Contrato T4\n LEFT JOIN (SELECT \nMIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n FROM \npublic.ContratoParcela T5\n WHERE ConParAti = true\n and ConParValSal > 0\n\n GROUP BY EmpCod, \nCarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod = T4.EmpCod AND T5.CarCod = \nT4.CarCod AND T5.ConPesCod = T4.ConPesCod AND T5.ConSeq = T4.ConSeq)\n WHERE T4.ConAti = TRUE\n*AND t4.EmpCod = T1.EmpCod AND t4.CarCod = T1.CarCod AND t4.ConPesCod = \nT1.PesCod*\n GROUP BY T4.EmpCod, T4.CarCod, \nT4.ConPesCod ) T3 ON *TRUE ) --ON t3.EmpCod = T1.EmpCod AND t3.CarCod = \nT1.CarCod AND t3.ConPesCod = T1.PesCod) *\n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like UPPER('%MARIA%'))\n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nIn bold are the changes I've made to the query. I am sure PostgreSQL is \nable to push it down, since it is much faster now.The problem I have is \nthat this is a query generated by an ORM, So I can't change it.\n\nI would like to understand why wasn't Postgres able to optimize it to a \nnested loop. Is there something I can do with the statistics?\n\nThanks!!\n\n\n\n\n\n\n\n\nEm 22/11/2019 14:55, Pavel Stehule\n escreveu:\n\n\n\n\n\n\n\n\npá 22. 11. 
2019 v 18:37\n odesílatel Luís Roberto Weck <[email protected]>\n napsal:\n\n\n\nHey, \n\n I'm trying to figure out why Postgres is choosing a\n Hash Join over a Nested Loop in this query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal,\n T1.PesCPFCNPJ, T2.CarAti, T1.CarCod, T1.EmpCod, \n T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE(\n T3.PesDatAnt, DATE '00010101') AS PesDatAnt \n FROM ((public.Pessoa T1 \n INNER JOIN public.Carteira T2 ON\n T2.EmpCod = T1.EmpCod AND T2.CarCod = T1.CarCod) \n LEFT JOIN (SELECT MIN(COALESCE(\n T5.ConVenAnt, DATE '00010101')) AS PesDatAnt,\n T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod \n FROM\n (public.Contrato T4 \n LEFT JOIN \n (SELECT MIN(ConParDatVen) AS ConVenAnt, EmpCod,\n CarCod, ConPesCod, ConSeq \n \n FROM public.ContratoParcela T5\n \n WHERE ConParAti = true \n \n AND ConParValSal > 0 \n \n GROUP BY EmpCod, CarCod, ConPesCod, ConSeq )\n T5 ON T5.EmpCod = T4.EmpCod AND \n \n \n T5.CarCod = T4.CarCod AND \n \n \n T5.ConPesCod = T4.ConPesCod AND \n \n \n T5.ConSeq = T4.ConSeq) \n WHERE T4.ConAti =\n TRUE\n GROUP BY\n T4.EmpCod, T4.CarCod, T4.ConPesCod ) T3 ON\n t3.EmpCod = T1.EmpCod AND \n \n t3.CarCod = T1.CarCod\n AND \n \n t3.ConPesCod = T1.PesCod)\n \n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like\n UPPER('%MARIA%')) \n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nHere the Hash Join[1] plan takes ~700ms, and if I\n change the first LEFT JOIN to a LEFT JOIN LATERAL,\n forcing a nested loop, the query[2] runs in 3ms.\n\n[1] https://explain.depesz.com/s/8IL3\n [2] https://explain.depesz.com/s/f8Q9\n\n\n\n\nMaybe I am wrong, but probably you have to do more than\n just change LEFT JOIN to LATERAL JOIN. Lateral join is based\n on correlated subquery - so you had to push some predicates\n to subquery - and then the query can be much more effective.\n\n\n\nRegards\n\n\nPavel\n\n\n\n\n\n\n\n \n\n\n\n \n\n PostgreSQL version is 11.5, I have run analyze on all the\n tables.\n\n PG settings:\n\nname |setting |unit|\n-------------------------------|---------|----|\nautovacuum |on | |\ndefault_statistics_target |250 | |\neffective_cache_size |983040 |8kB |\neffective_io_concurrency |200 | |\nmax_parallel_workers |6 | |\nmax_parallel_workers_per_gather|3 | |\nrandom_page_cost |1.1 | |\nwork_mem |51200 |kB |\n\n\n\n\n\n\n I'm sorry, I am not sure I understood.\n\n This is the altered query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ,\n T2.CarAti, T1.CarCod, T1.EmpCod, T2.CarFan, T1.PesDatAge,\n T1.PesCod, \n COALESCE( T3.PesDatAnt, DATE '00010101') AS\n PesDatAnt \n FROM ((public.Pessoa T1 \n INNER JOIN public.Carteira T2 ON T2.EmpCod =\n T1.EmpCod AND T2.CarCod = T1.CarCod) \n LEFT JOIN LATERAL (SELECT\n MIN(COALESCE( T5.ConVenAnt, DATE '00010101')) AS PesDatAnt,\n T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod \n FROM (public.Contrato T4\n \n LEFT JOIN (SELECT\n MIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n \n FROM\n public.ContratoParcela T5\n WHERE\n ConParAti = true \n and\n ConParValSal > 0 \n \n GROUP\n BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod =\n T4.EmpCod AND T5.CarCod = T4.CarCod AND T5.ConPesCod =\n T4.ConPesCod AND T5.ConSeq = T4.ConSeq) \n WHERE T4.ConAti = TRUE\n AND t4.EmpCod =\n T1.EmpCod AND t4.CarCod = T1.CarCod AND t4.ConPesCod = T1.PesCod\n GROUP BY T4.EmpCod,\n T4.CarCod, T4.ConPesCod ) T3 ON TRUE ) --ON t3.EmpCod =\n T1.EmpCod AND t3.CarCod = T1.CarCod AND t3.ConPesCod =\n T1.PesCod) \n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and 
(UPPER(T1.PesNom) like UPPER('%MARIA%')) \n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nIn bold are the changes I've made to the query. I am sure\n PostgreSQL is able to push it down, since it is much faster now.\nThe problem I have is that this is a query generated by an ORM,\n So I can't change it.\n\nI would like to understand why wasn't Postgres able to optimize\n it to a nested loop. Is there something I can do with the\n statistics?\n\n Thanks!!",
"msg_date": "Fri, 22 Nov 2019 15:48:22 -0300",
"msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Join over Nested Loop"
},
{
"msg_contents": "pá 22. 11. 2019 v 19:42 odesílatel Luís Roberto Weck <\[email protected]> napsal:\n\n> Em 22/11/2019 14:55, Pavel Stehule escreveu:\n>\n>\n>\n> pá 22. 11. 2019 v 18:37 odesílatel Luís Roberto Weck <\n> [email protected]> napsal:\n>\n>> Hey,\n>>\n>> I'm trying to figure out why Postgres is choosing a Hash Join over a\n>> Nested Loop in this query:\n>>\n>> SELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ, T2.CarAti,\n>> T1.CarCod, T1.EmpCod,\n>> T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE( T3.PesDatAnt, DATE\n>> '00010101') AS PesDatAnt\n>> FROM ((public.Pessoa T1\n>> INNER JOIN public.Carteira T2 ON T2.EmpCod = T1.EmpCod AND\n>> T2.CarCod = T1.CarCod)\n>> LEFT JOIN (SELECT MIN(COALESCE( T5.ConVenAnt, DATE\n>> '00010101')) AS PesDatAnt, T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod\n>> FROM (public.Contrato T4\n>> LEFT JOIN (SELECT MIN(ConParDatVen) AS\n>> ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n>> FROM\n>> public.ContratoParcela T5\n>> WHERE ConParAti = true\n>> AND ConParValSal > 0\n>> GROUP BY EmpCod, CarCod,\n>> ConPesCod, ConSeq ) T5 ON T5.EmpCod = T4.EmpCod AND\n>>\n>> T5.CarCod = T4.CarCod AND\n>>\n>> T5.ConPesCod = T4.ConPesCod AND\n>>\n>> T5.ConSeq = T4.ConSeq)\n>> WHERE T4.ConAti = TRUE\n>> GROUP BY T4.EmpCod, T4.CarCod, T4.ConPesCod )\n>> T3 ON t3.EmpCod = T1.EmpCod AND\n>>\n>> t3.CarCod = T1.CarCod AND\n>>\n>> t3.ConPesCod = T1.PesCod)\n>> WHERE (T2.CarAti = true)\n>> AND (T1.EmpCod = 112)\n>> and (UPPER(T1.PesNom) like UPPER('%MARIA%'))\n>> ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n>>\n>> Here the Hash Join[1] plan takes ~700ms, and if I change the first LEFT\n>> JOIN to a LEFT JOIN LATERAL, forcing a nested loop, the query[2] runs in\n>> 3ms.\n>>\n>> [1] https://explain.depesz.com/s/8IL3\n>> [2] https://explain.depesz.com/s/f8Q9\n>>\n>>\n> Maybe I am wrong, but probably you have to do more than just change LEFT\n> JOIN to LATERAL JOIN. 
Lateral join is based on correlated subquery - so you\n> had to push some predicates to subquery - and then the query can be much\n> more effective.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>> PostgreSQL version is 11.5, I have run analyze on all the tables.\n>>\n>> PG settings:\n>>\n>> name |setting |unit|\n>> -------------------------------|---------|----|\n>> autovacuum |on | |\n>> default_statistics_target |250 | |\n>> effective_cache_size |983040 |8kB |\n>> effective_io_concurrency |200 | |\n>> max_parallel_workers |6 | |\n>> max_parallel_workers_per_gather|3 | |\n>> random_page_cost |1.1 | |\n>> work_mem |51200 |kB |\n>>\n>\n> I'm sorry, I am not sure I understood.\n>\n> This is the altered query:\n>\n> SELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ, T2.CarAti,\n> T1.CarCod, T1.EmpCod, T2.CarFan, T1.PesDatAge, T1.PesCod,\n> COALESCE( T3.PesDatAnt, DATE '00010101') AS PesDatAnt\n> FROM ((public.Pessoa T1\n> INNER JOIN public.Carteira T2 ON T2.EmpCod = T1.EmpCod AND\n> T2.CarCod = T1.CarCod)\n> LEFT JOIN *LATERAL *(SELECT MIN(COALESCE( T5.ConVenAnt, DATE\n> '00010101')) AS PesDatAnt, T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod\n> FROM (public.Contrato T4\n> LEFT JOIN (SELECT\n> MIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\n> FROM\n> public.ContratoParcela T5\n> WHERE ConParAti = true\n> and ConParValSal > 0\n>\n> GROUP BY EmpCod,\n> CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod = T4.EmpCod AND T5.CarCod =\n> T4.CarCod AND T5.ConPesCod = T4.ConPesCod AND T5.ConSeq = T4.ConSeq)\n> WHERE T4.ConAti = TRUE\n> *AND t4.EmpCod = T1.EmpCod AND\n> t4.CarCod = T1.CarCod AND t4.ConPesCod = T1.PesCod*\n> GROUP BY T4.EmpCod, T4.CarCod,\n> T4.ConPesCod ) T3 ON *TRUE ) --ON t3.EmpCod = T1.EmpCod AND t3.CarCod =\n> T1.CarCod AND t3.ConPesCod = T1.PesCod) *\n> WHERE (T2.CarAti = true)\n> AND (T1.EmpCod = 112)\n> and (UPPER(T1.PesNom) like UPPER('%MARIA%'))\n> ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n>\n> In bold are the changes I've made to the query. I am sure PostgreSQL is\n> able to push it down, since it is much faster now. The problem I have is\n> that this is a query generated by an ORM, So I can't change it.\n>\n> I would like to understand why wasn't Postgres able to optimize it to a\n> nested loop. Is there something I can do with the statistics?\n>\n\nI don't think. Postgres optimizer just doesn't support this optimization.\nIt has sense only when you know so number of loops is very small - else\nnested loop should be much slower.\n\nRegards\n\nPavel\n\n\n\n> Thanks!!\n>\n>\n\npá 22. 11. 2019 v 19:42 odesílatel Luís Roberto Weck <[email protected]> napsal:\n\n\nEm 22/11/2019 14:55, Pavel Stehule\r\n escreveu:\n\n\n\n\n\n\n\npá 22. 11. 
2019 v 18:37\r\n odesílatel Luís Roberto Weck <[email protected]>\r\n napsal:\n\n\n\nHey, \n\r\n I'm trying to figure out why Postgres is choosing a\r\n Hash Join over a Nested Loop in this query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal,\r\n T1.PesCPFCNPJ, T2.CarAti, T1.CarCod, T1.EmpCod, \r\n T2.CarFan, T1.PesDatAge, T1.PesCod, COALESCE(\r\n T3.PesDatAnt, DATE '00010101') AS PesDatAnt \n FROM ((public.Pessoa T1 \n INNER JOIN public.Carteira T2 ON\r\n T2.EmpCod = T1.EmpCod AND T2.CarCod = T1.CarCod) \n LEFT JOIN (SELECT MIN(COALESCE(\r\n T5.ConVenAnt, DATE '00010101')) AS PesDatAnt,\r\n T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod \n FROM\r\n (public.Contrato T4 \n LEFT JOIN \r\n (SELECT MIN(ConParDatVen) AS ConVenAnt, EmpCod,\r\n CarCod, ConPesCod, ConSeq \n \r\n FROM public.ContratoParcela T5\n \r\n WHERE ConParAti = true \n \r\n AND ConParValSal > 0 \n \r\n GROUP BY EmpCod, CarCod, ConPesCod, ConSeq )\r\n T5 ON T5.EmpCod = T4.EmpCod AND \r\n \r\n \r\n T5.CarCod = T4.CarCod AND \r\n \r\n \r\n T5.ConPesCod = T4.ConPesCod AND \r\n \r\n \r\n T5.ConSeq = T4.ConSeq) \n WHERE T4.ConAti =\r\n TRUE\n GROUP BY\r\n T4.EmpCod, T4.CarCod, T4.ConPesCod ) T3 ON\r\n t3.EmpCod = T1.EmpCod AND \r\n \r\n t3.CarCod = T1.CarCod\r\n AND \r\n \r\n t3.ConPesCod = T1.PesCod)\r\n \n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like\r\n UPPER('%MARIA%')) \n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nHere the Hash Join[1] plan takes ~700ms, and if I\r\n change the first LEFT JOIN to a LEFT JOIN LATERAL,\r\n forcing a nested loop, the query[2] runs in 3ms.\n\n[1] https://explain.depesz.com/s/8IL3\r\n [2] https://explain.depesz.com/s/f8Q9\n\n\n\n\nMaybe I am wrong, but probably you have to do more than\r\n just change LEFT JOIN to LATERAL JOIN. 
Lateral join is based\r\n on correlated subquery - so you had to push some predicates\r\n to subquery - and then the query can be much more effective.\n\n\n\nRegards\n\n\nPavel\n\n\n\n\n\n\n\n \n\n\n\n \n\r\n PostgreSQL version is 11.5, I have run analyze on all the\r\n tables.\n\r\n PG settings:\n\nname |setting |unit|\n-------------------------------|---------|----|\nautovacuum |on | |\ndefault_statistics_target |250 | |\neffective_cache_size |983040 |8kB |\neffective_io_concurrency |200 | |\nmax_parallel_workers |6 | |\nmax_parallel_workers_per_gather|3 | |\nrandom_page_cost |1.1 | |\nwork_mem |51200 |kB |\n\n\n\n\n\n\r\n I'm sorry, I am not sure I understood.\n\r\n This is the altered query:\n\nSELECT T1.PesID, T1.PesNom, T1.PesValSal, T1.PesCPFCNPJ,\r\n T2.CarAti, T1.CarCod, T1.EmpCod, T2.CarFan, T1.PesDatAge,\r\n T1.PesCod, \n COALESCE( T3.PesDatAnt, DATE '00010101') AS\r\n PesDatAnt \n FROM ((public.Pessoa T1 \n INNER JOIN public.Carteira T2 ON T2.EmpCod =\r\n T1.EmpCod AND T2.CarCod = T1.CarCod) \n LEFT JOIN LATERAL (SELECT\r\n MIN(COALESCE( T5.ConVenAnt, DATE '00010101')) AS PesDatAnt,\r\n T4.EmpCod, T4.CarCod, T4.ConPesCod AS ConPesCod \n FROM (public.Contrato T4\r\n \n LEFT JOIN (SELECT\r\n MIN(ConParDatVen) AS ConVenAnt, EmpCod, CarCod, ConPesCod, ConSeq\r\n \n FROM\r\n public.ContratoParcela T5\n WHERE\r\n ConParAti = true \n and\r\n ConParValSal > 0 \n \n GROUP\r\n BY EmpCod, CarCod, ConPesCod, ConSeq ) T5 ON T5.EmpCod =\r\n T4.EmpCod AND T5.CarCod = T4.CarCod AND T5.ConPesCod =\r\n T4.ConPesCod AND T5.ConSeq = T4.ConSeq) \n WHERE T4.ConAti = TRUE\n AND t4.EmpCod =\r\n T1.EmpCod AND t4.CarCod = T1.CarCod AND t4.ConPesCod = T1.PesCod\n GROUP BY T4.EmpCod,\r\n T4.CarCod, T4.ConPesCod ) T3 ON TRUE ) --ON t3.EmpCod =\r\n T1.EmpCod AND t3.CarCod = T1.CarCod AND t3.ConPesCod =\r\n T1.PesCod) \n WHERE (T2.CarAti = true)\n AND (T1.EmpCod = 112)\n and (UPPER(T1.PesNom) like UPPER('%MARIA%')) \n ORDER BY T1.EmpCod, T1.CarCod, T1.PesCod\n\nIn bold are the changes I've made to the query. I am sure\r\n PostgreSQL is able to push it down, since it is much faster now.\nThe problem I have is that this is a query generated by an ORM,\r\n So I can't change it.\n\nI would like to understand why wasn't Postgres able to optimize\r\n it to a nested loop. Is there something I can do with the\r\n statistics?I don't think. Postgres optimizer just doesn't support this optimization. It has sense only when you know so number of loops is very small - else nested loop should be much slower. RegardsPavel \n\r\n Thanks!!",
"msg_date": "Fri, 22 Nov 2019 19:54:43 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Join over Nested Loop"
}
] |
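To make the LATERAL rewrite discussed above concrete, here is a minimal sketch with hypothetical customers/orders tables (not the schema from the thread): the correlation predicate that cannot be pushed into the plain aggregating subquery is written inside the LATERAL subquery, so the aggregate is recomputed per qualifying outer row, which is what makes the nested-loop plan cheap when the outer filter is selective.

-- plain form: the subquery aggregates the whole orders table before the join
SELECT c.id, c.name, t.first_due
FROM customers c
LEFT JOIN (SELECT o.customer_id, MIN(o.due_date) AS first_due
           FROM orders o
           WHERE o.active
           GROUP BY o.customer_id) t ON t.customer_id = c.id
WHERE c.name ILIKE '%maria%';

-- LATERAL form: the join condition moves into the subquery's WHERE clause,
-- and the join condition itself becomes ON TRUE
SELECT c.id, c.name, t.first_due
FROM customers c
LEFT JOIN LATERAL (SELECT MIN(o.due_date) AS first_due
                   FROM orders o
                   WHERE o.active
                     AND o.customer_id = c.id) t ON TRUE
WHERE c.name ILIKE '%maria%';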
[
{
"msg_contents": "Hey all,\nI'm testing performance of two identical machines one in 9.6 and the second\none is in 12. The second machine is a clone of the first one + db upgrade\nto 12 beta 3 (Yes I'm aware 12.1 was released).\n\nmachine stats :\n32gb ram\n8 cpu\nregular hd (not ssd)\n\nmy postgresql.confg settings:\n\nmax_wal_size = 2GB\nmin_wal_size = 1GB\nwal_buffers = 16MB\ncheckpoint_completion_target = 0.9\ncheckpoint_timeout = 30min\nlog_checkpoints = on\nlog_lock_waits = on\nlog_temp_files = 1024\nlog_min_duration_statement = 1000\nlog_autovacuum_min_duration = 5000\nautovacuum_max_workers = 4\nautovacuum_vacuum_cost_limit = 800\nautovacuum_vacuum_cost_delay = 10ms\nstandard_conforming_strings = off\nmax_locks_per_transaction = 5000\nmax_connections = 500\nlog_line_prefix = '%t %d %p '\nrandom_page_cost = 2.0\ndeadlock_timeout = 5s\nshared_preload_libraries = 'pg_stat_statements'\ntrack_activity_query_size = 32764\nmaintenance_work_mem = 250MB\nwork_mem = 32MB\nshared_buffers = 8058MB\neffective_cache_size = 16116MB\n\nin 12v I also added the following settings :\nlog_directory = 'pg_log'\nenable_partitionwise_join = on\nenable_partitionwise_aggregate = on\nmax_worker_processes = 8 # (change requires restart)\nmax_parallel_workers_per_gather = 4 # taken from max_parallel_workers\nmax_parallel_workers = 8 # maximum number of max_worker_pr\n\nI tested a few applications flows and I saw that the 9.6 version is faster.\nI also did a few simple tests (enabled \\timing) :\n\n12v :\npostgres=# create table test1 as select generate_series(1,10000);\nSELECT 10000\nTime: 35.099 ms\n\npostgres=# select count(*) from test1;\n count\n-------\n 10000\n(1 row)\n\nTime: 4.819 ms\n\n9.6v :\npostgres=# create table test1 as select generate_series(1,10000);\nSELECT 10000\nTime: 19.962 ms\n\npostgres=# select count(*) from test1;\n count\n-------\n 10000\n(1 row)\n\nTime: 1.541 ms\n\nAny idea what can cause it ? What can I check?\nThis degredation is visible in many queries that we use ..\n\nAfter the upgrade to 12v version I run analyze on all tables..\n\nThanks.\n\nHey all,I'm testing performance of two identical machines one in 9.6 and the second one is in 12. The second machine is a clone of the first one + db upgrade to 12 beta 3 (Yes I'm aware 12.1 was released).machine stats : 32gb ram8 cpuregular hd (not ssd)my postgresql.confg settings: max_wal_size = 2GBmin_wal_size = 1GBwal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minlog_checkpoints = onlog_lock_waits = onlog_temp_files = 1024log_min_duration_statement = 1000log_autovacuum_min_duration = 5000autovacuum_max_workers = 4autovacuum_vacuum_cost_limit = 800autovacuum_vacuum_cost_delay = 10msstandard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 500log_line_prefix = '%t %d %p 'random_page_cost = 2.0deadlock_timeout = 5sshared_preload_libraries = 'pg_stat_statements'track_activity_query_size = 32764maintenance_work_mem = 250MBwork_mem = 32MBshared_buffers = 8058MBeffective_cache_size = 16116MBin 12v I also added the following settings : log_directory = 'pg_log'enable_partitionwise_join = onenable_partitionwise_aggregate = onmax_worker_processes = 8 # (change requires restart)max_parallel_workers_per_gather = 4 # taken from max_parallel_workersmax_parallel_workers = 8 # maximum number of max_worker_prI tested a few applications flows and I saw that the 9.6 version is faster. 
I also did a few simple tests (enabled \\timing) : 12v : postgres=# create table test1 as select generate_series(1,10000);SELECT 10000Time: 35.099 mspostgres=# select count(*) from test1; count------- 10000(1 row)Time: 4.819 ms9.6v : postgres=# create table test1 as select generate_series(1,10000);SELECT 10000Time: 19.962 mspostgres=# select count(*) from test1; count------- 10000(1 row)Time: 1.541 msAny idea what can cause it ? What can I check? This degredation is visible in many queries that we use ..After the upgrade to 12v version I run analyze on all tables..Thanks.",
"msg_date": "Sun, 24 Nov 2019 14:53:19 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "Hello,\ndid you run ananlyze on your db?\n\nLe dim. 24 nov. 2019 à 13:53, Mariel Cherkassky <[email protected]>\na écrit :\n\n> Hey all,\n> I'm testing performance of two identical machines one in 9.6 and the\n> second one is in 12. The second machine is a clone of the first one + db\n> upgrade to 12 beta 3 (Yes I'm aware 12.1 was released).\n>\n> machine stats :\n> 32gb ram\n> 8 cpu\n> regular hd (not ssd)\n>\n> my postgresql.confg settings:\n>\n> max_wal_size = 2GB\n> min_wal_size = 1GB\n> wal_buffers = 16MB\n> checkpoint_completion_target = 0.9\n> checkpoint_timeout = 30min\n> log_checkpoints = on\n> log_lock_waits = on\n> log_temp_files = 1024\n> log_min_duration_statement = 1000\n> log_autovacuum_min_duration = 5000\n> autovacuum_max_workers = 4\n> autovacuum_vacuum_cost_limit = 800\n> autovacuum_vacuum_cost_delay = 10ms\n> standard_conforming_strings = off\n> max_locks_per_transaction = 5000\n> max_connections = 500\n> log_line_prefix = '%t %d %p '\n> random_page_cost = 2.0\n> deadlock_timeout = 5s\n> shared_preload_libraries = 'pg_stat_statements'\n> track_activity_query_size = 32764\n> maintenance_work_mem = 250MB\n> work_mem = 32MB\n> shared_buffers = 8058MB\n> effective_cache_size = 16116MB\n>\n> in 12v I also added the following settings :\n> log_directory = 'pg_log'\n> enable_partitionwise_join = on\n> enable_partitionwise_aggregate = on\n> max_worker_processes = 8 # (change requires restart)\n> max_parallel_workers_per_gather = 4 # taken from max_parallel_workers\n> max_parallel_workers = 8 # maximum number of max_worker_pr\n>\n> I tested a few applications flows and I saw that the 9.6 version is\n> faster. I also did a few simple tests (enabled \\timing) :\n>\n> 12v :\n> postgres=# create table test1 as select generate_series(1,10000);\n> SELECT 10000\n> Time: 35.099 ms\n>\n> postgres=# select count(*) from test1;\n> count\n> -------\n> 10000\n> (1 row)\n>\n> Time: 4.819 ms\n>\n> 9.6v :\n> postgres=# create table test1 as select generate_series(1,10000);\n> SELECT 10000\n> Time: 19.962 ms\n>\n> postgres=# select count(*) from test1;\n> count\n> -------\n> 10000\n> (1 row)\n>\n> Time: 1.541 ms\n>\n> Any idea what can cause it ? What can I check?\n> This degredation is visible in many queries that we use ..\n>\n> After the upgrade to 12v version I run analyze on all tables..\n>\n> Thanks.\n>\n\nHello, did you run ananlyze on your db? Le dim. 24 nov. 2019 à 13:53, Mariel Cherkassky <[email protected]> a écrit :Hey all,I'm testing performance of two identical machines one in 9.6 and the second one is in 12. 
The second machine is a clone of the first one + db upgrade to 12 beta 3 (Yes I'm aware 12.1 was released).machine stats : 32gb ram8 cpuregular hd (not ssd)my postgresql.confg settings: max_wal_size = 2GBmin_wal_size = 1GBwal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minlog_checkpoints = onlog_lock_waits = onlog_temp_files = 1024log_min_duration_statement = 1000log_autovacuum_min_duration = 5000autovacuum_max_workers = 4autovacuum_vacuum_cost_limit = 800autovacuum_vacuum_cost_delay = 10msstandard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 500log_line_prefix = '%t %d %p 'random_page_cost = 2.0deadlock_timeout = 5sshared_preload_libraries = 'pg_stat_statements'track_activity_query_size = 32764maintenance_work_mem = 250MBwork_mem = 32MBshared_buffers = 8058MBeffective_cache_size = 16116MBin 12v I also added the following settings : log_directory = 'pg_log'enable_partitionwise_join = onenable_partitionwise_aggregate = onmax_worker_processes = 8 # (change requires restart)max_parallel_workers_per_gather = 4 # taken from max_parallel_workersmax_parallel_workers = 8 # maximum number of max_worker_prI tested a few applications flows and I saw that the 9.6 version is faster. I also did a few simple tests (enabled \\timing) : 12v : postgres=# create table test1 as select generate_series(1,10000);SELECT 10000Time: 35.099 mspostgres=# select count(*) from test1; count------- 10000(1 row)Time: 4.819 ms9.6v : postgres=# create table test1 as select generate_series(1,10000);SELECT 10000Time: 19.962 mspostgres=# select count(*) from test1; count------- 10000(1 row)Time: 1.541 msAny idea what can cause it ? What can I check? This degredation is visible in many queries that we use ..After the upgrade to 12v version I run analyze on all tables..Thanks.",
"msg_date": "Sun, 24 Nov 2019 14:15:16 +0100",
"msg_from": "Thomas Poty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "Hi there - \nI have same feelings. Try set max_parallel_workers_per_gather to zero. I don't think that comparison non-parallel and parallel versions is correct (don't say anything about parallel in 9.6 pls) \nWhat explain says? I suppose you will have different exec plans. Optimizer stranges of 11 and 12 ver have been discussed. Look thru the archive, but I didn't remember the problem status - resolved or not.\nAndrew. \n\n24 ноября 2019 г. 15:53:19 GMT+03:00, Mariel Cherkassky <[email protected]> пишет:\n>Hey all,\n>I'm testing performance of two identical machines one in 9.6 and the\n>second\n>one is in 12. The second machine is a clone of the first one + db\n>upgrade\n>to 12 beta 3 (Yes I'm aware 12.1 was released).\n>\n>machine stats :\n>32gb ram\n>8 cpu\n>regular hd (not ssd)\n>\n>my postgresql.confg settings:\n>\n>max_wal_size = 2GB\n>min_wal_size = 1GB\n>wal_buffers = 16MB\n>checkpoint_completion_target = 0.9\n>checkpoint_timeout = 30min\n>log_checkpoints = on\n>log_lock_waits = on\n>log_temp_files = 1024\n>log_min_duration_statement = 1000\n>log_autovacuum_min_duration = 5000\n>autovacuum_max_workers = 4\n>autovacuum_vacuum_cost_limit = 800\n>autovacuum_vacuum_cost_delay = 10ms\n>standard_conforming_strings = off\n>max_locks_per_transaction = 5000\n>max_connections = 500\n>log_line_prefix = '%t %d %p '\n>random_page_cost = 2.0\n>deadlock_timeout = 5s\n>shared_preload_libraries = 'pg_stat_statements'\n>track_activity_query_size = 32764\n>maintenance_work_mem = 250MB\n>work_mem = 32MB\n>shared_buffers = 8058MB\n>effective_cache_size = 16116MB\n>\n>in 12v I also added the following settings :\n>log_directory = 'pg_log'\n>enable_partitionwise_join = on\n>enable_partitionwise_aggregate = on\n>max_worker_processes = 8 # (change requires restart)\n>max_parallel_workers_per_gather = 4 # taken from\n>max_parallel_workers\n>max_parallel_workers = 8 # maximum number of\n>max_worker_pr\n>\n>I tested a few applications flows and I saw that the 9.6 version is\n>faster.\n>I also did a few simple tests (enabled \\timing) :\n>\n>12v :\n>postgres=# create table test1 as select generate_series(1,10000);\n>SELECT 10000\n>Time: 35.099 ms\n>\n>postgres=# select count(*) from test1;\n> count\n>-------\n> 10000\n>(1 row)\n>\n>Time: 4.819 ms\n>\n>9.6v :\n>postgres=# create table test1 as select generate_series(1,10000);\n>SELECT 10000\n>Time: 19.962 ms\n>\n>postgres=# select count(*) from test1;\n> count\n>-------\n> 10000\n>(1 row)\n>\n>Time: 1.541 ms\n>\n>Any idea what can cause it ? What can I check?\n>This degredation is visible in many queries that we use ..\n>\n>After the upgrade to 12v version I run analyze on all tables..\n>\n>Thanks.\n\n------------------\nС уважением,\nАндрей Захаров\nHi there - I have same feelings. Try set max_parallel_workers_per_gather to zero. I don't think that comparison non-parallel and parallel versions is correct (don't say anything about parallel in 9.6 pls) What explain says? I suppose you will have different exec plans. Optimizer stranges of 11 and 12 ver have been discussed. Look thru the archive, but I didn't remember the problem status - resolved or not.Andrew. 24 ноября 2019 г. 15:53:19 GMT+03:00, Mariel Cherkassky <[email protected]> пишет:\nHey all,I'm testing performance of two identical machines one in 9.6 and the second one is in 12. 
The second machine is a clone of the first one + db upgrade to 12 beta 3 (Yes I'm aware 12.1 was released).machine stats : 32gb ram8 cpuregular hd (not ssd)my postgresql.confg settings: max_wal_size = 2GBmin_wal_size = 1GBwal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minlog_checkpoints = onlog_lock_waits = onlog_temp_files = 1024log_min_duration_statement = 1000log_autovacuum_min_duration = 5000autovacuum_max_workers = 4autovacuum_vacuum_cost_limit = 800autovacuum_vacuum_cost_delay = 10msstandard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 500log_line_prefix = '%t %d %p 'random_page_cost = 2.0deadlock_timeout = 5sshared_preload_libraries = 'pg_stat_statements'track_activity_query_size = 32764maintenance_work_mem = 250MBwork_mem = 32MBshared_buffers = 8058MBeffective_cache_size = 16116MBin 12v I also added the following settings : log_directory = 'pg_log'enable_partitionwise_join = onenable_partitionwise_aggregate = onmax_worker_processes = 8 # (change requires restart)max_parallel_workers_per_gather = 4 # taken from max_parallel_workersmax_parallel_workers = 8 # maximum number of max_worker_prI tested a few applications flows and I saw that the 9.6 version is faster. I also did a few simple tests (enabled \\timing) : 12v : postgres=# create table test1 as select generate_series(1,10000);SELECT 10000Time: 35.099 mspostgres=# select count(*) from test1; count------- 10000(1 row)Time: 4.819 ms9.6v : postgres=# create table test1 as select generate_series(1,10000);SELECT 10000Time: 19.962 mspostgres=# select count(*) from test1; count------- 10000(1 row)Time: 1.541 msAny idea what can cause it ? What can I check? This degredation is visible in many queries that we use ..After the upgrade to 12v version I run analyze on all tables..Thanks.\nС уважением,Андрей Захаров",
"msg_date": "Sun, 24 Nov 2019 16:19:20 +0300",
"msg_from": "Andrew Zakharov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
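The max_parallel_workers_per_gather change Andrew suggests can be tried per session before touching postgresql.conf; a small sketch (test1 is the table from the example above):

-- session-only test
SET max_parallel_workers_per_gather = 0;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM test1;
RESET max_parallel_workers_per_gather;

-- or make it permanent cluster-wide (a reload is enough, no restart needed)
ALTER SYSTEM SET max_parallel_workers_per_gather = 0;
SELECT pg_reload_conf();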
{
"msg_contents": "Hey Andrew,\nIt seems that changing this parameter worked for me.\nSetting it to zero means that there wont be any parallel workers for one\nquery right ?\nIs it something familiar this problem with the gatherers ?\n\nHey Andrew,It seems that changing this parameter worked for me.Setting it to zero means that there wont be any parallel workers for one query right ?Is it something familiar this problem with the gatherers ?",
"msg_date": "Sun, 24 Nov 2019 15:51:50 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 7:53 AM Mariel Cherkassky <\[email protected]> wrote:\n\nThe second machine is a clone of the first one + db upgrade to 12 beta 3\n> (Yes I'm aware 12.1 was released).\n>\n\nSo then fix it. Why spend time investigating obsolete software? Was\n12Beta3 compiled with --enable-cassert?\n\nCheers,\n\nJeff\n\n>\n\nOn Sun, Nov 24, 2019 at 7:53 AM Mariel Cherkassky <[email protected]> wrote:The second machine is a clone of the first one + db upgrade to 12 beta 3 (Yes I'm aware 12.1 was released).So then fix it. Why spend time investigating obsolete software? Was 12Beta3 compiled with --enable-cassert?Cheers,Jeff",
"msg_date": "Sun, 24 Nov 2019 10:22:04 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 8:52 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hey Andrew,\n> It seems that changing this parameter worked for me.\n> Setting it to zero means that there wont be any parallel workers for one\n> query right ?\n> Is it something familiar this problem with the gatherers ?\n>\n\nYour example would not be using parallel workers anyway, regardless of the\nsetting of max_parallel_workers_per_gather, so I don't see how changing\nthis could have worked for you. Unless you mean it worked in your full\ntest, rather than in your test case. I doubt your test case benchmarking\nwas very reliable to start with, you only show a single execution and\ndidn't indicate you had more unshown ones.\n\nIf I do more credible benchmarking, I do get a performance regression but\nit closer is to 16% than to 3 fold. And it doesn't depend on the setting\nof max_parallel_workers_per_gather. I doubt a regression of this size is\neven worth investigating.\n\npgbench -T300 -P5 -f <(echo \"select count(*) from test1\") -p 9912 -n -M\nprepared\n\nCheers,\n\nJeff\n\nOn Sun, Nov 24, 2019 at 8:52 AM Mariel Cherkassky <[email protected]> wrote:Hey Andrew,It seems that changing this parameter worked for me.Setting it to zero means that there wont be any parallel workers for one query right ?Is it something familiar this problem with the gatherers ? Your example would not be using parallel workers anyway, regardless of the setting of max_parallel_workers_per_gather, so I don't see how changing this could have worked for you. Unless you mean it worked in your full test, rather than in your test case. I doubt your test case benchmarking was very reliable to start with, you only show a single execution and didn't indicate you had more unshown ones.If I do more credible benchmarking, I do get a performance regression but it closer is to 16% than to 3 fold. And it doesn't depend on the setting of max_parallel_workers_per_gather. I doubt a regression of this size is even worth investigating.pgbench -T300 -P5 -f <(echo \"select count(*) from test1\") -p 9912 -n -M preparedCheers,Jeff",
"msg_date": "Sun, 24 Nov 2019 10:30:02 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
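A sketch of how Jeff's pgbench comparison could be run against both clusters; the ports (5432 for 9.6, 5433 for 12) and the postgres database name are placeholders, not values from the thread:

echo "select count(*) from test1;" > count.sql
pgbench -T300 -P5 -n -M prepared -f count.sql -p 5432 postgres   # 9.6 cluster
pgbench -T300 -P5 -n -M prepared -f count.sql -p 5433 postgres   # 12 cluster
PGOPTIONS="-c max_parallel_workers_per_gather=0" \
  pgbench -T300 -P5 -n -M prepared -f count.sql -p 5433 postgres # 12, parallel gather disabled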
{
"msg_contents": "Hey Jeff,\nThis example was only used to show that pg96 had better perfomance than\npg12 in a very simple case.\n In all the tests that I run most of the queries took less time on 9.6`s\nversion. I dont know why, but as you can see after disabling the parameter\nthe simple test that I did showed different results. I intend to test this\ntheory tomorrow. I'm going to disable the parameter and run the same\napplication flows that I have on both machines (9.6 vs 12 with zero value\nfor the param).\n\nI didnt send this mail after doing just one simple test, I have more than\n100 queries that work better on 9.6 . If u have any explanation I will be\nhappy to hear.\nI'll update tomorrow once I'll have the results..\n\nHey Jeff,This example was only used to show that pg96 had better perfomance than pg12 in a very simple case. In all the tests that I run most of the queries took less time on 9.6`s version. I dont know why, but as you can see after disabling the parameter the simple test that I did showed different results. I intend to test this theory tomorrow. I'm going to disable the parameter and run the same application flows that I have on both machines (9.6 vs 12 with zero value for the param).I didnt send this mail after doing just one simple test, I have more than 100 queries that work better on 9.6 . If u have any explanation I will be happy to hear.I'll update tomorrow once I'll have the results..",
"msg_date": "Sun, 24 Nov 2019 20:05:39 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "Op 24-11-2019 om 19:05 schreef Mariel Cherkassky:\n> Hey Jeff,\n> This example was only used to show that pg96 had better perfomance \n> than pg12 in a very simple case.\n> In all the tests that I run most of the queries took less time on \n> 9.6`s version. I dont know why, but as you can see after \n> disabling the parameter the simple test that I did showed different \n> results. I intend to test this theory tomorrow. I'm going to disable \n> the parameter and run the same application flows that I have on both \n> machines (9.6 vs 12 with zero value for the param).\n>\n> I didnt send this mail after doing just one simple test, I have more \n> than 100 queries that work better on 9.6 . If u have any explanation I \n> will be happy to hear.\n> I'll update tomorrow once I'll have the results..\n>\nI've had the same experience with parallel query. By default parallel \nquery is disabled in 9.6. When we upgraded from 9.6 to 10 is was \nsignificant slower till I disabled parallel query.\n\nWe have a very small database (40 tables and all together max 4GB) and \nwe have no long running queries (the largest queries run max 2 to 3 \nseconds). That's when parallel query gives a performance degrade in my \nopinion.\n\nBest regards,\n\nJohn Felix\n\n\n\n\n\n\n\n\nOp 24-11-2019 om 19:05 schreef Mariel\n Cherkassky:\n\n\n\n\nHey Jeff,\nThis example was only used to show that pg96 had\n better perfomance than pg12 in a very simple case.\n In all the tests that I run most of the queries\n took less time on 9.6`s version. I dont know why, but as you\n can see after disabling the parameter the simple test that I\n did showed different results. I intend to test this theory\n tomorrow. I'm going to disable the parameter and run the same\n application flows that I have on both machines (9.6 vs 12 with\n zero value for the param).\n\n\nI didnt send this mail after doing just one\n simple test, I have more than 100 queries that work better on\n 9.6 . If u have any explanation I will be happy to hear.\nI'll update tomorrow once I'll have the results..\n\n\n\n\n\n\n\n\nI've had the same experience with parallel query. By default\n parallel query is disabled in 9.6. When we upgraded from 9.6 to 10\n is was significant slower till I disabled parallel query.\nWe have a very small database (40 tables and all together max\n 4GB) and we have no long running queries (the largest queries run\n max 2 to 3 seconds). That's when parallel query gives a\n performance degrade in my opinion.\n\nBest regards,\nJohn Felix",
"msg_date": "Sun, 24 Nov 2019 21:00:48 +0100",
"msg_from": "John Felix <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "On Sun, Nov 24, 2019 at 1:05 PM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hey Jeff,\n> This example was only used to show that pg96 had better perfomance than\n> pg12 in a very simple case.\n>\n\nOK, but do you agree that a 15% slow down is more realistic than 3 fold\none? Or are you still getting 3 fold slow down with more careful testing\nand over a wide variety of queries?\n\nI find that the main regression (about 15%) in your example occurs in major\nversion 10, at the following commit:\n\ncommit b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755\nAuthor: Andres Freund <[email protected]>\nDate: Tue Mar 14 15:45:36 2017 -0700\n\n Faster expression evaluation and targetlist projection.\n\nIt is disappointing that this made this case slower rather than faster, and\nthat the \"future work\" alluded to either hasn't happened, or wasn't\neffective for this example. I also tested the same example, only 100 times\nmore rows, and still see the regression at about 16%. This is a major\ninfrastructure change patch which has been extensively built on since then,\nthe chances of reverting it are very small. It is making an omelette, and\nyour example is one of the eggs that got broken.\n\nPerformance changes in a large body of queries are usually not all due to\nthe same thing. Are you a position to custom compile your own PostgreSQL?\nIt would be nice to test this commit against the one before it, and see how\nmuch of the change in your real queries is explained by this one thing (or\nwhether any of it is)\n\n\n> In all the tests that I run most of the queries took less time on 9.6`s\n> version. I dont know why, but as you can see after disabling the parameter\n> the simple test that I did showed different results.\n>\n\nI can't see--You didn't post results for that. And running your test on my\nown system doesn't show that at all. In your test case,\nmax_parallel_workers_per_gather makes no difference. With 100 times more\nrows, setting it to 0 actually slows things down, as at that size\nparallelization is useful and disabling it hurts.\n\nOf course parallel query might be hurting some of the other queries, but\nfor the one example you show you will have to show something more\nconvincing for me to believe that that is what caused it.\n\nIt is easy to benchmark with something like:\n\nPGOPTIONS=\"-c max_parallel_workers_per_gather=0\" pgbench -T30 -f <(echo\n\"select count(*) from test1\") -p 9912 -n -M prepared\n\nIf it is other queries where mpwpg is making a difference, than one issue\ncould be that your settings of parallel_setup_cost and/or\nparllel_tuple_cost are too low (although I usually find the default\nsettings too high, not too low); or you are running your test concurrently\nalready and so don't need parallel query to fully load the CPUs and trying\nto use parallel query just increases the overhead; or your machine doesn't\nhave the number of truly effective CPUs you think it does.\n\nCheers,\n\nJeff\n\n>\n\nOn Sun, Nov 24, 2019 at 1:05 PM Mariel Cherkassky <[email protected]> wrote:Hey Jeff,This example was only used to show that pg96 had better perfomance than pg12 in a very simple case.OK, but do you agree that a 15% slow down is more realistic than 3 fold one? 
Or are you still getting 3 fold slow down with more careful testing and over a wide variety of queries?I find that the main regression (about 15%) in your example occurs in major version 10, at the following commit:commit b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755Author: Andres Freund <[email protected]>Date: Tue Mar 14 15:45:36 2017 -0700 Faster expression evaluation and targetlist projection. It is disappointing that this made this case slower rather than faster, and that the \"future work\" alluded to either hasn't happened, or wasn't effective for this example. I also tested the same example, only 100 times more rows, and still see the regression at about 16%. This is a major infrastructure change patch which has been extensively built on since then, the chances of reverting it are very small. It is making an omelette, and your example is one of the eggs that got broken. Performance changes in a large body of queries are usually not all due to the same thing. Are you a position to custom compile your own PostgreSQL? It would be nice to test this commit against the one before it, and see how much of the change in your real queries is explained by this one thing (or whether any of it is) In all the tests that I run most of the queries took less time on 9.6`s version. I dont know why, but as you can see after disabling the parameter the simple test that I did showed different results.I can't see--You didn't post results for that. And running your test on my own system doesn't show that at all. In your test case, max_parallel_workers_per_gather makes no difference. With 100 times more rows, setting it to 0 actually slows things down, as at that size parallelization is useful and disabling it hurts.Of course parallel query might be hurting some of the other queries, but for the one example you show you will have to show something more convincing for me to believe that that is what caused it.It is easy to benchmark with something like: PGOPTIONS=\"-c max_parallel_workers_per_gather=0\" pgbench -T30 -f <(echo \"select count(*) from test1\") -p 9912 -n -M preparedIf it is other queries where mpwpg is making a difference, than one issue could be that your settings of parallel_setup_cost and/or parllel_tuple_cost are too low (although I usually find the default settings too high, not too low); or you are running your test concurrently already and so don't need parallel query to fully load the CPUs and trying to use parallel query just increases the overhead; or your machine doesn't have the number of truly effective CPUs you think it does.Cheers,Jeff",
"msg_date": "Sun, 24 Nov 2019 15:50:20 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
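For anyone wanting to follow Jeff's suggestion of testing that commit against its parent, a rough sketch of building both from source (install prefixes are placeholders; the point is to compare otherwise identical optimized builds, so do not pass --enable-cassert for timing runs):

git clone https://git.postgresql.org/git/postgresql.git
cd postgresql
git checkout b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755^   # parent commit
./configure --prefix=$HOME/pg-before && make -j4 && make install
make distclean
git checkout b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755    # the commit itself
./configure --prefix=$HOME/pg-at && make -j4 && make install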
{
"msg_contents": "Hey Jeff,\nFirst of all thank you again for the quick response. I really appreciate\nyour comments.\nUnfortunately I installed pg from rpms so I cant compile my current env but\nI can install it from source code and migrate the data via pg_dump. Can you\nexplain how can I compile the sources without this commit ?\nI understand that my test such a good test but I thought that using\nsomething simple can show my issue. I was sure that I added the results of\nthe same tests after disabling the param but it seems that I didnt do it.\nI'll send it tomorrow because right now im not inforont of my pc. The\nbottom line, after disabling this param the same query took less time in\npg12(significly).\n From this email chain and other peoples comments it seems that I'm not the\nonly one who was facing this issue. One thing that might worth mentitoning,\nin all my tests I didnt use a big env, my db was at max 30GB and the max\nduration of a query was about 10s. It seems that this happens when we use\nqueries with short duration (in your test you mentioned u used the same\ntest but with 100 times more rows, maybe that is the reason it took more\ntime ?) - maybe on \"short\" queries it creates degredation ? It is just an\nassumption...\n\nI'll try to do most tests tomorrow and update with results.\n\n>\n\nHey Jeff,First of all thank you again for the quick response. I really appreciate your comments.Unfortunately I installed pg from rpms so I cant compile my current env but I can install it from source code and migrate the data via pg_dump. Can you explain how can I compile the sources without this commit ? I understand that my test such a good test but I thought that using something simple can show my issue. I was sure that I added the results of the same tests after disabling the param but it seems that I didnt do it. I'll send it tomorrow because right now im not inforont of my pc. The bottom line, after disabling this param the same query took less time in pg12(significly).From this email chain and other peoples comments it seems that I'm not the only one who was facing this issue. One thing that might worth mentitoning, in all my tests I didnt use a big env, my db was at max 30GB and the max duration of a query was about 10s. It seems that this happens when we use queries with short duration (in your test you mentioned u used the same test but with 100 times more rows, maybe that is the reason it took more time ?) - maybe on \"short\" queries it creates degredation ? It is just an assumption...I'll try to do most tests tomorrow and update with results.",
"msg_date": "Sun, 24 Nov 2019 23:49:14 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-24 15:50:20 -0500, Jeff Janes wrote:\n> OK, but do you agree that a 15% slow down is more realistic than 3 fold\n> one? Or are you still getting 3 fold slow down with more careful testing\n> and over a wide variety of queries?\n> \n> I find that the main regression (about 15%) in your example occurs in major\n> version 10, at the following commit:\n\nHuh, that's somewhat surprising. <5% I can see - there were some\ntradeoffs to be made, and some performance issues to be worked around,\nbut 15% seems large. Is this with assertions enabled? Optimized?\n\n\n> I also tested the same example, only 100 times\n> more rows, and still see the regression at about 16%. This is a major\n> infrastructure change patch which has been extensively built on since then,\n> the chances of reverting it are very small. It is making an omelette, and\n> your example is one of the eggs that got broken.\n\nYea, there's zero chance of a revert.\n\n\n> Performance changes in a large body of queries are usually not all due to\n> the same thing. Are you a position to custom compile your own PostgreSQL?\n> It would be nice to test this commit against the one before it, and see how\n> much of the change in your real queries is explained by this one thing (or\n> whether any of it is)\n\nIn particular, artificial queries will often show bottlenecks that are\nnot releveant in practice...\n\n\n\n> commit b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755\n> Author: Andres Freund <[email protected]>\n> Date: Tue Mar 14 15:45:36 2017 -0700\n> \n> Faster expression evaluation and targetlist projection.\n> \n> It is disappointing that this made this case slower rather than faster, and\n> that the \"future work\" alluded to either hasn't happened, or wasn't\n> effective for this example.\n\nI wonder if the improvements in\nhttps://www.postgresql.org/message-id/20191023163849.sosqbfs5yenocez3%40alap3.anarazel.de\nwould at least partially address this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Dec 2019 13:13:21 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "Hey Jeff,Andrew,\nI continued testing the 12version vs the 96 version and it seems that there\nis almost non diff and in some cases pg96 is faster than 12. I compared the\ncontent of pg_stat_statements after each test that I have done and it seems\nthat the db time is almost the same and sometimes 96 is faster by 5%.\n\nAny idea why there isnt any improvement even when I enabled the parallel\nparams in 12 ?\nI can add a few examples if needed..\n\n>\n\nHey Jeff,Andrew,I continued testing the 12version vs the 96 version and it seems that there is almost non diff and in some cases pg96 is faster than 12. I compared the content of pg_stat_statements after each test that I have done and it seems that the db time is almost the same and sometimes 96 is faster by 5%.Any idea why there isnt any improvement even when I enabled the parallel params in 12 ?I can add a few examples if needed..",
"msg_date": "Mon, 16 Dec 2019 13:48:20 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
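Since the comparison here is based on pg_stat_statements, a sketch of the kind of query that makes the per-statement numbers from both clusters easy to line up (assumes CREATE EXTENSION pg_stat_statements has been run in the test database; the column names are the pre-v13 ones used by 9.6 and 12):

SELECT calls,
       round(total_time::numeric, 1) AS total_ms,
       round(mean_time::numeric, 3)  AS mean_ms,
       left(query, 60)               AS query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;

SELECT pg_stat_statements_reset();  -- clear the counters between test runs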
{
"msg_contents": "Hi there –\n\nI have no idea why this happening. But I suspect the parallel requires more internal machine resources like CPU etc because you can faster retrieve the disk data from the one hand but you ought to spend more resources for maintaining several threads and theirs coordination (px coordinator process in Oracle terms) from another one. Thus there could be more serious hardware requirements even just to keep performance the same. I believe that the real benefit of the parallel will be shown when you have pair of large and wide tables (30M or more each) with hash join (typical task for mart construction) but such class of databases is supposed to be big and required enough resources initially.\n\n \n\n \n\nFrom: Mariel Cherkassky <[email protected]> \nSent: Monday, December 16, 2019 2:48 PM\nTo: Jeff Janes <[email protected]>\nCc: Andrew Zakharov <[email protected]>; [email protected]\nSubject: Re: performance degredation after upgrade from 9.6 to 12\n\n \n\nHey Jeff,Andrew,\n\nI continued testing the 12version vs the 96 version and it seems that there is almost non diff and in some cases pg96 is faster than 12. I compared the content of pg_stat_statements after each test that I have done and it seems that the db time is almost the same and sometimes 96 is faster by 5%.\n\n \n\nAny idea why there isnt any improvement even when I enabled the parallel params in 12 ?\n\nI can add a few examples if needed..\n\n\nHi there –I have no idea why this happening. But I suspect the parallel requires more internal machine resources like CPU etc because you can faster retrieve the disk data from the one hand but you ought to spend more resources for maintaining several threads and theirs coordination (px coordinator process in Oracle terms) from another one. Thus there could be more serious hardware requirements even just to keep performance the same. I believe that the real benefit of the parallel will be shown when you have pair of large and wide tables (30M or more each) with hash join (typical task for mart construction) but such class of databases is supposed to be big and required enough resources initially. From: Mariel Cherkassky <[email protected]> Sent: Monday, December 16, 2019 2:48 PMTo: Jeff Janes <[email protected]>Cc: Andrew Zakharov <[email protected]>; [email protected]: Re: performance degredation after upgrade from 9.6 to 12 Hey Jeff,Andrew,I continued testing the 12version vs the 96 version and it seems that there is almost non diff and in some cases pg96 is faster than 12. I compared the content of pg_stat_statements after each test that I have done and it seems that the db time is almost the same and sometimes 96 is faster by 5%. Any idea why there isnt any improvement even when I enabled the parallel params in 12 ?I can add a few examples if needed..",
"msg_date": "Mon, 16 Dec 2019 15:27:01 +0300",
"msg_from": "\"Andrew Zakharov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "I see, thank u !\nMaybe I didnt see big difference because most of my tables arent so big. My\ndb`s size is 17GB and the largest table contains about 20M+ records.\n\nThanks again !\n\nI see, thank u !Maybe I didnt see big difference because most of my tables arent so big. My db`s size is 17GB and the largest table contains about 20M+ records.Thanks again !",
"msg_date": "Mon, 16 Dec 2019 15:01:57 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
},
{
"msg_contents": "po 16. 12. 2019 v 14:02 odesílatel Mariel Cherkassky <\[email protected]> napsal:\n\n> I see, thank u !\n> Maybe I didnt see big difference because most of my tables arent so big.\n> My db`s size is 17GB and the largest table contains about 20M+ records.\n>\n\nPostgres 12 has enabled JIT by default.\n\nPavel\n\n\n> Thanks again !\n>\n\npo 16. 12. 2019 v 14:02 odesílatel Mariel Cherkassky <[email protected]> napsal:I see, thank u !Maybe I didnt see big difference because most of my tables arent so big. My db`s size is 17GB and the largest table contains about 20M+ records.Postgres 12 has enabled JIT by default.PavelThanks again !",
"msg_date": "Mon, 16 Dec 2019 18:29:12 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance degredation after upgrade from 9.6 to 12"
}
] |
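Following up on Pavel's note about JIT being on by default in v12, a quick sketch of how to check whether it is involved and turn it off for a test (test1 is the table from the earlier example):

SHOW jit;                                      -- 'on' by default in v12
EXPLAIN (ANALYZE) SELECT count(*) FROM test1;  -- a JIT section appears in the plan when it fires
SET jit = off;                                 -- session-level
-- or cluster-wide:
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();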
[
{
"msg_contents": "Hi,\nI'm using PostgreSQL on Windows for Planet OSM database and have\nnoticed considirable decrease in performance when upgrading from v10\nto 11 or 12. Here are the details of the experiment I conducted trying\nto figure out what is causing the issue.\n\nInstalled PostgreSQL 10 from scratch. Created a database and a table.\n\nCREATE TABLE ways (\n id bigint NOT NULL,\n version int NOT NULL,\n user_id int NOT NULL,\n tstamp timestamp without time zone NOT NULL,\n changeset_id bigint NOT NULL,\n tags hstore,\n nodes bigint[]\n);\n\nImported ways data from a file and added a primary key.\n\nSET synchronous_commit TO OFF;\nCOPY ways FROM 'E:\\ways.txt';\nALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n\nThe file is 365GB in size.\n\nThe copy operation took 3.5h and the resulting table size is 253GB.\nThe primary key operation took 20 minutes and occuped 13GB of disk\nspace.\n\nThen I unstalled PostgreSQL v10, deleted the data directory and\ninstalled v11 from scratch. Created the same kind of database and\ntable. v11 is not able to handle large files, so the I piped the data\nthrough the cmd type command, and then added the primary key with the\nsame command as above. synchronous_commit turned off beforehand as\nabove.\n\nCOPY ways FROM PROGRAM 'cmd /c \"type E:\\ways.txt\"';\n\nThe copy operation took 7 hours and adding primary key took 1h 40m !\nThe resulting table and pk sizes are the same as in v10. Also very\nhigh load on disk drive (quite often at 100%) was observed.\n\nv12 performs the same as v11.\n\nHere are the changes in v11 default postgresql.conf file compared to\nv10 one. Differences in Authentication, Replication and Logging\nsections are skipped.\n\n-#replacement_sort_tuples = 150000\n+#max_parallel_maintenance_workers = 2\n+#parallel_leader_participation = on\n~max_wal_size = 1GB (in v10 is commented out)\n~min_wal_size = 80MB (in v10 is commented out)\n+#enable_parallel_append = on\n+#enable_partitionwise_join = off\n+#enable_partitionwise_aggregate = off\n+#enable_parallel_hash = on\n+#enable_partition_pruning = on\n+#jit_above_cost = 100000\n+#jit_inline_above_cost = 500000\n+#jit_optimize_above_cost = 500000\n+#jit = off\n+#jit_provider = 'llvmjit'\n+#vacuum_cleanup_index_scale_factor = 0.1\n\nAny ideas pleaes on what is trapping the performance?\n\nRegards\n\n\n",
"msg_date": "Fri, 29 Nov 2019 13:04:37 +0300",
"msg_from": "Eugene Podshivalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Considerable performance downgrade of v11 and 12 on Windows"
},
{
"msg_contents": "Eugene Podshivalov schrieb am 29.11.2019 um 11:04:\n> Imported ways data from a file and added a primary key.\n> \n> SET synchronous_commit TO OFF;\n> COPY ways FROM 'E:\\ways.txt';\n\n> ...\n> COPY ways FROM PROGRAM 'cmd /c \"type E:\\ways.txt\"';\n\nThose two commands are not doing the same thing - the piping through the TYPE command is most probably eating all the performance\n\n\n\n\n",
"msg_date": "Fri, 29 Nov 2019 11:36:56 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Considerable performance downgrade of v11 and 12 on Windows"
},
{
"msg_contents": "I don't think so. Why adding primary key shows the same downgraded\nperformance as well then?\n\nпт, 29 нояб. 2019 г. в 13:37, Thomas Kellerer <[email protected]>:\n>\n> Eugene Podshivalov schrieb am 29.11.2019 um 11:04:\n> > Imported ways data from a file and added a primary key.\n> >\n> > SET synchronous_commit TO OFF;\n> > COPY ways FROM 'E:\\ways.txt';\n>\n> > ...\n> > COPY ways FROM PROGRAM 'cmd /c \"type E:\\ways.txt\"';\n>\n> Those two commands are not doing the same thing - the piping through the TYPE command is most probably eating all the performance\n>\n>\n>\n>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 13:47:26 +0300",
"msg_from": "Eugene Podshivalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Considerable performance downgrade of v11 and 12 on Windows"
},
{
"msg_contents": "On Fri, 2019-11-29 at 13:04 +0300, Eugene Podshivalov wrote:\n> I'm using PostgreSQL on Windows for Planet OSM database and have\n> noticed considirable decrease in performance when upgrading from v10\n> to 11 or 12. Here are the details of the experiment I conducted trying\n> to figure out what is causing the issue.\n> \n> Installed PostgreSQL 10 from scratch. Created a database and a table.\n> [...]\n> SET synchronous_commit TO OFF;\n> COPY ways FROM 'E:\\ways.txt';\n> ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n> \n> The file is 365GB in size.\n> \n> The copy operation took 3.5h and the resulting table size is 253GB.\n> The primary key operation took 20 minutes and occuped 13GB of disk\n> space.\n> \n> Then I unstalled PostgreSQL v10, deleted the data directory and\n> installed v11 from scratch. Created the same kind of database and\n> table. v11 is not able to handle large files, so the I piped the data\n> through the cmd type command, and then added the primary key with the\n> same command as above. synchronous_commit turned off beforehand as\n> above.\n> \n> COPY ways FROM PROGRAM 'cmd /c \"type E:\\ways.txt\"';\n> \n> The copy operation took 7 hours and adding primary key took 1h 40m !\n> The resulting table and pk sizes are the same as in v10. Also very\n> high load on disk drive (quite often at 100%) was observed.\n> \n> v12 performs the same as v11.\n> \n> Here are the changes in v11 default postgresql.conf file compared to\n> v10 one. Differences in Authentication, Replication and Logging\n> sections are skipped.\n> \n> -#replacement_sort_tuples = 150000\n> +#max_parallel_maintenance_workers = 2\n> +#parallel_leader_participation = on\n> ~max_wal_size = 1GB (in v10 is commented out)\n> ~min_wal_size = 80MB (in v10 is commented out)\n> +#enable_parallel_append = on\n> +#enable_partitionwise_join = off\n> +#enable_partitionwise_aggregate = off\n> +#enable_parallel_hash = on\n> +#enable_partition_pruning = on\n> +#jit_above_cost = 100000\n> +#jit_inline_above_cost = 500000\n> +#jit_optimize_above_cost = 500000\n> +#jit = off\n> +#jit_provider = 'llvmjit'\n> +#vacuum_cleanup_index_scale_factor = 0.1\n> \n> Any ideas pleaes on what is trapping the performance?\n\nSeems like you have a very weak I/O subsystem.\n\nFor the COPY, try doing it the same way in both cases (without the \"type\").\n\nFor the index creation, perhaps set \"max_parallel_maintenance_workers = 0\"\nso that your system doesn't get overloaded.\n\nIs \"maintenance_work_mem\" set to the same value in both cases?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 29 Nov 2019 13:04:49 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Considerable performance downgrade of v11 and 12 on Windows"
},
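A sketch of how Laurenz's two suggestions can be applied in the same session that builds the primary key (the 1GB value is only an illustration, size it to the machine; max_parallel_maintenance_workers exists in v11 and later only):

SHOW maintenance_work_mem;                  -- confirm it matches the v10 setup
SET maintenance_work_mem = '1GB';           -- example value
SET max_parallel_maintenance_workers = 0;   -- serial index build, v11+
ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);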
{
"msg_contents": "Laurenz,\nThere is no way to run copy without the \"type\" on v11. See this thread\nhttps://www.postgresql.org/message-id/CAKHmqNCfTMM6%3DPqc6RUMEQ_2BPfo5KGGG-0fzRXZCVooo%3DwdNA%40mail.gmail.com\n\nMy machine is running on NVMe disks, so the I/O subsystem very strong.\nThe 100% overload is not constant but periodical, as if there are some\nkind of dumps for recovery performed in the background.\n\nmaintenance_work_mem is the same in both cases.\n\nRegards\n\nпт, 29 нояб. 2019 г. в 15:04, Laurenz Albe <[email protected]>:\n>\n> On Fri, 2019-11-29 at 13:04 +0300, Eugene Podshivalov wrote:\n> > I'm using PostgreSQL on Windows for Planet OSM database and have\n> > noticed considirable decrease in performance when upgrading from v10\n> > to 11 or 12. Here are the details of the experiment I conducted trying\n> > to figure out what is causing the issue.\n> >\n> > Installed PostgreSQL 10 from scratch. Created a database and a table.\n> > [...]\n> > SET synchronous_commit TO OFF;\n> > COPY ways FROM 'E:\\ways.txt';\n> > ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n> >\n> > The file is 365GB in size.\n> >\n> > The copy operation took 3.5h and the resulting table size is 253GB.\n> > The primary key operation took 20 minutes and occuped 13GB of disk\n> > space.\n> >\n> > Then I unstalled PostgreSQL v10, deleted the data directory and\n> > installed v11 from scratch. Created the same kind of database and\n> > table. v11 is not able to handle large files, so the I piped the data\n> > through the cmd type command, and then added the primary key with the\n> > same command as above. synchronous_commit turned off beforehand as\n> > above.\n> >\n> > COPY ways FROM PROGRAM 'cmd /c \"type E:\\ways.txt\"';\n> >\n> > The copy operation took 7 hours and adding primary key took 1h 40m !\n> > The resulting table and pk sizes are the same as in v10. Also very\n> > high load on disk drive (quite often at 100%) was observed.\n> >\n> > v12 performs the same as v11.\n> >\n> > Here are the changes in v11 default postgresql.conf file compared to\n> > v10 one. Differences in Authentication, Replication and Logging\n> > sections are skipped.\n> >\n> > -#replacement_sort_tuples = 150000\n> > +#max_parallel_maintenance_workers = 2\n> > +#parallel_leader_participation = on\n> > ~max_wal_size = 1GB (in v10 is commented out)\n> > ~min_wal_size = 80MB (in v10 is commented out)\n> > +#enable_parallel_append = on\n> > +#enable_partitionwise_join = off\n> > +#enable_partitionwise_aggregate = off\n> > +#enable_parallel_hash = on\n> > +#enable_partition_pruning = on\n> > +#jit_above_cost = 100000\n> > +#jit_inline_above_cost = 500000\n> > +#jit_optimize_above_cost = 500000\n> > +#jit = off\n> > +#jit_provider = 'llvmjit'\n> > +#vacuum_cleanup_index_scale_factor = 0.1\n> >\n> > Any ideas pleaes on what is trapping the performance?\n>\n> Seems like you have a very weak I/O subsystem.\n>\n> For the COPY, try doing it the same way in both cases (without the \"type\").\n>\n> For the index creation, perhaps set \"max_parallel_maintenance_workers = 0\"\n> so that your system doesn't get overloaded.\n>\n> Is \"maintenance_work_mem\" set to the same value in both cases?\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 15:22:45 +0300",
"msg_from": "Eugene Podshivalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Considerable performance downgrade of v11 and 12 on Windows"
},
{
"msg_contents": "It sounds strange but the \"type\" is indeed impacting the overall\nperformance somehow.\nI've just tried to execute the following sequence of commands on a\nfresh new database with PostreSQL v10 and both the copy and primary\nkey commands performed as slow as in v11 and 12.\n\nSET synchronous_commit TO OFF;\nSET client_encoding TO 'UTF8';\nCOPY ways FROM program 'cmd /c \"type D:\\ways.txt\"';\nALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n\nRegards\n\nпт, 29 нояб. 2019 г. в 15:22, Eugene Podshivalov <[email protected]>:\n>\n> Laurenz,\n> There is no way to run copy without the \"type\" on v11. See this thread\n> https://www.postgresql.org/message-id/CAKHmqNCfTMM6%3DPqc6RUMEQ_2BPfo5KGGG-0fzRXZCVooo%3DwdNA%40mail.gmail.com\n>\n> My machine is running on NVMe disks, so the I/O subsystem very strong.\n> The 100% overload is not constant but periodical, as if there are some\n> kind of dumps for recovery performed in the background.\n>\n> maintenance_work_mem is the same in both cases.\n>\n> Regards\n>\n> пт, 29 нояб. 2019 г. в 15:04, Laurenz Albe <[email protected]>:\n> >\n> > On Fri, 2019-11-29 at 13:04 +0300, Eugene Podshivalov wrote:\n> > > I'm using PostgreSQL on Windows for Planet OSM database and have\n> > > noticed considirable decrease in performance when upgrading from v10\n> > > to 11 or 12. Here are the details of the experiment I conducted trying\n> > > to figure out what is causing the issue.\n> > >\n> > > Installed PostgreSQL 10 from scratch. Created a database and a table.\n> > > [...]\n> > > SET synchronous_commit TO OFF;\n> > > COPY ways FROM 'E:\\ways.txt';\n> > > ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n> > >\n> > > The file is 365GB in size.\n> > >\n> > > The copy operation took 3.5h and the resulting table size is 253GB.\n> > > The primary key operation took 20 minutes and occuped 13GB of disk\n> > > space.\n> > >\n> > > Then I unstalled PostgreSQL v10, deleted the data directory and\n> > > installed v11 from scratch. Created the same kind of database and\n> > > table. v11 is not able to handle large files, so the I piped the data\n> > > through the cmd type command, and then added the primary key with the\n> > > same command as above. synchronous_commit turned off beforehand as\n> > > above.\n> > >\n> > > COPY ways FROM PROGRAM 'cmd /c \"type E:\\ways.txt\"';\n> > >\n> > > The copy operation took 7 hours and adding primary key took 1h 40m !\n> > > The resulting table and pk sizes are the same as in v10. Also very\n> > > high load on disk drive (quite often at 100%) was observed.\n> > >\n> > > v12 performs the same as v11.\n> > >\n> > > Here are the changes in v11 default postgresql.conf file compared to\n> > > v10 one. 
Differences in Authentication, Replication and Logging\n> > > sections are skipped.\n> > >\n> > > -#replacement_sort_tuples = 150000\n> > > +#max_parallel_maintenance_workers = 2\n> > > +#parallel_leader_participation = on\n> > > ~max_wal_size = 1GB (in v10 is commented out)\n> > > ~min_wal_size = 80MB (in v10 is commented out)\n> > > +#enable_parallel_append = on\n> > > +#enable_partitionwise_join = off\n> > > +#enable_partitionwise_aggregate = off\n> > > +#enable_parallel_hash = on\n> > > +#enable_partition_pruning = on\n> > > +#jit_above_cost = 100000\n> > > +#jit_inline_above_cost = 500000\n> > > +#jit_optimize_above_cost = 500000\n> > > +#jit = off\n> > > +#jit_provider = 'llvmjit'\n> > > +#vacuum_cleanup_index_scale_factor = 0.1\n> > >\n> > > Any ideas pleaes on what is trapping the performance?\n> >\n> > Seems like you have a very weak I/O subsystem.\n> >\n> > For the COPY, try doing it the same way in both cases (without the \"type\").\n> >\n> > For the index creation, perhaps set \"max_parallel_maintenance_workers = 0\"\n> > so that your system doesn't get overloaded.\n> >\n> > Is \"maintenance_work_mem\" set to the same value in both cases?\n> >\n> > Yours,\n> > Laurenz Albe\n> > --\n> > Cybertec | https://www.cybertec-postgresql.com\n> >\n\n\n",
"msg_date": "Sat, 30 Nov 2019 22:47:02 +0300",
"msg_from": "Eugene Podshivalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Considerable performance downgrade of v11 and 12 on Windows"
},
{
"msg_contents": "On Sat, 2019-11-30 at 22:47 +0300, Eugene Podshivalov wrote:\n> It sounds strange but the \"type\" is indeed impacting the overall\n> performance somehow.\n> I've just tried to execute the following sequence of commands on a\n> fresh new database with PostreSQL v10 and both the copy and primary\n> key commands performed as slow as in v11 and 12.\n> \n> SET synchronous_commit TO OFF;\n> SET client_encoding TO 'UTF8';\n> COPY ways FROM program 'cmd /c \"type D:\\ways.txt\"';\n> ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n> \n> Regards\n> \n> пт, 29 нояб. 2019 г. в 15:22, Eugene Podshivalov <[email protected]>:\n> > Laurenz,\n> > There is no way to run copy without the \"type\" on v11. See this thread\n> > https://www.postgresql.org/message-id/CAKHmqNCfTMM6%3DPqc6RUMEQ_2BPfo5KGGG-0fzRXZCVooo%3DwdNA%40mail.gmail.com\n> > \n> > My machine is running on NVMe disks, so the I/O subsystem very strong.\n> > The 100% overload is not constant but periodical, as if there are some\n> > kind of dumps for recovery performed in the background.\n\nIs it an option to split the file into parts of less than 2GB in size?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 02 Dec 2019 10:04:29 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Considerable performance downgrade of v11 and 12 on Windows"
},
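A rough sketch of this suggestion from a mingw/git-bash shell on the same machine (paths, chunk size and connection options are placeholders; -C splits on line boundaries, so every chunk stays valid COPY input; the explicit D:/ path is passed to psql because it is a native Windows binary):

mkdir -p /d/ways_parts
split -C 1800MB /d/ways.txt /d/ways_parts/ways_
cd /d/ways_parts
for f in ways_*; do
    psql -U postgres -d osm -c "set client_encoding to 'UTF8'" -c "\copy ways from 'D:/ways_parts/$f'"
done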
{
"msg_contents": "I have managed to split the 365GB file into 2GB chunks with the help of\n'split' unix utility in mingw shell like so\nsplit -C 2GB ways.txt\nThen I imported the files into a clean database with the help of the\nfollowing cmd command\nfor /f %f in ('dir /b') do psql -U postgres -w -d osm -t -c \"set\nclient_encoding TO 'UTF8'; copy ways from 'D:\\ways\\%f';\"\nThe operation took ~3.5 hour which is the same as v10!\n\nPrior to that I set 'parallel_leader_participation = on' and\n'synchronous_commit = off' in the config file and restarted the server.\n\nThen I logged into the psql interactive terminal and ran\nALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\nIt took 1h 10m which is 30m faster than with the default settings (after\n'type' commad if it really matters) but still 3 times slower than in v10.\n\nRegards\n\nпн, 2 дек. 2019 г. в 12:04, Laurenz Albe <[email protected]>:\n\n> On Sat, 2019-11-30 at 22:47 +0300, Eugene Podshivalov wrote:\n> > It sounds strange but the \"type\" is indeed impacting the overall\n> > performance somehow.\n> > I've just tried to execute the following sequence of commands on a\n> > fresh new database with PostreSQL v10 and both the copy and primary\n> > key commands performed as slow as in v11 and 12.\n> >\n> > SET synchronous_commit TO OFF;\n> > SET client_encoding TO 'UTF8';\n> > COPY ways FROM program 'cmd /c \"type D:\\ways.txt\"';\n> > ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n> >\n> > Regards\n> >\n> > пт, 29 нояб. 2019 г. в 15:22, Eugene Podshivalov <[email protected]>:\n> > > Laurenz,\n> > > There is no way to run copy without the \"type\" on v11. See this thread\n> > >\n> https://www.postgresql.org/message-id/CAKHmqNCfTMM6%3DPqc6RUMEQ_2BPfo5KGGG-0fzRXZCVooo%3DwdNA%40mail.gmail.com\n> > >\n> > > My machine is running on NVMe disks, so the I/O subsystem very strong.\n> > > The 100% overload is not constant but periodical, as if there are some\n> > > kind of dumps for recovery performed in the background.\n>\n> Is it an option to split the file into parts of less than 2GB in size?\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nI have managed to split the 365GB file into 2GB chunks with the help of 'split' unix utility in mingw shell like sosplit -C 2GB \n\nways.txtThen I imported the files into a clean database with the help of the following cmd commandfor /f %f in ('dir /b') do psql -U postgres -w -d osm -t -c \"set client_encoding TO 'UTF8'; copy ways from 'D:\\ways\\%f';\"The operation took ~3.5 hour which is the same as v10!Prior to that I set 'parallel_leader_participation = on' and 'synchronous_commit = off' in the config file and restarted the server.Then I logged into the psql interactive terminal and ranALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);It took 1h 10m which is 30m faster than with the default settings (after 'type' commad if it really matters) but still 3 times slower than in v10.Regardsпн, 2 дек. 2019 г. 
в 12:04, Laurenz Albe <[email protected]>:On Sat, 2019-11-30 at 22:47 +0300, Eugene Podshivalov wrote:\n> It sounds strange but the \"type\" is indeed impacting the overall\n> performance somehow.\n> I've just tried to execute the following sequence of commands on a\n> fresh new database with PostreSQL v10 and both the copy and primary\n> key commands performed as slow as in v11 and 12.\n> \n> SET synchronous_commit TO OFF;\n> SET client_encoding TO 'UTF8';\n> COPY ways FROM program 'cmd /c \"type D:\\ways.txt\"';\n> ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);\n> \n> Regards\n> \n> пт, 29 нояб. 2019 г. в 15:22, Eugene Podshivalov <[email protected]>:\n> > Laurenz,\n> > There is no way to run copy without the \"type\" on v11. See this thread\n> > https://www.postgresql.org/message-id/CAKHmqNCfTMM6%3DPqc6RUMEQ_2BPfo5KGGG-0fzRXZCVooo%3DwdNA%40mail.gmail.com\n> > \n> > My machine is running on NVMe disks, so the I/O subsystem very strong.\n> > The 100% overload is not constant but periodical, as if there are some\n> > kind of dumps for recovery performed in the background.\n\nIs it an option to split the file into parts of less than 2GB in size?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Mon, 2 Dec 2019 22:03:51 +0300",
"msg_from": "Eugene Podshivalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Considerable performance downgrade of v11 and 12 on Windows"
}
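What remains unexplained at this point is the index build. The knobs discussed above can be set per session before the ALTER TABLE; the values here are only illustrative, since the thread never settles on numbers:

SET maintenance_work_mem = '2GB';           -- confirm it really matches the v10 setup
SET max_parallel_maintenance_workers = 0;   -- disable the parallel btree build that is new in v11
ALTER TABLE ONLY ways ADD CONSTRAINT pk_ways PRIMARY KEY (id);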
] |
[
{
"msg_contents": "Hello community!\n\nWe are currently testing PostgreSQL 11's built-in logical replication. We\nare trying to initialize a subscriber (from scratch) from a publisher with\na large database (+6TB) with around 220 tables.\n\nWe tweaked the configuration parameters below, both on publisher and\nsubscriber, in order to minimize the initial copy data phase delay:\n\n- max_replication_slots\n- max_wal_senders\n- max_wal_size\n- max_worker_processes\n- max_logical_replication_workers\n- max_sync_workers_per_subscription\n- max_worker_processes\n\nThe two PostgreSQL instances are using the same hardware: 48 vCPU, 384 GB\nram, 10GB network and same version of software (PostgreSQL 11.6).\n\nWe pre-loaded the full schema of the database (with indexes and\nconstraints) on the subscriber since it's mandatory to have the logical\nreplication working.\n\nHowever, the initial copy data phase is quite long (+2 days and still\nrunning) for largest tables in the database. There is no load on the\npublisher since it's a staging environment.\nWe noticed that logical replication workers processes on the subscriber can\nreach more than 90% CPU usage per worker.\n\nWe understand that we cannot have more than one worker per table running\nbut we would like to know if there is anything that could help us to\nachieve this initial copy phase more quickly.\n\nWe tried another solution: we loaded a minimal schema (without indexes and\nconstraints) on the subscriber and created the subscription. The initial\ncopy phase was way faster (a few hours). Then we created indexes and\nconstraints. Is this a suitable solution for production? Will the logical\nreplication flow be buffered by the replication slots during index creation\nand get in sync afterwards or will it conflict due to locking issues?\n\nMany thanks for your help.\n\n-- \nFlorian Philippon\n\nHello community!We are currently testing PostgreSQL 11's built-in logical replication. We are trying to initialize a subscriber (from scratch) from a publisher with a large database (+6TB) with around 220 tables.We tweaked the configuration parameters below, both on publisher and subscriber, in order to minimize the initial copy data phase delay:- max_replication_slots- max_wal_senders- max_wal_size- max_worker_processes- max_logical_replication_workers- max_sync_workers_per_subscription- max_worker_processesThe two PostgreSQL instances are using the same hardware: 48 vCPU, 384 GB ram, 10GB network and same version of software (PostgreSQL 11.6).We pre-loaded the full schema of the database (with indexes and constraints) on the subscriber since it's mandatory to have the logical replication working.However, the initial copy data phase is quite long (+2 days and still running) for largest tables in the database. There is no load on the publisher since it's a staging environment.We noticed that logical replication workers processes on the subscriber can reach more than 90% CPU usage per worker.We understand that we cannot have more than one worker per table running but we would like to know if there is anything that could help us to achieve this initial copy phase more quickly.We tried another solution: we loaded a minimal schema (without indexes and constraints) on the subscriber and created the subscription. The initial copy phase was way faster (a few hours). Then we created indexes and constraints. Is this a suitable solution for production? 
Will the logical replication flow be buffered by the replication slots during index creation and get in sync afterwards or will it conflict due to locking issues?Many thanks for your help.-- Florian Philippon",
"msg_date": "Fri, 29 Nov 2019 17:06:34 +0100",
"msg_from": "Florian Philippon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logical replication performance"
},
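For reference, these are the settings named above as they would appear in postgresql.conf; the values are invented for illustration, the thread never states the real ones. Only one sync worker can copy any single table, so the per-subscription limit mainly helps when many tables are syncing at once.

max_worker_processes = 32
max_logical_replication_workers = 16
max_sync_workers_per_subscription = 8
max_wal_senders = 20
max_replication_slots = 20
max_wal_size = 16GB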
{
"msg_contents": "Em sex., 29 de nov. de 2019 às 17:06, Florian Philippon <\[email protected]> escreveu:\n\n> Hello community!\n>\n> Hi Florian\n\n\n> We are currently testing PostgreSQL 11's built-in logical replication. We\n> are trying to initialize a subscriber (from scratch) from a publisher with\n> a large database (+6TB) with around 220 tables.\n>\n> We tweaked the configuration parameters below, both on publisher and\n> subscriber, in order to minimize the initial copy data phase delay:\n>\n> - max_replication_slots\n> - max_wal_senders\n> - max_wal_size\n> - max_worker_processes\n> - max_logical_replication_workers\n> - max_sync_workers_per_subscription\n> - max_worker_processes\n>\n> The two PostgreSQL instances are using the same hardware: 48 vCPU, 384 GB\n> ram, 10GB network and same version of software (PostgreSQL 11.6).\n>\n> We pre-loaded the full schema of the database (with indexes and\n> constraints) on the subscriber since it's mandatory to have the logical\n> replication working.\n>\n> However, the initial copy data phase is quite long (+2 days and still\n> running) for largest tables in the database. There is no load on the\n> publisher since it's a staging environment.\n> We noticed that logical replication workers processes on the subscriber\n> can reach more than 90% CPU usage per worker.\n>\n> We understand that we cannot have more than one worker per table running\n> but we would like to know if there is anything that could help us to\n> achieve this initial copy phase more quickly.\n>\n> We tried another solution: we loaded a minimal schema (without indexes and\n> constraints) on the subscriber and created the subscription. The initial\n> copy phase was way faster (a few hours). Then we created indexes and\n> constraints. Is this a suitable solution for production? Will the logical\n> replication flow be buffered by the replication slots during index creation\n> and get in sync afterwards or will it conflict due to locking issues?\n>\n>\nYou can try the pg_dump over a snapshot and use parallel restore\n(pg_restore -j option) to your initial data load, it should be much faster\nthan an initial sync. Take a look here:\nhttps://www.postgresql.org/docs/11/logicaldecoding-explanation.html#id-1.8.14.8.5\n\nBest,\nFlavio\n\nEm sex., 29 de nov. de 2019 às 17:06, Florian Philippon <[email protected]> escreveu:Hello community!Hi Florian We are currently testing PostgreSQL 11's built-in logical replication. We are trying to initialize a subscriber (from scratch) from a publisher with a large database (+6TB) with around 220 tables.We tweaked the configuration parameters below, both on publisher and subscriber, in order to minimize the initial copy data phase delay:- max_replication_slots- max_wal_senders- max_wal_size- max_worker_processes- max_logical_replication_workers- max_sync_workers_per_subscription- max_worker_processesThe two PostgreSQL instances are using the same hardware: 48 vCPU, 384 GB ram, 10GB network and same version of software (PostgreSQL 11.6).We pre-loaded the full schema of the database (with indexes and constraints) on the subscriber since it's mandatory to have the logical replication working.However, the initial copy data phase is quite long (+2 days and still running) for largest tables in the database. 
There is no load on the publisher since it's a staging environment.We noticed that logical replication workers processes on the subscriber can reach more than 90% CPU usage per worker.We understand that we cannot have more than one worker per table running but we would like to know if there is anything that could help us to achieve this initial copy phase more quickly.We tried another solution: we loaded a minimal schema (without indexes and constraints) on the subscriber and created the subscription. The initial copy phase was way faster (a few hours). Then we created indexes and constraints. Is this a suitable solution for production? Will the logical replication flow be buffered by the replication slots during index creation and get in sync afterwards or will it conflict due to locking issues? You can try the pg_dump over a snapshot and use parallel restore (pg_restore -j option) to your initial data load, it should be much faster than an initial sync. Take a look here:https://www.postgresql.org/docs/11/logicaldecoding-explanation.html#id-1.8.14.8.5Best,Flavio",
"msg_date": "Fri, 29 Nov 2019 17:25:43 +0100",
"msg_from": "Flavio Henrique Araque Gurgel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication performance"
},
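A rough sketch of the route suggested here (directory format so both the dump and the restore can run with parallel jobs; database names, paths and job counts are placeholders). Keeping the copy consistent with the point where the slot starts streaming is the part covered by the linked documentation, so treat this only as the outline:

pg_dump -d sourcedb -Fd -j 8 -f /backup/initial_load
pg_restore -d targetdb -j 8 /backup/initial_load

Then, on the subscriber, skip the built-in table copy:

CREATE SUBSCRIPTION sub_all
    CONNECTION 'host=publisher dbname=sourcedb'
    PUBLICATION pub_all
    WITH (copy_data = false);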
{
"msg_contents": "On Fri, Nov 29, 2019 at 11:06 AM Florian Philippon <\[email protected]> wrote:\n\n>\n> We tried another solution: we loaded a minimal schema (without indexes and\n> constraints) on the subscriber and created the subscription. The initial\n> copy phase was way faster (a few hours). Then we created indexes and\n> constraints. Is this a suitable solution for production?\n>\n\nThis is probably not suitable for production. Once the COPY is finished,\nit still has to replicate row-by-row changes to the table rows which\noccurred since the starting COPY snapshot. UPDATEs and DELETEs will\nprobably fail due to the lack of indexes on the “replica identity”\ncolumns. This failure will make the entire transaction, including the\nCOPY, roll back to beginning. So you there will be no point at which you\ncan build the missing indexes without first losing all the work that was\ndone. If the master was quiescent (at least in regards to UPDATEs and\nDELETEs) then it there will be no row-by-row changes to apply between the\nstart of the COPY and the start of transactional replication. In that\ncase, the COPY will have committed before the system discovers the problem\nwith the “replica identity”, giving you an opportunity to go build the\nindex without losing all of the work.\n\n\n\n> Will the logical replication flow be buffered by the replication slots\n> during index creation and get in sync afterwards or will it conflict due to\n> locking issues?\n>\n\nIt can't buffer in the middle of the transaction which includes the initial\nCOPY.\n\nCheers,\n\nJeff\n\nOn Fri, Nov 29, 2019 at 11:06 AM Florian Philippon <[email protected]> wrote:We tried another solution: we loaded a minimal schema (without indexes and constraints) on the subscriber and created the subscription. The initial copy phase was way faster (a few hours). Then we created indexes and constraints. Is this a suitable solution for production? This is probably not suitable for production. Once the COPY is finished, it still has to replicate row-by-row changes to the table rows which occurred since the starting COPY snapshot. UPDATEs and DELETEs will probably fail due to the lack of indexes on the “replica identity” columns. This failure will make the entire transaction, including the COPY, roll back to beginning. So you there will be no point at which you can build the missing indexes without first losing all the work that was done. If the master was quiescent (at least in regards to UPDATEs and DELETEs) then it there will be no row-by-row changes to apply between the start of the COPY and the start of transactional replication. In that case, the COPY will have committed before the system discovers the problem with the “replica identity”, giving you an opportunity to go build the index without losing all of the work. Will the logical replication flow be buffered by the replication slots during index creation and get in sync afterwards or will it conflict due to locking issues?It can't buffer in the middle of the transaction which includes the initial COPY.Cheers,Jeff",
"msg_date": "Mon, 9 Dec 2019 17:31:39 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication performance"
}
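A quick way to check how exposed the tables are before trying this, as a sketch (it assumes the tables live in the public schema): relreplident is 'd' when the default identity (the primary key) applies, 'i' for an explicit index, 'f' for full, and 'n' for none.

SELECT relname, relreplident
FROM pg_class
WHERE relkind = 'r'
  AND relnamespace = 'public'::regnamespace
ORDER BY relname;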
] |
[
{
"msg_contents": "Hello,\n\nI'm trying to figure out how to optimise 3-table (many-to-many relation) joins\nwith predicate, limit, and ordering, where one of the tables returns at most one\nrow.\n\nThis is the query that I have right now:\n\nSELECT entity.id\nFROM (\n SELECT entity_tag.entity_id\n FROM tag\n JOIN entity_tag ON tag.id = entity_tag.tag_id\n WHERE tag.key = 'status'\n AND tag.value = 'SUCCEEDED'\n) matched\nJOIN entity ON matched.entity_id = entity.id\nWHERE entity.type = 'execution'\nORDER BY entity.id DESC\nLIMIT 10;\n\nIt runs very slowly when there are many rows matched on entity table\nwhich, in my\ncase, there are about 90K rows, even though at most there is only one\nrow returned\nby tag.\n\nLimit (cost=723.39..723.40 rows=1 width=4) (actual\ntime=6189.015..6189.242 rows=10 loops=1)\n Output: entity.id\n Buffers: shared hit=411886 read=31282\n -> Sort (cost=723.39..723.40 rows=1 width=4) (actual\ntime=6188.999..6189.059 rows=10 loops=1)\n Output: entity.id\n Sort Key: entity.id DESC\n Sort Method: top-N heapsort Memory: 25kB\n Buffers: shared hit=411886 read=31282\n -> Nested Loop (cost=1.28..723.38 rows=1 width=4) (actual\ntime=0.153..5590.717 rows=89222 loops=1)\n Output: entity.id\n Buffers: shared hit=411886 read=31282\n -> Nested Loop (cost=0.86..721.98 rows=3 width=8)\n(actual time=0.108..1851.707 rows=89222 loops=1)\n Output: entity_tag.entity_id\n Buffers: shared hit=65146 read=20646\n -> Index Scan using tag_key_value_key on\npublic.tag (cost=0.43..8.45 rows=1 width=4) (actual time=0.043..0.061\nrows=1 loops=1)\n Output: tag.id, tag.key, tag.value, tag.created_at\n Index Cond: (((tag.key)::text =\n'status'::text) AND ((tag.value)::text = 'SUCCEEDED'::text))\n Buffers: shared hit=1 read=3\n -> Index Only Scan using\nentity_tag_tag_id_entity_id_idx on public.entity_tag\n(cost=0.43..711.53 rows=201 width=16) (actual time=0.035..756.829\nrows=89222 loops=1)\n Output: entity_tag.tag_id, entity_tag.entity_id\n Index Cond: (entity_tag.tag_id = tag.id)\n Heap Fetches: 89222\n Buffers: shared hit=65145 read=20643\n -> Index Scan using entity_pkey on public.entity\n(cost=0.42..0.46 rows=1 width=4) (actual time=0.010..0.017 rows=1\nloops=89222)\n Output: entity.id, entity.entity_id, entity.type,\nentity.created_at\n Index Cond: (entity.id = entity_tag.entity_id)\n Filter: ((entity.type)::text = 'execution'::text)\n Buffers: shared hit=346740 read=10636\nPlanning time: 0.817 ms\nExecution time: 6189.419 ms\n\nBoth tag_key_value_key and entity_tag_tag_id_entity_id_idx is a UNIQUE\nconstraint on tag(key,value) and entity_tag(tag_id, entity_id) respectively.\n\nIt seems to me that PostgreSQL runs the nested loop against all of the 90K\nrecords because it wants to sort the result before limiting the result. It\ndoesn't take into account of the UNIQUE constraint imposed on the table and\nthinks that the join being done inside the subquery will change the ordering of\nentity_id returned by the subquery, thus prompting the sort.\n\nI believe with how the index sorted, it should be able to just scan the index\nbackwards because at most only one tag_id will be returned. When I tried\nchanging the predicate here to filter by ID with the following query:\n\n-- This runs very fast\nSELECT entity.id\nFROM (\n SELECT entity_tag.entity_id\n FROM tag\n JOIN entity_tag ON tag.id = entity_tag.tag_id\n WHERE tag.id = 24\n) matched\nJOIN entity ON matched.entity_id = entity.id\nWHERE entity.type = 'execution'\nORDER BY entity.id DESC\nLIMIT 10;\n\nand it's blazing fast. 
This time PostgreSQL seems to know that the join inside\nthe subqery won't change the ordering of entity_id returned by the subquery, as\nseen in the following query explanation:\n\nLimit (cost=1.28..1025.56 rows=10 width=4) (actual time=0.144..0.276\nrows=1 loops=1)\n Output: entity.id\n Buffers: shared hit=12\n -> Nested Loop (cost=1.28..1537.70 rows=15 width=4) (actual\ntime=0.125..0.238 rows=1 loops=1)\n Output: entity.id\n Buffers: shared hit=12\n -> Nested Loop (cost=0.86..1529.06 rows=15 width=12) (actual\ntime=0.057..0.116 rows=1 loops=1)\n Output: entity_tag.tag_id, entity.id\n Buffers: shared hit=8\n -> Index Only Scan Backward using\nentity_tag_tag_id_entity_id_idx on public.entity_tag\n(cost=0.43..454.82 rows=128 width=16) (actual time=0.018..0.038 rows=1\nloops=1)\n Output: entity_tag.tag_id, entity_tag.entity_id\n Index Cond: (entity_tag.tag_id = 24)\n Heap Fetches: 1\n Buffers: shared hit=4\n -> Index Scan using entity_pkey on public.entity\n(cost=0.42..8.38 rows=1 width=4) (actual time=0.011..0.030 rows=1\nloops=1)\n Output: entity.id, entity.entity_id, entity.type,\nentity.created_at\n Index Cond: (entity.id = entity_tag.entity_id)\n Filter: ((entity.type)::text = 'execution'::text)\n Buffers: shared hit=4\n -> Materialize (cost=0.43..8.45 rows=1 width=4) (actual\ntime=0.040..0.078 rows=1 loops=1)\n Output: tag.id\n Buffers: shared hit=4\n -> Index Only Scan using tag_pkey on public.tag\n(cost=0.43..8.45 rows=1 width=4) (actual time=0.021..0.040 rows=1\nloops=1)\n Output: tag.id\n Index Cond: (tag.id = 24)\n Heap Fetches: 1\n Buffers: shared hit=4\nPlanning time: 0.362 ms\nExecution time: 0.458 ms\n\nwhich proves that it can just scan the index backward. I can't figure out why\nPostgreSQL doesn't take into account the UNIQUE index there.\n\nIs there any way to do this with one query? Or is it because the database\ndesign is flawed?\n\nEnvironment:\n- PostgreSQL version: 9.6\n- OS: Linux (Amazon RDS) / MacOSX / Docker\n\nInstallation:\n- MacOSX: PostgreSQL is installed by using brew.\n- Docker: PostgreSQL image from https://hub.docker.com/_/postgres.\n\nI use the default configuration provided in all of these environments.\n\nThanks!\n\n-- \nBest regards,\n\nAufar Gilbran\n\n-- \n*_Grab is hiring. Learn more at _**https://grab.careers \n<https://grab.careers/>*\n\n\nBy communicating with Grab Inc and/or its \nsubsidiaries, associate companies and jointly controlled entities (“Grab \nGroup”), you are deemed to have consented to the processing of your \npersonal data as set out in the Privacy Notice which can be viewed at \nhttps://grab.com/privacy/ <https://grab.com/privacy/>\n\n\nThis email contains \nconfidential information and is only for the intended recipient(s). If you \nare not the intended recipient(s), please do not disseminate, distribute or \ncopy this email Please notify Grab Group immediately if you have received \nthis by mistake and delete this email from your system. Email transmission \ncannot be guaranteed to be secure or error-free as any information therein \ncould be intercepted, corrupted, lost, destroyed, delayed or incomplete, or \ncontain viruses. Grab Group do not accept liability for any errors or \nomissions in the contents of this email arises as a result of email \ntransmission. All intellectual property rights in this email and \nattachments therein shall remain vested in Grab Group, unless otherwise \nprovided by law.\n\n\n\n",
"msg_date": "Mon, 2 Dec 2019 20:43:05 +0800",
"msg_from": "Aufar Gilbran <[email protected]>",
"msg_from_op": true,
"msg_subject": "[External] Join queries slow with predicate, limit, and ordering"
},
{
"msg_contents": "On Mon, Dec 2, 2019 at 8:29 AM Aufar Gilbran <[email protected]> wrote:\n\n> Hello,\n>\n> I'm trying to figure out how to optimise 3-table (many-to-many relation)\n> joins\n> with predicate, limit, and ordering, where one of the tables returns at\n> most one\n> row.\n>\n> This is the query that I have right now:\n>\n> SELECT entity.id\n> FROM (\n> SELECT entity_tag.entity_id\n> FROM tag\n> JOIN entity_tag ON tag.id = entity_tag.tag_id\n> WHERE tag.key = 'status'\n> AND tag.value = 'SUCCEEDED'\n> ) matched\n> JOIN entity ON matched.entity_id = entity.id\n> WHERE entity.type = 'execution'\n> ORDER BY entity.id DESC\n> LIMIT 10;\n>\n\nWhat happens if you set enable_sort to off before running it?\n\n\n> -> Nested Loop (cost=1.28..723.38 rows=1 width=4) (actual\n> time=0.153..5590.717 rows=89222 loops=1)\n>\n\nIt thinks it will find 1 row, and actually finds 89,222. I don't know\nexactly why that would be, I suppose tag_id has an extremely skewed\ndistribution. But yeah, that is going to cause some problems. For one\nthing, if there was actually just one qualifying row, then it wouldn't get\nto stop early, as the LIMIT would never be satisfied. So it thinks that if\nit choose to walk the index backwards, it would have to walk the **entire**\nindex.\n\n\n -> Index Only Scan using\n> entity_tag_tag_id_entity_id_idx on public.entity_tag (cost=0.43..711.53\n> rows=201 width=16) (actual time=0.035..756.829 rows=89222 loops=1)\n> Heap Fetches: 89222\n>\n\nYou should vacuum this table. Doing that (and only that) probably won't\nmake a great deal of difference to this particular query, but still, it\nwill help some. And might help other ones you haven't noticed yet as well.\n\n\n>\n> Both tag_key_value_key and entity_tag_tag_id_entity_id_idx is a UNIQUE\n> constraint on tag(key,value) and entity_tag(tag_id, entity_id)\n> respectively.\n>\n> It seems to me that PostgreSQL runs the nested loop against all of the 90K\n> records because it wants to sort the result before limiting the result.\n\n\nIt doesn't **know** there are going to be 90000 records. It cannot plan\nqueries based on knowledge it doesn't possess.\n\n\n> It\n> doesn't take into account of the UNIQUE constraint imposed on the table and\n> thinks that the join being done inside the subquery will change the\n> ordering of\n> entity_id returned by the subquery, thus prompting the sort.\n>\n\nThis seems like rather adventurous speculation. It does the sort because\nthe horrible estimation makes it think it will be faster that way, not\nbecause it thinks it is the only possible way. Of you set enable_sort =\noff and it still does a sort, then you know it thinks there is no other way.\n\n\n\n>\n> I believe with how the index sorted, it should be able to just scan the\n> index\n> backwards because at most only one tag_id will be returned. When I tried\n> changing the predicate here to filter by ID with the following query:\n>\n> -- This runs very fast\n> SELECT entity.id\n> FROM (\n> SELECT entity_tag.entity_id\n> FROM tag\n> JOIN entity_tag ON tag.id = entity_tag.tag_id\n> WHERE tag.id = 24\n> ) matched\n> JOIN entity ON matched.entity_id = entity.id\n> WHERE entity.type = 'execution'\n> ORDER BY entity.id DESC\n> LIMIT 10;\n>\n\nWith this query, it can use the join condition to transfer the knowledge of\ntag.id=24 to become entity_tag.tag_id=24, and then look up stats on\nentity_tag.tag_id for the value 24. 
When you specify the single row of tag\nindirectly, it can't do that as it doesn't know what specific value of\ntag.id is going to be the one it finds (until after the query is done being\nplanned and starts executing, at which point it is too late). But the row\nwith id=24 doesn't seem to be the same one with \"tag.key = 'status' AND\ntag.value = 'SUCCEEDED'\", so you have basically changed the query entirely\non us.\n\nIf you replanned this query with ORDER BY entity.id+0 DESC, (and with the\ntrue value of tag_id) that might give you some more insight into the hidden\n\"thought process\" behind the planner.\n\nCheers,\n\nJeff\n\nOn Mon, Dec 2, 2019 at 8:29 AM Aufar Gilbran <[email protected]> wrote:Hello,\n\nI'm trying to figure out how to optimise 3-table (many-to-many relation) joins\nwith predicate, limit, and ordering, where one of the tables returns at most one\nrow.\n\nThis is the query that I have right now:\n\nSELECT entity.id\nFROM (\n SELECT entity_tag.entity_id\n FROM tag\n JOIN entity_tag ON tag.id = entity_tag.tag_id\n WHERE tag.key = 'status'\n AND tag.value = 'SUCCEEDED'\n) matched\nJOIN entity ON matched.entity_id = entity.id\nWHERE entity.type = 'execution'\nORDER BY entity.id DESC\nLIMIT 10;What happens if you set enable_sort to off before running it? \n -> Nested Loop (cost=1.28..723.38 rows=1 width=4) (actual\ntime=0.153..5590.717 rows=89222 loops=1)It thinks it will find 1 row, and actually finds 89,222. I don't know exactly why that would be, I suppose tag_id has an extremely skewed distribution. But yeah, that is going to cause some problems. For one thing, if there was actually just one qualifying row, then it wouldn't get to stop early, as the LIMIT would never be satisfied. So it thinks that if it choose to walk the index backwards, it would have to walk the **entire** index. -> Index Only Scan using entity_tag_tag_id_entity_id_idx on public.entity_tag (cost=0.43..711.53 rows=201 width=16) (actual time=0.035..756.829 rows=89222 loops=1) Heap Fetches: 89222You should vacuum this table. Doing that (and only that) probably won't make a great deal of difference to this particular query, but still, it will help some. And might help other ones you haven't noticed yet as well. \nBoth tag_key_value_key and entity_tag_tag_id_entity_id_idx is a UNIQUE\nconstraint on tag(key,value) and entity_tag(tag_id, entity_id) respectively.\n\nIt seems to me that PostgreSQL runs the nested loop against all of the 90K\nrecords because it wants to sort the result before limiting the result. It doesn't **know** there are going to be 90000 records. It cannot plan queries based on knowledge it doesn't possess. It\ndoesn't take into account of the UNIQUE constraint imposed on the table and\nthinks that the join being done inside the subquery will change the ordering of\nentity_id returned by the subquery, thus prompting the sort.This seems like rather adventurous speculation. It does the sort because the horrible estimation makes it think it will be faster that way, not because it thinks it is the only possible way. Of you set enable_sort = off and it still does a sort, then you know it thinks there is no other way. \n\nI believe with how the index sorted, it should be able to just scan the index\nbackwards because at most only one tag_id will be returned. 
When I tried\nchanging the predicate here to filter by ID with the following query:\n\n-- This runs very fast\nSELECT entity.id\nFROM (\n SELECT entity_tag.entity_id\n FROM tag\n JOIN entity_tag ON tag.id = entity_tag.tag_id\n WHERE tag.id = 24\n) matched\nJOIN entity ON matched.entity_id = entity.id\nWHERE entity.type = 'execution'\nORDER BY entity.id DESC\nLIMIT 10;With this query, it can use the join condition to transfer the knowledge of tag.id=24 to become entity_tag.tag_id=24, and then look up stats on entity_tag.tag_id for the value 24. When you specify the single row of tag indirectly, it can't do that as it doesn't know what specific value of tag.id is going to be the one it finds (until after the query is done being planned and starts executing, at which point it is too late). But the row with id=24 doesn't seem to be the same one with \"tag.key = 'status' AND tag.value = 'SUCCEEDED'\", so you have basically changed the query entirely on us. If you replanned this query with ORDER BY entity.id+0 DESC, (and with the true value of tag_id) that might give you some more insight into the hidden \"thought process\" behind the planner.Cheers,Jeff",
"msg_date": "Mon, 2 Dec 2019 19:38:52 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [External] Join queries slow with predicate, limit, and ordering"
},
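Collected as a sketch, the experiments suggested above can be run in one test session; the query is the original one from the first message, and the +0 variant just swaps the ORDER BY expression:

VACUUM (ANALYZE) entity_tag;   -- clears the 89k heap fetches seen in the index-only scan
SET enable_sort = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT entity.id
FROM (
    SELECT entity_tag.entity_id
    FROM tag
    JOIN entity_tag ON tag.id = entity_tag.tag_id
    WHERE tag.key = 'status'
      AND tag.value = 'SUCCEEDED'
) matched
JOIN entity ON matched.entity_id = entity.id
WHERE entity.type = 'execution'
ORDER BY entity.id DESC        -- then retry with ORDER BY entity.id + 0 DESC
LIMIT 10;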
{
"msg_contents": "Thanks for the answer!\n\nOn Tue, Dec 3, 2019 at 8:39 AM Jeff Janes <[email protected]> wrote:\n> What happens if you set enable_sort to off before running it?\n\nTurning enable_sort to off makes the first query to not sort[1]. It\ndoes run much slower though compared to the original query[2]. This\ntime I do VACUUM ANALYZE first so even the slow query is much faster,\nbut still much slower than the fast query[3].\n\n> It thinks it will find 1 row, and actually finds 89,222. I don't know exactly why that would be, I suppose tag_id has an extremely skewed distribution. But yeah, that is going to cause some problems. For one thing, if there was actually just one qualifying row, then it wouldn't get to stop early, as the LIMIT would never be satisfied. So it thinks that if it choose to walk the index backwards, it would have to walk the **entire** index.\n\nI'm not really sure what skewed distribution is. If by skewed you mean\nthat for a particular tag_id there are many entity and other tag_id\nthere might be low amount entity then yes, this particular key value\ncovers 80% of the entity. For this kind of dataset, is there any way\nthat I can do to improve it or is it just impossible?\n\n> With this query, it can use the join condition to transfer the knowledge of tag.id=24 to become entity_tag.tag_id=24, and then look up stats on entity_tag.tag_id for the value 24. When you specify the single row of tag indirectly, it can't do that as it doesn't know what specific value of tag.id is going to be the one it finds (until after the query is done being planned and starts executing, at which point it is too late). But the row with id=24 doesn't seem to be the same one with \"tag.key = 'status' AND tag.value = 'SUCCEEDED'\", so you have basically changed the query entirely on us.\n\nApologies, I used the query for database on another environment\npreviously. The correct one uses tag_id=18 [3]. So it becomes like\nthis:\n\nSELECT entity.id\nFROM (\n SELECT entity_tag.entity_id\n FROM tag\n JOIN entity_tag ON tag.id = entity_tag.tag_id\n WHERE tag.id = 18\n) matched\nJOIN entity ON matched.entity_id = entity.id\nWHERE entity.type = 'execution'\nORDER BY entity.id DESC\nLIMIT 10;\n\nIt's still very fast and the query plan looks similar to me.\n\n> If you replanned this query with ORDER BY entity.id+0 DESC, (and with the true value of tag_id) that might give you some more insight into the hidden \"thought process\" behind the planner.\n\nI tried this on the fast query and it becomes very slow [4]. I guess\nbecause it cannot consult the index for the ordering anymore so it\ncan't do LIMIT? I'm not so sure.\n\n[1] https://explain.depesz.com/s/aEmR\n[2] https://explain.depesz.com/s/kmNY\n[3] https://explain.depesz.com/s/pD5v\n[4] https://explain.depesz.com/s/4s7Q\n\n--\nBest regards,\n\nAufar Gilbran\n\n-- \n*_Grab is hiring. Learn more at _**https://grab.careers \n<https://grab.careers/>*\n\n\nBy communicating with Grab Inc and/or its \nsubsidiaries, associate companies and jointly controlled entities (“Grab \nGroup”), you are deemed to have consented to the processing of your \npersonal data as set out in the Privacy Notice which can be viewed at \nhttps://grab.com/privacy/ <https://grab.com/privacy/>\n\n\nThis email contains \nconfidential information and is only for the intended recipient(s). 
If you \nare not the intended recipient(s), please do not disseminate, distribute or \ncopy this email Please notify Grab Group immediately if you have received \nthis by mistake and delete this email from your system. Email transmission \ncannot be guaranteed to be secure or error-free as any information therein \ncould be intercepted, corrupted, lost, destroyed, delayed or incomplete, or \ncontain viruses. Grab Group do not accept liability for any errors or \nomissions in the contents of this email arises as a result of email \ntransmission. All intellectual property rights in this email and \nattachments therein shall remain vested in Grab Group, unless otherwise \nprovided by law.\n\n\n\n",
"msg_date": "Tue, 3 Dec 2019 15:50:43 +0800",
"msg_from": "Aufar Gilbran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [External] Join queries slow with predicate, limit, and ordering"
}
] |
[
{
"msg_contents": "Hi,\n\nI am looking for tuning my PG setup such that recently inserted or updated\nrecord will be available in the buffer/cache (I believe they are same in\nthis context). Does PostgreSQL do it by default? If yes, just increasing\nbuffer size sufficient? What will be its effect on LRU performance -- I\nguess there won't be any adverse effect?\n\nMy use case is that I am going to use it as a queue and performance will be\ndependent upon whether the recently updated record is available in the\ncache.\n\nThank you.\n\nregards\nSachin\n\nHi, I am looking for tuning my PG setup such that recently inserted or updated record will be available in the buffer/cache (I believe they are same in this context). Does PostgreSQL do it by default? If yes, just increasing buffer size sufficient? What will be its effect on LRU performance -- I guess there won't be any adverse effect?My use case is that I am going to use it as a queue and performance will be dependent upon whether the recently updated record is available in the cache.Thank you.regardsSachin",
"msg_date": "Mon, 2 Dec 2019 22:33:03 +0530",
"msg_from": "Sachin Divekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make recently inserted/updated records available in the buffer/cache"
},
{
"msg_contents": "Hi,\n\nPostgreSQL decides which pages should be evicted from memory and written to\ndisk with the help of LRU algorithm. Thus, it depends on your query work\nload. In OLTP systems, the algorithm is beneficial to business\nrequirements(almost :) )\n\nIt's hard to figure out that a configuration change will affect the\nperformance in a good way. Maybe, you can use PostgreSQL warmup features in\norder to make sure the data pages that you need will be available in cache.\n\nBecause the results of LRU algorithm can vary depending on your business\nand system workload.\n\nBest Regards.\n\n\nSachin Divekar <[email protected]>, 2 Ara 2019 Pzt, 20:03 tarihinde şunu\nyazdı:\n\n> Hi,\n>\n> I am looking for tuning my PG setup such that recently inserted or updated\n> record will be available in the buffer/cache (I believe they are same in\n> this context). Does PostgreSQL do it by default? If yes, just increasing\n> buffer size sufficient? What will be its effect on LRU performance -- I\n> guess there won't be any adverse effect?\n>\n> My use case is that I am going to use it as a queue and performance will\n> be dependent upon whether the recently updated record is available in the\n> cache.\n>\n> Thank you.\n>\n> regards\n> Sachin\n>\n\n\n-- \n\n*Hüseyin DEMİR*\n\nIT SOLUTION ARCHITECT\n\n0534-614-72-06\[email protected]\n\nselfarrival.blogspot.com.tr\n\nHi, PostgreSQL decides which pages should be evicted from memory and written to disk with the help of LRU algorithm. Thus, it depends on your query work load. In OLTP systems, the algorithm is beneficial to business requirements(almost :) )It's hard to figure out that a configuration change will affect the performance in a good way. Maybe, you can use PostgreSQL warmup features in order to make sure the data pages that you need will be available in cache.Because the results of LRU algorithm can vary depending on your business and system workload. Best Regards.Sachin Divekar <[email protected]>, 2 Ara 2019 Pzt, 20:03 tarihinde şunu yazdı:Hi, I am looking for tuning my PG setup such that recently inserted or updated record will be available in the buffer/cache (I believe they are same in this context). Does PostgreSQL do it by default? If yes, just increasing buffer size sufficient? What will be its effect on LRU performance -- I guess there won't be any adverse effect?My use case is that I am going to use it as a queue and performance will be dependent upon whether the recently updated record is available in the cache.Thank you.regardsSachin\n-- \n\n\nHüseyin DEMİR\n\n\n\n\nIT SOLUTION ARCHITECT\n\n\n\n0534-614-72-06demirhuseyinn.94@gmail.comselfarrival.blogspot.com.tr",
"msg_date": "Mon, 2 Dec 2019 20:13:09 +0300",
"msg_from": "=?UTF-8?Q?H=C3=BCseyin_Demir?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make recently inserted/updated records available in the\n buffer/cache"
},
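The "warmup features" mentioned here presumably mean the pg_prewarm extension (an assumption, the message does not name it). A minimal sketch with a hypothetical table and index name:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('job_queue');        -- pull the table's pages into shared_buffers
SELECT pg_prewarm('job_queue_pkey');   -- and the index that lookups will hit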
{
"msg_contents": "All updated/dirty records go through PG internal memory buffer, \nshared_buffers. Make sure that is configured optimally. Use \npg_buffercache extension to set it correctly.\n\nRegards,\nMichael Vitale\n\nHüseyin Demir wrote on 12/2/2019 12:13 PM:\n> I guess there won't be any adverse effect\n\n\n\n",
"msg_date": "Mon, 2 Dec 2019 12:17:35 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make recently inserted/updated records available in the\n buffer/cache"
},
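A sketch of what using pg_buffercache to size shared_buffers can look like in practice, close to the example query in the extension's documentation; it shows which relations currently occupy shared_buffers:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;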
{
"msg_contents": "\"I am going to use it as a queue\"\n\nYou may want to look at lowering fillfactor if this queue is going to have\nfrequent updates, and also make autovacuum/analyze much more aggressive\nassuming many updates and deletes.\n\n\"I am going to use it as a queue\"You may want to look at lowering fillfactor if this queue is going to have frequent updates, and also make autovacuum/analyze much more aggressive assuming many updates and deletes.",
"msg_date": "Tue, 3 Dec 2019 10:29:18 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make recently inserted/updated records available in the\n buffer/cache"
},
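Both suggestions can be applied per table, roughly like this; the table name and the numbers are only placeholders that would need tuning, and a lower fillfactor only affects pages written after the change:

ALTER TABLE job_queue SET (
    fillfactor = 70,                        -- leave free space for HOT updates
    autovacuum_vacuum_scale_factor = 0.01,  -- vacuum after roughly 1% of rows are dead
    autovacuum_analyze_scale_factor = 0.01
);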
{
"msg_contents": "Yep, I concur completely! For tables treated like queues you gotta do \nthis stuff or deal with bloat and fragmented indexes.\n\nMichael Lewis wrote on 12/3/2019 12:29 PM:\n> \"I am going to use it as a queue\"\n>\n> You may want to look at lowering fillfactor if this queue is going to \n> have frequent updates, and also make autovacuum/analyze much more \n> aggressive assuming many updates and deletes.\n\n\n\n",
"msg_date": "Tue, 3 Dec 2019 12:32:48 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make recently inserted/updated records available in the\n buffer/cache"
},
{
"msg_contents": "Thank you, Michaels.\n.\nI didn't know about fillfactor and table bloat. Did some reading on those\ntopics. We will definitely need to tweak these settings.\n\nI am also going to use SKIP LOCKED to _select for update_. Any suggestions\non tuning parameters for SKIP LOCKED?\n\nThanks\n\nOn Tue, Dec 3, 2019 at 11:02 PM MichaelDBA <[email protected]> wrote:\n\n> Yep, I concur completely! For tables treated like queues you gotta do\n> this stuff or deal with bloat and fragmented indexes.\n>\n> Michael Lewis wrote on 12/3/2019 12:29 PM:\n> > \"I am going to use it as a queue\"\n> >\n> > You may want to look at lowering fillfactor if this queue is going to\n> > have frequent updates, and also make autovacuum/analyze much more\n> > aggressive assuming many updates and deletes.\n>\n>\n\nThank you, Michaels..I didn't know about fillfactor and table bloat. Did some reading on those topics. We will definitely need to tweak these settings. I am also going to use SKIP LOCKED to _select for update_. Any suggestions on tuning parameters for SKIP LOCKED?ThanksOn Tue, Dec 3, 2019 at 11:02 PM MichaelDBA <[email protected]> wrote:Yep, I concur completely! For tables treated like queues you gotta do \nthis stuff or deal with bloat and fragmented indexes.\n\nMichael Lewis wrote on 12/3/2019 12:29 PM:\n> \"I am going to use it as a queue\"\n>\n> You may want to look at lowering fillfactor if this queue is going to \n> have frequent updates, and also make autovacuum/analyze much more \n> aggressive assuming many updates and deletes.",
"msg_date": "Wed, 4 Dec 2019 00:15:53 +0530",
"msg_from": "Sachin Divekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make recently inserted/updated records available in the\n buffer/cache"
},
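As noted in the next reply, SKIP LOCKED itself has no tuning parameters; for reference, the usual consumer pattern looks roughly like this (table and column names are hypothetical):

WITH next_jobs AS (
    SELECT id
    FROM job_queue
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 10
    FOR UPDATE SKIP LOCKED
)
UPDATE job_queue q
SET status = 'processing'
FROM next_jobs
WHERE q.id = next_jobs.id
RETURNING q.id, q.payload;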
{
"msg_contents": "On Tue, Dec 3, 2019 at 11:46 AM Sachin Divekar <[email protected]> wrote:\n\n> I am also going to use SKIP LOCKED to _select for update_. Any suggestions\n> on tuning parameters for SKIP LOCKED?\n>\n\nI am not aware of any. Either you use it because it fits your need, or not.\n\nNote- please don't top-post (reply and include all the previous\nconversation below) on the Postgres mailing lists. Quote only the part(s)\nyou are responding to and reply there.\n\nOn Tue, Dec 3, 2019 at 11:46 AM Sachin Divekar <[email protected]> wrote:I am also going to use SKIP LOCKED to _select for update_. Any suggestions on tuning parameters for SKIP LOCKED?I am not aware of any. Either you use it because it fits your need, or not.Note- please don't top-post (reply and include all the previous conversation below) on the Postgres mailing lists. Quote only the part(s) you are responding to and reply there.",
"msg_date": "Tue, 3 Dec 2019 11:58:02 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make recently inserted/updated records available in the\n buffer/cache"
}
] |
[
{
"msg_contents": "Hi\n\nI have a function that prepares data, so the big job can be run it in parallel.\n\nToday I have solved this by using \"Gnu parallel\" like this.\npsql testdb -c\"\\! psql -t -q -o /tmp/run_cmd.sql testdb -c\\\"SELECT find_overlap_gap_make_run_cmd('sl_lop.overlap_gap_input_t1','geom',4258,'sl_lop.overlap_gap_input_t1_res',50);\\\"; parallel -j 4 psql testdb -c :::: /tmp/run_cmd.sql\" 2>> /tmp/analyze.log;\n\nThe problem here is that I depend on external code which may not be installed.\n\nSince Postgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n\nIf you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql you see that\n\nfind_overlap_gap_make_run_cmd generates as set of 28 sql calls.\n\n\nSo is it in a simple way possible to use Postgres parallel functionality to call this 28 functions i parallel so I don't have dependent on externally install programs ?\n\n\nWhen this 28 sql calls are done, the find_overlap_gap_make_run_cmd may continue to the next step of work. So the function that triggers parallel calls wait for them complete and then may start on the next step of work.\n\n\nThanks .\n\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi\n\n\n\n\nI have a function that prepares data, so the big job can be run it in parallel. \n\n\n\n\nToday I have solved this by using \"Gnu parallel\" like this.\n\npsql\ntestdb\n -c\"\\! psql\n -t -q -o /tmp/run_cmd.sql\ntestdb\n -c\\\"SELECT find_overlap_gap_make_run_cmd('sl_lop.overlap_gap_input_t1','geom',4258,'sl_lop.overlap_gap_input_t1_res',50);\\\";\n parallel -j 4 \npsql\ntestdb\n -c :::: /tmp/run_cmd.sql\"\n 2>> /tmp/analyze.log;\n\n\n\n\n\nThe problem here is that I depend on external code which may not be installed. \n\n\n\n\n\nSince Postgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n\n\n\nIf you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql \n you see that \n\nfind_overlap_gap_make_run_cmd generates as set of 28 sql calls. \n\n\n\nSo is it in a simple way possible to use Postgres parallel functionality to call this 28 functions i parallel so I don't have dependent\n on externally install programs ?\n\n\n\n\nWhen this 28 sql calls are done, the find_overlap_gap_make_run_cmd may continue\n to the next step of work. So the function that triggers parallel calls wait for them complete and then may start on the next step of work.\n\n\n\n\nThanks .\n\n\n\n\nLars",
"msg_date": "Thu, 5 Dec 2019 12:10:42 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to run in parallel in Postgres"
},
{
"msg_contents": "On Thu, 2019-12-05 at 12:10 +0000, Lars Aksel Opsahl wrote:\n> have a function that prepares data, so the big job can be run it in parallel. \n> \n> Today I have solved this by using \"Gnu parallel\" like this.\n> psql testdb -c\"\\! psql -t -q -o /tmp/run_cmd.sql testdb -c\\\"SELECT find_overlap_gap_make_run_cmd('sl_lop.overlap_gap_input_t1','geom',4258,'sl_lop.overlap_gap_input_t1_res',50);\\\"; parallel -j 4 \n> psql testdb -c :::: /tmp/run_cmd.sql\" 2>> /tmp/analyze.log;\n> \n> The problem here is that I depend on external code which may not be installed. \n> \n> Since Postgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n> \n> If you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql you see that\n> find_overlap_gap_make_run_cmd generates as set of 28 sql calls. \n> \n> So is it in a simple way possible to use Postgres parallel functionality to call this 28 functions i parallel so I don't have dependent on externally install programs ?\n> \n> When this 28 sql calls are done, the find_overlap_gap_make_run_cmd may continue to the next step of work. So the function that triggers parallel calls wait for them complete and then may start on\n> the next step of work.\n\nYou cannot run several queries in parallel in a PostgreSQL function.\n\nYou may want to have a look at PL/Proxy which might be used for things like that.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 05 Dec 2019 17:42:06 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to run in parallel in Postgres"
},
{
"msg_contents": ">From: Laurenz Albe <[email protected]>\n\n>Sent: Thursday, December 5, 2019 5:42 PM\n\n>To: Lars Aksel Opsahl <[email protected]>; [email protected] <[email protected]>\n\n>Subject: Re: How to run in parallel in Postgres\n\n>\n\n>On Thu, 2019-12-05 at 12:10 +0000, Lars Aksel Opsahl wrote:\n\n>> have a function that prepares data, so the big job can be run it in parallel.\n\n>>\n\n>> Today I have solved this by using \"Gnu parallel\" like this.\n\n>> psql testdb -c\"\\! psql -t -q -o /tmp/run_cmd.sql testdb -c\\\"SELECT find_overlap_gap_make_run_cmd('sl_lop.overlap_gap_input_t1','geom',4258,'sl_lop.overlap_gap_input_t1_res',50);\\\"; parallel -j 4\n\n>> psql testdb -c :::: /tmp/run_cmd.sql\" 2>> /tmp/analyze.log;\n\n>>\n\n>> The problem here is that I depend on external code which may not be installed.\n\n>>\n\n>> Since Postgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n\n>>\n\n>> If you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql you see that\n\n>> find_overlap_gap_make_run_cmd generates as set of 28 sql calls.\n\n>>\n\n>> So is it in a simple way possible to use Postgres parallel functionality to call this 28 functions i parallel so I don't have dependent on externally install programs ?\n\n>>\n\n>> When this 28 sql calls are done, the find_overlap_gap_make_run_cmd may continue to the next step of work. So the function that triggers parallel calls wait for them complete and then may start on\n\n>> the next step of work.\n\n>\n\n>You cannot run several queries in parallel in a PostgreSQL function.\n\n>\n\n>You may want to have a look at PL/Proxy which might be used for things like that.\n\n>\n\n>Yours,\n\n>Laurenz Albe\n\n>--\n\n>Cybertec | https://www.cybertec-postgresql.com\n\n\nHi\n\n\nThanks, I checked it out.\n\n\nIf I understand it correct I have to write the code using plproxy syntax and this means if plproxy is not installed the code will fail.\n\n\nSo the only way now to use built in parallel functionality in Postgres is to use C ?\n\n\nDo you believe it will possible in the future to run parallel calls from a PostgresSQL function (or is impossible/difficult because of design) ?\n\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n\n\n>From: \nLaurenz Albe <[email protected]>\n>Sent: Thursday, December 5, 2019 5:42 PM\n>To: \nLars Aksel \nOpsahl <[email protected]>;\[email protected] <[email protected]>\n>Subject: Re: How to run in parallel in\nPostgres\n> \n>On \nThu, 2019-12-05 at 12:10 +0000, Lars\nAksel \nOpsahl wrote:\n>> have a function that prepares data, so the big job can be run it in parallel. \n>> \n>> Today I have solved this by using \"Gnu parallel\" like this.\n>> \npsql testdb -c\"\\! \npsql -t -q -o /tmp/run_cmd.sql\ntestdb -c\\\"SELECT find_overlap_gap_make_run_cmd('sl_lop.overlap_gap_input_t1','geom',4258,'sl_lop.overlap_gap_input_t1_res',50);\\\"; parallel -j 4 \n>> \npsql testdb -c :::: /tmp/run_cmd.sql\" 2>> /tmp/analyze.log;\n>> \n>> The problem here is that I depend on external code which may not be installed. \n>> \n>> Since \nPostgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n>> \n>> If you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql \nyou see that\n>> find_overlap_gap_make_run_cmd generates as set of 28\nsql calls. 
\n>> \n>> So is it in a simple way possible to use\nPostgres parallel functionality to call this 28 functions i parallel so I don't have dependent on externally install programs \n?\n>> \n>> When this 28 \nsql calls are done, the find_overlap_gap_make_run_cmd may continue to the next step of work. So the function that triggers parallel calls wait for them complete and then may start on\n>> the next step of work.\n>\n>You cannot run several queries in parallel in a PostgreSQL function.\n>\n>You may want to have a look at PL/Proxy which might be used for things like that.\n>\n>Yours,\n>Laurenz\nAlbe\n>-- \n>Cybertec | https://www.cybertec-postgresql.com\n\n\n\nHi\n\n\n\nThanks, I checked it out.\n\n\n\nIf I understand it correct I have to write the code using\nplproxy syntax and this means if \nplproxy is not installed the code will fail.\n\n\n\nSo the only way now to use built in parallel functionality in\nPostgres is to use C ?\n\n\n\nDo you believe it will possible in the future to run parallel calls from a PostgresSQL function (or is impossible/difficult because of design) ?\n\n\n\nLars",
"msg_date": "Fri, 6 Dec 2019 08:39:55 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to run in parallel in Postgres"
},
{
"msg_contents": "Hi Lars,\n\nI have two suggestions:\n\n- `xargs` almost always present and it can run in parallel (-P) but script\nneeds to be changed:\nfor((i=1;i<=28;i++)); do echo \"SELECT\nfind_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',${I},28);\";\ndone | xargs -n1 -P 10 psql ...\n\n- `UNION ALL` might trigger parallel execution (you need to mess with the\ncost of the function and perhaps other settings):\nSELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',\n4258,'test_data.overlap_gap_input_t1_res',1,28) UNION ALL\nSELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',\n4258,'test_data.overlap_gap_input_t1_res',2,28)\n...\n\nCheers,\n\nOn Thu, 5 Dec 2019 at 23:11, Lars Aksel Opsahl <[email protected]> wrote:\n\n> Hi\n>\n> I have a function that prepares data, so the big job can be run it in\n> parallel.\n>\n> Today I have solved this by using \"Gnu parallel\" like this.\n> psql testdb -c\"\\! psql -t -q -o /tmp/run_cmd.sql testdb -c\\\"SELECT\n> find_overlap_gap_make_run_cmd('sl_lop.overlap_gap_input_t1','geom',4258,'sl_lop.overlap_gap_input_t1_res',50);\\\";\n> parallel -j 4 psql testdb -c :::: /tmp/run_cmd.sql\" 2>> /tmp/analyze.log;\n>\n> The problem here is that I depend on external code which may not be\n> installed.\n>\n> Since Postgres now supports parallel I was wondering if it's easy to\n> trigger parallel dynamically created SQL calls.\n>\n> If you look at\n> https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql\n> you see that\n>\n> find_overlap_gap_make_run_cmd generates as set of 28 sql calls.\n>\n>\n> So is it in a simple way possible to use Postgres parallel functionality\n> to call this 28 functions i parallel so I don't have dependent\n> on externally install programs ?\n>\n>\n> When this 28 sql calls are done, the find_overlap_gap_make_run_cmd may continue\n> to the next step of work. So the function that triggers parallel calls wait\n> for them complete and then may start on the next step of work.\n>\n>\n> Thanks .\n>\n>\n> Lars\n>\n>\n>\n>\n>\n>\n\n\n-- \nOndrej\n\nHi Lars,I have two suggestions:- `xargs` almost always present and it can run in parallel (-P) but script needs to be changed:for((i=1;i<=28;i++)); do echo \"SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',${I},28);\"; done | xargs -n1 -P 10 psql ... - `UNION ALL` might trigger parallel execution (you need to mess with the cost of the function and perhaps other settings):SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',1,28)\nUNION ALLSELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',2,28)...Cheers,On Thu, 5 Dec 2019 at 23:11, Lars Aksel Opsahl <[email protected]> wrote:\n\n\nHi\n\n\n\n\nI have a function that prepares data, so the big job can be run it in parallel. \n\n\n\n\nToday I have solved this by using \"Gnu parallel\" like this.\n\npsql\ntestdb\n -c\"\\! psql\n -t -q -o /tmp/run_cmd.sql\ntestdb\n -c\\\"SELECT find_overlap_gap_make_run_cmd('sl_lop.overlap_gap_input_t1','geom',4258,'sl_lop.overlap_gap_input_t1_res',50);\\\";\n parallel -j 4 \npsql\ntestdb\n -c :::: /tmp/run_cmd.sql\"\n 2>> /tmp/analyze.log;\n\n\n\n\n\nThe problem here is that I depend on external code which may not be installed. 
\n\n\n\n\n\nSince Postgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n\n\n\nIf you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql \n you see that \n\nfind_overlap_gap_make_run_cmd generates as set of 28 sql calls. \n\n\n\nSo is it in a simple way possible to use Postgres parallel functionality to call this 28 functions i parallel so I don't have dependent\n on externally install programs ?\n\n\n\n\nWhen this 28 sql calls are done, the find_overlap_gap_make_run_cmd may continue\n to the next step of work. So the function that triggers parallel calls wait for them complete and then may start on the next step of work.\n\n\n\n\nThanks .\n\n\n\n\nLars\n\n\n\n\n\n\n \n\n\n\n\n \n\n-- Ondrej",
"msg_date": "Sat, 7 Dec 2019 12:23:15 +1100",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to run in parallel in Postgres"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 12:10:42PM +0000, Lars Aksel Opsahl wrote:\n> I have a function that prepares data, so the big job can be run it in parallel.\n> \n> Since Postgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n> \n> If you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql you see that\n> \n> find_overlap_gap_make_run_cmd generates as set of 28 sql calls.\n>\n> So is it in a simple way possible to use Postgres parallel functionality to call this 28 functions i parallel so I don't have dependent on externally install programs ?\n\nSELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',1,28);\nSELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',2,28);\nSELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',3,28);\n...\n\nI see that find_overlap_gap_single_cell creates tables, so cannot be run in parallel.\nMaybe you could consider rewriting it to return data to its caller instead.\nYou'd also need to mark it as PARALLEL SAFE, of course.\nYour other functions involved should be PARALLEL SAFE too.\n\nJustin\n\n\n",
"msg_date": "Fri, 6 Dec 2019 19:25:21 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to run in parallel in Postgres"
},
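If the function were reworked along the lines Justin describes (returning rows instead of creating tables), the remaining steps from Ondrej's and Justin's advice would look roughly like the sketch below. The argument types are assumed from the calls quoted above, and whether the planner actually produces a parallel plan still depends on the query shape and the PostgreSQL version, so treat this as a starting point rather than a recipe.

-- Assumed signature, matching the calls above: (table, geometry column,
-- SRID, result table, cell number, total cells). Only meaningful once the
-- function no longer writes to tables, per Justin's comment.
ALTER FUNCTION find_overlap_gap_single_cell(text, text, int, text, int, int)
    COST 10000
    PARALLEL SAFE;

-- Make parallel plans look cheap enough for the planner to consider them.
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET max_parallel_workers_per_gather = 4;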
{
"msg_contents": ">From: Ondrej Ivanič <[email protected]>\n\n >Sent: Saturday, December 7, 2019 2:23 AM\n\n >Cc: [email protected] <[email protected]>\n\n >Subject: Re: How to run in parallel in Postgres\n\n >\n\n >Hi Lars,\n\n >\n\n >I have two suggestions:\n\n >\n\n >- `xargs` almost always present and it can run in parallel (-P) but script needs to be changed:\n\n >for((i=1;i<=28;i++)); do echo \"SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',${I},28);\"; done | xargs -n1 -P 10 psql ...\n\n >\n\n >- `UNION ALL` might trigger parallel execution (you need to mess with the cost of the function and perhaps other settings):\n\n >SELECT\n\n > find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',1,28)\n\n >UNION ALL\n\n >SELECT\n\n > find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',2,28)\n\n >...\n\n >\n\n >\n\n >Cheers,\n\n >\n\nHi Ondrej\n\n\n * Yes using xargs seems be an alternative to GNU parallel and I will have that in mind.\n * I did a test using UNION ALL in the branch https://github.com/larsop/find-overlap-and-gap/tree/union_all_parallel but I was not able to trigger Postgres parallel . That may be related to what Justin say about create tables.\n\n\nThanks.\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n >From:\nOndrej \nIvanič <[email protected]>\n >Sent: Saturday, December 7, 2019 2:23 AM\n >Cc:\[email protected] <[email protected]>\n >Subject: Re: How to run in parallel in\nPostgres\n > \n >Hi\nLars,\n >\n >I have two suggestions:\n >\n >- `xargs` almost always present and it can run in parallel (-P) but script needs to be changed:\n >for((i=1;i<=28;i++)); do echo \"SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',${I},28);\";\n done | xargs -n1 -P 10 \npsql ... \n >\n >- `UNION ALL` might trigger parallel execution (you need to mess with the cost of the function and perhaps other settings):\n >SELECT\n > find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',1,28)\n >UNION ALL\n >SELECT\n > find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',2,28)\n >...\n >\n >\n >Cheers,\n >\n\n\n\n\nHi Ondrej\n\n\n\n\n\n\n\nYes using xargs seems be an alternative to GNU parallel and I will have that in mind.I did a test using UNION ALL in the branch\n\nhttps://github.com/larsop/find-overlap-and-gap/tree/union_all_parallel but I was not able to trigger Postgres parallel . That may be related to what Justin say about create tables.\n\n\n\nThanks.\n\n\n\n\n\nLars",
"msg_date": "Sat, 7 Dec 2019 11:17:25 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to run in parallel in Postgres"
},
{
"msg_contents": "> From: Justin Pryzby <[email protected]>\n\n> Sent: Saturday, December 7, 2019 2:25 AM\n\n> To: Lars Aksel Opsahl <[email protected]>\n\n> Cc: [email protected] <[email protected]>\n\n> Subject: Re: How to run in parallel in Postgres\n\n>\n\n> On Thu, Dec 05, 2019 at 12:10:42PM +0000, Lars Aksel Opsahl wrote:\n\n> > I have a function that prepares data, so the big job can be run it in parallel.\n\n> >\n\n> > Since Postgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n\n> >\n\n> > If you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql you see that\n\n> >\n\n> > find_overlap_gap_make_run_cmd generates as set of 28 sql calls.\n\n> >\n\n> > So is it in a simple way possible to use Postgres parallel functionality to call this 28 functions i parallel so I don't have dependent on externally install programs ?\n\n>\n\n> SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',1,28);\n\n> SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',2,28);\n\n> SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',3,28);\n\n> ...\n\n>\n\n> I see that find_overlap_gap_single_cell creates tables, so cannot be run in parallel.\n\n> Maybe you could consider rewriting it to return data to its caller instead.\n\n> You'd also need to mark it as PARALLEL SAFE, of course.\n\n> Your other functions involved should be PARALLEL SAFE too.\n\n>\n\n> Justin\n\nHi Justin\n\nThe reason why I don't return the results Is that on very bug tables I usually get memory problems if I return all the results to the master function. So I usually break thing up into small unlogged tables. Then I work on each table separately or in groups. When all steps are done i merge all the small tables together. 
I this case we only single step, but usually I work many more steps.\n\nBut I will keep mind that it may work i parallel if I don't create any child tables but returns the result.\n\nThanks.\n\nLars\n\n\n\n\n\n\n\n\n\n> From: \nJustin Pryzby <[email protected]>\n> Sent: Saturday, December 7, 2019 2:25 AM\n> To: \nLars Aksel \nOpsahl <[email protected]>\n> \nCc: [email protected] <[email protected]>\n> Subject: Re: How to run in parallel in\nPostgres\n> \n> On \nThu, Dec 05, 2019 at 12:10:42PM +0000,\nLars \nAksel Opsahl wrote:\n> > I have a function that prepares data, so the big job can be run it in parallel.\n> > \n> > Since \nPostgres now supports parallel I was wondering if it's easy to trigger parallel dynamically created SQL calls.\n> > \n> > If you look at https://github.com/larsop/find-overlap-and-gap/blob/master/src/test/sql/regress/find_overlap_and_gap.sql \nyou see that\n> > \n> > find_overlap_gap_make_run_cmd generates as set of 28\nsql calls.\n> >\n> > So is it in a simple way possible to use\nPostgres parallel functionality to call this 28 functions i parallel so I don't have dependent on externally install programs \n?\n> \n> SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',1,28);\n> SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',2,28);\n> SELECT find_overlap_gap_single_cell('test_data.overlap_gap_input_t1','geom',4258,'test_data.overlap_gap_input_t1_res',3,28);\n> ...\n> \n> I see that find_overlap_gap_single_cell creates tables, so cannot be run in parallel.\n> Maybe you could consider rewriting it to return data to its caller instead.\n> You'd also need to mark it as PARALLEL SAFE, of course.\n> Your other functions involved should be PARALLEL SAFE too.\n> \n> \nJustin\n\n\n\n\nHi Justin\n\n\nThe reason why I don't return the results Is that on very bug tables I usually get memory problems if I return all the results to the master function. So I usually break thing up into small unlogged tables. Then I work on each table\n separately or in groups. When all steps are done i merge all the small tables together. I this case we only single step, but usually I work many more steps.\n\n\nBut I will keep mind that it may work i parallel if I don't create any child tables but returns the result.\n\n\nThanks.\n\n\nLars",
"msg_date": "Sat, 7 Dec 2019 11:27:56 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to run in parallel in Postgres"
},
{
"msg_contents": ">You cannot run several queries in parallel in a PostgreSQL function.\n>\n>You may want to have a look at PL/Proxy which might be used for things like that.\n>\n>Yours,\n>Laurenz Albe\n>--\n>Cybertec | https://www.cybertec-postgresql.com\n\nHi Laurenz\n\nThe code below takes 3, seconds\nDO\n\n$body$\n\nDECLARE\n\nBEGIN\n\nEXECUTE 'SELECT pg_sleep(1); SELECT pg_sleep(1); SELECT pg_sleep(1);';\n\nEND\n\n$body$;\n\nDo you or anybody know if there are any plans for a function call that support the calling structure below or something like it and that then could finish in 1 second ? (If you are calling a void function, the return value should not be any problem.)\n\nDO\n\n$body$\n\nDECLARE\n\ncommand_string_list text[3];\n\nBEGIN\n\ncommand_string_list[0] = 'SELECT pg_sleep(1)';\n\ncommand_string_list[1] = 'SELECT pg_sleep(1)';\n\ncommand_string_list[2] = 'SELECT pg_sleep(1)';\n\nEXECUTE_PARALLEL command_string_list;\n\nEND\n\n$body$;\n\nThe only way to this today as I understand it, is to open 3 new connections back to the database which you can be done in different ways.\n\nIf we had a parallel functions like the one above it's easier to make parallel sql without using complex scripts, java, python or other system.\n\nThanks.\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n>You cannot run several queries in parallel in a PostgreSQL function.\n\n\n>\n>You may want to have a look at PL/Proxy which might be used for things like that.\n>\n>Yours,\n>Laurenz Albe\n>-- \n>Cybertec | https://www.cybertec-postgresql.com\n\n\nHi Laurenz\n\n\nThe code below takes 3, seconds \nDO\n\n\n$body$\nDECLARE \nBEGIN\n\nEXECUTE\n'SELECT pg_sleep(1); SELECT pg_sleep(1); SELECT pg_sleep(1);';\nEND\n$body$;\n\n\nDo you or anybody know if there are any plans for a function call that support the calling structure below or something like it and that then could finish in 1 second ?\n (If you\n are calling a void function, the return value should not be any problem.)\n\n\n\nDO\n\n\n$body$\nDECLARE \ncommand_string_list text[3];\nBEGIN\ncommand_string_list[0] =\n'SELECT pg_sleep(1)';\ncommand_string_list[1] =\n'SELECT pg_sleep(1)';\ncommand_string_list[2] =\n'SELECT pg_sleep(1)';\nEXECUTE_PARALLEL command_string_list;\nEND\n$body$;\n\n\n\nThe\n only way to this today as I understand it, is to open 3 new connections back to the database which you can be done in different ways. \n \nIf we had a parallel functions like the one above it's easier\n to make parallel sql without using complex scripts, java, python or other system.\n\n\nThanks.\n\n\n\nLars",
"msg_date": "Sun, 8 Dec 2019 18:14:14 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to run in parallel in Postgres, EXECUTE_PARALLEL "
},
{
"msg_contents": "On 12/8/19 1:14 PM, Lars Aksel Opsahl wrote:\n> Do you or anybody know if there are any plans for a function call that\n> support the calling structure below or something like it and that then\n> could finish in 1 second ? (If you are calling a void function, the\n> return value should not be any problem.)\n> \n> DO\n> $body$\n> *DECLARE* \n> command_string_list text[3];\n> *BEGIN*\n> command_string_list[0] = 'SELECT pg_sleep(1)';\n> command_string_list[1] = 'SELECT pg_sleep(1)';\n> command_string_list[2] = 'SELECT pg_sleep(1)';\n> EXECUTE_PARALLEL command_string_list;\n> *END*\n> $body$;\n> \n> The only way to this today as I understand it, is to open 3 new\n> connections back to the database which you can be done in different ways. \n\nYes, correct.\n\n> If we had a parallel functions like the one above it's easier to\n> make parallel sql without using complex scripts, java, python or other\n> system.\n\nIt does require one connection per statement, but with dblink it is not\nnecessarily all that complex. For example (granted, this could use more\nerror checking, etc.):\n\n8<----------------\nCREATE OR REPLACE FUNCTION\n execute_parallel(stmts text[])\nRETURNS text AS\n$$\ndeclare\n i int;\n retv text;\n conn text;\n connstr text;\n rv int;\n db text := current_database();\nbegin\n for i in 1..array_length(stmts,1) loop\n conn := 'conn' || i::text;\n connstr := 'dbname=' || db;\n perform dblink_connect(conn, connstr);\n rv := dblink_send_query(conn, stmts[i]);\n end loop;\n for i in 1..array_length(stmts,1) loop\n conn := 'conn' || i::text;\n select val into retv\n from dblink_get_result(conn) as d(val text);\n end loop;\n for i in 1..array_length(stmts,1) loop\n conn := 'conn' || i::text;\n perform dblink_disconnect(conn);\n end loop;\n return 'OK';\n end;\n$$ language plpgsql;\n8<----------------\n\nAnd then:\n\n8<----------------\n\\timing\nDO $$\n declare\n stmts text[];\n begin\n stmts[1] = 'select pg_sleep(1)';\n stmts[2] = 'select pg_sleep(1)';\n stmts[3] = 'select pg_sleep(1)';\n PERFORM execute_parallel(stmts);\n end;\n$$ LANGUAGE plpgsql;\nDO\nTime: 1010.831 ms (00:01.011)\n8<----------------\n\nHTH,\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Sun, 8 Dec 2019 15:04:05 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to run in parallel in Postgres, EXECUTE_PARALLEL"
}
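Joe notes that his example "could use more error checking"; a possible variant along the same lines (same dblink calls, still assuming each statement returns a single text-compatible column) checks the return code of dblink_send_query() and makes a best-effort attempt to disconnect if anything fails. This is only a sketch, not a drop-in replacement.

-- Hedged variant of the execute_parallel() sketch above: same dblink calls,
-- plus basic error handling so connections are not leaked on failure.
CREATE OR REPLACE FUNCTION execute_parallel_checked(stmts text[])
RETURNS text AS
$$
DECLARE
    i int;
    conn text;
    connstr text := 'dbname=' || current_database();
BEGIN
    FOR i IN 1..array_length(stmts, 1) LOOP
        conn := 'conn' || i::text;
        PERFORM dblink_connect(conn, connstr);
        -- dblink_send_query() returns 1 on success, 0 on failure
        IF dblink_send_query(conn, stmts[i]) <> 1 THEN
            RAISE EXCEPTION 'could not send "%": %',
                stmts[i], dblink_error_message(conn);
        END IF;
    END LOOP;

    FOR i IN 1..array_length(stmts, 1) LOOP
        conn := 'conn' || i::text;
        -- blocks until the statement on this connection has finished;
        -- an error in the remote statement is raised here
        PERFORM * FROM dblink_get_result(conn) AS t(res text);
        PERFORM dblink_disconnect(conn);
    END LOOP;

    RETURN 'OK';
EXCEPTION WHEN OTHERS THEN
    -- best-effort cleanup of whatever connections are still open
    FOR i IN 1..array_length(stmts, 1) LOOP
        BEGIN
            PERFORM dblink_disconnect('conn' || i::text);
        EXCEPTION WHEN OTHERS THEN
            NULL;  -- already closed or never opened
        END;
    END LOOP;
    RAISE;
END;
$$ LANGUAGE plpgsql;

It would be called the same way as execute_parallel() in the DO block above.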
] |
[
{
"msg_contents": "Hi,\nI am investigating a performance problem in our application and am seeing something unexpected in the postgres logs regarding the autovacuum.\n\n2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976\",5ddbd990.1b36,17099,\"INSERT waiting\",2019-11-25 13:39:28 UTC,12/1884256,12615023,LOG,00000,\"process 6966 still waiting for RowExclusiveLock on relation 32938 of database 32768 after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue: 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING process.mytable.mytable_id\",13,,\"\"\n2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24 UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\n\nMy understanding from reading the documentation was that a vacuum can run concurrently with table inserts/updates, but from reading the logs it appears they are conflicting over a row lock. This particular table gets very frequent inserts/updates (10-100 inserts / sec) so I am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum to be canceled by an insert in this way?\n\nWe are using postgres 9.6.10.\n\nThanks,\nMike\n\n________________________________\n\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively \"K&S\") shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\n\n\n\n\n\n\n\n\n\nHi,\nI am investigating a performance problem in our application and am seeing something unexpected in the postgres logs regarding the autovacuum.\n \n2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976\",5ddbd990.1b36,17099,\"INSERT waiting\",2019-11-25 13:39:28 UTC,12/1884256,12615023,LOG,00000,\"process 6966 still waiting for RowExclusiveLock on relation 32938 of database\n 32768 after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue: 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING process.mytable.mytable_id\",13,,\"\"\n2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24 UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\n \nMy understanding from reading the documentation was that a vacuum can run concurrently with table inserts/updates, but from reading the logs it appears they are conflicting over a row lock. This particular table gets very frequent inserts/updates\n (10-100 inserts / sec) so I am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum to be canceled by an insert in this way?\n \nWe are using postgres 9.6.10.\n \nThanks,\nMike\n\n\n\n\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. 
nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other\n than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction\n or use of all or any part of this email without the express written consent of K&S is prohibited.",
"msg_date": "Thu, 5 Dec 2019 17:46:19 +0000",
"msg_from": "Mike Schanne <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum locking question"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 3:26 PM Mike Schanne <[email protected]> wrote:\n\n> I am concerned that if the autovacuum is constantly canceled, then the\n> table never gets cleaned and its performance will continue to degrade over\n> time. Is it expected for the vacuum to be canceled by an insert in this\n> way?\n>\n>\n>\n> We are using postgres 9.6.10.\n>\n\nHave you checked when the table was last autovacuumed in\npg_stat_user_tables? If the autovacuum count is high and timestamp of last\nrun is relatively current, then no reason for concern as far as I can\nfigure.\n\nHave you already configured (non-default values) for autovacuum options for\nyour system or this table?\n\nOn Thu, Dec 5, 2019 at 3:26 PM Mike Schanne <[email protected]> wrote:\n\n\nI am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum to be canceled by an insert in this way?\n \nWe are using postgres 9.6.10.Have you checked when the table was last autovacuumed in pg_stat_user_tables? If the autovacuum count is high and timestamp of last run is relatively current, then no reason for concern as far as I can figure.Have you already configured (non-default values) for autovacuum options for your system or this table?",
"msg_date": "Thu, 5 Dec 2019 15:49:21 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
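The check Michael suggests can be run directly against pg_stat_user_tables; "myschema"/"mytable" below stand in for the real names from the log excerpts.

-- Vacuum bookkeeping for the table in question (placeholder names).
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       vacuum_count,
       autovacuum_count
FROM pg_stat_user_tables
WHERE schemaname = 'myschema'
  AND relname = 'mytable';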
{
"msg_contents": "In this particular case autovacuum_count is 0. n_live_tup is 659,631 and n_dead_tup is 3,400,347.\r\n\r\nWe are using the default vacuum parameters.\r\n\r\nFrom: Michael Lewis [mailto:[email protected]]\r\nSent: Thursday, December 05, 2019 5:49 PM\r\nTo: Mike Schanne\r\nCc: [email protected]\r\nSubject: Re: autovacuum locking question\r\n\r\nOn Thu, Dec 5, 2019 at 3:26 PM Mike Schanne <[email protected]<mailto:[email protected]>> wrote:\r\nI am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum to be canceled by an insert in this way?\r\n\r\nWe are using postgres 9.6.10.\r\n\r\nHave you checked when the table was last autovacuumed in pg_stat_user_tables? If the autovacuum count is high and timestamp of last run is relatively current, then no reason for concern as far as I can figure.\r\n\r\nHave you already configured (non-default values) for autovacuum options for your system or this table?\r\n\r\n________________________________\r\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\r\n\n\n\n\n\n\n\n\n\nIn this particular case autovacuum_count is 0. n_live_tup is 659,631 and n_dead_tup is 3,400,347.\n \nWe are using the default vacuum parameters.\n \nFrom: Michael Lewis [mailto:[email protected]]\r\n\nSent: Thursday, December 05, 2019 5:49 PM\nTo: Mike Schanne\nCc: [email protected]\nSubject: Re: autovacuum locking question\n \n\n\nOn Thu, Dec 5, 2019 at 3:26 PM Mike Schanne <[email protected]> wrote:\n\n\n\n\n\nI am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum\r\n to be canceled by an insert in this way?\n \nWe are using postgres 9.6.10.\n\n\n\n\n \n\n\nHave you checked when the table was last autovacuumed in pg_stat_user_tables? If the autovacuum count is high and timestamp of last run is relatively current, then no reason for concern as far as I can figure.\n\n\n \n\n\nHave you already configured (non-default values) for autovacuum options for your system or this table?\n\n\n\n\n\n\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other\r\n than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction\r\n or use of all or any part of this email without the express written consent of K&S is prohibited.",
"msg_date": "Thu, 5 Dec 2019 23:18:12 +0000",
"msg_from": "Mike Schanne <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: autovacuum locking question"
},
{
"msg_contents": "Mike Schanne <[email protected]> writes:\n> I am investigating a performance problem in our application and am seeing something unexpected in the postgres logs regarding the autovacuum.\n\n> 2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976\",5ddbd990.1b36,17099,\"INSERT waiting\",2019-11-25 13:39:28 UTC,12/1884256,12615023,LOG,00000,\"process 6966 still waiting for RowExclusiveLock on relation 32938 of database 32768 after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue: 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING process.mytable.mytable_id\",13,,\"\"\n> 2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24 UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\n\n> My understanding from reading the documentation was that a vacuum can run concurrently with table inserts/updates, but from reading the logs it appears they are conflicting over a row lock. This particular table gets very frequent inserts/updates (10-100 inserts / sec) so I am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum to be canceled by an insert in this way?\n\nThe main part of an autovacuum operation should go through OK. The only\npart that would get canceled in response to somebody taking a\nnon-exclusive lock is the last step, which is truncation of unused blocks\nat the end of the table; that requires an exclusive lock. Normally,\nskipping that step isn't terribly problematic.\n\n> We are using postgres 9.6.10.\n\nIIRC, we've made improvements in this area since 9.6, to allow a\npartial truncation to be done if someone wants the lock, rather\nthan just failing entirely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Dec 2019 18:49:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 5:26 PM Mike Schanne <[email protected]> wrote:\n\n> Hi,\n>\n> I am investigating a performance problem in our application and am seeing\n> something unexpected in the postgres logs regarding the autovacuum.\n>\n>\n>\n> 2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976\",5ddbd990.1b36,17099,\"INSERT\n> waiting\",2019-11-25 13:39:28 UTC,12/1884256,12615023,LOG,00000,\"process\n> 6966 still waiting for RowExclusiveLock on relation 32938 of database 32768\n> after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue:\n> 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING\n> process.mytable.mytable_id\",13,,\"\"\n>\n> 2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24\n> UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic\n> vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\n>\n>\n>\n> My understanding from reading the documentation was that a vacuum can run\n> concurrently with table inserts/updates, but from reading the logs it\n> appears they are conflicting over a row lock. This particular table gets\n> very frequent inserts/updates (10-100 inserts / sec) so I am concerned that\n> if the autovacuum is constantly canceled, then the table never gets cleaned\n> and its performance will continue to degrade over time. Is it expected for\n> the vacuum to be canceled by an insert in this way?\n>\n>\n>\n> We are using postgres 9.6.10.\n>\n\nIf the vacuum finds a lot of empty pages at the end of the table, it will\ntry to truncate them and takes a strong lock to do so. It is supposed to\ncheck every 20ms to see if anyone else is blocked on that lock, at which\npoint it stops doing the truncation and releases the lock. So it should\nnever get \"caught\" holding the lock in order to be cancelled. Is your\nsetting for deadlock_timeout much lower than usual? Also, if the\ntruncation is bogged down in very slow IO, perhaps it doesn't actually get\naround to checking ever 20ms despite its intentionsl\n\nHow often have you seen it in the logs?\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, Dec 5, 2019 at 5:26 PM Mike Schanne <[email protected]> wrote:\n\n\nHi,\nI am investigating a performance problem in our application and am seeing something unexpected in the postgres logs regarding the autovacuum.\n \n2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976\",5ddbd990.1b36,17099,\"INSERT waiting\",2019-11-25 13:39:28 UTC,12/1884256,12615023,LOG,00000,\"process 6966 still waiting for RowExclusiveLock on relation 32938 of database\n 32768 after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue: 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING process.mytable.mytable_id\",13,,\"\"\n2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24 UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\n \nMy understanding from reading the documentation was that a vacuum can run concurrently with table inserts/updates, but from reading the logs it appears they are conflicting over a row lock. This particular table gets very frequent inserts/updates\n (10-100 inserts / sec) so I am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. 
Is it expected for the vacuum to be canceled by an insert in this way?\n \nWe are using postgres 9.6.10.If the vacuum finds a lot of empty pages at the end of the table, it will try to truncate them and takes a strong lock to do so. It is supposed to check every 20ms to see if anyone else is blocked on that lock, at which point it stops doing the truncation and releases the lock. So it should never get \"caught\" holding the lock in order to be cancelled. Is your setting for deadlock_timeout much lower than usual? Also, if the truncation is bogged down in very slow IO, perhaps it doesn't actually get around to checking ever 20ms despite its intentionslHow often have you seen it in the logs?Cheers,Jeff",
"msg_date": "Thu, 5 Dec 2019 18:55:02 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
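The settings Jeff asks about can be read straight from pg_settings. For what it's worth, the "still waiting ... after 1000.085 ms" line in the original log is the log_lock_waits message, which fires once a wait exceeds deadlock_timeout, so a value close to the 1 s default is consistent with what was posted.

-- Settings relevant to Jeff's questions: the lock-wait logging threshold and
-- the autovacuum I/O throttling that can slow the truncation scan down.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('deadlock_timeout',
               'log_lock_waits',
               'autovacuum_vacuum_cost_delay',
               'autovacuum_vacuum_cost_limit');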
{
"msg_contents": "Is this what you are referring to?\r\n\r\n- Prevent VACUUM from trying to freeze an old multixact ID involving a still-running transaction (Nathan Bossart, Jeremy Schneider)\r\nThis case would lead to VACUUM failing until the old transaction terminates.\r\nhttps://www.postgresql.org/docs/release/9.6.16/\r\n\r\nThanks,\r\nMike\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane [mailto:[email protected]]\r\nSent: Thursday, December 05, 2019 6:49 PM\r\nTo: Mike Schanne\r\nCc: '[email protected]'\r\nSubject: Re: autovacuum locking question\r\n\r\nMike Schanne <[email protected]> writes:\r\n> I am investigating a performance problem in our application and am seeing something unexpected in the postgres logs regarding the autovacuum.\r\n\r\n> 2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976\",5ddbd990.1b36,17099,\"INSERT waiting\",2019-11-25 13:39:28 UTC,12/1884256,12615023,LOG,00000,\"process 6966 still waiting for RowExclusiveLock on relation 32938 of database 32768 after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue: 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING process.mytable.mytable_id\",13,,\"\"\r\n> 2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24 UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\r\n\r\n> My understanding from reading the documentation was that a vacuum can run concurrently with table inserts/updates, but from reading the logs it appears they are conflicting over a row lock. This particular table gets very frequent inserts/updates (10-100 inserts / sec) so I am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum to be canceled by an insert in this way?\r\n\r\nThe main part of an autovacuum operation should go through OK. The only part that would get canceled in response to somebody taking a non-exclusive lock is the last step, which is truncation of unused blocks at the end of the table; that requires an exclusive lock. Normally, skipping that step isn't terribly problematic.\r\n\r\n> We are using postgres 9.6.10.\r\n\r\nIIRC, we've made improvements in this area since 9.6, to allow a partial truncation to be done if someone wants the lock, rather than just failing entirely.\r\n\r\nregards, tom lane\r\n\r\n\r\n\r\n________________________________\r\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\r\n",
"msg_date": "Fri, 6 Dec 2019 15:46:15 +0000",
"msg_from": "Mike Schanne <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: autovacuum locking question"
},
{
"msg_contents": "The error is not actually showing up very often (I have 8 occurrences from 11/29 and none since then). So maybe I should not be concerned about it. I suspect we have an I/O bottleneck from other logs (i.e. long checkpoint sync times), so this error may be a symptom rather than the cause.\r\n\r\nFrom: Jeff Janes [mailto:[email protected]]\r\nSent: Thursday, December 05, 2019 6:55 PM\r\nTo: Mike Schanne\r\nCc: [email protected]\r\nSubject: Re: autovacuum locking question\r\n\r\nOn Thu, Dec 5, 2019 at 5:26 PM Mike Schanne <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\nI am investigating a performance problem in our application and am seeing something unexpected in the postgres logs regarding the autovacuum.\r\n\r\n2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976<http://127.0.0.1:53976>\",5ddbd990.1b36,17099,\"INSERT waiting\",2019-11-25 13:39:28 UTC,12/1884256,12615023,LOG,00000,\"process 6966 still waiting for RowExclusiveLock on relation 32938 of database 32768 after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue: 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING process.mytable.mytable_id\",13,,\"\"\r\n2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24 UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\r\n\r\nMy understanding from reading the documentation was that a vacuum can run concurrently with table inserts/updates, but from reading the logs it appears they are conflicting over a row lock. This particular table gets very frequent inserts/updates (10-100 inserts / sec) so I am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected for the vacuum to be canceled by an insert in this way?\r\n\r\nWe are using postgres 9.6.10.\r\n\r\nIf the vacuum finds a lot of empty pages at the end of the table, it will try to truncate them and takes a strong lock to do so. It is supposed to check every 20ms to see if anyone else is blocked on that lock, at which point it stops doing the truncation and releases the lock. So it should never get \"caught\" holding the lock in order to be cancelled. Is your setting for deadlock_timeout much lower than usual? Also, if the truncation is bogged down in very slow IO, perhaps it doesn't actually get around to checking ever 20ms despite its intentionsl\r\n\r\nHow often have you seen it in the logs?\r\n\r\nCheers,\r\n\r\nJeff\r\n\r\n________________________________\r\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\r\n\n\n\n\n\n\n\n\n\nThe error is not actually showing up very often (I have 8 occurrences from 11/29 and none since then). So maybe I should not be concerned about it. I suspect\r\n we have an I/O bottleneck from other logs (i.e. 
long checkpoint sync times), so this error may be a symptom rather than the cause.\n \nFrom: Jeff Janes [mailto:[email protected]]\r\n\nSent: Thursday, December 05, 2019 6:55 PM\nTo: Mike Schanne\nCc: [email protected]\nSubject: Re: autovacuum locking question\n \n\n\nOn Thu, Dec 5, 2019 at 5:26 PM Mike Schanne <[email protected]> wrote:\n\n\n\n\n\nHi,\nI am investigating a performance problem in our application and am seeing something unexpected in the postgres logs regarding the autovacuum.\n \n2019-12-01 13:05:39.029 UTC,\"wb\",\"postgres\",6966,\"127.0.0.1:53976\",5ddbd990.1b36,17099,\"INSERT waiting\",2019-11-25 13:39:28\r\n UTC,12/1884256,12615023,LOG,00000,\"process 6966 still waiting for RowExclusiveLock on relation 32938 of database 32768 after 1000.085 ms\",\"Process holding the lock: 6045. Wait queue: 6966.\",,,,,\"INSERT INTO myschema.mytable (...) VALUES (...) RETURNING process.mytable.mytable_id\",13,,\"\"\n2019-12-01 13:05:39.458 UTC,,,6045,,5de3b800.179d,1,,2019-12-01 12:54:24 UTC,10/417900,0,ERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of table \"\"postgres.myschema.mytable\"\"\",,,,\"\"\n \nMy understanding from reading the documentation was that a vacuum can run concurrently with table inserts/updates, but from reading the logs it appears they are conflicting over\r\n a row lock. This particular table gets very frequent inserts/updates (10-100 inserts / sec) so I am concerned that if the autovacuum is constantly canceled, then the table never gets cleaned and its performance will continue to degrade over time. Is it expected\r\n for the vacuum to be canceled by an insert in this way?\n \nWe are using postgres 9.6.10.\n\n\n\n\n \n\n\nIf the vacuum finds a lot of empty pages at the end of the table, it will try to truncate them and takes a strong lock to do so. It is supposed to check every 20ms to see if anyone else is blocked on that lock, at which point it stops\r\n doing the truncation and releases the lock. So it should never get \"caught\" holding the lock in order to be cancelled. Is your setting for deadlock_timeout much lower than usual? Also, if the truncation is bogged down in very slow IO, perhaps it doesn't\r\n actually get around to checking ever 20ms despite its intentionsl\n\n\n \n\n\nHow often have you seen it in the logs?\n\n\n \n\n\nCheers,\n\n\n \n\n\nJeff\n\n\n\n\n\n\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other\r\n than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction\r\n or use of all or any part of this email without the express written consent of K&S is prohibited.",
"msg_date": "Fri, 6 Dec 2019 15:55:32 +0000",
"msg_from": "Mike Schanne <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: autovacuum locking question"
},
{
"msg_contents": "Mike Schanne <[email protected]> writes:\n> Is this what you are referring to?\n> - Prevent VACUUM from trying to freeze an old multixact ID involving a still-running transaction (Nathan Bossart, Jeremy Schneider)\n> This case would lead to VACUUM failing until the old transaction terminates.\n> https://www.postgresql.org/docs/release/9.6.16/\n\nHmmm ... after digging through the commit log, it seems the improvements\nI was thinking of were all pre-9.6. The only post-9.6 vacuum truncation\nperformance fix I can find is\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=7e26e02ee\n\nwhich came in in v10.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Dec 2019 12:12:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
{
"msg_contents": "Mike Schanne <[email protected]> writes:\n> The error is not actually showing up very often (I have 8 occurrences from 11/29 and none since then). So maybe I should not be concerned about it. I suspect we have an I/O bottleneck from other logs (i.e. long checkpoint sync times), so this error may be a symptom rather than the cause.\n\nWell, it's also an inherently non-repeating problem: once some iteration\nof autovacuum has managed to truncate away the large amount of trailing\ndead space that the file presumably had, later runs won't need to do\nthat.\n\nOf course, if you have a usage pattern that repeatedly bloats the table\nwith lots of stuff-to-be-vacuumed, the issue could recur that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Dec 2019 12:19:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 10:55 AM Mike Schanne <[email protected]> wrote:\n\n> The error is not actually showing up very often (I have 8 occurrences from\n> 11/29 and none since then). So maybe I should not be concerned about it.\n> I suspect we have an I/O bottleneck from other logs (i.e. long checkpoint\n> sync times), so this error may be a symptom rather than the cause.\n>\n\nI think that at the point it is getting cancelled, it has done all the work\nexcept the truncation of the empty pages, and reporting the results (for\nexample, updating n_live_tup and n_dead_tup). If this happens\nevery single time (neither last_autovacuum nor last_vacuum ever advances)\nit will eventually cause problems. So this is mostly a symptom, but not\nentirely. Simply running a manual vacuum should fix the reporting\nproblem. It is not subject to cancelling, so it will detect it is blocking\nsomeone and gracefully bow. Meaning it will suspend the truncation, but\nwill still report its results as normal.\n\nReading the table backwards in order to truncate it might be contributing\nto the IO problems as well as being a victim of those problems. Upgrading\nto v10 might help with this, as it implemented a prefetch where it reads\nthe table forward in 128kB chunks, and then jumps backwards one chunk at a\ntime. Rather than just reading backwards 8kB at a time.\n\nCheers,\n\nJeff\n\n>\n\nOn Fri, Dec 6, 2019 at 10:55 AM Mike Schanne <[email protected]> wrote:\n\n\nThe error is not actually showing up very often (I have 8 occurrences from 11/29 and none since then). So maybe I should not be concerned about it. I suspect\n we have an I/O bottleneck from other logs (i.e. long checkpoint sync times), so this error may be a symptom rather than the cause.I think that at the point it is getting cancelled, it has done all the work except the truncation of the empty pages, and reporting the results (for example, updating n_live_tup and n_dead_tup). If this happens every single time (neither last_autovacuum nor last_vacuum ever advances) it will eventually cause problems. So this is mostly a symptom, but not entirely. Simply running a manual vacuum should fix the reporting problem. It is not subject to cancelling, so it will detect it is blocking someone and gracefully bow. Meaning it will suspend the truncation, but will still report its results as normal. Reading the table backwards in order to truncate it might be contributing to the IO problems as well as being a victim of those problems. Upgrading to v10 might help with this, as it implemented a prefetch where it reads the table forward in 128kB chunks, and then jumps backwards one chunk at a time. Rather than just reading backwards 8kB at a time.Cheers,Jeff",
"msg_date": "Fri, 6 Dec 2019 12:47:56 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
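Jeff's suggested manual vacuum is a single statement. By default a manual VACUUM is not throttled by the autovacuum cost settings and is not auto-cancelled; per Jeff's description it suspends the truncation if someone is waiting but still reports its results.

-- Placeholder table name from the logs; VERBOSE reports the dead tuples
-- removed and any pages truncated.
VACUUM (VERBOSE) myschema.mytable;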
{
"msg_contents": "On Thu, Dec 05, 2019 at 06:49:06PM -0500, Tom Lane wrote:\n> The only part that would get canceled in response to somebody taking a\n> non-exclusive lock is the last step, which is truncation of unused blocks at\n> the end of the table; that requires an exclusive lock.\n\nOn Thu, Dec 05, 2019 at 06:55:02PM -0500, Jeff Janes wrote:\n> If the vacuum finds a lot of empty pages at the end of the table, it will\n> try to truncate them and takes a strong lock to do so.\n\nShould the exclusive lock bit be documented ?\nhttps://www.postgresql.org/docs/12/explicit-locking.html\n\n\n\n",
"msg_date": "Fri, 6 Dec 2019 11:49:34 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
{
"msg_contents": "And Just to reiterate my own understanding of this...\n\nautovacuum priority is less than a user-initiated request, so issuing a \nmanual vacuum (user-initiated request) will not result in being cancelled.\n\nRegards,\nMichael Vitale\n\nJeff Janes wrote on 12/6/2019 12:47 PM:\n> On Fri, Dec 6, 2019 at 10:55 AM Mike Schanne <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> The error is not actually showing up very often (I have 8\n> occurrences from 11/29 and none since then). So maybe I should\n> not be concerned about it. I suspect we have an I/O bottleneck\n> from other logs (i.e. long checkpoint sync times), so this error\n> may be a symptom rather than the cause.\n>\n>\n> I think that at the point it is getting cancelled, it has done all the \n> work except the truncation of the empty pages, and reporting the \n> results (for example, updating n_live_tup and n_dead_tup). If this \n> happens every single time (neither last_autovacuum nor last_vacuum \n> ever advances) it will eventually cause problems. So this is mostly a \n> symptom, but not entirely. Simply running a manual vacuum should fix \n> the reporting problem. It is not subject to cancelling, so it will \n> detect it is blocking someone and gracefully bow. Meaning it will \n> suspend the truncation, but will still report its results as normal.\n> Reading the table backwards in order to truncate it might be \n> contributing to the IO problems as well as being a victim of those \n> problems. Upgrading to v10 might help with this, as it implemented a \n> prefetch where it reads the table forward in 128kB chunks, and then \n> jumps backwards one chunk at a time. Rather than just reading \n> backwards 8kB at a time.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n\nAnd Just to reiterate my own understanding \nof this...\n\nautovacuum priority is less than a user-initiated request, so issuing a \nmanual vacuum (user-initiated request) will not result in being \ncancelled.\n\nRegards,\nMichael Vitale\n\nJeff Janes wrote on 12/6/2019 12:47 PM:\n\n\nOn Fri, Dec 6, 2019 at 10:55 AM Mike \nSchanne <[email protected]>\n wrote:\n\nThe\n error is not actually showing up very often (I have 8 occurrences from \n11/29 and none since then). So maybe I should not be concerned about \nit. I suspect\n we have an I/O bottleneck from other logs (i.e. long checkpoint sync \ntimes), so this error may be a symptom rather than the cause.I\n think that at the point it is getting cancelled, it has done all the \nwork except the truncation of the empty pages, and reporting the results\n (for example, updating n_live_tup and n_dead_tup). If this happens \nevery single time (neither last_autovacuum nor last_vacuum ever \nadvances) it will eventually cause problems. So this is mostly a \nsymptom, but not entirely. Simply running a manual vacuum should fix \nthe reporting problem. It is not subject to cancelling, so it will \ndetect it is blocking someone and gracefully bow. Meaning it will \nsuspend the truncation, but will still report its results as normal. Reading\n the table backwards in order to truncate it might be contributing to \nthe IO problems as well as being a victim of those problems. Upgrading \nto v10 might help with this, as it implemented a prefetch where it reads\n the table forward in 128kB chunks, and then jumps backwards one chunk \nat a time. Rather than just reading backwards 8kB at a time.Cheers,Jeff",
"msg_date": "Fri, 6 Dec 2019 12:50:44 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
{
"msg_contents": "(I've changed the original subject, \"autovacuum locking question\", of the\nsender's email so as not to hijack that thread.)\n\nOn Thu, Dec 5, 2019 at 2:26 PM Mike Schanne <[email protected]> wrote:\n\n> Hi,\n>\n> I am investigating a performance problem...\n> ... This email is non-binding, is subject to contract, and neither Kulicke\n> and Soffa Industries, Inc. nor its subsidiaries (each and collectively\n> “K&S”) shall have any obligation to you to consummate the transactions\n> herein or to enter into any agreement, other than in accordance with the\n> terms and conditions of a definitive agreement if and when negotiated,\n> finalized and executed between the parties. This email and all its contents\n> are protected by International and United States copyright laws. Any\n> reproduction or use of all or any part of this email without the express\n> written consent of K&S is prohibited.\n>\n\nSorry to be off topic, but this bugs me. Language is important. This isn't\ndirected at you specifically, but I see these disclaimers all the time. How\ncan you post to a public newsgroup that automatically reproduces your email\nto thousands of subscribers, and additionally publishes it on\npublicly accessible archives, in direct conflict with your company's policy\nappended to your email? And why on Earth do your company's lawyers think\nthis sort of disclaimer is helpful and even legally useful? Not to mention,\ndo they realize it's vaguely offensive to every customer and colleague who\nreceives it?\n\nCraig\n\n(I've changed the original subject, \"autovacuum locking question\", of the sender's email so as not to hijack that thread.)On Thu, Dec 5, 2019 at 2:26 PM Mike Schanne <[email protected]> wrote:\n\n\nHi,\nI am investigating a performance problem...... This email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other\n than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction\n or use of all or any part of this email without the express written consent of K&S is prohibited.\n\n\nSorry to be off topic, but this bugs me. Language is important. This isn't directed at you specifically, but I see these disclaimers all the time. How can you post to a public newsgroup that automatically reproduces your email to thousands of subscribers, and additionally publishes it on publicly accessible archives, in direct conflict with your company's policy appended to your email? And why on Earth do your company's lawyers think this sort of disclaimer is helpful and even legally useful? Not to mention, do they realize it's vaguely offensive to every customer and colleague who receives it?Craig",
"msg_date": "Fri, 6 Dec 2019 10:42:19 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Legal disclaimers on emails to this group"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> Sorry to be off topic, but this bugs me. Language is important. This isn't\n> directed at you specifically, but I see these disclaimers all the time. How\n> can you post to a public newsgroup that automatically reproduces your email\n> to thousands of subscribers, and additionally publishes it on\n> publicly accessible archives, in direct conflict with your company's policy\n> appended to your email? And why on Earth do your company's lawyers think\n> this sort of disclaimer is helpful and even legally useful? Not to mention,\n> do they realize it's vaguely offensive to every customer and colleague who\n> receives it?\n\nYeah, it's annoying, and the idea that such an addendum is legally\nenforceable is just laughable (bearing in mind that IANAL --- but\nwithout a pre-existing contract, it's laughable). But the folks\nactually emailing to our lists are generally peons with no say over\ncorporate policies, so there's not much they can do about it. Might\nas well chill out, or just ignore any mail with a disclaimer you\nfind particularly offensive.\n\n\t\t\tregards, tom lane\n\nDisclaimer: if you believe that email disclaimers have any legal\nforce whatsoever, you are required to immediately send me $1M USD.\n\n\n",
"msg_date": "Fri, 06 Dec 2019 14:27:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Legal disclaimers on emails to this group"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 10:42:19AM -0800, Craig James wrote:\n> (I've changed the original subject, \"autovacuum locking question\", of the\n> sender's email so as not to hijack that thread.)\n\nNote that threads are defined by these headers, not by \"Subject\".\n\nReferences: <[email protected]> \nIn-Reply-To: <[email protected]> \n\nhttps://www.postgresql.org/message-id/CAFwQ8rcEExxB8ZuE_CYj6u6FZbRZjyWn%2BPo31hrfLAw1uBnKMg%40mail.gmail.com\n\nJustin\n(now hijacking your thread)\n\n\n",
"msg_date": "Fri, 6 Dec 2019 15:55:04 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "threads (Re: Legal disclaimers on emails to this group)"
},
{
"msg_contents": "\nCraig James <[email protected]> writes:\n\n> (I've changed the original subject, \"autovacuum locking question\", of the\n> sender's email so as not to hijack that thread.)\n>\n> On Thu, Dec 5, 2019 at 2:26 PM Mike Schanne <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> I am investigating a performance problem...\n>> ... This email is non-binding, is subject to contract, and neither Kulicke\n>> and Soffa Industries, Inc. nor its subsidiaries (each and collectively\n>> “K&S”) shall have any obligation to you to consummate the transactions\n>> herein or to enter into any agreement, other than in accordance with the\n>> terms and conditions of a definitive agreement if and when negotiated,\n>> finalized and executed between the parties. This email and all its contents\n>> are protected by International and United States copyright laws. Any\n>> reproduction or use of all or any part of this email without the express\n>> written consent of K&S is prohibited.\n>>\n>\n> Sorry to be off topic, but this bugs me. Language is important. This isn't\n> directed at you specifically, but I see these disclaimers all the time. How\n> can you post to a public newsgroup that automatically reproduces your email\n> to thousands of subscribers, and additionally publishes it on\n> publicly accessible archives, in direct conflict with your company's policy\n> appended to your email? And why on Earth do your company's lawyers think\n> this sort of disclaimer is helpful and even legally useful? Not to mention,\n> do they realize it's vaguely offensive to every customer and colleague who\n> receives it?\n>\n> Craig\n\nOh how I hear you!\n\nThis is what I was using as my email signature (but not for groups). I\nfeel for the OP who probably has little choice (other than work for a\ndifferent employer, which is a very valid choice given the 'organisational\nculture' exhibited by policies requiring such nonsense)\n\nNotice to all senders:\n\nIf you send me a message, on receipt of that message I consider that message to\nbe my property and I will copy, share and deceminate as I see fit. I will\nprovide attribution when appropriate and I willl endeavour to comply with all\nreasonable requests. However, I reject all threats or implied threats of legal\naction arising from an error or mistake on your part. It is your responsibility\nto manage your communications appropriately, not mine.\n\n-- \nTim Cross\n\n\n",
"msg_date": "Sat, 07 Dec 2019 09:42:54 +1100",
"msg_from": "Tim Cross <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Legal disclaimers on emails to this group"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 12:50 PM MichaelDBA <[email protected]> wrote:\n\n> And Just to reiterate my own understanding of this...\n>\n> autovacuum priority is less than a user-initiated request, so issuing a\n> manual vacuum (user-initiated request) will not result in being cancelled.\n>\n\nSomethings happen in some situations and not in others. I don't know that\nit is useful to categorize them into a monotonic priority scale.\n\nAutovacs \"to prevent wraparound\" don't get cancelled the way ordinary\nautovacs do, but they still use autovac IO throttling settings, not the\nunthrottled (by default settings) manual vacuum settings, which can be a\nmajor problem sometimes.\n\nNote that no kind of vacuum should normally get cancelled using the\nsignalling mechanism during truncation phase, that seems to be due to some\nrather extreme situation with IO congestion.\n\nCheers,\n\nJeff\n\nOn Fri, Dec 6, 2019 at 12:50 PM MichaelDBA <[email protected]> wrote:\nAnd Just to reiterate my own understanding \nof this...\n\nautovacuum priority is less than a user-initiated request, so issuing a \nmanual vacuum (user-initiated request) will not result in being \ncancelled.Somethings happen in some situations and not in others. I don't know that it is useful to categorize them into a monotonic priority scale.Autovacs \"to prevent wraparound\" don't get cancelled the way ordinary autovacs do, but they still use autovac IO throttling settings, not the unthrottled (by default settings) manual vacuum settings, which can be a major problem sometimes.Note that no kind of vacuum should normally get cancelled using the signalling mechanism during truncation phase, that seems to be due to some rather extreme situation with IO congestion.Cheers,Jeff",
"msg_date": "Fri, 6 Dec 2019 19:59:00 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum locking question"
},
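[Editorial sketch, not part of the archived thread.] The throttling Jeff describes can be inspected and, where needed, relaxed per table. The GUCs and the storage parameter below are standard PostgreSQL settings; the table name big_table and the values are only illustrative.

    -- Inspect the cost-based throttling that applies to autovacuum workers.
    SHOW autovacuum_vacuum_cost_delay;   -- 20ms by default on 9.6-11
    SHOW autovacuum_vacuum_cost_limit;   -- -1 means "fall back to vacuum_cost_limit"
    SHOW vacuum_cost_limit;

    -- Relax throttling for one large table so an (anti-wraparound) autovacuum
    -- finishes sooner; big_table is a hypothetical name.
    ALTER TABLE big_table SET (autovacuum_vacuum_cost_delay = 0);

    -- A manual VACUUM is unthrottled by default (vacuum_cost_delay = 0),
    -- which is why it can finish far ahead of a cost-limited autovacuum.
    VACUUM (VERBOSE) big_table;

Whether removing the delay is safe depends on the I/O headroom of the host; the sketch only shows where the two sets of settings live.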
{
"msg_contents": "On 12/6/19 1:42 PM, Craig James wrote:\n> (I've changed the original subject, \"autovacuum locking question\", of\n> the sender's email so as not to hijack that thread.)\n> \n> On Thu, Dec 5, 2019 at 2:26 PM Mike Schanne <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Hi,____\n> \n> I am investigating a performance problem...\n> \n> ... This email is non-binding, is subject to contract, and neither\n> Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and\n> collectively “K&S”) shall have any obligation to you to consummate\n> the transactions herein or to enter into any agreement, other than\n> in accordance with the terms and conditions of a definitive\n> agreement if and when negotiated, finalized and executed between the\n> parties. This email and all its contents are protected by\n> International and United States copyright laws. Any reproduction or\n> use of all or any part of this email without the express written\n> consent of K&S is prohibited.\n> \n> \n> Sorry to be off topic, but this bugs me. Language is important. This\n> isn't directed at you specifically, but I see these disclaimers all the\n> time. How can you post to a public newsgroup that automatically\n> reproduces your email to thousands of subscribers, and additionally\n> publishes it on publicly accessible archives, in direct conflict with\n> your company's policy appended to your email? And why on Earth do your\n> company's lawyers think this sort of disclaimer is helpful and even\n> legally useful? Not to mention, do they realize it's vaguely offensive\n> to every customer and colleague who receives it?\n> \n> Craig\n\nPeople should probably not post anything on newsgroups from computers\nowned by their employers. They are probably violating the terms of their\nemployment.\n\nIt would be perfectly acceptable to me if all news servers automatically\ndeleted all such would-be posts instead of actually posting them.\n\n\n\n-- \n .~. Jean-David Beyer\n /V\\ PGP-Key:166D840A 0C610C8B\n /( )\\ Shrewsbury, New Jersey\n ^^-^^ 07:05:02 up 23 days, 7:00, 2 users, load average: 5.31, 4.71, 4.38\n\n\n",
"msg_date": "Sat, 7 Dec 2019 07:10:15 -0500",
"msg_from": "Jean-David Beyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Legal disclaimers on emails to this group"
},
{
"msg_contents": "I apologize for the legalese; as others have suggested it’s corporate IT policy and I have no control over it. I certainly intended no offense to the community here. I will use my personal email for future inquiries on this mailing list.\r\n\r\nThanks,\r\nMike\r\n\r\nFrom: Craig James [mailto:[email protected]]\r\nSent: Friday, December 06, 2019 1:42 PM\r\nTo: Mike Schanne\r\nCc: [email protected]\r\nSubject: Legal disclaimers on emails to this group\r\n\r\n(I've changed the original subject, \"autovacuum locking question\", of the sender's email so as not to hijack that thread.)\r\n\r\nOn Thu, Dec 5, 2019 at 2:26 PM Mike Schanne <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\nI am investigating a performance problem...\r\n... This email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\r\n\r\nSorry to be off topic, but this bugs me. Language is important. This isn't directed at you specifically, but I see these disclaimers all the time. How can you post to a public newsgroup that automatically reproduces your email to thousands of subscribers, and additionally publishes it on publicly accessible archives, in direct conflict with your company's policy appended to your email? And why on Earth do your company's lawyers think this sort of disclaimer is helpful and even legally useful? Not to mention, do they realize it's vaguely offensive to every customer and colleague who receives it?\r\n\r\nCraig\r\n\r\n________________________________\r\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\r\n\n\n\n\n\n\n\n\n\nI apologize for the legalese; as others have suggested it’s corporate IT policy and I have no control over it. I certainly intended no offense to the community\r\n here. I will use my personal email for future inquiries on this mailing list.\n \nThanks,\nMike\n \nFrom: Craig James [mailto:[email protected]]\r\n\nSent: Friday, December 06, 2019 1:42 PM\nTo: Mike Schanne\nCc: [email protected]\nSubject: Legal disclaimers on emails to this group\n \n\n\n(I've changed the original subject, \"autovacuum locking question\", of the sender's email so as not to hijack that thread.)\n\n \n\n\nOn Thu, Dec 5, 2019 at 2:26 PM Mike Schanne <[email protected]> wrote:\n\n\n\n\nHi,\nI am investigating a performance problem...\n\n... This email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. 
nor its subsidiaries (each and collectively “K&S”) shall have\r\n any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and\r\n all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\n\n\n\n\n \n\nSorry to be off topic, but this bugs me. Language is important. This isn't directed at you specifically, but I see these disclaimers all the time. How can you post to a public newsgroup that automatically reproduces your email to thousands\r\n of subscribers, and additionally publishes it on publicly accessible archives, in direct conflict with your company's policy appended to your email? And why on Earth do your company's lawyers think this sort of disclaimer is helpful and even legally useful?\r\n Not to mention, do they realize it's vaguely offensive to every customer and colleague who receives it?\n\n\n \n\n\n\nCraig\n\n\n\n\n\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other\r\n than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction\r\n or use of all or any part of this email without the express written consent of K&S is prohibited.",
"msg_date": "Mon, 9 Dec 2019 23:05:04 +0000",
"msg_from": "Mike Schanne <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Legal disclaimers on emails to this group"
},
{
"msg_contents": "Il 06/12/19 20:27, Tom Lane ha scritto:\n>\n> Disclaimer: if you believe that email disclaimers have any legal\n> force whatsoever, you are required to immediately send me $1M USD.\n>\n>\n>\nNo legal force (so no 1M USD :-) ) but if you're caught not enforcing \nthese disclaimers (even at the bottom of the signature, so leaving the \nuser the choice to use it or not) in your corporate emails, you're fined \n(and in some cases, also fired).\n\n\n\n\n",
"msg_date": "Thu, 12 Dec 2019 15:49:54 +0100",
"msg_from": "Moreno Andreo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Legal disclaimers on emails to this group"
},
{
"msg_contents": "Il 07/12/19 13:10, Jean-David Beyer ha scritto:\n> People should probably not post anything on newsgroups from computers\n> owned by their employers. They are probably violating the terms of their\n> employment.\nAre you sure? Imagine you are the DBA in your company and you need to \nask a question to PostgreSQL mailing list....\nI think it's perfectly legit that you do it from your corporate email, \nand not from your private one (since private email address use is not \nadmitted, just like private messages from corporate email).\nI have my disclaimer at the bottom of my signature, so I can choose when \nto add it or not, it's my company policy; someone else's policy should \nbe that the corporate mail server automatically appends the disclaimer \nto every mail message (so no user control).\nI agree with Tom to just ignore disclaimers and delete them when \nreplying to threads: you can't blame someone for what's not under his \ncontrol.\n\nObvoiusly you have to be careful to not include sensitive data in your \npublic email, but that's always under your control.\n\nCheers\nMoreno.-\n\n\n\n",
"msg_date": "Thu, 12 Dec 2019 16:03:32 +0100",
"msg_from": "Moreno Andreo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Legal disclaimers on emails to this group"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are in the process of migrating an oracle database to postgres in Google\nCloud and are investigating backup/recovery tools. The database is size is\n> 20TB. We have an SLA that requires us to be able to complete a full\nrestore of the database within 24 hours. We have been testing\npgbackreset, barman, and GCP snapshots but wanted to see if there are any\nother recommendations we should consider.\n\n*Desirable features*\n- Parallel backup/recovery\n- Incremental backups\n- Backup directly to a GCP bucket\n- Deduplication/Compression\n\nAny suggestions would be appreciated.\n\nCraig Jackson\n\nHi,We are in the process of migrating an oracle database to postgres in Google Cloud and are investigating backup/recovery tools. The database is size is > 20TB. We have an SLA that requires us to be able to complete a full restore of the database within 24 hours. We have been testing pgbackreset, barman, and GCP snapshots but wanted to see if there are any other recommendations we should consider. Desirable features- Parallel backup/recovery- Incremental backups- Backup directly to a GCP bucket- Deduplication/CompressionAny suggestions would be appreciated.Craig Jackson",
"msg_date": "Thu, 5 Dec 2019 10:47:46 -0700",
"msg_from": "Craig Jackson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres backup tool recommendations for multi-terabyte database in\n Google Cloud"
},
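[Editorial sketch, not from the archived thread.] For comparison while evaluating tools, this is roughly what the requested features map to in a pgBackRest configuration (parallel workers, a GCS repository, compression, retention). The option names are pgBackRest's, but the stanza name, bucket, key path, data directory and values are hypothetical, and a recent pgBackRest release is assumed for the native GCS repository type.

    # /etc/pgbackrest/pgbackrest.conf (illustrative values only)
    [global]
    repo1-type=gcs                      # back up directly to a GCS bucket
    repo1-gcs-bucket=my-backup-bucket   # hypothetical bucket name
    repo1-gcs-key=/etc/pgbackrest/gcs-key.json
    repo1-path=/pgbackrest
    repo1-retention-full=2
    process-max=8                       # parallel backup/restore processes
    compress-type=zst                   # compression; gz/lz4 also available
    start-fast=y

    [main]                              # hypothetical stanza name
    pg1-path=/var/lib/postgresql/12/main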
{
"msg_contents": "On Thu, Dec 5, 2019 at 9:48 AM Craig Jackson <[email protected]>\nwrote:\n\n> Hi,\n>\n> We are in the process of migrating an oracle database to postgres in\n> Google Cloud and are investigating backup/recovery tools. The database is\n> size is > 20TB. We have an SLA that requires us to be able to complete a\n> full restore of the database within 24 hours. We have been testing\n> pgbackreset, barman, and GCP snapshots but wanted to see if there are any\n> other recommendations we should consider.\n>\n> *Desirable features*\n> - Parallel backup/recovery\n> - Incremental backups\n> - Backup directly to a GCP bucket\n> - Deduplication/Compression\n>\n\nFor your 24-hour-restore requirement, there's an additional feature you\nmight consider: incremental restore, or what you might call \"recovery in\nplace\"; that is, the ability to keep a more-or-less up-to-date copy, and\nthen in an emergency only restore the diffs on the file system. pgbackup\nuses a built-in rsync-like feature, plus a client-server architecture, that\nallows it to quickly determine which disk blocks need to be updated.\nChecksums are computed on each side, and data are only transferred if\nchecksums differ. It's very efficient. I assume that a 20 TB database is\nmostly static, with only a small fraction of the data updated in any month.\nI believe the checksums are precomputed and stored in the pgbackrest\nrepository, so you can even do this from an Amazon S3 (or whatever Google's\nCloud equivalent is for low-cost storage) backup with just modest bandwidth\nusage.\n\nIn a cloud environment, you can do this on modestly-priced hardware (a few\nCPUs, modest memory). In the event of a failover, unmount your backup disk,\nspin up a big server, mount the database, do the incremental restore, and\nyou're in business.\n\nCraig (James)\n\n\n> Any suggestions would be appreciated.\n>\n> Craig Jackson\n>\n\nOn Thu, Dec 5, 2019 at 9:48 AM Craig Jackson <[email protected]> wrote:Hi,We are in the process of migrating an oracle database to postgres in Google Cloud and are investigating backup/recovery tools. The database is size is > 20TB. We have an SLA that requires us to be able to complete a full restore of the database within 24 hours. We have been testing pgbackreset, barman, and GCP snapshots but wanted to see if there are any other recommendations we should consider. Desirable features- Parallel backup/recovery- Incremental backups- Backup directly to a GCP bucket- Deduplication/CompressionFor your 24-hour-restore requirement, there's an additional feature you might consider: incremental restore, or what you might call \"recovery in place\"; that is, the ability to keep a more-or-less up-to-date copy, and then in an emergency only restore the diffs on the file system. pgbackup uses a built-in rsync-like feature, plus a client-server architecture, that allows it to quickly determine which disk blocks need to be updated. Checksums are computed on each side, and data are only transferred if checksums differ. It's very efficient. I assume that a 20 TB database is mostly static, with only a small fraction of the data updated in any month. I believe the checksums are precomputed and stored in the pgbackrest repository, so you can even do this from an Amazon S3 (or whatever Google's Cloud equivalent is for low-cost storage) backup with just modest bandwidth usage.In a cloud environment, you can do this on modestly-priced hardware (a few CPUs, modest memory). 
In the event of a failover, unmount your backup disk, spin up a big server, mount the database, do the incremental restore, and you're in business.Craig (James)Any suggestions would be appreciated.Craig Jackson",
"msg_date": "Thu, 5 Dec 2019 11:51:03 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres backup tool recommendations for multi-terabyte database\n in Google Cloud"
},
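[Editorial sketch, not from the archived thread.] The "recovery in place" flow Craig describes corresponds to pgBackRest's delta restore; --stanza and --delta are real pgBackRest options, while the stanza name and paths here are hypothetical.

    # With PostgreSQL stopped on the recovery host, restore only what changed:
    # --delta checksums the files already in the data directory and re-copies
    # only those that differ, instead of rewriting the whole 20 TB cluster.
    pgbackrest --stanza=main --delta restore

    # Then start the server and let it replay WAL up to the desired point.
    pg_ctl -D /var/lib/postgresql/12/main start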
{
"msg_contents": "Thanks, I'll check it out.\n\nOn Thu, Dec 5, 2019 at 12:51 PM Craig James <[email protected]> wrote:\n\n> On Thu, Dec 5, 2019 at 9:48 AM Craig Jackson <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> We are in the process of migrating an oracle database to postgres in\n>> Google Cloud and are investigating backup/recovery tools. The database is\n>> size is > 20TB. We have an SLA that requires us to be able to complete a\n>> full restore of the database within 24 hours. We have been testing\n>> pgbackreset, barman, and GCP snapshots but wanted to see if there are any\n>> other recommendations we should consider.\n>>\n>> *Desirable features*\n>> - Parallel backup/recovery\n>> - Incremental backups\n>> - Backup directly to a GCP bucket\n>> - Deduplication/Compression\n>>\n>\n> For your 24-hour-restore requirement, there's an additional feature you\n> might consider: incremental restore, or what you might call \"recovery in\n> place\"; that is, the ability to keep a more-or-less up-to-date copy, and\n> then in an emergency only restore the diffs on the file system. pgbackup\n> uses a built-in rsync-like feature, plus a client-server architecture, that\n> allows it to quickly determine which disk blocks need to be updated.\n> Checksums are computed on each side, and data are only transferred if\n> checksums differ. It's very efficient. I assume that a 20 TB database is\n> mostly static, with only a small fraction of the data updated in any month.\n> I believe the checksums are precomputed and stored in the pgbackrest\n> repository, so you can even do this from an Amazon S3 (or whatever Google's\n> Cloud equivalent is for low-cost storage) backup with just modest bandwidth\n> usage.\n>\n> In a cloud environment, you can do this on modestly-priced hardware (a few\n> CPUs, modest memory). In the event of a failover, unmount your backup disk,\n> spin up a big server, mount the database, do the incremental restore, and\n> you're in business.\n>\n> Craig (James)\n>\n>\n>> Any suggestions would be appreciated.\n>>\n>> Craig Jackson\n>>\n>\n>\n>\n\n-- \nCraig\n\nThanks, I'll check it out. On Thu, Dec 5, 2019 at 12:51 PM Craig James <[email protected]> wrote:On Thu, Dec 5, 2019 at 9:48 AM Craig Jackson <[email protected]> wrote:Hi,We are in the process of migrating an oracle database to postgres in Google Cloud and are investigating backup/recovery tools. The database is size is > 20TB. We have an SLA that requires us to be able to complete a full restore of the database within 24 hours. We have been testing pgbackreset, barman, and GCP snapshots but wanted to see if there are any other recommendations we should consider. Desirable features- Parallel backup/recovery- Incremental backups- Backup directly to a GCP bucket- Deduplication/CompressionFor your 24-hour-restore requirement, there's an additional feature you might consider: incremental restore, or what you might call \"recovery in place\"; that is, the ability to keep a more-or-less up-to-date copy, and then in an emergency only restore the diffs on the file system. pgbackup uses a built-in rsync-like feature, plus a client-server architecture, that allows it to quickly determine which disk blocks need to be updated. Checksums are computed on each side, and data are only transferred if checksums differ. It's very efficient. I assume that a 20 TB database is mostly static, with only a small fraction of the data updated in any month. 
I believe the checksums are precomputed and stored in the pgbackrest repository, so you can even do this from an Amazon S3 (or whatever Google's Cloud equivalent is for low-cost storage) backup with just modest bandwidth usage.In a cloud environment, you can do this on modestly-priced hardware (a few CPUs, modest memory). In the event of a failover, unmount your backup disk, spin up a big server, mount the database, do the incremental restore, and you're in business.Craig (James)Any suggestions would be appreciated.Craig Jackson\n\n-- Craig",
"msg_date": "Thu, 5 Dec 2019 14:05:05 -0700",
"msg_from": "Craig Jackson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres backup tool recommendations for multi-terabyte database\n in Google Cloud"
},
{
"msg_contents": "Consider WAL-G. It works well with GCS nowadays. We have a good fresh\nexperience of using it on GCP, for multi-TB databases.\n\nBTW, what is your opinion on using GCE snapshots in this context? You\nmention that you've consider them. Thoughts?\n\nThanks,\nNik\n\nOn Thu, Dec 5, 2019 at 09:48 Craig Jackson <[email protected]>\nwrote:\n\n> Hi,\n>\n> We are in the process of migrating an oracle database to postgres in\n> Google Cloud and are investigating backup/recovery tools. The database is\n> size is > 20TB. We have an SLA that requires us to be able to complete a\n> full restore of the database within 24 hours. We have been testing\n> pgbackreset, barman, and GCP snapshots but wanted to see if there are any\n> other recommendations we should consider.\n>\n> *Desirable features*\n> - Parallel backup/recovery\n> - Incremental backups\n> - Backup directly to a GCP bucket\n> - Deduplication/Compression\n>\n> Any suggestions would be appreciated.\n>\n> Craig Jackson\n>\n\nConsider WAL-G. It works well with GCS nowadays. We have a good fresh experience of using it on GCP, for multi-TB databases.BTW, what is your opinion on using GCE snapshots in this context? You mention that you've consider them. Thoughts?Thanks,NikOn Thu, Dec 5, 2019 at 09:48 Craig Jackson <[email protected]> wrote:Hi,We are in the process of migrating an oracle database to postgres in Google Cloud and are investigating backup/recovery tools. The database is size is > 20TB. We have an SLA that requires us to be able to complete a full restore of the database within 24 hours. We have been testing pgbackreset, barman, and GCP snapshots but wanted to see if there are any other recommendations we should consider. Desirable features- Parallel backup/recovery- Incremental backups- Backup directly to a GCP bucket- Deduplication/CompressionAny suggestions would be appreciated.Craig Jackson",
"msg_date": "Thu, 5 Dec 2019 13:08:41 -0800",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres backup tool recommendations for multi-terabyte database\n in Google Cloud"
}
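[Editorial sketch, not part of the archived thread.] A rough outline of the WAL-G setup Nikolay mentions, pushing base backups and WAL straight to a GCS bucket. The environment variables and subcommands follow WAL-G's documented GCS support, but the bucket, credential path and data directory are hypothetical and should be checked against the WAL-G version actually deployed.

    # Credentials and target bucket (illustrative values).
    export GOOGLE_APPLICATION_CREDENTIALS=/etc/wal-g/gcs-key.json
    export WALG_GS_PREFIX=gs://my-backup-bucket/pg
    export PGDATA=/var/lib/postgresql/12/main

    # Compressed base backup pushed directly to the bucket.
    wal-g backup-push "$PGDATA"

    # Continuous WAL archiving, configured in postgresql.conf:
    #   archive_mode = on
    #   archive_command = 'wal-g wal-push %p'

    # Restore the newest base backup into an empty data directory.
    wal-g backup-fetch "$PGDATA" LATEST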
] |
[
{
"msg_contents": "Hi all,\n\nThis question is somewhat related to my previous question:\nhttps://www.postgresql.org/message-id/0871fcf35ceb4caa8a2204ca9c38e330%40USEPRDEX1.corp.kns.com\n\nI was attempting to measure the benefit of doing a VACUUM FULL on my database. I was using the query found here:\n\nhttps://wiki.postgresql.org/wiki/Show_database_bloat\n\nHowever, I got an unexpected result in that the \"wastedbytes\" value actually increased for some tables after doing the vacuum.\n\nBefore VACUUM FULL:\ncurrent_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes\n------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex1 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex2 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex3 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex4 | 0.2 | 0\npostgres | myschema | mytableB | 1.0 | 63324160 | myindex5 | 0.0 | 0\n...\nAfter VACUUM FULL:\n current_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes\n------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex4 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex3 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex2 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex1 | 0.2 | 0\npostgres | myschema | mytableB | 1.0 | 63332352 | myindex5 | 0.0 | 0\n...\n\nThis is the schema for mytableA above:\n\n Column | Type | Modifiers\n---------------+-----------------------------+----------------------------------------------------------------\ncolA | integer | not null default nextval('myschema.myseq'::regclass)\ncolB | integer |\ncolC | integer |\ncolD | timestamp without time zone |\ncolE | json |\ncolF | integer |\ncolG | integer |\n\nI was wondering if the fact that we use a json column could be interfering with the wastedbytes calculation. Can anyone explain how wastedbytes could increase from a vacuum?\n\nThanks,\nMike\n\n________________________________\n\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively \"K&S\") shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\n\n\n\n\n\n\n\n\n\nHi all,\n \nThis question is somewhat related to my previous question:\nhttps://www.postgresql.org/message-id/0871fcf35ceb4caa8a2204ca9c38e330%40USEPRDEX1.corp.kns.com\n \nI was attempting to measure the benefit of doing a VACUUM FULL on my database. I was using the query found here:\n \nhttps://wiki.postgresql.org/wiki/Show_database_bloat\n \nHowever, I got an unexpected result in that the “wastedbytes” value actually increased for some tables after doing the vacuum. 
\n\n \nBefore VACUUM FULL:\ncurrent_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes\n\n------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex1 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex2 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex3 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex4 | 0.2 | 0\npostgres | myschema | mytableB | 1.0 | 63324160 | myindex5 | 0.0 | 0\n...\n\nAfter VACUUM FULL:\n current_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes\n\n------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex4 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex3 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex2 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex1 | 0.2 | 0\npostgres | myschema | mytableB | 1.0 | 63332352 | myindex5 | 0.0 | 0\n...\n \nThis is the schema for mytableA above:\n \n Column | Type | Modifiers\n---------------+-----------------------------+----------------------------------------------------------------\ncolA | integer | not null default nextval('myschema.myseq'::regclass)\ncolB | integer |\ncolC | integer |\ncolD | timestamp without time zone |\ncolE | json |\ncolF | integer |\ncolG | integer |\n \nI was wondering if the fact that we use a json column could be interfering with the wastedbytes calculation. Can anyone explain how wastedbytes could increase from a vacuum?\n \nThanks,\nMike\n\n\n\n\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other\n than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction\n or use of all or any part of this email without the express written consent of K&S is prohibited.",
"msg_date": "Fri, 6 Dec 2019 17:18:20 +0000",
"msg_from": "Mike Schanne <[email protected]>",
"msg_from_op": true,
"msg_subject": "unexpected result for wastedbytes query after vacuum full"
},
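[Editorial sketch, not part of the archived exchange.] When the wiki's estimate-based query gives surprising numbers, the contrib module pgstattuple measures dead space exactly, at the cost of scanning the relation. The object names below reuse the anonymized ones from the question, and pgstatindex applies to btree indexes only.

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- Exact table-level figures: dead_tuple_len, free_space, etc.
    SELECT * FROM pgstattuple('myschema.mytableA');

    -- Exact index-level figures: avg_leaf_density, leaf_fragmentation, etc.
    SELECT * FROM pgstatindex('myschema.myindex1');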
{
"msg_contents": "On Fri, Dec 06, 2019 at 05:18:20PM +0000, Mike Schanne wrote:\n> Hi all,\n> \n> This question is somewhat related to my previous question:\n> https://www.postgresql.org/message-id/0871fcf35ceb4caa8a2204ca9c38e330%40USEPRDEX1.corp.kns.com\n> \n> I was attempting to measure the benefit of doing a VACUUM FULL on my database. I was using the query found here:\n> https://wiki.postgresql.org/wiki/Show_database_bloat\n> \n> However, I got an unexpected result in that the \"wastedbytes\" value actually increased for some tables after doing the vacuum.\n\n> I was wondering if the fact that we use a json column could be interfering with the wastedbytes calculation. Can anyone explain how wastedbytes could increase from a vacuum?\n\nIs it due to dropped columns, like Tom explained here ?\nhttps://www.postgresql.org/message-id/18375.1520723971%40sss.pgh.pa.us\n\n\n",
"msg_date": "Fri, 6 Dec 2019 17:28:49 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unexpected result for wastedbytes query after vacuum full"
},
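[Editorial sketch, not part of the archived exchange.] The dropped-column explanation Justin links to can be checked in a few seconds: any attisdropped entries on the table are invisible to \d but can still affect how row width is estimated versus what is actually stored. The table name reuses the anonymized mytableA from the question.

    -- List every attribute, including dropped ones, for the table in question.
    SELECT attnum, attname, atttypid::regtype, attisdropped
    FROM pg_attribute
    WHERE attrelid = 'myschema.mytableA'::regclass
      AND attnum > 0
    ORDER BY attnum;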
{
"msg_contents": "Yes, the additional bitmap could certainly explain the increase.\r\n\r\nThanks,\r\nMike\r\n\r\n-----Original Message-----\r\nFrom: Justin Pryzby [mailto:[email protected]]\r\nSent: Friday, December 06, 2019 6:29 PM\r\nTo: Mike Schanne\r\nCc: [email protected]\r\nSubject: Re: unexpected result for wastedbytes query after vacuum full\r\n\r\nOn Fri, Dec 06, 2019 at 05:18:20PM +0000, Mike Schanne wrote:\r\n> Hi all,\r\n>\r\n> This question is somewhat related to my previous question:\r\n> https://www.postgresql.org/message-id/0871fcf35ceb4caa8a2204ca9c38e330%40USEPRDEX1.corp.kns.com\r\n>\r\n> I was attempting to measure the benefit of doing a VACUUM FULL on my database. I was using the query found here:\r\n> https://wiki.postgresql.org/wiki/Show_database_bloat\r\n>\r\n> However, I got an unexpected result in that the \"wastedbytes\" value actually increased for some tables after doing the vacuum.\r\n\r\n> I was wondering if the fact that we use a json column could be interfering with the wastedbytes calculation. Can anyone explain how wastedbytes could increase from a vacuum?\r\n\r\nIs it due to dropped columns, like Tom explained here ?\r\nhttps://www.postgresql.org/message-id/18375.1520723971%40sss.pgh.pa.us\r\n\r\n________________________________\r\n\r\nThis email is non-binding, is subject to contract, and neither Kulicke and Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”) shall have any obligation to you to consummate the transactions herein or to enter into any agreement, other than in accordance with the terms and conditions of a definitive agreement if and when negotiated, finalized and executed between the parties. This email and all its contents are protected by International and United States copyright laws. Any reproduction or use of all or any part of this email without the express written consent of K&S is prohibited.\r\n",
"msg_date": "Mon, 9 Dec 2019 23:06:52 +0000",
"msg_from": "Mike Schanne <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: unexpected result for wastedbytes query after vacuum full"
},
{
"msg_contents": "Le ven. 6 déc. 2019 à 18:18, Mike Schanne <[email protected]> a écrit :\n\n> Hi all,\n>\n>\n>\n> This question is somewhat related to my previous question:\n>\n>\n> https://www.postgresql.org/message-id/0871fcf35ceb4caa8a2204ca9c38e330%40USEPRDEX1.corp.kns.com\n>\n>\n>\n> I was attempting to measure the benefit of doing a VACUUM FULL on my\n> database. I was using the query found here:\n>\n>\n>\n> https://wiki.postgresql.org/wiki/Show_database_bloat\n>\n>\n>\n> However, I got an unexpected result in that the “wastedbytes” value\n> actually increased for some tables after doing the vacuum.\n>\n>\n>\n> Before VACUUM FULL:\n>\n> current_database | schemaname | tablename | tbloat |\n> wastedbytes |\n> iname | ibloat | wastedibytes\n>\n>\n> ------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74440704 | myindex1\n> | 0.2 | 0\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74440704 | myindex2\n> | 0.2 | 0\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74440704 | myindex3\n> | 0.2 | 0\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74440704 | myindex4\n> | 0.2 | 0\n>\n> postgres | myschema | mytableB | 1.0 |\n> 63324160 | myindex5\n> | 0.0 | 0\n>\n> ...\n>\n> After VACUUM FULL:\n>\n> current_database | schemaname | tablename | tbloat |\n> wastedbytes |\n> iname | ibloat | wastedibytes\n>\n>\n> ------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74506240 |\n> myindex4 | 0.2\n> | 0\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74506240 |\n> myindex3 | 0.2\n> | 0\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74506240 | myindex2\n> | 0.2 | 0\n>\n> postgres | myschema | mytableA | 1.1 |\n> 74506240 |\n> myindex1 | 0.2\n> | 0\n>\n> postgres | myschema | mytableB | 1.0 |\n> 63332352 |\n> myindex5 | 0.0\n> | 0\n>\n> ...\n>\n>\n>\n> This is the schema for mytableA above:\n>\n>\n>\n> Column | Type |\n> Modifiers\n>\n>\n> ---------------+-----------------------------+----------------------------------------------------------------\n>\n> colA | integer | not null default\n> nextval('myschema.myseq'::regclass)\n>\n> colB | integer |\n>\n> colC | integer |\n>\n> colD | timestamp without time zone |\n>\n> colE | json |\n>\n> colF | integer |\n>\n> colG | integer |\n>\n>\n>\n> I was wondering if the fact that we use a json column could be interfering\n> with the wastedbytes calculation. Can anyone explain how wastedbytes could\n> increase from a vacuum?\n>\n>\n>\n\nThis query uses the column statistics to estimate bloat. AFAIK, json\ncolumns don't have statistics, so the estimation can't be relied on (for\nthis specific table at least).\n\n\n-- \nGuillaume.\n\nLe ven. 6 déc. 2019 à 18:18, Mike Schanne <[email protected]> a écrit :\n\n\nHi all,\n \nThis question is somewhat related to my previous question:\nhttps://www.postgresql.org/message-id/0871fcf35ceb4caa8a2204ca9c38e330%40USEPRDEX1.corp.kns.com\n \nI was attempting to measure the benefit of doing a VACUUM FULL on my database. I was using the query found here:\n \nhttps://wiki.postgresql.org/wiki/Show_database_bloat\n \nHowever, I got an unexpected result in that the “wastedbytes” value actually increased for some tables after doing the vacuum. 
\r\n\n \nBefore VACUUM FULL:\ncurrent_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes\r\n\n------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex1 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex2 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex3 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74440704 | myindex4 | 0.2 | 0\npostgres | myschema | mytableB | 1.0 | 63324160 | myindex5 | 0.0 | 0\n...\n\nAfter VACUUM FULL:\n current_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes\r\n\n------------------+----------------+---------------------------+--------+-------------+-----------------------------------------------------------------+--------+--------------\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex4 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex3 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex2 | 0.2 | 0\npostgres | myschema | mytableA | 1.1 | 74506240 | myindex1 | 0.2 | 0\npostgres | myschema | mytableB | 1.0 | 63332352 | myindex5 | 0.0 | 0\n...\n \nThis is the schema for mytableA above:\n \n Column | Type | Modifiers\n---------------+-----------------------------+----------------------------------------------------------------\ncolA | integer | not null default nextval('myschema.myseq'::regclass)\ncolB | integer |\ncolC | integer |\ncolD | timestamp without time zone |\ncolE | json |\ncolF | integer |\ncolG | integer |\n \nI was wondering if the fact that we use a json column could be interfering with the wastedbytes calculation. Can anyone explain how wastedbytes could increase from a vacuum?\n This query uses the column statistics to estimate bloat. AFAIK, json columns don't have statistics, so the estimation can't be relied on (for this specific table at least).-- Guillaume.",
"msg_date": "Tue, 10 Dec 2019 17:43:31 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unexpected result for wastedbytes query after vacuum full"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 11:43 AM Guillaume Lelarge <[email protected]>\nwrote:\n\nThis query uses the column statistics to estimate bloat. AFAIK, json\n> columns don't have statistics, so the estimation can't be relied on (for\n> this specific table at least).\n>\n\nThis was true prior to 9.5 (for xml at least, I don't know about json), but\nshould not be true from that release onward. But still the difference\nbetween 74440704 and 74506240, this does seem to me to be straining at a\ngnat to swallow a camel.\n\nCheers,\n\nJeff\n\nOn Tue, Dec 10, 2019 at 11:43 AM Guillaume Lelarge <[email protected]> wrote:This query uses the column statistics to estimate bloat. AFAIK, json columns don't have statistics, so the estimation can't be relied on (for this specific table at least).This was true prior to 9.5 (for xml at least, I don't know about json), but should not be true from that release onward. But still the difference between 74440704 and 74506240, this does seem to me to be straining at a gnat to swallow a camel.Cheers,Jeff",
"msg_date": "Tue, 10 Dec 2019 14:48:26 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unexpected result for wastedbytes query after vacuum full"
},
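[Editorial sketch, not part of the archived exchange.] The figures Jeff refers to can be read straight out of pg_stats; null_frac and avg_width are the inputs the wiki query relies on for its width estimate, including for the json column. Schema and table names reuse the anonymized ones above.

    SELECT attname, null_frac, avg_width, n_distinct
    FROM pg_stats
    WHERE schemaname = 'myschema'
      AND tablename  = 'mytableA'
    ORDER BY attname;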
{
"msg_contents": "Le mar. 10 déc. 2019 à 20:48, Jeff Janes <[email protected]> a écrit :\n\n> On Tue, Dec 10, 2019 at 11:43 AM Guillaume Lelarge <[email protected]>\n> wrote:\n>\n> This query uses the column statistics to estimate bloat. AFAIK, json\n>> columns don't have statistics, so the estimation can't be relied on (for\n>> this specific table at least).\n>>\n>\n> This was true prior to 9.5 (for xml at least, I don't know about json),\n> but should not be true from that release onward. But still the difference\n> between 74440704 and 74506240, this does seem to me to be straining at a\n> gnat to swallow a camel.\n>\n>\nI just checked, and you're right. There are less statistics with json, but\nthe important ones (null_frac and avg_width) are available for json and\njsonb datatypes. So the query should work even for tables using these\ndatatypes.\n\nThanks for the information, that's very interesting. And I apologize for\nthe noise.\n\n\n-- \nGuillaume.\n\nLe mar. 10 déc. 2019 à 20:48, Jeff Janes <[email protected]> a écrit :On Tue, Dec 10, 2019 at 11:43 AM Guillaume Lelarge <[email protected]> wrote:This query uses the column statistics to estimate bloat. AFAIK, json columns don't have statistics, so the estimation can't be relied on (for this specific table at least).This was true prior to 9.5 (for xml at least, I don't know about json), but should not be true from that release onward. But still the difference between 74440704 and 74506240, this does seem to me to be straining at a gnat to swallow a camel.I just checked, and you're right. There are less statistics with json, but the important ones (null_frac and avg_width) are available for json and jsonb datatypes. So the query should work even for tables using these datatypes.Thanks for the information, that's very interesting. And I apologize for the noise.-- Guillaume.",
"msg_date": "Wed, 11 Dec 2019 16:23:05 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unexpected result for wastedbytes query after vacuum full"
}
] |
[
{
"msg_contents": "Hi team,\n\nCould you please help me with this strange issue I am facing in my current live server I am maintaining.\n\nThere is a specific search query I am running to get list of Documents and their metadata from several table in the DB.\nWe are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance)\n\nOur current DB consists of 500GB of data and indexes. Most of the rows in table are consist of 454,078,915\n\nWith the fresh DB with the restore of the DATA without any indexes Search query performs relatively quick and most of the time its less than a second.\n\nBut after 3 weeks of use of the DB it sudenly started to slowdown only for this perticular query and it takes 20+ seconds to respond. If I do a restore the DB again then it continues to work fine and the symptom pops out after 3 weeks time.\n\nI am just suspecting is there any cache or index maxing out causes this issue?\n\nCould you please guide me what can it be the root cause of this issue?\n\n\nThank you,\nFahiz\n\n\n\n\n\n\n\n\nHi team,\n\nCould you please help me with this strange issue I am facing in my current live server I am maintaining.\n\nThere is a specific search query I am running to get list of Documents and their metadata from several table in the DB.\nWe are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance)\n\nOur current DB consists of 500GB of data and indexes. Most of the rows in table are consist of 454,078,915\n\n\n\nWith the fresh DB with the restore of the DATA without any indexes Search query performs relatively quick and most of the time its less than a second. \n\nBut after 3 weeks of use of the DB it sudenly started to slowdown only for this perticular query and it takes 20+ seconds to respond. If I do a restore the DB again then it continues to work fine and the symptom pops out after 3 weeks time. \n\nI am just suspecting is there any cache or index maxing out causes this issue?\n\nCould you please guide me what can it be the root cause of this issue?\n\n\nThank you,\nFahiz",
"msg_date": "Sat, 7 Dec 2019 20:05:59 +0000",
"msg_from": "Fahiz Mohamed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Specific query taking time to process"
},
{
"msg_contents": "On Sat, Dec 07, 2019 at 08:05:59PM +0000, Fahiz Mohamed wrote:\n> There is a specific search query I am running to get list of Documents and their metadata from several table in the DB.\n> We are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance)\n> \n> Our current DB consists of 500GB of data and indexes. Most of the rows in table are consist of�454,078,915\n\n454M rows or ??\n\n> With the fresh DB with the restore of the DATA without any indexes Search query performs relatively quick and most of the time its less than a second.\n\n> But after 3 weeks of use of the DB it sudenly started to slowdown only for this perticular query and it takes 20+ seconds to respond. If I do a restore the DB again then it continues to work fine and the symptom pops out after 3 weeks time.\n> \n> I am just suspecting is there any cache or index maxing out causes this issue?\n> \n> Could you please guide me what can it be the root cause of this issue?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nCould you send explain ANALYZE (attach here as txt attachment or link on\ndepesz) now and compared with shortly after a restore ?\n\nJustin\n\n\n",
"msg_date": "Sun, 8 Dec 2019 22:13:07 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
{
"msg_contents": ">\n> There is a specific search query I am running to get list of Documents and\n> their metadata from several table in the DB.\n> We are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance)\n>\n> Our current DB consists of 500GB of data and indexes. Most of the rows in\n> table are consist of 454,078,915\n>\n> With the fresh DB with the restore of the DATA without any indexes Search\n> query performs relatively quick and most of the time its less than a\n> second.\n>\n> But after 3 weeks of use of the DB it sudenly started to slowdown only for\n> this perticular query and it takes 20+ seconds to respond. If I do a\n> restore the DB again then it continues to work fine and the symptom pops\n> out after 3 weeks time.\n>\n\n\nYou haven't been quite clear on the situation and your use case, but\nassuming this table has 454 million rows and experiences updates/deletes\nthen this sounds like you may be having problems with autovacuum. Have you\ncustomized parameters to ensure it is running more frequently than default?\nHow are you doing those data restores? Perhaps that process is cleaning up\nthe accumulated bloat and you can run fine again for a while. Check\npg_stat_user_tables for the last (auto)vacuum that ran, assuming you didn't\njust restore again and are expecting the issue to occur again soon.\n\nThere is a specific search query I am running to get list of Documents and their metadata from several table in the DB.\nWe are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance)\n\nOur current DB consists of 500GB of data and indexes. Most of the rows in table are consist of 454,078,915\n\n\n\nWith the fresh DB with the restore of the DATA without any indexes Search query performs relatively quick and most of the time its less than a second. \n\nBut after 3 weeks of use of the DB it sudenly started to slowdown only for this perticular query and it takes 20+ seconds to respond. If I do a restore the DB again then it continues to work fine and the symptom pops out after 3 weeks time. You haven't been quite clear on the situation and your use case, but assuming this table has 454 million rows and experiences updates/deletes then this sounds like you may be having problems with autovacuum. Have you customized parameters to ensure it is running more frequently than default? How are you doing those data restores? Perhaps that process is cleaning up the accumulated bloat and you can run fine again for a while. Check pg_stat_user_tables for the last (auto)vacuum that ran, assuming you didn't just restore again and are expecting the issue to occur again soon.",
"msg_date": "Mon, 9 Dec 2019 12:03:15 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
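[Editorial sketch, not part of the archived thread.] The check Michael suggests, plus the dead-tuple counters that show whether autovacuum keeps up between the three-weekly restores; all of these columns exist on the 9.6 release in question.

    SELECT relname,
           last_vacuum, last_autovacuum,
           last_analyze, last_autoanalyze,
           n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;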
{
"msg_contents": "Thank you very much for your prompt responses.\n\nI have analysed more regarding this and found the long running query.\n\nI ran \"explain analyse\" on this query and I got following result. (We have 2 identical DB instances and they consist of same data. Instane 1 took 20+ second to process and instance 2 took less than a second)\n\nInstance 1: (This is used by regular User - More than 600,000 request a day) - The result is same even when there is no user in the server.\nEXPLAIN ANALYZE\nNested Loop Semi Join (cost=998547.53..3319573.36 rows=1 width=8) (actual time=10568.217..22945.971 rows=22 loops=1)\n -> Hash Semi Join (cost=998546.96..3319545.95 rows=41 width=16) (actual time=10568.198..22945.663 rows=22 loops=1)\n Hash Cond: (node.id = prop.node_id)\n -> Bitmap Heap Scan on alf_node node (cost=995009.97..3303978.85 rows=4565737 width=8) (actual time=3304.419..20465.551 rows=41109751 loops=1)\n Recheck Cond: ((store_id = 6) AND (type_qname_id = 240))\n Rows Removed by Index Recheck: 54239131\n Filter: (NOT (hashed SubPlan 1))\n Rows Removed by Filter: 2816\n Heap Blocks: exact=24301 lossy=1875383\n -> Bitmap Index Scan on idx_alf_node_mdq (cost=0.00..646144.01 rows=20047144 width=0) (actual time=3232.067..3232.067 rows=44246360 loops=1)\n Index Cond: ((store_id = 6) AND (type_qname_id = 240))\n SubPlan 1\n -> Bitmap Heap Scan on alf_node_aspects aspect_1 (cost=2503.51..347403.58 rows=128379 width=8) (actual time=25.447..65.392 rows=5635 loops=1)\n Recheck Cond: (qname_id = 251)\n Heap Blocks: exact=40765\n -> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..2471.41 rows=128379 width=0) (actual time=18.835..18.835 rows=239610 loops=1)\n Index Cond: (qname_id = 251)\n -> Hash (cost=3526.11..3526.11 rows=871 width=8) (actual time=0.045..0.045 rows=23 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Index Only Scan using idx_alf_nprop_s on alf_node_properties prop (cost=0.70..3526.11 rows=871 width=8) (actual time=0.021..0.042 rows=23 loops=1)\n Index Cond: ((qname_id = '242'::bigint) AND (string_value = 'E292432'::text))\n Heap Fetches: 23\n -> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..2.01 rows=15 width=8) (actual time=0.011..0.011 rows=1 loops=22)\n Index Cond: ((node_id = node.id) AND (qname_id = 245))\n Heap Fetches: 22\nPlanning time: 0.639 ms\nExecution time: 22946.036 ms\n\nInstance 2: (Only by testers - 250 request a day)\n\nNested Loop Semi Join (cost=6471.94..173560841.08 rows=2 width=8) (actual time=0.162..0.464 rows=17 loops=1)\n -> Nested Loop (cost=6471.37..173560684.36 rows=45 width=16) (actual time=0.154..0.387 rows=17 loops=1)\n -> HashAggregate (cost=3508.15..3516.80 rows=865 width=8) (actual time=0.041..0.047 rows=18 loops=1)\n Group Key: prop.node_id\n -> Index Only Scan using idx_alf_nprop_s on alf_node_properties prop (cost=0.70..3505.99 rows=866 width=8) (actual time=0.020..0.035 r\nows=18 loops=1)\n Index Cond: ((qname_id = '242'::bigint) AND (string_value = 'E292432'::text))\n Heap Fetches: 18\n -> Index Scan using alf_node_pkey on alf_node node (cost=2963.22..200644.11 rows=1 width=8) (actual time=0.019..0.019 rows=1 loops=18)\n Index Cond: (id = prop.node_id)\n Filter: ((type_qname_id <> 145) AND (store_id = 6) AND (type_qname_id = 240) AND (NOT (SubPlan 1)))\n Rows Removed by Filter: 0\n SubPlan 1\n -> Materialize (cost=2962.65..397912.89 rows=158204 width=8) (actual time=0.001..0.009 rows=85 loops=17)\n -> Bitmap Heap Scan on alf_node_aspects aspect_1 (cost=2962.65..396503.87 rows=158204 
width=8) (actual time=0.021..0.082 rows=\n85 loops=1)\n Recheck Cond: (qname_id = 251)\n Heap Blocks: exact=55\n -> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..2923.10 rows=158204 width=0) (actual time=0.015..0.015 rows=87 loops=\n1)\n Index Cond: (qname_id = 251)\n -> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..34.32 rows=12 width=8) (actual time=0.004..0.004 rows=1 loop\ns=17)\n Index Cond: ((node_id = node.id) AND (qname_id = 245))\n Heap Fetches: 17\nPlanning time: 0.623 ms\nExecution time: 0.540 ms\n\nConfigurations are same in both servers.\n\nPlease advise me on this. Is there any configuration specifically I need to look like “work_mem”, “Shared_buffers”, “checkpoint_segment”, “effective_cache_size”, “enable_seqscan” and “checkpoint_compression_target”?\n\nThanks in advance.\n\nFahiz\n\nOn 9 Dec 2019, 19:03 +0000, Michael Lewis <[email protected]>, wrote:\n> > > There is a specific search query I am running to get list of Documents and their metadata from several table in the DB.\n> > > We are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance)\n> > >\n> > > Our current DB consists of 500GB of data and indexes. Most of the rows in table are consist of 454,078,915\n> > >\n> > > With the fresh DB with the restore of the DATA without any indexes Search query performs relatively quick and most of the time its less than a second.\n> > >\n> > > But after 3 weeks of use of the DB it sudenly started to slowdown only for this perticular query and it takes 20+ seconds to respond. If I do a restore the DB again then it continues to work fine and the symptom pops out after 3 weeks time.\n> >\n> >\n> > You haven't been quite clear on the situation and your use case, but assuming this table has 454 million rows and experiences updates/deletes then this sounds like you may be having problems with autovacuum. Have you customized parameters to ensure it is running more frequently than default? How are you doing those data restores? Perhaps that process is cleaning up the accumulated bloat and you can run fine again for a while. Check pg_stat_user_tables for the last (auto)vacuum that ran, assuming you didn't just restore again and are expecting the issue to occur again soon.",
"msg_date": "Mon, 9 Dec 2019 22:39:38 +0000",
"msg_from": "Fahiz Mohamed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specific query taking time to process"
},
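Two of the parameter names asked about above do not exist under those names in 9.6: checkpoint_segments was removed in 9.5 in favour of max_wal_size, and "checkpoint_compression_target" presumably means checkpoint_completion_target. A minimal sketch, assuming psql access to both instances, of how the settings in question could be compared and the plan re-captured with buffer counts (the search query itself is abbreviated, since only its plans appear in the thread):

    SELECT name, setting, unit, source
      FROM pg_settings
     WHERE name IN ('work_mem', 'shared_buffers', 'effective_cache_size',
                    'enable_seqscan', 'checkpoint_completion_target',
                    'max_wal_size', 'random_page_cost');

    -- Capture the plan with buffer accounting on both instances:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;   -- the document-search query shown above, unchanged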
{
"msg_contents": "On Mon, Dec 9, 2019 at 3:39 PM Fahiz Mohamed <[email protected]> wrote:\n\n> I ran \"explain analyse\" on this query and I got following result. (We have\n> 2 identical DB instances and they consist of same data. Instane 1 took 20+\n> second to process and instance 2 took less than a second)\n>\n> Instance 1: (This is used by regular User - More than 600,000 request a\n> day) - The result is same even when there is no user in the server.\n>\n> -> Bitmap Heap Scan on alf_node node (cost=995009.97..3303978.85 rows=4565737 width=8) (actual time=3304.419..20465.551 rows=41109751 loops=1)\n> Recheck Cond: ((store_id = 6) AND (type_qname_id = 240))\n> Rows Removed by Index Recheck: 54239131\n> Filter: (NOT (hashed SubPlan 1))\n> Rows Removed by Filter: 2816\n> Heap Blocks: exact=24301 lossy=1875383\n>\n>\n> Planning time: 0.639 ms\n> Execution time: 22946.036 ms\n>\n>\nhttps://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-lossyexact-pages-for-bitmap-heap-scan/\n\n\nThat seems like a lot of lossy blocks. As I understand it, that means the\nsystem didn't have enough work_mem to fit all the references to the\nindividual rows which perhaps isn't surprising when it estimates it needs\n4.5 million rows and ends up with 41 million.\n\nDo both DB instances have the same data? I ask because the two plans are\nrather different which makes me think that statistics about the data are\nnot very similar. Are both configured the same, particularly for\nshared_buffers and work_mem, as well as the various planning cost\nparameters like random_page cost? If you can provide these plans again with\nexplain( analyze, buffers ) this time? Did you check on the last time\nautovacuum ran in pg_stat_user_tables?",
"msg_date": "Tue, 10 Dec 2019 13:15:08 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
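A hedged sketch of the pg_stat_user_tables check suggested above, using the table names that appear later in the thread; NULL last_autovacuum/last_autoanalyze on tables this large would support the autovacuum theory:

    SELECT relname,
           last_vacuum, last_autovacuum,
           last_analyze, last_autoanalyze,
           n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname IN ('alf_node', 'alf_node_properties', 'alf_node_aspects');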
{
"msg_contents": "On Mon, Dec 09, 2019 at 10:39:38PM +0000, Fahiz Mohamed wrote:\n> Thank you very much for your prompt responses.\n> \n> I have analysed more regarding this and found the long running query.\n> \n> I ran \"explain analyse\" on this query and I got following result. (We have 2 identical DB instances and they consist of same data. Instane 1 took 20+ second to process and instance 2 took less than a second)\n> \n> Instance 1: (This is used by regular User - More than 600,000 request a day) - The result is same even when there is no user in the server.\n> EXPLAIN ANALYZE\n> Nested Loop Semi Join (cost=998547.53..3319573.36 rows=1 width=8) (actual time=10568.217..22945.971 rows=22 loops=1)\n> -> Hash Semi Join (cost=998546.96..3319545.95 rows=41 width=16) (actual time=10568.198..22945.663 rows=22 loops=1)\n> Hash Cond: (node.id = prop.node_id)\n> -> Bitmap Heap Scan on alf_node node (cost=995009.97..3303978.85 rows=4565737 width=8) (actual time=3304.419..20465.551 rows=41109751 loops=1)\n> Recheck Cond: ((store_id = 6) AND (type_qname_id = 240))\n> Rows Removed by Index Recheck: 54239131\n> Filter: (NOT (hashed SubPlan 1))\n> Rows Removed by Filter: 2816\n> Heap Blocks: exact=24301 lossy=1875383\n...\n\nThis is doing a bitmap scan on alf_node where the second is doing an Index Scan.\nIs alf_node well-correlated on the 2nd server on store_id or type_qname_id ?\nIf you CLUSTER and ANALYZE on the 1st server, maybe it would perform similarly.\n(But, that could hurt other queries).\n\n> We are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance)\n> With the fresh DB with the restore of the DATA without any indexes Search query performs relatively quick and most of the time its less than a second.\n> But after 3 weeks of use of the DB it sudenly started to slowdown only for this perticular query and it takes 20+ seconds to respond. If I do a restore the DB again then it continues to work fine and the symptom pops out after 3 weeks time.\n\n> Instance 2: (Only by testers - 250 request a day)\n> \n> Nested Loop Semi Join (cost=6471.94..173560841.08 rows=2 width=8) (actual time=0.162..0.464 rows=17 loops=1)\n> -> Nested Loop (cost=6471.37..173560684.36 rows=45 width=16) (actual time=0.154..0.387 rows=17 loops=1)\n> -> HashAggregate (cost=3508.15..3516.80 rows=865 width=8) (actual time=0.041..0.047 rows=18 loops=1)\n> Group Key: prop.node_id\n> -> Index Only Scan using idx_alf_nprop_s on alf_node_properties prop (cost=0.70..3505.99 rows=866 width=8) (actual time=0.020..0.035 r\n> ows=18 loops=1)\n> Index Cond: ((qname_id = '242'::bigint) AND (string_value = 'E292432'::text))\n> Heap Fetches: 18\n> -> Index Scan using alf_node_pkey on alf_node node (cost=2963.22..200644.11 rows=1 width=8) (actual time=0.019..0.019 rows=1 loops=18)\n> Index Cond: (id = prop.node_id)\n> Filter: ((type_qname_id <> 145) AND (store_id = 6) AND (type_qname_id = 240) AND (NOT (SubPlan 1)))\n> Rows Removed by Filter: 0\n> SubPlan 1\n\n\n",
"msg_date": "Tue, 10 Dec 2019 14:42:06 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
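The correlation question can be answered from pg_stats before reaching for anything as heavy as CLUSTER. The sketch below assumes the column and index names visible in the plans; the CLUSTER statement is included only to make the suggestion concrete, since it takes an ACCESS EXCLUSIVE lock and rewrites the whole table:

    -- How well is alf_node physically ordered on the filtered columns?
    SELECT tablename, attname, correlation
      FROM pg_stats
     WHERE tablename = 'alf_node'
       AND attname IN ('store_id', 'type_qname_id');

    -- The reordering Justin mentions, using the index from the slow plan:
    CLUSTER alf_node USING idx_alf_node_mdq;
    ANALYZE alf_node;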
{
"msg_contents": "There is a slight different in both instance’s data. Inastanbce 1 contains latest data and instance 2 consists of data which is 3 weeks older than instance 1.\n\nFollowing are the number of rows in each table in both instances\n\nInstance 1\nalf_node : 96493129 rows\nalf_node_properties : 455599288 rows\nalf_node_aspects : 150153006 rowsInstance 2\nalf_node : 90831396 rows\nalf_node_properties : 440792247 rows\nalf_node_aspects : 146648241 rows\n\nI hope the above data difference can make a drastic difference. Please correct me if I am wrong.\n\nI checked \"pg_stat_user_tables\" and autovacuum never run against those tables.\n\nI did execute vacuum manually and I noticed the below in the output\n\n\"INFO: vacuuming \"public.alf_node_aspects\"\nINFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row versions in 812242 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nCPU 13.53s/33.35u sec elapsed 77.88 sec.\n\n\nINFO: analyzing \"public.alf_node_aspects\"\nINFO: \"alf_node_aspects\": scanned 30000 of 812242 pages, containing 5550000 live rows and 0 dead rows; 30000 rows in sample, 150264770 estimated total rows\"\n\nDid the vacuum worked fine?\n\nI did \"explain( analyze, buffers )\" below are the results:\n\nInstance 1:\n\n Nested Loop Semi Join (cost=441133.07..1352496.76 rows=1 width=8) (actual time=9054.324..10029.748 rows=22 loops=1)\nBuffers: shared hit=1617812\n-> Merge Semi Join (cost=441132.50..1352359.19 rows=41 width=16) (actual time=9054.296..10029.547 rows=22 loops=1)\nMerge Cond: (node.id = prop.node_id)\nBuffers: shared hit=1617701\n-> Index Only Scan using idx_alf_node_tqn on alf_node node (cost=441041.93..1340831.48 rows=4593371 width=8) (actual t\nime=4.418..7594.637 rows=40998148 loops=1)\nIndex Cond: ((type_qname_id = 240) AND (store_id = 6))\nFilter: (NOT (hashed SubPlan 1))\nRows Removed by Filter: 130\nHeap Fetches: 0\nBuffers: shared hit=1617681\nSubPlan 1\n-> Bitmap Heap Scan on alf_node_aspects aspect_1 (cost=3418.63..440585.22 rows=182459 width=8) (actual time=0.\n583..2.992 rows=4697 loops=1)\nRecheck Cond: (qname_id = 251)\nHeap Blocks: exact=1774\nBuffers: shared hit=1791\n-> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..3373.01 rows=182459 width=0) (actual time=0.384..0.38\n4 rows=4697 loops=1)\nIndex Cond: (qname_id = 251)\nBuffers: shared hit=17\n-> Index Only Scan using idx_alf_nprop_s on alf_node_properties prop (cost=0.70..41.74 rows=852 width=8) (actual time=\n0.022..0.037 rows=23 loops=1)\nIndex Cond: ((qname_id = '242'::bigint) AND (string_value = 'E292432'::text))\nHeap Fetches: 0\nBuffers: shared hit=20\n-> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..38.80 rows=14 width=8) (actual time=0.\n007..0.007 rows=1 loops=22)\nIndex Cond: ((node_id = node.id) AND (qname_id = 245))\nHeap Fetches: 22\nBuffers: shared hit=111\nPlanning time: 0.621 ms\nExecution time: 10029.799 ms\n(29 rows)\n\n\nInstance 2:\n\nNested Loop Semi Join (cost=6471.94..173560891.23 rows=2 width=8) (actual time=0.162..0.470 rows=17 loops=1)\nBuffers: shared hit=257\n-> Nested Loop (cost=6471.37..173560734.51 rows=45 width=16) (actual time=0.154..0.392 rows=17 loops=1)\nBuffers: shared hit=172\n-> HashAggregate (cost=3508.15..3516.80 rows=865 width=8) (actual time=0.039..0.043 rows=18 loops=1)\nGroup Key: prop.node_id\nBuffers: shared hit=23\n-> Index Only Scan using idx_alf_nprop_s on alf_node_properties prop (cost=0.70..3505.99 rows=866 width=8)\n(actual time=0.019..0.033 rows=18 loops=1)\nIndex Cond: 
((qname_id = '242'::bigint) AND (string_value = 'E292432'::text))\nHeap Fetches: 18\nBuffers: shared hit=23\n-> Index Scan using alf_node_pkey on alf_node node (cost=2963.22..200644.17 rows=1 width=8) (actual time=0.019..\n0.019 rows=1 loops=18)\nIndex Cond: (id = prop.node_id)\nFilter: ((type_qname_id <> 145) AND (store_id = 6) AND (type_qname_id = 240) AND (NOT (SubPlan 1)))\nRows Removed by Filter: 0\nBuffers: shared hit=149\nSubPlan 1\n-> Materialize (cost=2962.65..397913.00 rows=158204 width=8) (actual time=0.002..0.009 rows=85 loops=17)\nBuffers: shared hit=59\n-> Bitmap Heap Scan on alf_node_aspects aspect_1 (cost=2962.65..396503.98 rows=158204 width=8) (ac\ntual time=0.025..0.085 rows=85 loops=1)\nRecheck Cond: (qname_id = 251)\nHeap Blocks: exact=55\nBuffers: shared hit=59\n-> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..2923.10 rows=158204 width=0) (actual time\n=0.019..0.019 rows=87 loops=1)\nIndex Cond: (qname_id = 251)\nBuffers: shared hit=4\n-> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..34.32 rows=12 width=8) (actual t\nime=0.004..0.004 rows=1 loops=17)\nIndex Cond: ((node_id = node.id) AND (qname_id = 245))\nHeap Fetches: 17\nBuffers: shared hit=85\nPlanning time: 0.642 ms\nExecution time: 0.546 ms\n(32 rows)\n\nWith the vacuum and reindex on those tables manage to reduce the time to 10s from 30s, but still I don't get the exact performance as Instance 2.\n\nwork_mem was set to 4mb previously and in instance 1 its set to 160mb and instance 2 still using 4mb as work_mem.\n\nshared_buffers set to 8GB in both.\n\nBoth are on Postgres DB version 9.6.9\n\nThanks in advance.\n\nFahiz\nOn 10 Dec 2019, 20:15 +0000, Michael Lewis <[email protected]>, wrote:\n> On Mon, Dec 9, 2019 at 3:39 PM Fahiz Mohamed <[email protected]> wrote:\n> > > I ran \"explain analyse\" on this query and I got following result. (We have 2 identical DB instances and they consist of same data. Instane 1 took 20+ second to process and instance 2 took less than a second)\n> > >\n> > > Instance 1: (This is used by regular User - More than 600,000 request a day) - The result is same even when there is no user in the server.\n> > > -> Bitmap Heap Scan on alf_node node (cost=995009.97..3303978.85 rows=4565737 width=8) (actual time=3304.419..20465.551 rows=41109751 loops=1)\n> > > Recheck Cond: ((store_id = 6) AND (type_qname_id = 240))\n> > > Rows Removed by Index Recheck: 54239131\n> > > Filter: (NOT (hashed SubPlan 1))\n> > > Rows Removed by Filter: 2816\n> > > Heap Blocks: exact=24301 lossy=1875383\n> > >\n> > > Planning time: 0.639 ms\n> > > Execution time: 22946.036 ms\n> >\n> > https://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-lossyexact-pages-for-bitmap-heap-scan/\n> >\n> > That seems like a lot of lossy blocks. As I understand it, that means the system didn't have enough work_mem to fit all the references to the individual rows which perhaps isn't surprising when it estimates it needs 4.5 million rows and ends up with 41 million.\n> >\n> > Do both DB instances have the same data? I ask because the two plans are rather different which makes me think that statistics about the data are not very similar. Are both configured the same, particularly for shared_buffers and work_mem, as well as the various planning cost parameters like random_page cost? If you can provide these plans again with explain( analyze, buffers ) this time? 
Did you check on the last time autovacuum ran in pg_stat_user_tables?\n> >\n> >\n\n\n\n\n\n\n\nThere is a slight different in both instance’s data. Inastanbce 1 contains latest data and instance 2 consists of data which is 3 weeks older than instance 1. \n\nFollowing are the number of rows in each table in both instances\n\n\n\n\n\n\n\nInstance 1\nalf_node : 96493129 rows\nalf_node_properties : 455599288 rows\nalf_node_aspects : 150153006 rowsInstance 2\nalf_node : 90831396 rows\nalf_node_properties : 440792247 rows\nalf_node_aspects : 146648241 rows\n\n\n\n\n\n\n\n\nI hope the above data difference can make a drastic difference. Please correct me if I am wrong.\n\nI checked \"pg_stat_user_tables\" and autovacuum never run against those tables.\n\nI did execute vacuum manually and I noticed the below in the output\n\n\"INFO: vacuuming \"public.alf_node_aspects\"\nINFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row versions in 812242 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nCPU 13.53s/33.35u sec elapsed 77.88 sec.\n\n\nINFO: analyzing \"public.alf_node_aspects\"\nINFO: \"alf_node_aspects\": scanned 30000 of 812242 pages, containing 5550000 live rows and 0 dead rows; 30000 rows in sample, 150264770 estimated total rows\"\n\nDid the vacuum worked fine?\n\nI did \"explain( analyze, buffers )\" below are the results:\n\nInstance 1: \n\n Nested Loop Semi Join (cost=441133.07..1352496.76 rows=1 width=8) (actual time=9054.324..10029.748 rows=22 loops=1)\nBuffers: shared hit=1617812\n-> Merge Semi Join (cost=441132.50..1352359.19 rows=41 width=16) (actual time=9054.296..10029.547 rows=22 loops=1)\nMerge Cond: (node.id = prop.node_id)\nBuffers: shared hit=1617701\n-> Index Only Scan using idx_alf_node_tqn on alf_node node (cost=441041.93..1340831.48 rows=4593371 width=8) (actual t\nime=4.418..7594.637 rows=40998148 loops=1)\nIndex Cond: ((type_qname_id = 240) AND (store_id = 6))\nFilter: (NOT (hashed SubPlan 1))\nRows Removed by Filter: 130\nHeap Fetches: 0\nBuffers: shared hit=1617681\nSubPlan 1\n-> Bitmap Heap Scan on alf_node_aspects aspect_1 (cost=3418.63..440585.22 rows=182459 width=8) (actual time=0.\n583..2.992 rows=4697 loops=1)\nRecheck Cond: (qname_id = 251)\nHeap Blocks: exact=1774\nBuffers: shared hit=1791\n-> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..3373.01 rows=182459 width=0) (actual time=0.384..0.38\n4 rows=4697 loops=1)\nIndex Cond: (qname_id = 251)\nBuffers: shared hit=17\n-> Index Only Scan using idx_alf_nprop_s on alf_node_properties prop (cost=0.70..41.74 rows=852 width=8) (actual time=\n0.022..0.037 rows=23 loops=1)\nIndex Cond: ((qname_id = '242'::bigint) AND (string_value = 'E292432'::text))\nHeap Fetches: 0\nBuffers: shared hit=20\n-> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..38.80 rows=14 width=8) (actual time=0.\n007..0.007 rows=1 loops=22)\nIndex Cond: ((node_id = node.id) AND (qname_id = 245))\nHeap Fetches: 22\nBuffers: shared hit=111\nPlanning time: 0.621 ms\nExecution time: 10029.799 ms\n(29 rows)\n\n\nInstance 2: \n\nNested Loop Semi Join (cost=6471.94..173560891.23 rows=2 width=8) (actual time=0.162..0.470 rows=17 loops=1)\nBuffers: shared hit=257\n-> Nested Loop (cost=6471.37..173560734.51 rows=45 width=16) (actual time=0.154..0.392 rows=17 loops=1)\nBuffers: shared hit=172\n-> HashAggregate (cost=3508.15..3516.80 rows=865 width=8) (actual time=0.039..0.043 rows=18 loops=1)\nGroup Key: prop.node_id\nBuffers: shared hit=23\n-> Index Only Scan using idx_alf_nprop_s on 
alf_node_properties prop (cost=0.70..3505.99 rows=866 width=8)\n(actual time=0.019..0.033 rows=18 loops=1)\nIndex Cond: ((qname_id = '242'::bigint) AND (string_value = 'E292432'::text))\nHeap Fetches: 18\nBuffers: shared hit=23\n-> Index Scan using alf_node_pkey on alf_node node (cost=2963.22..200644.17 rows=1 width=8) (actual time=0.019..\n0.019 rows=1 loops=18)\nIndex Cond: (id = prop.node_id)\nFilter: ((type_qname_id <> 145) AND (store_id = 6) AND (type_qname_id = 240) AND (NOT (SubPlan 1)))\nRows Removed by Filter: 0\nBuffers: shared hit=149\nSubPlan 1\n-> Materialize (cost=2962.65..397913.00 rows=158204 width=8) (actual time=0.002..0.009 rows=85 loops=17)\nBuffers: shared hit=59\n-> Bitmap Heap Scan on alf_node_aspects aspect_1 (cost=2962.65..396503.98 rows=158204 width=8) (ac\ntual time=0.025..0.085 rows=85 loops=1)\nRecheck Cond: (qname_id = 251)\nHeap Blocks: exact=55\nBuffers: shared hit=59\n-> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..2923.10 rows=158204 width=0) (actual time\n=0.019..0.019 rows=87 loops=1)\nIndex Cond: (qname_id = 251)\nBuffers: shared hit=4\n-> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..34.32 rows=12 width=8) (actual t\nime=0.004..0.004 rows=1 loops=17)\nIndex Cond: ((node_id = node.id) AND (qname_id = 245))\nHeap Fetches: 17\nBuffers: shared hit=85\nPlanning time: 0.642 ms\nExecution time: 0.546 ms\n(32 rows)\n\nWith the vacuum and reindex on those tables manage to reduce the time to 10s from 30s, but still I don't get the exact performance as Instance 2.\n\nwork_mem was set to 4mb previously and in instance 1 its set to 160mb and instance 2 still using 4mb as work_mem. \n\nshared_buffers set to 8GB in both.\n\nBoth are on Postgres DB version 9.6.9\n\nThanks in advance.\n\nFahiz\n\nOn 10 Dec 2019, 20:15 +0000, Michael Lewis <[email protected]>, wrote:\n\n\nOn Mon, Dec 9, 2019 at 3:39 PM Fahiz Mohamed <[email protected]> wrote:\n\n\n\n\n\nI ran \"explain analyse\" on this query and I got following result. (We have 2 identical DB instances and they consist of same data. Instane 1 took 20+ second to process and instance 2 took less than a second)\n\nInstance 1: (This is used by regular User - More than 600,000 request a day) - The result is same even when there is no user in the server.\n\n\n -> Bitmap Heap Scan on alf_node node (cost=995009.97..3303978.85 rows=4565737 width=8) (actual time=3304.419..20465.551 rows=41109751 loops=1) Recheck Cond: ((store_id = 6) AND (type_qname_id = 240)) Rows Removed by Index Recheck: 54239131 Filter: (NOT (hashed SubPlan 1)) Rows Removed by Filter: 2816 Heap Blocks: exact=24301 lossy=1875383\n Planning time: 0.639 ms Execution time: 22946.036 ms\n\n\n\n\n\nhttps://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-lossyexact-pages-for-bitmap-heap-scan/ \n\nThat seems like a lot of lossy blocks. As I understand it, that means the system didn't have enough work_mem to fit all the references to the individual rows which perhaps isn't surprising when it estimates it needs 4.5 million rows and ends up with 41 million.\n\nDo both DB instances have the same data? I ask because the two plans are rather different which makes me think that statistics about the data are not very similar. Are both configured the same, particularly for shared_buffers and work_mem, as well as the various planning cost parameters like random_page cost? If you can provide these plans again with explain( analyze, buffers ) this time? Did you check on the last time autovacuum ran in pg_stat_user_tables?",
"msg_date": "Wed, 11 Dec 2019 19:53:57 +0000",
"msg_from": "Fahiz Mohamed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specific query taking time to process"
},
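The VACUUM output quoted above covers only alf_node_aspects. If bloat or stale statistics are suspected, the other two tables in the query can be treated the same way; a sketch, best run in a quiet window given the table sizes:

    VACUUM (VERBOSE, ANALYZE) alf_node;
    VACUUM (VERBOSE, ANALYZE) alf_node_properties;
    VACUUM (VERBOSE, ANALYZE) alf_node_aspects;

    -- Rough before/after size check to see whether anything changed:
    SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
      FROM pg_stat_user_tables
     WHERE relname IN ('alf_node', 'alf_node_properties', 'alf_node_aspects');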
{
"msg_contents": "This seems beyond me at this point, but I am curious if you also\nvacuumed alf_node_properties and alf_node tables and checked when they last\ngot (auto)vacuumed/analyzed. With default configs for autovacuum parameters\nand tables with that many rows, they don't qualify for autovacuum very\noften. I don't have much experience with tables in excess of 50 million\nrows because of manual sharding clients data.\n\nYou mention work_mem is set differently. Did you try setting work_mem back\nto 4MB in session on instance 1 just to test the query? I don't know if\nwork_mem is included in planning stage, but I would think it may be\nconsidered. It would be odd for more available memory to end up with a\nslower plan, but I like to eliminate variables whenever possible.\n\nIt might be worthwhile to see about increasing default_statistics_target to\nget more specific stats, but that can result in a dramatic increase in\nplanning time for even simple queries.\n\nHopefully one of the real experts chimes in.",
"msg_date": "Wed, 11 Dec 2019 13:09:19 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
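A sketch of the session-scoped work_mem test suggested above; nothing in postgresql.conf needs to change, and the setting reverts on RESET (the slow query itself is abbreviated here, since only its plans appear in the thread):

    SET work_mem = '4MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;            -- the slow document-search query
    RESET work_mem;

The statistics idea can likewise be tried per column, as later messages do, rather than by raising default_statistics_target globally.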
{
"msg_contents": "On Tue, Dec 10, 2019 at 3:40 AM Fahiz Mohamed <[email protected]> wrote:\n\n> Thank you very much for your prompt responses.\n>\n> I have analysed more regarding this and found the long running query.\n>\n> I ran \"explain analyse\" on this query and I got following result. (We have\n> 2 identical DB instances and they consist of same data. Instane 1 took 20+\n> second to process and instance 2 took less than a second)\n>\n\nThey do not consist of the same data. One returns 17 rows, the other 22.\n\nOne finds 5635 rows (scattered over 40765 blocks!) where qname_id = 251,\nthe other find 85 rows for the same condition. It seems the first one is\nnot very well vacuumed.\n\nI don't know if these differences are enough to be driving the different\nplans (the estimation differences appear smaller than the actual\ndifferences), but clearly the data is not the same.\n\nYour first query is using the index idx_alf_node_mdq in a way which seems\nto be counter-productive. Perhaps you could inhibit it to see what plan it\nchooses then. For example, specify in your query \"type_qname_id+0 = 240\"\nto prevent the use of that index. Or you could drop the index, if it is\nnot vital.\n\nBut if the data has not be ANALYZEd recently, you should do that before\nanything else. Might as well make it a VACUUM ANALYZE.\n\nCheers,\n\nJeff\n\n>\n",
"msg_date": "Wed, 11 Dec 2019 16:14:44 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
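A sketch of what the "+0" rewrite looks like in a WHERE clause, using the same filter values that appear in the plans above; the no-op arithmetic keeps the predicate's meaning but stops it matching the indexed column, so the planner cannot use idx_alf_node_mdq for it. In the application query only the type_qname_id condition would change:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
      FROM alf_node
     WHERE type_qname_id + 0 = 240   -- was: type_qname_id = 240
       AND store_id = 6;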
{
"msg_contents": "On Wed, Dec 11, 2019 at 5:21 PM Fahiz Mohamed <[email protected]> wrote:\n\n> There is a slight different in both instance’s data. Inastanbce 1 contains\n> latest data and instance 2 consists of data which is 3 weeks older than\n> instance 1.\n>\n\nIn knowing where to look for differences in performance, there is a big\ndifference between them being identical, and being generally similar, but\nnot identical.\n\n\n> I hope the above data difference can make a drastic difference. Please\n> correct me if I am wrong.\n>\n\nThey are similar in scale, but we know there is a big difference in\ndistribution of some values. For example, we still know the slow plan has\n4697 rows in aspect_1 where qname_id = 251, while the other plan has 85\nrows in aspect_1 meeting that same criterion. That is a big difference, and\nit is real difference in the data, not just a difference in planning or\nestimation. Is this difference driving the difference in plan choice?\nProbably not (plan choice is driven by estimated rows, not actual, and\nestimates are quite similar), but it does demonstrate the data is quite\ndifferent between the two systems when you look under the hood. It is\nlikely that there are other, similar differences in the distribution of\nparticular values which is driving the difference in plans. It is just\nthat we can't see those differences, because the EXPLAIN ANALYZE only\nreports on the plan it ran, not other plans it could have ran but didn't.\n\nYour query is now using the index named idx_alf_node_tqn in a way which is\nequally unproductive as the previous use of idx_alf_node_mdq was. It\nlooks like they have the same columns, just in a different order. My\nprevious advice to try \"type_qname_id+0 = 240\" should still apply.\n\nIf you can't get that to work, then another avenue is to run \"explain\n(analyze, buffers) select count(*) from alf_node where (type_qname_id =\n240) AND (store_id = 6)\" on both instances.\n\n\n\n\n> I did execute vacuum manually and I noticed the below in the output\n>\n> \"INFO: vacuuming \"public.alf_node_aspects\"\n> INFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row\n> versions in 812242 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> CPU 13.53s/33.35u sec elapsed 77.88 sec.\n>\n\nI'm not really sure what that means. I certainly would not have expected 0\nremovable. There should have been some prior output, something like:\n\nINFO: scanned index \"fk_alf_nasp_qn\" to remove 500000 row versions\n\nIt could be that autovacuum had already gotten around to vacuuming after\nyour initial email but before you did the above, meaning there was not much\nfor your manual to do.\n\nBut you can see that the vacuum did have an effect, by comparing these\nlines (despite them finding about same number of rows)\n\nHeap Blocks: exact=40765\n\nHeap Blocks: exact=1774\n\nIt wasn't all that large of an effect in this case, but it is still\nsomething worth fixing.\n\nCheers,\n\nJeff\n\nOn Wed, Dec 11, 2019 at 5:21 PM Fahiz Mohamed <[email protected]> wrote:\n\n\nThere is a slight different in both instance’s data. Inastanbce 1 contains latest data and instance 2 consists of data which is 3 weeks older than instance 1. In knowing where to look for differences in performance, there is a big difference between them being identical, and being generally similar, but not identical.\n\nI hope the above data difference can make a drastic difference. 
Please correct me if I am wrong.They are similar in scale, but we know there is a big difference in distribution of some values. For example, we still know the slow plan has 4697 rows in aspect_1 where qname_id = 251, while the other plan has 85 rows in aspect_1 meeting that same criterion. That is a big difference, and it is real difference in the data, not just a difference in planning or estimation. Is this difference driving the difference in plan choice? Probably not (plan choice is driven by estimated rows, not actual, and estimates are quite similar), but it does demonstrate the data is quite different between the two systems when you look under the hood. It is likely that there are other, similar differences in the distribution of particular values which is driving the difference in plans. It is just that we can't see those differences, because the EXPLAIN ANALYZE only reports on the plan it ran, not other plans it could have ran but didn't.Your query is now using the index named idx_alf_node_tqn in a way which is equally unproductive as the previous use of idx_alf_node_mdq was. It looks like they have the same columns, just in a different order. My previous advice to try \"type_qname_id+0 = 240\" should still apply.If you can't get that to work, then another avenue is to run \"explain (analyze, buffers) select count(*) from alf_node where (type_qname_id = 240) AND (store_id = 6)\" on both instances.\nI did execute vacuum manually and I noticed the below in the output\n\n\"INFO: vacuuming \"public.alf_node_aspects\"\nINFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row versions in 812242 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nCPU 13.53s/33.35u sec elapsed 77.88 sec.I'm not really sure what that means. I certainly would not have expected 0 removable. There should have been some prior output, something like:INFO: scanned index \"fk_alf_nasp_qn\" to remove 500000 row versions It could be that autovacuum had already gotten around to vacuuming after your initial email but before you did the above, meaning there was not much for your manual to do.But you can see that the vacuum did have an effect, by comparing these lines (despite them finding about same number of rows)Heap Blocks: exact=40765Heap Blocks: exact=1774 It wasn't all that large of an effect in this case, but it is still something worth fixing.Cheers,Jeff",
"msg_date": "Wed, 11 Dec 2019 21:25:12 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
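The count(*) probe Jeff suggests, written out as a runnable statement; comparing its plan and buffer counts on the two instances isolates how each server handles just the store_id/type_qname_id filter, independent of the rest of the query:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
      FROM alf_node
     WHERE type_qname_id = 240
       AND store_id = 6;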
{
"msg_contents": "Hi Jeff,\n\nThank you for your email, Sorry I couldn’t respond back to you. I am not working on this project at the moment. I have copied one of my colleague who working on this. He has some progress on this, he will update the email thread with those findings\n\nAppreciate your support.\n\nThank you,\nFahiz\nOn 12 Dec 2019, 02:25 +0000, Jeff Janes <[email protected]>, wrote:\n> On Wed, Dec 11, 2019 at 5:21 PM Fahiz Mohamed <[email protected]> wrote:\n> > > There is a slight different in both instance’s data. Inastanbce 1 contains latest data and instance 2 consists of data which is 3 weeks older than instance 1.\n> >\n> > In knowing where to look for differences in performance, there is a big difference between them being identical, and being generally similar, but not identical.\n> >\n> > >\n> > > I hope the above data difference can make a drastic difference. Please correct me if I am wrong.\n> >\n> > They are similar in scale, but we know there is a big difference in distribution of some values. For example, we still know the slow plan has 4697 rows in aspect_1 where qname_id = 251, while the other plan has 85 rows in aspect_1 meeting that same criterion. That is a big difference, and it is real difference in the data, not just a difference in planning or estimation. Is this difference driving the difference in plan choice? Probably not (plan choice is driven by estimated rows, not actual, and estimates are quite similar), but it does demonstrate the data is quite different between the two systems when you look under the hood. It is likely that there are other, similar differences in the distribution of particular values which is driving the difference in plans. It is just that we can't see those differences, because the EXPLAIN ANALYZE only reports on the plan it ran, not other plans it could have ran but didn't.\n> >\n> > Your query is now using the index named idx_alf_node_tqn in a way which is equally unproductive as the previous use of idx_alf_node_mdq was. It looks like they have the same columns, just in a different order. My previous advice to try \"type_qname_id+0 = 240\" should still apply.\n> >\n> > If you can't get that to work, then another avenue is to run \"explain (analyze, buffers) select count(*) from alf_node where (type_qname_id = 240) AND (store_id = 6)\" on both instances.\n> >\n> >\n> >\n> > >\n> > > I did execute vacuum manually and I noticed the below in the output\n> > >\n> > > \"INFO: vacuuming \"public.alf_node_aspects\"\n> > > INFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row versions in 812242 pages\n> > > DETAIL: 0 dead row versions cannot be removed yet.\n> > > CPU 13.53s/33.35u sec elapsed 77.88 sec.\n> >\n> > I'm not really sure what that means. I certainly would not have expected 0 removable. 
There should have been some prior output, something like:\n> >\n> > INFO: scanned index \"fk_alf_nasp_qn\" to remove 500000 row versions\n> >\n> > It could be that autovacuum had already gotten around to vacuuming after your initial email but before you did the above, meaning there was not much for your manual to do.\n> >\n> > But you can see that the vacuum did have an effect, by comparing these lines (despite them finding about same number of rows)\n> >\n> > Heap Blocks: exact=40765\n> >\n> > Heap Blocks: exact=1774\n> >\n> > It wasn't all that large of an effect in this case, but it is still something worth fixing.\n> >\n> > Cheers,\n> >\n> > Jeff\n\n\n\n\n\n\n\nHi Jeff, \n\nThank you for your email, Sorry I couldn’t respond back to you. I am not working on this project at the moment. I have copied one of my colleague who working on this. He has some progress on this, he will update the email thread with those findings\n\nAppreciate your support.\n\nThank you,\nFahiz\n\n\nOn 12 Dec 2019, 02:25 +0000, Jeff Janes <[email protected]>, wrote:\n\n\nOn Wed, Dec 11, 2019 at 5:21 PM Fahiz Mohamed <[email protected]> wrote:\n\n\n\n\nThere is a slight different in both instance’s data. Inastanbce 1 contains latest data and instance 2 consists of data which is 3 weeks older than instance 1. \n\n\n\n\nIn knowing where to look for differences in performance, there is a big difference between them being identical, and being generally similar, but not identical.\n\n\n\n\nI hope the above data difference can make a drastic difference. Please correct me if I am wrong.\n\n\n\n\nThey are similar in scale, but we know there is a big difference in distribution of some values. For example, we still know the slow plan has 4697 rows in aspect_1 where qname_id = 251, while the other plan has 85 rows in aspect_1 meeting that same criterion. That is a big difference, and it is real difference in the data, not just a difference in planning or estimation. Is this difference driving the difference in plan choice? Probably not (plan choice is driven by estimated rows, not actual, and estimates are quite similar), but it does demonstrate the data is quite different between the two systems when you look under the hood. It is likely that there are other, similar differences in the distribution of particular values which is driving the difference in plans. It is just that we can't see those differences, because the EXPLAIN ANALYZE only reports on the plan it ran, not other plans it could have ran but didn't.\n\nYour query is now using the index named idx_alf_node_tqn in a way which is equally unproductive as the previous use of idx_alf_node_mdq was. It looks like they have the same columns, just in a different order. My previous advice to try \"type_qname_id+0 = 240\" should still apply.\n\nIf you can't get that to work, then another avenue is to run \"explain (analyze, buffers) select count(*) from alf_node where (type_qname_id = 240) AND (store_id = 6)\" on both instances.\n\n\n\n\n\n\n\nI did execute vacuum manually and I noticed the below in the output\n\n\"INFO: vacuuming \"public.alf_node_aspects\"\nINFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row versions in 812242 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nCPU 13.53s/33.35u sec elapsed 77.88 sec.\n\n\n\n\nI'm not really sure what that means. I certainly would not have expected 0 removable. 
There should have been some prior output, something like:\n\nINFO: scanned index \"fk_alf_nasp_qn\" to remove 500000 row versions \n \nIt could be that autovacuum had already gotten around to vacuuming after your initial email but before you did the above, meaning there was not much for your manual to do.\n\nBut you can see that the vacuum did have an effect, by comparing these lines (despite them finding about same number of rows)\n\nHeap Blocks: exact=40765\n\nHeap Blocks: exact=1774 \n\nIt wasn't all that large of an effect in this case, but it is still something worth fixing.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 30 Jan 2020 09:44:41 +0000",
"msg_from": "Fahiz Mohamed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specific query taking time to process"
},
{
"msg_contents": "I have read through your comments so far – and they tally with the checks\nwe have been making to an extent – thanks for them.\n\n\n\nWe now only need 1 environment as we can replicate the performance problem\non a copy of live – snapshot/restore from AWS of live. We now have a vacuum\nanalyse running every night on the 3 tables in question on live – to\neliminate bloat and inaccurate stats as the root of the problem.\n\n\n\nWe can flip performance based on setting work_mem. For example, currently\nwork_mem up to an including 5069Kb the performance of the query is well\nunder 1 second – Upping work_mem just 1kb to 5097Kb then changes the query\nplan and performance is a lot worse.\n\n\n\nFresh snapshot and restore.\n\n5096kb – plan –\n\nNested Loop Semi Join (cost=3494.04..184505785.24 rows=1 width=8) (actual\ntime=6.404..130.145 rows=212 loops=1)\n\n Buffers: shared hit=7369\n\n -> Nested Loop (cost=3493.47..184505754.56 rows=36 width=16) (actual\ntime=6.394..129.351 rows=212 loops=1)\n\n Buffers: shared hit=6318\n\n -> HashAggregate (cost=72.32..81.03 rows=871 width=8) (actual\ntime=0.123..0.186 rows=228 loops=1)\n\n Group Key: prop.node_id\n\n Buffers: shared hit=99\n\n -> Index Only Scan using idx_alf_nprop_s on\nalf_node_properties prop (cost=0.70..70.14 rows=872 width=8) (actual\ntime=0.025..0.086 rows=228 loops=1)\n\n Index Cond: ((qname_id = '242'::bigint) AND\n(string_value = 'S434071'::text))\n\n Heap Fetches: 2\n\n Buffers: shared hit=99\n\n -> Index Only Scan using idx_alf_node_tqn on alf_node node\n(cost=3421.15..211831.99 rows=1 width=8) (actual time=0.566..0.566 rows=1\nloops=228)\n\n Index Cond: ((type_qname_id = 240) AND (store_id = 6) AND\n(id = prop.node_id))\n\n Filter: (NOT (SubPlan 1))\n\n Heap Fetches: 13\n\n Buffers: shared hit=6219\n\n SubPlan 1\n\n -> Materialize (cost=3420.59..419826.13 rows=163099\nwidth=8) (actual time=0.007..0.246 rows=4909 loops=212)\n\n Buffers: shared hit=5092\n\n -> Bitmap Heap Scan on alf_node_aspects aspect_1\n(cost=3420.59..418372.63 rows=163099 width=8) (actual time=1.402..5.243\nrows=4909 loops=1)\n\n Recheck Cond: (qname_id = 251)\n\n Heap Blocks: exact=4801\n\n Buffers: shared hit=5092\n\n -> Bitmap Index Scan on fk_alf_nasp_qn\n(cost=0.00..3379.81 rows=163099 width=0) (actual time=0.850..0.850\nrows=7741 loops=1)\n\n Index Cond: (qname_id = 251)\n\n Buffers: shared hit=291\n\n -> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects\naspect (cost=0.57..4.70 rows=15 width=8) (actual time=0.003..0.003 rows=1\nloops=212)\n\n Index Cond: ((node_id = node.id) AND (qname_id = 245))\n\n Heap Fetches: 15\n\n Buffers: shared hit=1051\n\n Planning time: 0.624 ms\n\n Execution time: 130.222 ms\n\n(32 rows)\n\n\n\n5097Kb – plan –\n\n\n\nNested Loop Semi Join (cost=1019128.07..3400161.81 rows=1 width=8) (actual\ntime=4832.639..32783.503 rows=212 loops=1)\n\n Buffers: shared hit=565 read=2191862\n\n -> Hash Semi Join (cost=1019127.50..3400131.13 rows=36 width=16)\n(actual time=4832.599..32779.613 rows=212 loops=1)\n\n Hash Cond: (node.id = prop.node_id)\n\n Buffers: shared hit=25 read=2191476\n\n -> Bitmap Heap Scan on alf_node node\n(cost=1019046.46..3388792.78 rows=4273414 width=8) (actual\ntime=4219.440..29678.682 rows=41976707 loops=1)\n\n Recheck Cond: ((store_id = 6) AND (type_qname_id = 240))\n\n Rows Removed by Index Recheck: 58872880\n\n Filter: (NOT (hashed SubPlan 1))\n\n Rows Removed by Filter: 2453\n\n Heap Blocks: exact=32899 lossy=1939310\n\n Buffers: shared read=2191402\n\n -> Bitmap Index Scan on 
idx_alf_node_mdq\n(cost=0.00..599197.73 rows=19566916 width=0) (actual\ntime=4186.449..4186.449 rows=41990879 loops=1)\n\n Index Cond: ((store_id = 6) AND (type_qname_id = 240))\n\n Buffers: shared read=214101\n\n SubPlan 1\n\n -> Bitmap Heap Scan on alf_node_aspects aspect_1\n(cost=3420.59..418372.63 rows=163099 width=8) (actual time=2.635..21.981\nrows=4909 loops=1)\n\n Recheck Cond: (qname_id = 251)\n\n Heap Blocks: exact=4801\n\n Buffers: shared read=5092\n\n -> Bitmap Index Scan on fk_alf_nasp_qn\n(cost=0.00..3379.81 rows=163099 width=0) (actual time=2.016..2.016\nrows=7741 loops=1)\n\n Index Cond: (qname_id = 251)\n\n Buffers: shared read=291\n\n -> Hash (cost=70.14..70.14 rows=872 width=8) (actual\ntime=0.357..0.357 rows=228 loops=1)\n\n Buckets: 1024 Batches: 1 Memory Usage: 17kB\n\n Buffers: shared hit=25 read=74\n\n -> Index Only Scan using idx_alf_nprop_s on\nalf_node_properties prop (cost=0.70..70.14 rows=872 width=8) (actual\ntime=0.047..0.325 rows=228 loops=1)\n\n Index Cond: ((qname_id = '242'::bigint) AND\n(string_value = 'S434071'::text))\n\n Heap Fetches: 2\n\n Buffers: shared hit=25 read=74\n\n -> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects\naspect (cost=0.57..4.70 rows=15 width=8) (actual time=0.016..0.016 rows=1\nloops=212)\n\n Index Cond: ((node_id = node.id) AND (qname_id = 245))\n\n Heap Fetches: 15\n\n Buffers: shared hit=540 read=386\n\n Planning time: 0.821 ms\n\n Execution time: 32783.609 ms\n\n(36 rows)\n\n\n\nNote the second plan is not even using any index on alf_node initially.\n\nOn Thu, Jan 30, 2020 at 9:44 AM Fahiz Mohamed <[email protected]> wrote:\n\n> Hi Jeff,\n>\n> Thank you for your email, Sorry I couldn’t respond back to you. I am not\n> working on this project at the moment. I have copied one of my colleague\n> who working on this. He has some progress on this, he will update the email\n> thread with those findings\n>\n> Appreciate your support.\n>\n> Thank you,\n> Fahiz\n> On 12 Dec 2019, 02:25 +0000, Jeff Janes <[email protected]>, wrote:\n>\n> On Wed, Dec 11, 2019 at 5:21 PM Fahiz Mohamed <[email protected]> wrote:\n>\n>> There is a slight different in both instance’s data. Inastanbce 1\n>> contains latest data and instance 2 consists of data which is 3 weeks older\n>> than instance 1.\n>>\n>\n> In knowing where to look for differences in performance, there is a big\n> difference between them being identical, and being generally similar, but\n> not identical.\n>\n>\n>> I hope the above data difference can make a drastic difference. Please\n>> correct me if I am wrong.\n>>\n>\n> They are similar in scale, but we know there is a big difference in\n> distribution of some values. For example, we still know the slow plan has\n> 4697 rows in aspect_1 where qname_id = 251, while the other plan has 85\n> rows in aspect_1 meeting that same criterion. That is a big difference, and\n> it is real difference in the data, not just a difference in planning or\n> estimation. Is this difference driving the difference in plan choice?\n> Probably not (plan choice is driven by estimated rows, not actual, and\n> estimates are quite similar), but it does demonstrate the data is quite\n> different between the two systems when you look under the hood. It is\n> likely that there are other, similar differences in the distribution of\n> particular values which is driving the difference in plans. 
It is just\n> that we can't see those differences, because the EXPLAIN ANALYZE only\n> reports on the plan it ran, not other plans it could have ran but didn't.\n>\n> Your query is now using the index named idx_alf_node_tqn in a way which\n> is equally unproductive as the previous use of idx_alf_node_mdq was. It\n> looks like they have the same columns, just in a different order. My\n> previous advice to try \"type_qname_id+0 = 240\" should still apply.\n>\n> If you can't get that to work, then another avenue is to run \"explain\n> (analyze, buffers) select count(*) from alf_node where (type_qname_id =\n> 240) AND (store_id = 6)\" on both instances.\n>\n>\n>\n>\n>> I did execute vacuum manually and I noticed the below in the output\n>>\n>> \"INFO: vacuuming \"public.alf_node_aspects\"\n>> INFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row\n>> versions in 812242 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> CPU 13.53s/33.35u sec elapsed 77.88 sec.\n>>\n>\n> I'm not really sure what that means. I certainly would not have expected\n> 0 removable. There should have been some prior output, something like:\n>\n> INFO: scanned index \"fk_alf_nasp_qn\" to remove 500000 row versions\n>\n> It could be that autovacuum had already gotten around to vacuuming after\n> your initial email but before you did the above, meaning there was not much\n> for your manual to do.\n>\n> But you can see that the vacuum did have an effect, by comparing these\n> lines (despite them finding about same number of rows)\n>\n> Heap Blocks: exact=40765\n>\n> Heap Blocks: exact=1774\n>\n> It wasn't all that large of an effect in this case, but it is still\n> something worth fixing.\n>\n> Cheers,\n>\n> Jeff\n>\n>\n\n-- \n[image: logo] <http://www.zaizi.com/> *Duncan Whitham* Developer*, *\n*Zaizi*\n*|* *m:* (+44)751 502 7049 *|* *t:* (+44)20 3582 8330\n\n*|* *e:* [email protected] *|* *w:* http://www.zaizi.com/\n\n<https://www.facebook.com/ZaiziLtd/> <https://twitter.com/zaizi>\n<https://www.linkedin.com/company/zaizi> <https://plus.google.com/+Zaizi>\n<https://vimeo.com/zaizi> <https://www.youtube.com/user/zaizivids>\n\n-- \n\nThis message should be regarded as confidential. If you have received this \nemail in error please notify the sender and destroy it immediately. \nStatements of intent shall only become binding when confirmed in hard copy \nby an authorised signatory. \n\n\nZaizi Ltd is registered in England and Wales \nwith the registration number 6440931. The Registered Office is Kings House, \n174 Hammersmith Road, London W6 7JP.\n\nI have read through your comments so far – and they tally\nwith the checks we have been making to an extent – thanks for them. We now only need 1 environment as we can replicate the performance problem on a\ncopy of live – snapshot/restore from AWS of live. We now have a vacuum analyse running\nevery night on the 3 tables in question on live – to eliminate bloat and\ninaccurate stats as the root of the problem. We can flip performance based on setting work_mem. For example,\ncurrently work_mem up to an including 5069Kb the performance of the query is\nwell under 1 second – Upping work_mem just 1kb to 5097Kb then changes the query\nplan and performance is a lot worse. Fresh snapshot and restore. 
5096kb – plan –Nested Loop Semi Join (cost=3494.04..184505785.24 rows=1 width=8)\n(actual time=6.404..130.145 rows=212 loops=1) \nBuffers: shared hit=7369 \n-> Nested Loop (cost=3493.47..184505754.56 rows=36 width=16)\n(actual time=6.394..129.351 rows=212 loops=1) \nBuffers: shared hit=6318 \n-> HashAggregate (cost=72.32..81.03 rows=871 width=8) (actual\ntime=0.123..0.186 rows=228 loops=1) \nGroup Key: prop.node_id \nBuffers: shared hit=99 \n-> Index Only Scan using\nidx_alf_nprop_s on alf_node_properties prop \n(cost=0.70..70.14 rows=872 width=8) (actual time=0.025..0.086 rows=228\nloops=1) Index Cond: ((qname_id =\n'242'::bigint) AND (string_value = 'S434071'::text)) Heap Fetches: 2 Buffers: shared hit=99 \n-> Index Only Scan using\nidx_alf_node_tqn on alf_node node \n(cost=3421.15..211831.99 rows=1 width=8) (actual time=0.566..0.566\nrows=1 loops=228) \nIndex Cond: ((type_qname_id = 240) AND (store_id = 6) AND (id =\nprop.node_id)) \nFilter: (NOT (SubPlan 1)) \nHeap Fetches: 13 \nBuffers: shared hit=6219 \nSubPlan 1 \n-> Materialize (cost=3420.59..419826.13 rows=163099 width=8)\n(actual time=0.007..0.246 rows=4909 loops=212) Buffers: shared hit=5092 \n -> Bitmap Heap Scan on alf_node_aspects\naspect_1 (cost=3420.59..418372.63\nrows=163099 width=8) (actual time=1.402..5.243 rows=4909 loops=1) Recheck Cond:\n(qname_id = 251) Heap Blocks: exact=4801 Buffers: shared\nhit=5092 -> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..3379.81 rows=163099 width=0)\n(actual time=0.850..0.850 rows=7741 loops=1) Index Cond:\n(qname_id = 251) Buffers:\nshared hit=291 \n-> Index Only Scan using\nalf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..4.70 rows=15 width=8) (actual\ntime=0.003..0.003 rows=1 loops=212) \nIndex Cond: ((node_id = node.id) AND (qname_id = 245)) \nHeap Fetches: 15 \nBuffers: shared hit=1051 Planning\ntime: 0.624 ms Execution\ntime: 130.222 ms(32 rows) 5097Kb – plan – Nested Loop Semi Join (cost=1019128.07..3400161.81 rows=1 width=8)\n(actual time=4832.639..32783.503 rows=212 loops=1) \nBuffers: shared hit=565 read=2191862 \n-> Hash Semi Join (cost=1019127.50..3400131.13 rows=36\nwidth=16) (actual time=4832.599..32779.613 rows=212 loops=1) \nHash Cond: (node.id = prop.node_id) \nBuffers: shared hit=25 read=2191476 \n-> Bitmap Heap Scan on\nalf_node node \n(cost=1019046.46..3388792.78 rows=4273414 width=8) (actual\ntime=4219.440..29678.682 rows=41976707 loops=1) \nRecheck Cond: ((store_id = 6) AND (type_qname_id = 240)) \nRows Removed by Index Recheck: 58872880 \nFilter: (NOT (hashed SubPlan 1)) \nRows Removed by Filter: 2453 \nHeap Blocks: exact=32899 lossy=1939310 \nBuffers: shared read=2191402 \n-> Bitmap Index Scan on\nidx_alf_node_mdq (cost=0.00..599197.73\nrows=19566916 width=0) (actual time=4186.449..4186.449 rows=41990879 loops=1) Index Cond: ((store_id =\n6) AND (type_qname_id = 240)) Buffers: shared\nread=214101 \nSubPlan 1 \n-> Bitmap Heap Scan on\nalf_node_aspects aspect_1 \n(cost=3420.59..418372.63 rows=163099 width=8) (actual time=2.635..21.981\nrows=4909 loops=1) Recheck Cond: (qname_id\n= 251) Heap Blocks: exact=4801 Buffers: shared\nread=5092 -> Bitmap Index Scan on fk_alf_nasp_qn (cost=0.00..3379.81 rows=163099 width=0) (actual\ntime=2.016..2.016 rows=7741 loops=1) Index Cond:\n(qname_id = 251) Buffers: shared\nread=291 \n-> Hash (cost=70.14..70.14 rows=872 width=8) (actual\ntime=0.357..0.357 rows=228 loops=1) \nBuckets: 1024 Batches: 1 Memory Usage: 17kB \nBuffers: shared hit=25 read=74 \n-> Index Only Scan using\nidx_alf_nprop_s on alf_node_properties prop 
\n(cost=0.70..70.14 rows=872 width=8) (actual time=0.047..0.325 rows=228\nloops=1) Index Cond: ((qname_id =\n'242'::bigint) AND (string_value = 'S434071'::text)) Heap Fetches: 2 Buffers: shared hit=25\nread=74 \n-> Index Only Scan using\nalf_node_aspects_pkey on alf_node_aspects aspect (cost=0.57..4.70 rows=15 width=8) (actual\ntime=0.016..0.016 rows=1 loops=212) \nIndex Cond: ((node_id = node.id) AND (qname_id = 245)) \nHeap Fetches: 15 \nBuffers: shared hit=540 read=386 Planning\ntime: 0.821 ms Execution\ntime: 32783.609 ms(36 rows) \nNote the second plan is not even using any index on alf_node\ninitially. On Thu, Jan 30, 2020 at 9:44 AM Fahiz Mohamed <[email protected]> wrote:\n\n\nHi Jeff, \n\nThank you for your email, Sorry I couldn’t respond back to you. I am not working on this project at the moment. I have copied one of my colleague who working on this. He has some progress on this, he will update the email thread with those findings\n\nAppreciate your support.\n\nThank you,\nFahiz\n\n\nOn 12 Dec 2019, 02:25 +0000, Jeff Janes <[email protected]>, wrote:\n\n\nOn Wed, Dec 11, 2019 at 5:21 PM Fahiz Mohamed <[email protected]> wrote:\n\n\n\n\nThere is a slight different in both instance’s data. Inastanbce 1 contains latest data and instance 2 consists of data which is 3 weeks older than instance 1. \n\n\n\n\nIn knowing where to look for differences in performance, there is a big difference between them being identical, and being generally similar, but not identical.\n\n\n\n\nI hope the above data difference can make a drastic difference. Please correct me if I am wrong.\n\n\n\n\nThey are similar in scale, but we know there is a big difference in distribution of some values. For example, we still know the slow plan has 4697 rows in aspect_1 where qname_id = 251, while the other plan has 85 rows in aspect_1 meeting that same criterion. That is a big difference, and it is real difference in the data, not just a difference in planning or estimation. Is this difference driving the difference in plan choice? Probably not (plan choice is driven by estimated rows, not actual, and estimates are quite similar), but it does demonstrate the data is quite different between the two systems when you look under the hood. It is likely that there are other, similar differences in the distribution of particular values which is driving the difference in plans. It is just that we can't see those differences, because the EXPLAIN ANALYZE only reports on the plan it ran, not other plans it could have ran but didn't.\n\nYour query is now using the index named idx_alf_node_tqn in a way which is equally unproductive as the previous use of idx_alf_node_mdq was. It looks like they have the same columns, just in a different order. My previous advice to try \"type_qname_id+0 = 240\" should still apply.\n\nIf you can't get that to work, then another avenue is to run \"explain (analyze, buffers) select count(*) from alf_node where (type_qname_id = 240) AND (store_id = 6)\" on both instances.\n\n\n\n\n\n\n\nI did execute vacuum manually and I noticed the below in the output\n\n\"INFO: vacuuming \"public.alf_node_aspects\"\nINFO: \"alf_node_aspects\": found 0 removable, 150264654 nonremovable row versions in 812242 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nCPU 13.53s/33.35u sec elapsed 77.88 sec.\n\n\n\n\nI'm not really sure what that means. I certainly would not have expected 0 removable. 
There should have been some prior output, something like:\n\nINFO: scanned index \"fk_alf_nasp_qn\" to remove 500000 row versions \n \nIt could be that autovacuum had already gotten around to vacuuming after your initial email but before you did the above, meaning there was not much for your manual to do.\n\nBut you can see that the vacuum did have an effect, by comparing these lines (despite them finding about same number of rows)\n\nHeap Blocks: exact=40765\n\nHeap Blocks: exact=1774 \n\nIt wasn't all that large of an effect in this case, but it is still something worth fixing.\n\nCheers,\n\nJeff\n\n\n\n\n\n-- Duncan Whitham Developer, Zaizi | m: (+44)751 502 7049 | t: (+44)20 3582 8330 | e: [email protected] | w: http://www.zaizi.com/ \n\nThis message should be regarded as confidential. If you have received this email in error please notify the sender and destroy it immediately. Statements of intent shall only become binding when confirmed in hard copy by an authorised signatory. Zaizi Ltd is registered in England and Wales with the registration number 6440931. The Registered Office is Kings House, 174 Hammersmith Road, London W6 7JP.",
"msg_date": "Thu, 30 Jan 2020 12:22:58 +0000",
"msg_from": "Duncan Whitham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
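A rough sketch of the two checks Jeff suggests in the quoted reply, using only the column names and constants visible in the plans above (the exact query text is an assumption); the "+0" variant simply keeps the predicate from matching an indexed column, which is what makes it a useful experiment:

```sql
-- Check 1 (quoted from Jeff): compare the estimated vs. actual row count
-- for the selective part of the query on both instances.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM alf_node
WHERE type_qname_id = 240 AND store_id = 6;

-- Check 2 (an assumed application of the earlier "type_qname_id+0 = 240"
-- advice): the expression no longer matches the indexed column, so the
-- planner cannot choose the unproductive index for this predicate.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM alf_node
WHERE type_qname_id + 0 = 240 AND store_id = 6;
```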
{
"msg_contents": "Duncan Whitham <[email protected]> writes:\n> We now only need 1 environment as we can replicate the performance problem\n> on a copy of live – snapshot/restore from AWS of live. We now have a vacuum\n> analyse running every night on the 3 tables in question on live – to\n> eliminate bloat and inaccurate stats as the root of the problem.\n\nHmm, doesn't seem like that's getting the job done. I can see at\nleast one serious misestimate in these plans:\n\n> -> Bitmap Heap Scan on alf_node_aspects aspect_1\n> (cost=3420.59..418372.63 rows=163099 width=8) (actual time=1.402..5.243\n> rows=4909 loops=1)\n> Recheck Cond: (qname_id = 251)\n\nIt doesn't seem to me that such a simple condition ought to be\nmisestimated by a factor of 30, so either you need to crank up\nthe stats target for this column or you need to analyze the\ntable more often.\n\nThe other rowcount estimates don't seem so awful, but this one is\ncontributing to the planner thinking that \"SubPlan 1\" is going to\nbe very expensive, which probably accounts for it trying to avoid\nwhat's actually a cheap plan.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:35:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
},
{
"msg_contents": "Thanks for the reply Tom - you've been a great help.\n\nI had been looking at changing - default_statistics_target in a\nbroadbrush fashion - changing it to 1000 on alf_node, alf_node_aspects and\nalf_node_properties and that gets the query to run fine irrespective of\nwork_mem settings.\n\nI then set them all back to default and reanalyzed.\n\nI then applied the change just to the qname_id column on alf_node_aspects -\n\nalter table alf_node_aspects alter qname_id SET STATISTICS 1000;\n\n\nreanalyzed alf_node_aspects and the estimate is much better ....\n\n\nNested Loop Semi Join (cost=618944.02..3000749.68 rows=8 width=8) (actual\ntime=5391.271..31441.085 rows=212 loops=1)\n Buffers: shared hit=756 read=2189959 written=1\n -> Hash Semi Join (cost=618943.45..3000659.89 rows=37 width=16)\n(actual time=5391.212..31437.065 rows=212 loops=1)\n Hash Cond: (node.id = prop.node_id)\n Buffers: shared hit=216 read=2189573 written=1\n -> Bitmap Heap Scan on alf_node node (cost=618862.32..2989274.54\nrows=4290813 width=8) (actual time=4806.877..28240.746 rows=41976707\nloops=1)\n Recheck Cond: ((store_id = 6) AND (type_qname_id = 240))\n Rows Removed by Index Recheck: 57797868\n Filter: (NOT (hashed SubPlan 1))\n Rows Removed by Filter: 2453\n Heap Blocks: exact=63327 lossy=1908882\n Buffers: shared hit=191 read=2189499 written=1\n -> Bitmap Index Scan on idx_alf_node_mdq\n (cost=0.00..600678.68 rows=19600211 width=0) (actual\ntime=4773.927..4773.927 rows=41990879 loops=1)\n Index Cond: ((store_id = 6) AND (type_qname_id = 240))\n Buffers: shared read=214101 written=1\n SubPlan 1\n -> Index Scan using fk_alf_nasp_qn on alf_node_aspects\naspect_1 (cost=0.57..17099.41 rows=4611 width=8) (actual\ntime=0.036..13.453 rows=4909 loops=1)\n Index Cond: (qname_id = 251)\n Buffers: shared hit=191 read=3189\n -> Hash (cost=70.20..70.20 rows=875 width=8) (actual\ntime=0.363..0.363 rows=228 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 17kB\n Buffers: shared hit=25 read=74\n -> Index Only Scan using idx_alf_nprop_s on\nalf_node_properties prop (cost=0.70..70.20 rows=875 width=8) (actual\ntime=0.047..0.337 rows=228 loops=1)\n Index Cond: ((qname_id = '242'::bigint) AND\n(string_value = 'S434071'::text))\n Heap Fetches: 2\n Buffers: shared hit=25 read=74\n -> Index Only Scan using alf_node_aspects_pkey on alf_node_aspects\naspect (cost=0.57..3.99 rows=2 width=8) (actual time=0.016..0.016 rows=1\nloops=212)\n Index Cond: ((node_id = node.id) AND (qname_id = 245))\n Heap Fetches: 15\n Buffers: shared hit=540 read=386\n Planning time: 0.903 ms\n Execution time: 31441.206 ms\n(32 rows)\n\n\nBut there is still a heap scan on alf_node instead of using the index -\nwhat should my next move be ? Change the stats collection level on store_id\nand type_qname_id on alf_node - or reindex alf_node ?\n\n\n\n\nOn Thu, Jan 30, 2020 at 4:36 PM Tom Lane <[email protected]> wrote:\n\n> Duncan Whitham <[email protected]> writes:\n> > We now only need 1 environment as we can replicate the performance\n> problem\n> > on a copy of live – snapshot/restore from AWS of live. We now have a\n> vacuum\n> > analyse running every night on the 3 tables in question on live – to\n> > eliminate bloat and inaccurate stats as the root of the problem.\n>\n> Hmm, doesn't seem like that's getting the job done. 
I can see at\n> least one serious misestimate in these plans:\n>\n> >            ->  Bitmap Heap Scan on alf_node_aspects aspect_1\n> > (cost=3420.59..418372.63 rows=163099 width=8) (actual time=1.402..5.243\n> > rows=4909 loops=1)\n> >              Recheck Cond: (qname_id = 251)\n>\n> It doesn't seem to me that such a simple condition ought to be\n> misestimated by a factor of 30, so either you need to crank up\n> the stats target for this column or you need to analyze the\n> table more often.\n>\n> The other rowcount estimates don't seem so awful, but this one is\n> contributing to the planner thinking that \"SubPlan 1\" is going to\n> be very expensive, which probably accounts for it trying to avoid\n> what's actually a cheap plan.\n>\n>             regards, tom lane\n>\n\n\n-- \nDuncan Whitham Developer, Zaizi | m: (+44)751 502 7049 | t: (+44)20 3582 8330 | e: [email protected] | w: http://www.zaizi.com/\n\nThis message should be regarded as confidential. If you have received this \nemail in error please notify the sender and destroy it immediately. \nStatements of intent shall only become binding when confirmed in hard copy \nby an authorised signatory. \n\nZaizi Ltd is registered in England and Wales \nwith the registration number 6440931. The Registered Office is Kings House, \n174 Hammersmith Road, London W6 7JP.",
"msg_date": "Fri, 31 Jan 2020 11:01:20 +0000",
"msg_from": "Duncan Whitham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query taking time to process"
}
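One possible next step for the question above, sketched under the assumption that the remaining misestimate involves the store_id / type_qname_id combination on alf_node; raising the per-column targets is cheap to try before reaching for a REINDEX, and the statistics object name below is made up for illustration:

```sql
-- Mirror what was already done for alf_node_aspects.qname_id:
ALTER TABLE alf_node ALTER COLUMN store_id      SET STATISTICS 1000;
ALTER TABLE alf_node ALTER COLUMN type_qname_id SET STATISTICS 1000;

-- On PostgreSQL 10 or later, extended statistics can also capture the
-- correlation between the two columns (object name is hypothetical):
CREATE STATISTICS IF NOT EXISTS stat_alf_node_store_type (ndistinct, dependencies)
  ON store_id, type_qname_id FROM alf_node;

ANALYZE alf_node;
```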
] |
[
{
"msg_contents": "Hey all,\nI'm trying to analyze a weird situation that I have seen in my db.\nSometimes my app fails to start because of the following msg :\nSQL State : null\nError Code : 0\nMessage : Cannot create PoolableConnectionFactory (ERROR: canceling\nstatement due to user request)\n\nIn the db at the same time I saw the same msg :\n2019-12-08 00:04:56 IST DB 10035 ERROR: canceling statement due to user\nrequest\n2019-12-08 00:04:56 IST DB 10035 STATEMENT: select 1 as test\n\nI thought that it might be related to the validation query that is set to 2\nseconds (when I change the validation query from \"select 1 as test\" to\n\"select pg_sleep(10)\" ) the same behavior was reproduced .\n\nTherefore, my theory was that the validation query is taking more than 2\nseconds. I decided to log all the statements(log_statements=all) that are\nrunning in order to see for how long the validation query is running in the\ndb (log_min_duration_statement wont be helpful here because the query is\ncanceled and I wont be able to see its duration..).\n\nThe weird thing is that I dont see before that error any log message that\nindicate that the query was running. I hoped to see the following msg in\nthe db log :\n2019-12-08 00:04:55 IST DB 2695 LOG: *execute *<unnamed>: select 1 as test\n\nbut I dont see any execute msg of this query , I just see the ERROR msg :\n 2019-12-08 00:04:56 IST DB 10035 ERROR: canceling statement due to user\nrequest\n2019-12-08 00:04:56 IST DB 10035 STATEMENT: select 1 as test\n\nAny idea why I the query isnt logged but I still get the ERROR msg ?\n\nHey all,I'm trying to analyze a weird situation that I have seen in my db.Sometimes my app fails to start because of the following msg : SQL State : nullError Code : 0Message : Cannot create PoolableConnectionFactory (ERROR: canceling statement due to user request)In the db at the same time I saw the same msg : 2019-12-08 00:04:56 IST DB 10035 ERROR: canceling statement due to user request2019-12-08 00:04:56 IST DB 10035 STATEMENT: select 1 as testI thought that it might be related to the validation query that is set to 2 seconds (when I change the validation query from \"select 1 as test\" to \"select pg_sleep(10)\" ) the same behavior was reproduced .Therefore, my theory was that the validation query is taking more than 2 seconds. I decided to log all the statements(log_statements=all) that are running in order to see for how long the validation query is running in the db (log_min_duration_statement wont be helpful here because the query is canceled and I wont be able to see its duration..).\n\nThe weird thing is that I dont see before that error any log message that indicate that the query was running. I hoped to see the following msg in the db log : 2019-12-08 00:04:55 IST DB 2695 LOG: execute <unnamed>: select 1 as testbut I dont see any execute msg of this query , I just see the ERROR msg : 2019-12-08 00:04:56 IST DB 10035 ERROR: canceling statement due to user request2019-12-08 00:04:56 IST DB 10035 STATEMENT: select 1 as test Any idea why I the query isnt logged but I still get the ERROR msg ?",
"msg_date": "Sun, 8 Dec 2019 15:08:27 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "query that canceled isnt logged"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> ...\n> Therefore, my theory was that the validation query is taking more than 2\n> seconds. I decided to log all the statements(log_statements=all) that are\n> running in order to see for how long the validation query is running in the\n> db (log_min_duration_statement wont be helpful here because the query is\n> canceled and I wont be able to see its duration..).\n> The weird thing is that I dont see before that error any log message that\n> indicate that the query was running. I hoped to see the following msg in\n> the db log :\n> 2019-12-08 00:04:55 IST DB 2695 LOG: *execute *<unnamed>: select 1 as test\n> but I dont see any execute msg of this query , I just see the ERROR msg :\n> 2019-12-08 00:04:56 IST DB 10035 ERROR: canceling statement due to user\n> request\n> 2019-12-08 00:04:56 IST DB 10035 STATEMENT: select 1 as test\n\nHm. Perhaps you should *also* turn on log_min_duration_statement = 0,\nso that the parse and bind phases log something. Maybe one of them\nis taking a long time (hard to see why...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Dec 2019 10:05:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query that canceled isnt logged"
},
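A minimal sketch of the logging setup Tom suggests, assuming superuser access and that a configuration reload is acceptable; with log_min_duration_statement = 0, extended-protocol queries log a separate duration line for the parse, bind and execute phases, which is what would make a slow parse visible:

```sql
-- Very verbose; intended only while chasing this problem.
ALTER SYSTEM SET log_statement = 'all';
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();

-- To revert afterwards:
-- ALTER SYSTEM RESET log_statement;
-- ALTER SYSTEM RESET log_min_duration_statement;
-- SELECT pg_reload_conf();
```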
{
"msg_contents": "What do you mean by live queries ?\nIf I'll try to run the following query and cancel it manually(ctrl+c) :\nselect pg_sleep(10)\nI will see in the logs the following messages :\n\n2019-12-08 17:16:34 IST postgres 30797 LOG: statement: select\npg_sleep(10);\n2019-12-08 17:16:36 IST postgres 30797 ERROR: canceling statement due to\nuser request\n2019-12-08 17:16:36 IST postgres 30797 STATEMENT: select pg_sleep(10);\n\nThe first message indicates that I run this query (logged because I set\nlog_statements to all) and the two next messages appear because I canceled\nthe query.\nWhen it happens to my application I dont see the first message that\nindicates that I run it and thats what I'm trying to understand.\n\n>\n\nWhat do you mean by live queries ? If I'll try to run the following query and cancel it manually(ctrl+c) : select pg_sleep(10)I will see in the logs the following messages : 2019-12-08 17:16:34 IST postgres 30797 LOG: statement: select pg_sleep(10);2019-12-08 17:16:36 IST postgres 30797 ERROR: canceling statement due to user request2019-12-08 17:16:36 IST postgres 30797 STATEMENT: select pg_sleep(10);The first message indicates that I run this query (logged because I set log_statements to all) and the two next messages appear because I canceled the query.When it happens to my application I dont see the first message that indicates that I run it and thats what I'm trying to understand.",
"msg_date": "Sun, 8 Dec 2019 17:18:37 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query that canceled isnt logged"
},
{
"msg_contents": "that is the first thing I did but it isnt logged even when this parameter\nis set, I guess because it is canceled before it finishes to run - which is\nweird..\n\nthat is the first thing I did but it isnt logged even when this parameter is set, I guess because it is canceled before it finishes to run - which is weird..",
"msg_date": "Sun, 8 Dec 2019 17:23:01 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query that canceled isnt logged"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> If I'll try to run the following query and cancel it manually(ctrl+c) :\n> select pg_sleep(10)\n> I will see in the logs the following messages :\n\n> 2019-12-08 17:16:34 IST postgres 30797 LOG: statement: select\n> pg_sleep(10);\n> 2019-12-08 17:16:36 IST postgres 30797 ERROR: canceling statement due to\n> user request\n> 2019-12-08 17:16:36 IST postgres 30797 STATEMENT: select pg_sleep(10);\n\nNote that that's going through \"simple query\" protocol (which is why\nit says \"statement:\") but your application is evidently using \"extended\nquery\" protocol (because you see \"execute ...:\", at least when it's\nworking correctly). I wonder whether the explanation is somewhere in\nthat.\n\nThe best theory I have at the moment, though, is that something is taking\nexclusive locks on system catalogs, blocking parsing of even trivial\nstatements.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Dec 2019 10:42:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query that canceled isnt logged"
}
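If the catalog-lock theory is worth testing, a generic probe from another session while the validation query is stuck might look like the following; it uses only the standard views, nothing application-specific is assumed:

```sql
-- Ungranted locks and the queries waiting on them:
SELECT l.pid, l.locktype, l.relation::regclass AS relation, l.mode,
       a.wait_event_type, a.wait_event, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;

-- On 9.6 or later, list who is blocking a specific stuck backend
-- (replace 12345 with the pid of the hung "select 1 as test" session):
SELECT pg_blocking_pids(12345);
```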
] |
[
{
"msg_contents": "> From: Joe Conway\n\n> Sent: Sunday, December 8, 2019 9:04 PM\n\n> To: Lars Aksel Opsahl; Laurenz Albe; [email protected]\n\n> Subject: Re: How to run in parallel in Postgres, EXECUTE_PARALLEL\n\n>\n\n> On 12/8/19 1:14 PM, Lars Aksel Opsahl wrote:\n\n> > Do you or anybody know if there are any plans for a function call that\n\n> > support the calling structure below or something like it and that then\n\n> > could finish in 1 second ? (If you are calling a void function, the\n\n> > return value should not be any problem.)\n\n> >\n\n> > DO\n\n> > $body$\n\n> > *DECLARE*\n\n> > command_string_list text[3];\n\n> > *BEGIN*\n\n> > command_string_list[0] = 'SELECT pg_sleep(1)';\n\n> > command_string_list[1] = 'SELECT pg_sleep(1)';\n\n> > command_string_list[2] = 'SELECT pg_sleep(1)';\n\n> > EXECUTE_PARALLEL command_string_list;\n\n> > *END*\n\n> > $body$;\n\n> >\n\n> > The only way to this today as I understand it, is to open 3 new\n\n> > connections back to the database which you can be done in different ways.\n\n>\n\n> Yes, correct.\n\n>\n\n> > If we had a parallel functions like the one above it's easier to\n\n> > make parallel sql without using complex scripts, java, python or other\n\n> > system.\n\n>\n\n> It does require one connection per statement, but with dblink it is not\n\n> necessarily all that complex. For example (granted, this could use more\n\n> error checking, etc.):\n\n>\n\n> 8<----------------\n\n> CREATE OR REPLACE FUNCTION\n\n> execute_parallel(stmts text[])\n\n> RETURNS text AS\n\n> $$\n\n> declare\n\n> i int;\n\n> retv text;\n\n> conn text;\n\n> connstr text;\n\n> rv int;\n\n> db text := current_database();\n\n> begin\n\n> for i in 1..array_length(stmts,1) loop\n\n> conn := 'conn' || i::text;\n\n> connstr := 'dbname=' || db;\n\n> perform dblink_connect(conn, connstr);\n\n> rv := dblink_send_query(conn, stmts[i]);\n\n> end loop;\n\n> for i in 1..array_length(stmts,1) loop\n\n> conn := 'conn' || i::text;\n\n> select val into retv\n\n> from dblink_get_result(conn) as d(val text);\n\n> end loop;\n\n> for i in 1..array_length(stmts,1) loop\n\n> conn := 'conn' || i::text;\n\n> perform dblink_disconnect(conn);\n\n> end loop;\n\n> return 'OK';\n\n> end;\n\n> $$ language plpgsql;\n\n> 8<----------------\n\n>\n\n> And then:\n\n>\n\n> 8<----------------\n\n> \\timing\n\n> DO $$\n\n> declare\n\n> stmts text[];\n\n> begin\n\n> stmts[1] = 'select pg_sleep(1)';\n\n> stmts[2] = 'select pg_sleep(1)';\n\n> stmts[3] = 'select pg_sleep(1)';\n\n> PERFORM execute_parallel(stmts);\n\n> end;\n\n> $$ LANGUAGE plpgsql;\n\n> DO\n\n> Time: 1010.831 ms (00:01.011)\n\n> 8<----------------\n\n>\n\n> HTH,\n\n>\n\n> Joe\n\n> --\n\n> Crunchy Data - http://crunchydata.com\n\n> PostgreSQL Support for Secure Enterprises\n\n> Consulting, Training, & Open Source Development\n\nHi\n\nThanks a lot it works like a charm. https://github.com/larsop/find-overlap-and-gap/tree/use_dblink_for_parallel\n(The test is failing now because it seems like drop EXTENSION dblink; is not cleaning up every thing)\n\nAs you say we need some error handling. 
And maybe some retry if not enough free connections and a parameter for max parallel connections and so on.\n\nSo far this is best solution I have seen.\n\nThanks.\n\nLars\n\n\n\n\n\n\n\n\n\n> From: \nJoe Conway\n> Sent: Sunday, December 8, 2019 9:04 PM\n> To: \nLars Aksel \nOpsahl; Laurenz \nAlbe; [email protected]\n> Subject: Re: How to run in parallel in\nPostgres, EXECUTE_PARALLEL\n> \n> On 12/8/19 1:14 PM,\nLars \nAksel Opsahl wrote:\n> > Do you or anybody know if there are any plans for a function call that\n> > support the calling structure below or something like it and that then\n> > could finish in 1 second ? (If you are calling a void function, the\n> > return value should not be any problem.)\n> > \n> > DO\n> > $body$\n> > *DECLARE* \n> > command_string_list text[3];\n> > *BEGIN*\n> > command_string_list[0] = 'SELECT pg_sleep(1)';\n> > command_string_list[1] = 'SELECT pg_sleep(1)';\n> > command_string_list[2] = 'SELECT pg_sleep(1)';\n> > EXECUTE_PARALLEL command_string_list;\n> > *END*\n> > $body$;\n> > \n> > The only way to this today as I understand it, is to open 3 new\n> > connections back to the database which you can be done in different ways. \n> \n> Yes, correct.\n> \n> > If we had a parallel functions like the one above it's easier to\n> > make parallel \nsql without using complex scripts, java, \npython or other\n> > system.\n> \n> It does require one connection per statement, but with\ndblink it is not\n> necessarily all that complex. For example (granted, this could use more\n> error checking, etc.):\n> \n> 8<----------------\n> CREATE OR REPLACE FUNCTION\n> execute_parallel(stmts text[])\n> RETURNS text AS\n> $$\n> declare\n> i\nint;\n> retv text;\n> conn text;\n> connstr text;\n> rv\nint;\n> db text := current_database();\n> begin\n> for i in 1..array_length(stmts,1) loop\n> \nconn := 'conn' || i::text;\n> \nconnstr := 'dbname=' ||\ndb;\n> \nperform dblink_connect(conn, \nconnstr);\n> \nrv := dblink_send_query(conn,\nstmts[i]);\n> end loop;\n> for i in 1..array_length(stmts,1) loop\n> \nconn := 'conn' || i::text;\n> \nselect val into \nretv\n> \nfrom dblink_get_result(conn) as d(val text);\n> end loop;\n> for i in 1..array_length(stmts,1) loop\n> \nconn := 'conn' || i::text;\n> \nperform dblink_disconnect(conn);\n> end loop;\n> return 'OK';\n> end;\n> $$ language \nplpgsql;\n> 8<----------------\n> \n> And then:\n> \n> 8<----------------\n> \\timing\n> DO $$\n> declare\n> \nstmts text[];\n> begin\n> \nstmts[1] = 'select pg_sleep(1)';\n> \nstmts[2] = 'select pg_sleep(1)';\n> \nstmts[3] = 'select pg_sleep(1)';\n> PERFORM execute_parallel(stmts);\n> end;\n> $$ LANGUAGE \nplpgsql;\n> DO\n> Time: 1010.831 \nms (00:01.011)\n> 8<----------------\n> \n> HTH,\n> \n> \nJoe\n> -- \n> Crunchy Data - http://crunchydata.com\n> PostgreSQL Support for Secure Enterprises\n> Consulting, Training, & Open Source Development\n\n\nHi\n\n\nThanks a lot it works like a charm. https://github.com/larsop/find-overlap-and-gap/tree/use_dblink_for_parallel\n(The test is failing now because it seems like drop EXTENSION dblink; is not cleaning up every thing)\n\n\nAs you say we need some error handling. And maybe some retry if not enough free connections and a parameter\n for max parallel connections and so on.\n\n\n\nSo far this is best solution I have seen.\n\n\nThanks.\n\n\nLars",
"msg_date": "Sun, 8 Dec 2019 21:59:51 +0000",
"msg_from": "Lars Aksel Opsahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to run in parallel in Postgres, EXECUTE_PARALLEL"
}
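For reference, a rough sketch of how the error handling and connection cap mentioned above could be bolted onto Joe's function; the signature and the batching behaviour are assumptions rather than an existing API, and a failed statement is only reported with a WARNING so the remaining statements still run:

```sql
CREATE OR REPLACE FUNCTION execute_parallel(stmts text[], max_conn int DEFAULT 4)
RETURNS text AS
$$
declare
  i int;
  n int := array_length(stmts, 1);
  conn text;
  connstr text := 'dbname=' || current_database();
  batch_start int := 1;
  batch_end int;
  dummy text;
begin
  while batch_start <= n loop
    batch_end := least(batch_start + max_conn - 1, n);
    -- launch one batch of at most max_conn statements
    for i in batch_start..batch_end loop
      conn := 'conn' || i::text;
      perform dblink_connect(conn, connstr);
      perform dblink_send_query(conn, stmts[i]);
    end loop;
    -- collect each result and always disconnect, even if a statement failed
    for i in batch_start..batch_end loop
      conn := 'conn' || i::text;
      begin
        select val into dummy from dblink_get_result(conn) as d(val text);
      exception when others then
        raise warning 'statement % failed: %', i, SQLERRM;
      end;
      perform dblink_disconnect(conn);
    end loop;
    batch_start := batch_end + 1;
  end loop;
  return 'OK';
end;
$$ language plpgsql;
```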
] |
[
{
"msg_contents": "Hi all,\n\nI am running TPC-H on recent postgresql (12.0 and 12.1).\nOn some of the queries (that may involve parallel scans) I see this\ninteresting behavior:\nWhen these queries are executed back-to-back (sent from psql interactive\nterminal), the total execution time of them increase monotonically.\n\nI simplified query-1 to demonstrate this effect:\n``` example.sql\nexplain (analyze, buffers) select\n max(l_shipdate) as max_data,\n count(*) as count_order\nfrom\n lineitem\nwhere\n l_shipdate <= date '1998-12-01' - interval '20' day;\n```\n\nWhen I execute (from fish) following command:\n`for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`\nThe results I get are as follows:\n\"\n Execution Time: 184.864 ms\n Execution Time: 192.758 ms\n Execution Time: 197.380 ms\n Execution Time: 200.384 ms\n Execution Time: 202.950 ms\n Execution Time: 205.695 ms\n Execution Time: 208.082 ms\n Execution Time: 209.108 ms\n Execution Time: 212.428 ms\n Execution Time: 214.539 ms\n Execution Time: 215.799 ms\n Execution Time: 219.057 ms\n Execution Time: 222.102 ms\n Execution Time: 223.779 ms\n Execution Time: 227.819 ms\n Execution Time: 229.710 ms\n Execution Time: 239.439 ms\n Execution Time: 237.649 ms\n Execution Time: 249.178 ms\n Execution Time: 261.268 ms\n\"\nIn addition, if the repeated more times, the total execution time can end\nup being 10X and more!!!\n\nWhen there a wait period in-between queries, (e.g. sleep 10) in the above\nfor loop, this increasing execution time behavior goes a way.\nFor more complex queries, the \"wait period\" needs to be longer to avoid the\nincrease in execution time.\n\nSome metadata about this table \"lineitem\":\ntpch=# \\d lineitem\n Table \"public.lineitem\"\n Column | Type | Collation | Nullable | Default\n-----------------+-----------------------+-----------+----------+---------\n l_orderkey | integer | | not null |\n l_partkey | integer | | not null |\n l_suppkey | integer | | not null |\n l_linenumber | integer | | not null |\n l_quantity | numeric(15,2) | | not null |\n l_extendedprice | numeric(15,2) | | not null |\n l_discount | numeric(15,2) | | not null |\n l_tax | numeric(15,2) | | not null |\n l_returnflag | character(1) | | not null |\n l_linestatus | character(1) | | not null |\n l_shipdate | date | | not null |\n l_commitdate | date | | not null |\n l_receiptdate | date | | not null |\n l_shipinstruct | character(25) | | not null |\n l_shipmode | character(10) | | not null |\n l_comment | character varying(44) | | not null |\nIndexes:\n \"i_l_commitdate\" btree (l_commitdate)\n \"i_l_orderkey\" btree (l_orderkey)\n \"i_l_orderkey_quantity\" btree (l_orderkey, l_quantity)\n \"i_l_partkey\" btree (l_partkey)\n \"i_l_receiptdate\" btree (l_receiptdate)\n \"i_l_shipdate\" btree (l_shipdate)\n \"i_l_suppkey\" btree (l_suppkey)\n \"i_l_suppkey_partkey\" btree (l_partkey, l_suppkey)\n\ntpch=# SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname\n='lineitem';\n relname | relpages | reltuples | relallvisible | relkind | relnatts |\nrelhassubclass | reloptions | pg_table_size\n----------+----------+--------------+---------------+---------+----------+----------------+------------+---------------\n lineitem | 112503 | 6.001167e+06 | 112503 | r | 16 |\nf | | 921903104\n(1 row)\n\nPostgresql 12.0 and 12.1 are all manually installed from source.\nBoth are running on Ubuntu 16.04 kernel 4.4.0-142-generic, on Intel(R)\nCore(TM) 
i7-6700K.\n\n\nAny help greatly appreciated!\n\nShijia",
"msg_date": "Sun, 15 Dec 2019 23:59:26 -0600",
"msg_from": "Shijia Wei <[email protected]>",
"msg_from_op": true,
"msg_subject": "Consecutive Query Executions with Increasing Execution Time"
},
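One way to narrow the report above down (a sketch, not something from the thread): run the same aggregate repeatedly inside a single backend and time each pass; if the times still climb, the effect is independent of starting a fresh psql connection per query:

```sql
DO $$
DECLARE
  t0 timestamptz;
  r  record;
  i  int;
BEGIN
  FOR i IN 1..20 LOOP
    t0 := clock_timestamp();
    SELECT max(l_shipdate), count(*) INTO r
    FROM lineitem
    WHERE l_shipdate <= date '1998-12-01' - interval '20' day;
    RAISE NOTICE 'pass %: % ms', i,
      round(extract(epoch FROM clock_timestamp() - t0)::numeric * 1000, 3);
  END LOOP;
END
$$;
```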
{
"msg_contents": "Hi Shijia,\n\nIt sounds like concurrency on the queries: the second starts before the\nfirst ends, and so on. With a short wait in between you ensure sequential\nexecution. Notice that you also have the overhead of concurrent psql...\n\nSounds normal to me.\n\nBest regards\nOlivier\n\n\nOn Mon, Dec 16, 2019, 07:00 Shijia Wei <[email protected]> wrote:\n\n> Hi all,\n>\n> I am running TPC-H on recent postgresql (12.0 and 12.1).\n> On some of the queries (that may involve parallel scans) I see this\n> interesting behavior:\n> When these queries are executed back-to-back (sent from psql interactive\n> terminal), the total execution time of them increase monotonically.\n>\n> I simplified query-1 to demonstrate this effect:\n> ``` example.sql\n> explain (analyze, buffers) select\n> max(l_shipdate) as max_data,\n> count(*) as count_order\n> from\n> lineitem\n> where\n> l_shipdate <= date '1998-12-01' - interval '20' day;\n> ```\n>\n> When I execute (from fish) following command:\n> `for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`\n> The results I get are as follows:\n> \"\n> Execution Time: 184.864 ms\n> Execution Time: 192.758 ms\n> Execution Time: 197.380 ms\n> Execution Time: 200.384 ms\n> Execution Time: 202.950 ms\n> Execution Time: 205.695 ms\n> Execution Time: 208.082 ms\n> Execution Time: 209.108 ms\n> Execution Time: 212.428 ms\n> Execution Time: 214.539 ms\n> Execution Time: 215.799 ms\n> Execution Time: 219.057 ms\n> Execution Time: 222.102 ms\n> Execution Time: 223.779 ms\n> Execution Time: 227.819 ms\n> Execution Time: 229.710 ms\n> Execution Time: 239.439 ms\n> Execution Time: 237.649 ms\n> Execution Time: 249.178 ms\n> Execution Time: 261.268 ms\n> \"\n> In addition, if the repeated more times, the total execution time can end\n> up being 10X and more!!!\n>\n> When there a wait period in-between queries, (e.g. 
sleep 10) in the above\n> for loop, this increasing execution time behavior goes a way.\n> For more complex queries, the \"wait period\" needs to be longer to avoid\n> the increase in execution time.\n>\n> Some metadata about this table \"lineitem\":\n> tpch=# \\d lineitem\n> Table \"public.lineitem\"\n> Column | Type | Collation | Nullable | Default\n> -----------------+-----------------------+-----------+----------+---------\n> l_orderkey | integer | | not null |\n> l_partkey | integer | | not null |\n> l_suppkey | integer | | not null |\n> l_linenumber | integer | | not null |\n> l_quantity | numeric(15,2) | | not null |\n> l_extendedprice | numeric(15,2) | | not null |\n> l_discount | numeric(15,2) | | not null |\n> l_tax | numeric(15,2) | | not null |\n> l_returnflag | character(1) | | not null |\n> l_linestatus | character(1) | | not null |\n> l_shipdate | date | | not null |\n> l_commitdate | date | | not null |\n> l_receiptdate | date | | not null |\n> l_shipinstruct | character(25) | | not null |\n> l_shipmode | character(10) | | not null |\n> l_comment | character varying(44) | | not null |\n> Indexes:\n> \"i_l_commitdate\" btree (l_commitdate)\n> \"i_l_orderkey\" btree (l_orderkey)\n> \"i_l_orderkey_quantity\" btree (l_orderkey, l_quantity)\n> \"i_l_partkey\" btree (l_partkey)\n> \"i_l_receiptdate\" btree (l_receiptdate)\n> \"i_l_shipdate\" btree (l_shipdate)\n> \"i_l_suppkey\" btree (l_suppkey)\n> \"i_l_suppkey_partkey\" btree (l_partkey, l_suppkey)\n>\n> tpch=# SELECT relname, relpages, reltuples, relallvisible, relkind,\n> relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class\n> WHERE relname='lineitem';\n> relname | relpages | reltuples | relallvisible | relkind | relnatts\n> | relhassubclass | reloptions | pg_table_size\n>\n> ----------+----------+--------------+---------------+---------+----------+----------------+------------+---------------\n> lineitem | 112503 | 6.001167e+06 | 112503 | r | 16\n> | f | | 921903104\n> (1 row)\n>\n> Postgresql 12.0 and 12.1 are all manually installed from source.\n> Both are running on Ubuntu 16.04 kernel 4.4.0-142-generic, on Intel(R)\n> Core(TM) i7-6700K.\n>\n>\n> Any help greatly appreciated!\n>\n> Shijia\n>\n\nHi Shijia,It sounds like concurrency on the queries: the second starts before the first ends, and so on. With a short wait in between you ensure sequential execution. 
Notice that you also have the overhead of concurrent psql...Sounds normal to me.Best regardsOlivierOn Mon, Dec 16, 2019, 07:00 Shijia Wei <[email protected]> wrote:Hi all,I am running TPC-H on recent postgresql (12.0 and 12.1).On some of the queries (that may involve parallel scans) I see this interesting behavior:When these queries are executed back-to-back (sent from psql interactive terminal), the total execution time of them increase monotonically.I simplified query-1 to demonstrate this effect:``` example.sqlexplain (analyze, buffers) select max(l_shipdate) as max_data, count(*) as count_orderfrom lineitemwhere l_shipdate <= date '1998-12-01' - interval '20' day;```When I execute (from fish) following command:`for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`The results I get are as follows:\" Execution Time: 184.864 ms Execution Time: 192.758 ms Execution Time: 197.380 ms Execution Time: 200.384 ms Execution Time: 202.950 ms Execution Time: 205.695 ms Execution Time: 208.082 ms Execution Time: 209.108 ms Execution Time: 212.428 ms Execution Time: 214.539 ms Execution Time: 215.799 ms Execution Time: 219.057 ms Execution Time: 222.102 ms Execution Time: 223.779 ms Execution Time: 227.819 ms Execution Time: 229.710 ms Execution Time: 239.439 ms Execution Time: 237.649 ms Execution Time: 249.178 ms Execution Time: 261.268 ms\"In addition, if the repeated more times, the total execution time can end up being 10X and more!!!When there a wait period in-between queries, (e.g. sleep 10) in the above for loop, this increasing execution time behavior goes a way.For more complex queries, the \"wait period\" needs to be longer to avoid the increase in execution time.Some metadata about this table \"lineitem\":tpch=# \\d lineitem Table \"public.lineitem\" Column | Type | Collation | Nullable | Default-----------------+-----------------------+-----------+----------+--------- l_orderkey | integer | | not null | l_partkey | integer | | not null | l_suppkey | integer | | not null | l_linenumber | integer | | not null | l_quantity | numeric(15,2) | | not null | l_extendedprice | numeric(15,2) | | not null | l_discount | numeric(15,2) | | not null | l_tax | numeric(15,2) | | not null | l_returnflag | character(1) | | not null | l_linestatus | character(1) | | not null | l_shipdate | date | | not null | l_commitdate | date | | not null | l_receiptdate | date | | not null | l_shipinstruct | character(25) | | not null | l_shipmode | character(10) | | not null | l_comment | character varying(44) | | not null |Indexes: \"i_l_commitdate\" btree (l_commitdate) \"i_l_orderkey\" btree (l_orderkey) \"i_l_orderkey_quantity\" btree (l_orderkey, l_quantity) \"i_l_partkey\" btree (l_partkey) \"i_l_receiptdate\" btree (l_receiptdate) \"i_l_shipdate\" btree (l_shipdate) \"i_l_suppkey\" btree (l_suppkey) \"i_l_suppkey_partkey\" btree (l_partkey, l_suppkey)tpch=# SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='lineitem'; relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size----------+----------+--------------+---------------+---------+----------+----------------+------------+--------------- lineitem | 112503 | 6.001167e+06 | 112503 | r | 16 | f | | 921903104(1 row)Postgresql 12.0 and 12.1 are all manually installed from source.Both are running on Ubuntu 16.04 kernel 4.4.0-142-generic, on Intel(R) Core(TM) i7-6700K.Any help greatly 
appreciated!Shijia",
"msg_date": "Mon, 16 Dec 2019 09:03:54 +0100",
"msg_from": "Olivier Gautherot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "Hi Olivier,\n\nI do not think that the queries are executed concurrently. The bash for\nloop ensures that the next command fires only after the first returns.\nAlso for some 'complex' queries, even a wait-period that is longer than the\ntotal execution time does not completely avoid this effect.\nFor example, a wait-period of 5-second in between queries that take\n2-second to run, does not help avoid the increasing runtime problem\ncompletely.\n\nThanks,\nShijia\n\n\nOn Mon, Dec 16, 2019 at 2:04 AM Olivier Gautherot <[email protected]>\nwrote:\n\n> Hi Shijia,\n>\n> It sounds like concurrency on the queries: the second starts before the\n> first ends, and so on. With a short wait in between you ensure sequential\n> execution. Notice that you also have the overhead of concurrent psql...\n>\n> Sounds normal to me.\n>\n> Best regards\n> Olivier\n>\n>\n> On Mon, Dec 16, 2019, 07:00 Shijia Wei <[email protected]> wrote:\n>\n>> Hi all,\n>>\n>> I am running TPC-H on recent postgresql (12.0 and 12.1).\n>> On some of the queries (that may involve parallel scans) I see this\n>> interesting behavior:\n>> When these queries are executed back-to-back (sent from psql interactive\n>> terminal), the total execution time of them increase monotonically.\n>>\n>> I simplified query-1 to demonstrate this effect:\n>> ``` example.sql\n>> explain (analyze, buffers) select\n>> max(l_shipdate) as max_data,\n>> count(*) as count_order\n>> from\n>> lineitem\n>> where\n>> l_shipdate <= date '1998-12-01' - interval '20' day;\n>> ```\n>>\n>> When I execute (from fish) following command:\n>> `for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`\n>> The results I get are as follows:\n>> \"\n>> Execution Time: 184.864 ms\n>> Execution Time: 192.758 ms\n>> Execution Time: 197.380 ms\n>> Execution Time: 200.384 ms\n>> Execution Time: 202.950 ms\n>> Execution Time: 205.695 ms\n>> Execution Time: 208.082 ms\n>> Execution Time: 209.108 ms\n>> Execution Time: 212.428 ms\n>> Execution Time: 214.539 ms\n>> Execution Time: 215.799 ms\n>> Execution Time: 219.057 ms\n>> Execution Time: 222.102 ms\n>> Execution Time: 223.779 ms\n>> Execution Time: 227.819 ms\n>> Execution Time: 229.710 ms\n>> Execution Time: 239.439 ms\n>> Execution Time: 237.649 ms\n>> Execution Time: 249.178 ms\n>> Execution Time: 261.268 ms\n>> \"\n>> In addition, if the repeated more times, the total execution time can end\n>> up being 10X and more!!!\n>>\n>> When there a wait period in-between queries, (e.g. 
sleep 10) in the\n>> above for loop, this increasing execution time behavior goes a way.\n>> For more complex queries, the \"wait period\" needs to be longer to avoid\n>> the increase in execution time.\n>>\n>> Some metadata about this table \"lineitem\":\n>> tpch=# \\d lineitem\n>> Table \"public.lineitem\"\n>> Column | Type | Collation | Nullable | Default\n>> -----------------+-----------------------+-----------+----------+---------\n>> l_orderkey | integer | | not null |\n>> l_partkey | integer | | not null |\n>> l_suppkey | integer | | not null |\n>> l_linenumber | integer | | not null |\n>> l_quantity | numeric(15,2) | | not null |\n>> l_extendedprice | numeric(15,2) | | not null |\n>> l_discount | numeric(15,2) | | not null |\n>> l_tax | numeric(15,2) | | not null |\n>> l_returnflag | character(1) | | not null |\n>> l_linestatus | character(1) | | not null |\n>> l_shipdate | date | | not null |\n>> l_commitdate | date | | not null |\n>> l_receiptdate | date | | not null |\n>> l_shipinstruct | character(25) | | not null |\n>> l_shipmode | character(10) | | not null |\n>> l_comment | character varying(44) | | not null |\n>> Indexes:\n>> \"i_l_commitdate\" btree (l_commitdate)\n>> \"i_l_orderkey\" btree (l_orderkey)\n>> \"i_l_orderkey_quantity\" btree (l_orderkey, l_quantity)\n>> \"i_l_partkey\" btree (l_partkey)\n>> \"i_l_receiptdate\" btree (l_receiptdate)\n>> \"i_l_shipdate\" btree (l_shipdate)\n>> \"i_l_suppkey\" btree (l_suppkey)\n>> \"i_l_suppkey_partkey\" btree (l_partkey, l_suppkey)\n>>\n>> tpch=# SELECT relname, relpages, reltuples, relallvisible, relkind,\n>> relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class\n>> WHERE relname='lineitem';\n>> relname | relpages | reltuples | relallvisible | relkind | relnatts\n>> | relhassubclass | reloptions | pg_table_size\n>>\n>> ----------+----------+--------------+---------------+---------+----------+----------------+------------+---------------\n>> lineitem | 112503 | 6.001167e+06 | 112503 | r | 16\n>> | f | | 921903104\n>> (1 row)\n>>\n>> Postgresql 12.0 and 12.1 are all manually installed from source.\n>> Both are running on Ubuntu 16.04 kernel 4.4.0-142-generic, on Intel(R)\n>> Core(TM) i7-6700K.\n>>\n>>\n>> Any help greatly appreciated!\n>>\n>> Shijia\n>>\n>\n\n-- \n*Shijia Wei*\nECE, UT Austin | ACSES | 3rd Year PhD\[email protected] | https://0x161e-swei.github.io\n\nHi Olivier,I do not think that the queries are executed concurrently. The bash for loop ensures that the next command fires only after the first returns.Also for some 'complex' queries, even a wait-period that is longer than the total execution time does not completely avoid this effect.For example, a wait-period of 5-second in between queries that take 2-second to run, does not help avoid the increasing runtime problem completely.Thanks,ShijiaOn Mon, Dec 16, 2019 at 2:04 AM Olivier Gautherot <[email protected]> wrote:Hi Shijia,It sounds like concurrency on the queries: the second starts before the first ends, and so on. With a short wait in between you ensure sequential execution. 
Notice that you also have the overhead of concurrent psql...Sounds normal to me.Best regardsOlivierOn Mon, Dec 16, 2019, 07:00 Shijia Wei <[email protected]> wrote:Hi all,I am running TPC-H on recent postgresql (12.0 and 12.1).On some of the queries (that may involve parallel scans) I see this interesting behavior:When these queries are executed back-to-back (sent from psql interactive terminal), the total execution time of them increase monotonically.I simplified query-1 to demonstrate this effect:``` example.sqlexplain (analyze, buffers) select max(l_shipdate) as max_data, count(*) as count_orderfrom lineitemwhere l_shipdate <= date '1998-12-01' - interval '20' day;```When I execute (from fish) following command:`for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`The results I get are as follows:\" Execution Time: 184.864 ms Execution Time: 192.758 ms Execution Time: 197.380 ms Execution Time: 200.384 ms Execution Time: 202.950 ms Execution Time: 205.695 ms Execution Time: 208.082 ms Execution Time: 209.108 ms Execution Time: 212.428 ms Execution Time: 214.539 ms Execution Time: 215.799 ms Execution Time: 219.057 ms Execution Time: 222.102 ms Execution Time: 223.779 ms Execution Time: 227.819 ms Execution Time: 229.710 ms Execution Time: 239.439 ms Execution Time: 237.649 ms Execution Time: 249.178 ms Execution Time: 261.268 ms\"In addition, if the repeated more times, the total execution time can end up being 10X and more!!!When there a wait period in-between queries, (e.g. sleep 10) in the above for loop, this increasing execution time behavior goes a way.For more complex queries, the \"wait period\" needs to be longer to avoid the increase in execution time.Some metadata about this table \"lineitem\":tpch=# \\d lineitem Table \"public.lineitem\" Column | Type | Collation | Nullable | Default-----------------+-----------------------+-----------+----------+--------- l_orderkey | integer | | not null | l_partkey | integer | | not null | l_suppkey | integer | | not null | l_linenumber | integer | | not null | l_quantity | numeric(15,2) | | not null | l_extendedprice | numeric(15,2) | | not null | l_discount | numeric(15,2) | | not null | l_tax | numeric(15,2) | | not null | l_returnflag | character(1) | | not null | l_linestatus | character(1) | | not null | l_shipdate | date | | not null | l_commitdate | date | | not null | l_receiptdate | date | | not null | l_shipinstruct | character(25) | | not null | l_shipmode | character(10) | | not null | l_comment | character varying(44) | | not null |Indexes: \"i_l_commitdate\" btree (l_commitdate) \"i_l_orderkey\" btree (l_orderkey) \"i_l_orderkey_quantity\" btree (l_orderkey, l_quantity) \"i_l_partkey\" btree (l_partkey) \"i_l_receiptdate\" btree (l_receiptdate) \"i_l_shipdate\" btree (l_shipdate) \"i_l_suppkey\" btree (l_suppkey) \"i_l_suppkey_partkey\" btree (l_partkey, l_suppkey)tpch=# SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='lineitem'; relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size----------+----------+--------------+---------------+---------+----------+----------------+------------+--------------- lineitem | 112503 | 6.001167e+06 | 112503 | r | 16 | f | | 921903104(1 row)Postgresql 12.0 and 12.1 are all manually installed from source.Both are running on Ubuntu 16.04 kernel 4.4.0-142-generic, on Intel(R) Core(TM) i7-6700K.Any help greatly 
appreciated!Shijia\n\n-- Shijia WeiECE, UT Austin | ACSES | 3rd Year [email protected] | https://0x161e-swei.github.io",
"msg_date": "Mon, 16 Dec 2019 03:51:24 -0600",
"msg_from": "Shijia Wei <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
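A quick check that can be run from a second session while the loop is going, to see whether anything else (autovacuum included) is active against the database at the same time; this is a generic pg_stat_activity probe, not something taken from the thread:

```sql
SELECT pid, backend_type, state, wait_event_type, wait_event,
       query_start, left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND pid <> pg_backend_pid()
ORDER BY query_start;
```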
{
"msg_contents": "Hi Shijia,\n\nIf you're using fish, I suspect you're on a Mac - I don't have experience\non this platform.\n\nCan you check with pgAdmin (3 or 4) what the server is busy doing after a\nfew iterations? Check for locks, as it could be a cause. Also, do you have\nconcurrent INSERTs?\n\nOlivier\n\nOn Mon, Dec 16, 2019, 10:52 Shijia Wei <[email protected]> wrote:\n\n> Hi Olivier,\n>\n> I do not think that the queries are executed concurrently. The bash for\n> loop ensures that the next command fires only after the first returns.\n> Also for some 'complex' queries, even a wait-period that is longer than\n> the total execution time does not completely avoid this effect.\n> For example, a wait-period of 5-second in between queries that take\n> 2-second to run, does not help avoid the increasing runtime problem\n> completely.\n>\n> Thanks,\n> Shijia\n>\n>\n> On Mon, Dec 16, 2019 at 2:04 AM Olivier Gautherot <\n> [email protected]> wrote:\n>\n>> Hi Shijia,\n>>\n>> It sounds like concurrency on the queries: the second starts before the\n>> first ends, and so on. With a short wait in between you ensure sequential\n>> execution. Notice that you also have the overhead of concurrent psql...\n>>\n>> Sounds normal to me.\n>>\n>> Best regards\n>> Olivier\n>>\n>>\n>> On Mon, Dec 16, 2019, 07:00 Shijia Wei <[email protected]> wrote:\n>>\n>>> Hi all,\n>>>\n>>> I am running TPC-H on recent postgresql (12.0 and 12.1).\n>>> On some of the queries (that may involve parallel scans) I see this\n>>> interesting behavior:\n>>> When these queries are executed back-to-back (sent from psql\n>>> interactive terminal), the total execution time of them\n>>> increase monotonically.\n>>>\n>>> I simplified query-1 to demonstrate this effect:\n>>> ``` example.sql\n>>> explain (analyze, buffers) select\n>>> max(l_shipdate) as max_data,\n>>> count(*) as count_order\n>>> from\n>>> lineitem\n>>> where\n>>> l_shipdate <= date '1998-12-01' - interval '20' day;\n>>> ```\n>>>\n>>> When I execute (from fish) following command:\n>>> `for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`\n>>> The results I get are as follows:\n>>> \"\n>>> Execution Time: 184.864 ms\n>>> Execution Time: 192.758 ms\n>>> Execution Time: 197.380 ms\n>>> Execution Time: 200.384 ms\n>>> Execution Time: 202.950 ms\n>>> Execution Time: 205.695 ms\n>>> Execution Time: 208.082 ms\n>>> Execution Time: 209.108 ms\n>>> Execution Time: 212.428 ms\n>>> Execution Time: 214.539 ms\n>>> Execution Time: 215.799 ms\n>>> Execution Time: 219.057 ms\n>>> Execution Time: 222.102 ms\n>>> Execution Time: 223.779 ms\n>>> Execution Time: 227.819 ms\n>>> Execution Time: 229.710 ms\n>>> Execution Time: 239.439 ms\n>>> Execution Time: 237.649 ms\n>>> Execution Time: 249.178 ms\n>>> Execution Time: 261.268 ms\n>>> \"\n>>> In addition, if the repeated more times, the total execution time can\n>>> end up being 10X and more!!!\n>>>\n>>> When there a wait period in-between queries, (e.g. 
sleep 10) in the\n>>> above for loop, this increasing execution time behavior goes a way.\n>>> For more complex queries, the \"wait period\" needs to be longer to avoid\n>>> the increase in execution time.\n>>>\n>>> Some metadata about this table \"lineitem\":\n>>> tpch=# \\d lineitem\n>>> Table \"public.lineitem\"\n>>> Column | Type | Collation | Nullable |\n>>> Default\n>>>\n>>> -----------------+-----------------------+-----------+----------+---------\n>>> l_orderkey | integer | | not null |\n>>> l_partkey | integer | | not null |\n>>> l_suppkey | integer | | not null |\n>>> l_linenumber | integer | | not null |\n>>> l_quantity | numeric(15,2) | | not null |\n>>> l_extendedprice | numeric(15,2) | | not null |\n>>> l_discount | numeric(15,2) | | not null |\n>>> l_tax | numeric(15,2) | | not null |\n>>> l_returnflag | character(1) | | not null |\n>>> l_linestatus | character(1) | | not null |\n>>> l_shipdate | date | | not null |\n>>> l_commitdate | date | | not null |\n>>> l_receiptdate | date | | not null |\n>>> l_shipinstruct | character(25) | | not null |\n>>> l_shipmode | character(10) | | not null |\n>>> l_comment | character varying(44) | | not null |\n>>> Indexes:\n>>> \"i_l_commitdate\" btree (l_commitdate)\n>>> \"i_l_orderkey\" btree (l_orderkey)\n>>> \"i_l_orderkey_quantity\" btree (l_orderkey, l_quantity)\n>>> \"i_l_partkey\" btree (l_partkey)\n>>> \"i_l_receiptdate\" btree (l_receiptdate)\n>>> \"i_l_shipdate\" btree (l_shipdate)\n>>> \"i_l_suppkey\" btree (l_suppkey)\n>>> \"i_l_suppkey_partkey\" btree (l_partkey, l_suppkey)\n>>>\n>>> tpch=# SELECT relname, relpages, reltuples, relallvisible, relkind,\n>>> relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class\n>>> WHERE relname='lineitem';\n>>> relname | relpages | reltuples | relallvisible | relkind | relnatts\n>>> | relhassubclass | reloptions | pg_table_size\n>>>\n>>> ----------+----------+--------------+---------------+---------+----------+----------------+------------+---------------\n>>> lineitem | 112503 | 6.001167e+06 | 112503 | r |\n>>> 16 | f | | 921903104\n>>> (1 row)\n>>>\n>>> Postgresql 12.0 and 12.1 are all manually installed from source.\n>>> Both are running on Ubuntu 16.04 kernel 4.4.0-142-generic, on Intel(R)\n>>> Core(TM) i7-6700K.\n>>>\n>>>\n>>> Any help greatly appreciated!\n>>>\n>>> Shijia\n>>>\n>>\n>\n> --\n> *Shijia Wei*\n> ECE, UT Austin | ACSES | 3rd Year PhD\n> [email protected] | https://0x161e-swei.github.io\n>\n\nHi Shijia,If you're using fish, I suspect you're on a Mac - I don't have experience on this platform.Can you check with pgAdmin (3 or 4) what the server is busy doing after a few iterations? Check for locks, as it could be a cause. Also, do you have concurrent INSERTs?OlivierOn Mon, Dec 16, 2019, 10:52 Shijia Wei <[email protected]> wrote:Hi Olivier,I do not think that the queries are executed concurrently. The bash for loop ensures that the next command fires only after the first returns.Also for some 'complex' queries, even a wait-period that is longer than the total execution time does not completely avoid this effect.For example, a wait-period of 5-second in between queries that take 2-second to run, does not help avoid the increasing runtime problem completely.Thanks,ShijiaOn Mon, Dec 16, 2019 at 2:04 AM Olivier Gautherot <[email protected]> wrote:Hi Shijia,It sounds like concurrency on the queries: the second starts before the first ends, and so on. With a short wait in between you ensure sequential execution. 
Notice that you also have the overhead of concurrent psql...Sounds normal to me.Best regardsOlivierOn Mon, Dec 16, 2019, 07:00 Shijia Wei <[email protected]> wrote:Hi all,I am running TPC-H on recent postgresql (12.0 and 12.1).On some of the queries (that may involve parallel scans) I see this interesting behavior:When these queries are executed back-to-back (sent from psql interactive terminal), the total execution time of them increase monotonically.I simplified query-1 to demonstrate this effect:``` example.sqlexplain (analyze, buffers) select max(l_shipdate) as max_data, count(*) as count_orderfrom lineitemwhere l_shipdate <= date '1998-12-01' - interval '20' day;```When I execute (from fish) following command:`for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`The results I get are as follows:\" Execution Time: 184.864 ms Execution Time: 192.758 ms Execution Time: 197.380 ms Execution Time: 200.384 ms Execution Time: 202.950 ms Execution Time: 205.695 ms Execution Time: 208.082 ms Execution Time: 209.108 ms Execution Time: 212.428 ms Execution Time: 214.539 ms Execution Time: 215.799 ms Execution Time: 219.057 ms Execution Time: 222.102 ms Execution Time: 223.779 ms Execution Time: 227.819 ms Execution Time: 229.710 ms Execution Time: 239.439 ms Execution Time: 237.649 ms Execution Time: 249.178 ms Execution Time: 261.268 ms\"In addition, if the repeated more times, the total execution time can end up being 10X and more!!!When there a wait period in-between queries, (e.g. sleep 10) in the above for loop, this increasing execution time behavior goes a way.For more complex queries, the \"wait period\" needs to be longer to avoid the increase in execution time.Some metadata about this table \"lineitem\":tpch=# \\d lineitem Table \"public.lineitem\" Column | Type | Collation | Nullable | Default-----------------+-----------------------+-----------+----------+--------- l_orderkey | integer | | not null | l_partkey | integer | | not null | l_suppkey | integer | | not null | l_linenumber | integer | | not null | l_quantity | numeric(15,2) | | not null | l_extendedprice | numeric(15,2) | | not null | l_discount | numeric(15,2) | | not null | l_tax | numeric(15,2) | | not null | l_returnflag | character(1) | | not null | l_linestatus | character(1) | | not null | l_shipdate | date | | not null | l_commitdate | date | | not null | l_receiptdate | date | | not null | l_shipinstruct | character(25) | | not null | l_shipmode | character(10) | | not null | l_comment | character varying(44) | | not null |Indexes: \"i_l_commitdate\" btree (l_commitdate) \"i_l_orderkey\" btree (l_orderkey) \"i_l_orderkey_quantity\" btree (l_orderkey, l_quantity) \"i_l_partkey\" btree (l_partkey) \"i_l_receiptdate\" btree (l_receiptdate) \"i_l_shipdate\" btree (l_shipdate) \"i_l_suppkey\" btree (l_suppkey) \"i_l_suppkey_partkey\" btree (l_partkey, l_suppkey)tpch=# SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='lineitem'; relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size----------+----------+--------------+---------------+---------+----------+----------------+------------+--------------- lineitem | 112503 | 6.001167e+06 | 112503 | r | 16 | f | | 921903104(1 row)Postgresql 12.0 and 12.1 are all manually installed from source.Both are running on Ubuntu 16.04 kernel 4.4.0-142-generic, on Intel(R) Core(TM) i7-6700K.Any help greatly 
appreciated!Shijia\n\n-- Shijia WeiECE, UT Austin | ACSES | 3rd Year [email protected] | https://0x161e-swei.github.io",
"msg_date": "Mon, 16 Dec 2019 11:17:50 +0100",
"msg_from": "Olivier Gautherot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
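A minimal bash sketch of the same measurement loop, for readers who want to reproduce it outside fish. It assumes the `tpch` database and the `example.sql` file from the message above and, on Linux, reads /proc/cpuinfo so the server-reported execution time can be compared against the CPU clock on each iteration (the hardware angle that turns out to matter later in the thread).

```bash
#!/usr/bin/env bash
# Sketch only: "tpch" and example.sql are taken from the message above;
# adjust connection options as needed.
for i in $(seq 1 20); do
    # Server-reported execution time for this run
    t=$(psql tpch -f example.sql | grep 'Execution Time')
    # Current core clocks (Linux), to spot thermal throttling between runs
    mhz=$(awk '/cpu MHz/ {printf "%.0f ", $4}' /proc/cpuinfo)
    echo "run $i:$t | MHz: $mhz"
done
```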
{
"msg_contents": "On Sun, 2019-12-15 at 23:59 -0600, Shijia Wei wrote:\n> I am running TPC-H on recent postgresql (12.0 and 12.1).\n> On some of the queries (that may involve parallel scans) I see this interesting behavior:\n> When these queries are executed back-to-back (sent from psql interactive terminal), the total execution time of them increase monotonically.\n> \n> I simplified query-1 to demonstrate this effect:\n> ``` example.sql\n> explain (analyze, buffers) select\n> max(l_shipdate) as max_data,\n> count(*) as count_order\n> from\n> lineitem\n> where\n> l_shipdate <= date '1998-12-01' - interval '20' day;\n> ```\n> \n> When I execute (from fish) following command:\n> `for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`\n> The results I get are as follows:\n> \"\n> Execution Time: 184.864 ms\n> Execution Time: 192.758 ms\n> Execution Time: 197.380 ms\n> Execution Time: 200.384 ms\n> Execution Time: 202.950 ms\n> Execution Time: 205.695 ms\n> Execution Time: 208.082 ms\n> Execution Time: 209.108 ms\n> Execution Time: 212.428 ms\n> Execution Time: 214.539 ms\n> Execution Time: 215.799 ms\n> Execution Time: 219.057 ms\n> Execution Time: 222.102 ms\n> Execution Time: 223.779 ms\n> Execution Time: 227.819 ms\n> Execution Time: 229.710 ms\n> Execution Time: 239.439 ms\n> Execution Time: 237.649 ms\n> Execution Time: 249.178 ms\n> Execution Time: 261.268 ms\n\nI don't know TPC-H, but the slowdown is not necessarily surprising:\n\nIf the number of rows that satisfy the condition keeps growing over time,\ncounting those rows will necessarily take longer.\n\nMaybe you can provide more details, for example EXPLAIN (ANALYZE, BUFFERS)\noutput for the query when it is fast and when it is slow.\n\nYours,\nLaurenz Albe\n-- \n+43-670-6056265\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 16 Dec 2019 14:25:49 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "Shijia Wei <[email protected]> writes:\n> I am running TPC-H on recent postgresql (12.0 and 12.1).\n> On some of the queries (that may involve parallel scans) I see this\n> interesting behavior:\n> When these queries are executed back-to-back (sent from psql interactive\n> terminal), the total execution time of them increase monotonically.\n\nFWIW, I can't reproduce this here. Using a different chosen-at-random\nquery, I tried\n\n$ for i in `seq 1 20`; do \n> psql -c 'explain (analyze) select * from tenk1 a,tenk1 b where a.hundred=b.hundred;' regression | grep Execution\n> done\n Execution Time: 468.548 ms\n Execution Time: 467.905 ms\n Execution Time: 467.634 ms\n Execution Time: 465.852 ms\n Execution Time: 463.328 ms\n Execution Time: 462.541 ms\n Execution Time: 463.922 ms\n Execution Time: 466.171 ms\n Execution Time: 464.778 ms\n Execution Time: 464.474 ms\n Execution Time: 466.087 ms\n Execution Time: 463.092 ms\n Execution Time: 463.700 ms\n Execution Time: 468.924 ms\n Execution Time: 464.970 ms\n Execution Time: 464.844 ms\n Execution Time: 464.665 ms\n Execution Time: 465.247 ms\n Execution Time: 465.931 ms\n Execution Time: 466.722 ms\n\n\n> When there a wait period in-between queries, (e.g. sleep 10) in the above\n> for loop, this increasing execution time behavior goes a way.\n\nA conceivable theory is that the previous backends haven't exited yet and\nthe extra runtime represents overhead due to having lots of active\nPGPROC entries. This is pretty hard to credit on a multi-core machine,\nhowever. I think it'd require assuming that the old backends have some\nseconds' worth of cleanup work to do before they can exit, which makes\nlittle sense. (Unless, perhaps, you have turned on coverage\ninstrumentation, or some other expensive debug monitoring?)\n\nI concur with the suggestion to try to pin down where the cycles are\ngoing. I'd suggest using perf or some similar tool.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 08:50:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
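One cheap way to probe the leftover-backends theory above, sketched against the `tpch` database from the report (an assumption, not something the poster ran): if exited psql sessions were lingering and bloating the set of active PGPROC entries, the per-state session counts would keep climbing while the timing loop runs.

```bash
# Refresh once a second while the 20-iteration loop is running elsewhere.
watch -n 1 'psql tpch -Atc "SELECT state, count(*) FROM pg_stat_activity GROUP BY state;"'
```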
{
"msg_contents": "Hi Laurenz,\n\nEach time the identical query executes, the total number of rows selected\nstays the same. The table is actually not modified between/during runs.\n\nThe query plan stays the same between fast and slow runs. Please find two\ncopied here:\nThe first one is the output of the first query in the loop 1-to-20; The\nsecond one is the output of the last query (20th).\n\n1st Query:\n Finalize Aggregate (cost=126840.37..126840.38 rows=1 width=12) (actual\ntime=178.825..178.826 rows=1 loops=1)\n Buffers: shared hit=17074 read=16388\n -> Gather (cost=126839.73..126840.34 rows=6 width=12) (actual\ntime=178.786..180.064 rows=7 loops=1)\n Workers Planned: 6\n Workers Launched: 6\n Buffers: shared hit=17074 read=16388\n -> Partial Aggregate (cost=125839.73..125839.74 rows=1 width=12)\n(actual time=176.781..176.781 rows=1 loops=7)\n Buffers: shared hit=17074 read=16388\n -> Parallel Index Only Scan using i_l_shipdate on lineitem\n (cost=0.43..120842.46 rows=999455 width=4) (actual time=0.045..114.871\nrows=856704 loops=7)\n Index Cond: (l_shipdate <= '1998-11-11\n00:00:00'::timestamp without time zone)\n Heap Fetches: 0\n Buffers: shared hit=17074 read=16388\n Planning Time: 0.458 ms\n Execution Time: 180.111 ms\n(14 rows)\n\n20th Query:\n Finalize Aggregate (cost=126840.37..126840.38 rows=1 width=12) (actual\ntime=223.928..223.929 rows=1 loops=1)\n Buffers: shared hit=17037 read=16390\n -> Gather (cost=126839.73..126840.34 rows=6 width=12) (actual\ntime=223.856..225.474 rows=7 loops=1)\n Workers Planned: 6\n Workers Launched: 6\n Buffers: shared hit=17037 read=16390\n -> Partial Aggregate (cost=125839.73..125839.74 rows=1 width=12)\n(actual time=221.918..221.918 rows=1 loops=7)\n Buffers: shared hit=17037 read=16390\n -> Parallel Index Only Scan using i_l_shipdate on lineitem\n (cost=0.43..120842.46 rows=999455 width=4) (actual time=0.062..143.808\nrows=856704 loops=7)\n Index Cond: (l_shipdate <= '1998-11-11\n00:00:00'::timestamp without time zone)\n Heap Fetches: 0\n Buffers: shared hit=17037 read=16390\n Planning Time: 0.552 ms\n Execution Time: 225.529 ms\n(14 rows)\n\nOne difference I noticed here is that \"actual time\" of the Parallel Index\nOnly Scan increased from 114ms to 143ms.\nThe same holds for other examples that involve Parallel Seq Scan.\n\nThanks,\nShijia\n\n\nOn Mon, Dec 16, 2019 at 7:25 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Sun, 2019-12-15 at 23:59 -0600, Shijia Wei wrote:\n> > I am running TPC-H on recent postgresql (12.0 and 12.1).\n> > On some of the queries (that may involve parallel scans) I see this\n> interesting behavior:\n> > When these queries are executed back-to-back (sent from psql interactive\n> terminal), the total execution time of them increase monotonically.\n> >\n> > I simplified query-1 to demonstrate this effect:\n> > ``` example.sql\n> > explain (analyze, buffers) select\n> > max(l_shipdate) as max_data,\n> > count(*) as count_order\n> > from\n> > lineitem\n> > where\n> > l_shipdate <= date '1998-12-01' - interval '20' day;\n> > ```\n> >\n> > When I execute (from fish) following command:\n> > `for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`\n> > The results I get are as follows:\n> > \"\n> > Execution Time: 184.864 ms\n> > Execution Time: 192.758 ms\n> > Execution Time: 197.380 ms\n> > Execution Time: 200.384 ms\n> > Execution Time: 202.950 ms\n> > Execution Time: 205.695 ms\n> > Execution Time: 208.082 ms\n> > Execution Time: 209.108 ms\n> > Execution Time: 212.428 ms\n> > Execution Time: 214.539 
ms\n> > Execution Time: 215.799 ms\n> > Execution Time: 219.057 ms\n> > Execution Time: 222.102 ms\n> > Execution Time: 223.779 ms\n> > Execution Time: 227.819 ms\n> > Execution Time: 229.710 ms\n> > Execution Time: 239.439 ms\n> > Execution Time: 237.649 ms\n> > Execution Time: 249.178 ms\n> > Execution Time: 261.268 ms\n>\n> I don't know TPC-H, but the slowdown is not necessarily surprising:\n>\n> If the number of rows that satisfy the condition keeps growing over time,\n> counting those rows will necessarily take longer.\n>\n> Maybe you can provide more details, for example EXPLAIN (ANALYZE, BUFFERS)\n> output for the query when it is fast and when it is slow.\n>\n> Yours,\n> Laurenz Albe\n> --\n> +43-670-6056265\n> Cybertec Schönig & Schönig GmbH\n> Gröhrmühlgasse 26, A-2700 Wiener Neustadt\n> Web: https://www.cybertec-postgresql.com\n>\n\nHi Laurenz,Each time the identical query executes, the total number of rows selected stays the same. The table is actually not modified between/during runs.The query plan stays the same between fast and slow runs. Please find two copied here:The first one is the output of the first query in the loop 1-to-20; The second one is the output of the last query (20th).1st Query: Finalize Aggregate (cost=126840.37..126840.38 rows=1 width=12) (actual time=178.825..178.826 rows=1 loops=1) Buffers: shared hit=17074 read=16388 -> Gather (cost=126839.73..126840.34 rows=6 width=12) (actual time=178.786..180.064 rows=7 loops=1) Workers Planned: 6 Workers Launched: 6 Buffers: shared hit=17074 read=16388 -> Partial Aggregate (cost=125839.73..125839.74 rows=1 width=12) (actual time=176.781..176.781 rows=1 loops=7) Buffers: shared hit=17074 read=16388 -> Parallel Index Only Scan using i_l_shipdate on lineitem (cost=0.43..120842.46 rows=999455 width=4) (actual time=0.045..114.871 rows=856704 loops=7) Index Cond: (l_shipdate <= '1998-11-11 00:00:00'::timestamp without time zone) Heap Fetches: 0 Buffers: shared hit=17074 read=16388 Planning Time: 0.458 ms Execution Time: 180.111 ms(14 rows)20th Query: Finalize Aggregate (cost=126840.37..126840.38 rows=1 width=12) (actual time=223.928..223.929 rows=1 loops=1) Buffers: shared hit=17037 read=16390 -> Gather (cost=126839.73..126840.34 rows=6 width=12) (actual time=223.856..225.474 rows=7 loops=1) Workers Planned: 6 Workers Launched: 6 Buffers: shared hit=17037 read=16390 -> Partial Aggregate (cost=125839.73..125839.74 rows=1 width=12) (actual time=221.918..221.918 rows=1 loops=7) Buffers: shared hit=17037 read=16390 -> Parallel Index Only Scan using i_l_shipdate on lineitem (cost=0.43..120842.46 rows=999455 width=4) (actual time=0.062..143.808 rows=856704 loops=7) Index Cond: (l_shipdate <= '1998-11-11 00:00:00'::timestamp without time zone) Heap Fetches: 0 Buffers: shared hit=17037 read=16390 Planning Time: 0.552 ms Execution Time: 225.529 ms(14 rows)One difference I noticed here is that \"actual time\" of the Parallel Index Only Scan increased from 114ms to 143ms.The same holds for other examples that involve Parallel Seq Scan.Thanks,ShijiaOn Mon, Dec 16, 2019 at 7:25 AM Laurenz Albe <[email protected]> wrote:On Sun, 2019-12-15 at 23:59 -0600, Shijia Wei wrote:\n> I am running TPC-H on recent postgresql (12.0 and 12.1).\n> On some of the queries (that may involve parallel scans) I see this interesting behavior:\n> When these queries are executed back-to-back (sent from psql interactive terminal), the total execution time of them increase monotonically.\n> \n> I simplified query-1 to demonstrate this effect:\n> ``` 
example.sql\n> explain (analyze, buffers) select\n> max(l_shipdate) as max_data,\n> count(*) as count_order\n> from\n> lineitem\n> where\n> l_shipdate <= date '1998-12-01' - interval '20' day;\n> ```\n> \n> When I execute (from fish) following command:\n> `for i in (seq 1 20); psql tpch < example.sql | grep Execution; end`\n> The results I get are as follows:\n> \"\n> Execution Time: 184.864 ms\n> Execution Time: 192.758 ms\n> Execution Time: 197.380 ms\n> Execution Time: 200.384 ms\n> Execution Time: 202.950 ms\n> Execution Time: 205.695 ms\n> Execution Time: 208.082 ms\n> Execution Time: 209.108 ms\n> Execution Time: 212.428 ms\n> Execution Time: 214.539 ms\n> Execution Time: 215.799 ms\n> Execution Time: 219.057 ms\n> Execution Time: 222.102 ms\n> Execution Time: 223.779 ms\n> Execution Time: 227.819 ms\n> Execution Time: 229.710 ms\n> Execution Time: 239.439 ms\n> Execution Time: 237.649 ms\n> Execution Time: 249.178 ms\n> Execution Time: 261.268 ms\n\nI don't know TPC-H, but the slowdown is not necessarily surprising:\n\nIf the number of rows that satisfy the condition keeps growing over time,\ncounting those rows will necessarily take longer.\n\nMaybe you can provide more details, for example EXPLAIN (ANALYZE, BUFFERS)\noutput for the query when it is fast and when it is slow.\n\nYours,\nLaurenz Albe\n-- \n+43-670-6056265\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 16 Dec 2019 11:28:09 -0600",
"msg_from": "Shijia Wei <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 9:28 AM Shijia Wei <[email protected]> wrote:\n> 1st Query:\n\n> Buffers: shared hit=17074 read=16388\n\n> 20th Query:\n\n> Buffers: shared hit=17037 read=16390\n\nWhy do the first and the twentieth executions of the query have almost\nidentical \"buffers shared/read\" numbers? That seems odd.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Dec 2019 12:39:10 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> Why do the first and the twentieth executions of the query have almost\n> identical \"buffers shared/read\" numbers? That seems odd.\n\nIt's repeat execution of the same query, so that doesn't seem odd to me.\n\nThis last set of numbers suggests that there's some issue with the\nparallel execution infrastructure in particular, though I don't see what\nit would be. Doesn't execParallel wait for the workers to exit before\nthe leader finishes its query? If so, how is there any persistent state\nthat would interfere with a later query?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 15:50:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "\n\nCould it be that your CPUs is warming and throttling? You didn't mention the platform used, so I'm not sure whether it's a server or a laptop\n\nNicolas\n\nLe 16 décembre 2019 21:50:17 GMT+01:00, Tom Lane <[email protected]> a écrit :\n>Peter Geoghegan <[email protected]> writes:\n>> Why do the first and the twentieth executions of the query have\n>almost\n>> identical \"buffers shared/read\" numbers? That seems odd.\n>\n>It's repeat execution of the same query, so that doesn't seem odd to\n>me.\n>\n>This last set of numbers suggests that there's some issue with the\n>parallel execution infrastructure in particular, though I don't see\n>what\n>it would be. Doesn't execParallel wait for the workers to exit before\n>the leader finishes its query? If so, how is there any persistent\n>state\n>that would interfere with a later query?\n>\n>\t\t\tregards, tom lane\n\n-- \nEnvoyé de mon appareil Android avec Courriel K-9 Mail. Veuillez excuser ma brièveté.\n\n\n",
"msg_date": "Mon, 16 Dec 2019 23:08:52 +0100",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "Nicolas Charles <[email protected]> writes:\n> Could it be that your CPUs is warming and throttling? You didn't mention the platform used, so I'm not sure whether it's a server or a laptop\n\nHmm, that's an interesting thought. The OP did say the CPU type,\nbut according to Intel's spec page for it [1] the difference between\nbase and turbo frequency is only 4.0 vs 4.2 GHz, which doesn't seem\nlike enough to explain the results ... unless you suppose it actually\nthrottled to below base freq, which surely shouldn't happen that fast.\nMight be worth watching the CPU frequency while doing the test though.\n\nI was speculating about some OS-level problem myself. Plain old \"top\"\nmight be enough to show relevant info if it's in that area.\n\n\t\t\tregards, tom lane\n\n[1] https://ark.intel.com/content/www/us/en/ark/products/88195/intel-core-i7-6700k-processor-8m-cache-up-to-4-20-ghz.html\n\n\n",
"msg_date": "Mon, 16 Dec 2019 17:48:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
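A rough way to act on the "watch the CPU frequency" suggestion, assuming a Linux host; `sensors` comes from the optional lm-sensors package and is not mentioned in the thread.

```bash
# Run alongside the query loop; falling MHz and rising core temperatures
# between iterations would point at throttling rather than at PostgreSQL.
watch -n 1 "grep 'cpu MHz' /proc/cpuinfo; sensors 2>/dev/null | grep -i core"
```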
{
"msg_contents": "On Mon, Dec 16, 2019 at 2:48 PM Tom Lane <[email protected]> wrote:\n\n> unless you suppose it actually\n> throttled to below base freq, which surely shouldn't happen that fast.\n> Might be worth watching the CPU frequency while doing the test though.\n>\n\nWouldn't expect to see such linear progression if that were the case.\nSteps, over a relatively long period of time, would be the likely pattern,\nno? Same goes for some other process fighting for resources. Every\niteration requiring what appears to be a fairly constant increase in\nexecution time (2-5ms on every iteration) seems an unlikely pattern unless\nthe two processes are linked in some way, I would think.\n\nOn Mon, Dec 16, 2019 at 2:48 PM Tom Lane <[email protected]> wrote:unless you suppose it actually\nthrottled to below base freq, which surely shouldn't happen that fast.\nMight be worth watching the CPU frequency while doing the test though.Wouldn't expect to see such linear progression if that were the case. Steps, over a relatively long period of time, would be the likely pattern, no? Same goes for some other process fighting for resources. Every iteration requiring what appears to be a fairly constant increase in execution time (2-5ms on every iteration) seems an unlikely pattern unless the two processes are linked in some way, I would think.",
"msg_date": "Mon, 16 Dec 2019 16:53:31 -0800",
"msg_from": "Sam Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-16 17:48:16 -0500, Tom Lane wrote:\n> Hmm, that's an interesting thought. The OP did say the CPU type,\n> but according to Intel's spec page for it [1] the difference between\n> base and turbo frequency is only 4.0 vs 4.2 GHz, which doesn't seem\n> like enough to explain the results ... unless you suppose it actually\n> throttled to below base freq, which surely shouldn't happen that fast.\n> Might be worth watching the CPU frequency while doing the test though.\n\nFWIW, it takes about 3s for my laptop CPU to throttle way below\nnon-turbo when I put it under strenuous load. Obviously that's a laptop,\nand caused by a firmware bug leading to fans not spinning fast enough\nautomatically. But it'd not take that much for insufficient cooling to\ncause problems in a desktop either. Been there, done that.\n\nBut: I don't see that causing a 10x slowdown as reported in the first\nmail in this thread.\n\n\nI think we need a system-wide perf profile during a few initial \"good\"\nruns and then later from a few \"really bad\" runs. For that you'd have to\nmake sure you compiled postgres with debug symbols (--enable-debug to\nconfigure), and then run something like\nperf record -o fast.data --call-graph dwarf -a sleep 3\nwhile running repeated \"fast\" queries and then\nperf record -o slow.data --call-graph dwarf -a sleep 3\n\nand then show us the results of something like\nperf report -i fast.data -g folded --percent-limit 1 > fast.txt\nperf report -i slow.data -g folded --percent-limit 1 > slow.txt\n\nand also, if your perf is new enough:\nperf diff fast.data slow.data > diff.txt\n\n- Andres\n\n\n",
"msg_date": "Mon, 16 Dec 2019 20:04:45 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "On Mon, 2019-12-16 at 15:50 -0500, Tom Lane wrote:\n> Peter Geoghegan <[email protected]> writes:\n> > Why do the first and the twentieth executions of the query have almost\n> > identical \"buffers shared/read\" numbers? That seems odd.\n> \n> It's repeat execution of the same query, so that doesn't seem odd to me.\n\nReally? Shouldn't the blocks be in shared buffers after a couple\nof executions?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Tue, 17 Dec 2019 14:08:39 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 8:08 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Mon, 2019-12-16 at 15:50 -0500, Tom Lane wrote:\n> > Peter Geoghegan <[email protected]> writes:\n> > > Why do the first and the twentieth executions of the query have almost\n> > > identical \"buffers shared/read\" numbers? That seems odd.\n> >\n> > It's repeat execution of the same query, so that doesn't seem odd to me.\n>\n> Really? Shouldn't the blocks be in shared buffers after a couple\n> of executions?\n>\n\nIf it is doing a seq scan (I don't know if it is) they intentionally use a\nsmall ring buffer to, so they evict their own recently used blocks, rather\nthan evicting other people's blocks. So these blocks won't build up in\nshared_buffers very rapidly just on the basis of repeated seq scans.\n\nCheers,\n\nJeff\n\nOn Tue, Dec 17, 2019 at 8:08 AM Laurenz Albe <[email protected]> wrote:On Mon, 2019-12-16 at 15:50 -0500, Tom Lane wrote:\n> Peter Geoghegan <[email protected]> writes:\n> > Why do the first and the twentieth executions of the query have almost\n> > identical \"buffers shared/read\" numbers? That seems odd.\n> \n> It's repeat execution of the same query, so that doesn't seem odd to me.\n\nReally? Shouldn't the blocks be in shared buffers after a couple\nof executions?If it is doing a seq scan (I don't know if it is) they intentionally use a small ring buffer to, so they evict their own recently used blocks, rather than evicting other people's blocks. So these blocks won't build up in shared_buffers very rapidly just on the basis of repeated seq scans. Cheers,Jeff",
"msg_date": "Tue, 17 Dec 2019 11:11:12 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
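A sketch of how the caching question could be checked directly, assuming the pg_buffercache contrib extension can be installed in the tpch database (an assumption, it is not mentioned in the thread): it counts how many shared_buffers pages currently belong to lineitem and to i_l_shipdate, the index the reported plan actually scans.

```bash
psql tpch <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
-- Count buffered pages per relation, restricted to the current database.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase = (SELECT oid FROM pg_database
                      WHERE datname = current_database())
WHERE c.relname IN ('lineitem', 'i_l_shipdate')
GROUP BY c.relname;
SQL
```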
{
"msg_contents": "On Tue, 2019-12-17 at 11:11 -0500, Jeff Janes wrote:\n> On Tue, Dec 17, 2019 at 8:08 AM Laurenz Albe <[email protected]> wrote:\n> > On Mon, 2019-12-16 at 15:50 -0500, Tom Lane wrote:\n> > > Peter Geoghegan <[email protected]> writes:\n> > > > Why do the first and the twentieth executions of the query have almost\n> > > > identical \"buffers shared/read\" numbers? That seems odd.\n> > > \n> > > It's repeat execution of the same query, so that doesn't seem odd to me.\n> > \n> > Really? Shouldn't the blocks be in shared buffers after a couple\n> > of executions?\n> \n> If it is doing a seq scan (I don't know if it is) they intentionally use a\n> small ring buffer to, so they evict their own recently used blocks, rather\n> than evicting other people's blocks. So these blocks won't build up in\n> shared_buffers very rapidly just on the basis of repeated seq scans.\n\nSure, but according to the execution plans it is doing a Parallel Index Only Scan.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 18 Dec 2019 13:17:10 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Tue, 2019-12-17 at 11:11 -0500, Jeff Janes wrote:\n>> If it is doing a seq scan (I don't know if it is) they intentionally use a\n>> small ring buffer to, so they evict their own recently used blocks, rather\n>> than evicting other people's blocks. So these blocks won't build up in\n>> shared_buffers very rapidly just on the basis of repeated seq scans.\n\n> Sure, but according to the execution plans it is doing a Parallel Index Only Scan.\n\nNonetheless, the presented test case consists of repeatedly doing\nthe same query, in a fresh session each time. If there's not other\nactivity then this should reach some sort of steady state. The\ntable is apparently fairly large, so I don't find it surprising\nthat the steady state fails to be 100% cached.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 08:44:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
},
{
"msg_contents": "Hi everyone!\n\nThanks a ton for this brilliant discussion here!\nIt turned out that Nicolas was correct! I found that the CPU was broken and\nnot spinning at all.\nWith consecutive parallel query execution, the CPU temperature hits 100C\nalmost immediately after 1 or 2 iterations.\nSo the processor starts throttling way below baseline clk frequency to\nsomething like 1.2G or even 1G.\n\nI waited until the new Fan came to report back, and now this weird behavior\nwent away.\n\nThanks,\nShijia\n\nOn Wed, Dec 18, 2019 at 7:44 AM Tom Lane <[email protected]> wrote:\n\n> Laurenz Albe <[email protected]> writes:\n> > On Tue, 2019-12-17 at 11:11 -0500, Jeff Janes wrote:\n> >> If it is doing a seq scan (I don't know if it is) they intentionally\n> use a\n> >> small ring buffer to, so they evict their own recently used blocks,\n> rather\n> >> than evicting other people's blocks. So these blocks won't build up in\n> >> shared_buffers very rapidly just on the basis of repeated seq scans.\n>\n> > Sure, but according to the execution plans it is doing a Parallel Index\n> Only Scan.\n>\n> Nonetheless, the presented test case consists of repeatedly doing\n> the same query, in a fresh session each time. If there's not other\n> activity then this should reach some sort of steady state. The\n> table is apparently fairly large, so I don't find it surprising\n> that the steady state fails to be 100% cached.\n>\n> regards, tom lane\n>\n\nHi everyone!Thanks a ton for this brilliant discussion here!It turned out that Nicolas was correct! I found that the CPU was broken and not spinning at all.With consecutive parallel query execution, the CPU temperature hits 100C almost immediately after 1 or 2 iterations.So the processor starts throttling way below baseline clk frequency to something like 1.2G or even 1G.I waited until the new Fan came to report back, and now this weird behavior went away.Thanks,ShijiaOn Wed, Dec 18, 2019 at 7:44 AM Tom Lane <[email protected]> wrote:Laurenz Albe <[email protected]> writes:\n> On Tue, 2019-12-17 at 11:11 -0500, Jeff Janes wrote:\n>> If it is doing a seq scan (I don't know if it is) they intentionally use a\n>> small ring buffer to, so they evict their own recently used blocks, rather\n>> than evicting other people's blocks. So these blocks won't build up in\n>> shared_buffers very rapidly just on the basis of repeated seq scans.\n\n> Sure, but according to the execution plans it is doing a Parallel Index Only Scan.\n\nNonetheless, the presented test case consists of repeatedly doing\nthe same query, in a fresh session each time. If there's not other\nactivity then this should reach some sort of steady state. The\ntable is apparently fairly large, so I don't find it surprising\nthat the steady state fails to be 100% cached.\n\n regards, tom lane",
"msg_date": "Thu, 19 Dec 2019 23:21:10 -0600",
"msg_from": "Shijia Wei <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consecutive Query Executions with Increasing Execution Time"
}
] |
[
{
"msg_contents": "I'm using postgres 9.4.17 on centos 7.\nI check the running queries with the following SQL:\nSELECT\n procpid,\n start,\n now() - start AS lap,\n current_query\nFROM\n (SELECT\n backendid,\n pg_stat_get_backend_pid(S.backendid) AS procpid,\n pg_stat_get_backend_activity_start(S.backendid) AS start,\n pg_stat_get_backend_activity(S.backendid) AS current_query\n FROM\n (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n ) AS S\nWHERE\n current_query <> '<IDLE>'\nORDER BY\n lap DESC;\n\nThen, I found a SQL that has run for some days (and still running):\nprocpid | 32638\nstart | 2019-11-25 16:29:29.529318+08\nlap | 21 days 18:24:54.707369\ncurrent_query | DEALLOCATE pdo_stmt_00000388\n\nI tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no\neffects.\n\nWhat's this query and what shall I do for it?\n\nBest Wishes\nKaijiang\n\nI'm using postgres 9.4.17 on centos 7.I check the running queries with the following SQL:SELECT procpid, start, now() - start AS lap, current_query FROM (SELECT backendid, pg_stat_get_backend_pid(S.backendid) AS procpid, pg_stat_get_backend_activity_start(S.backendid) AS start, pg_stat_get_backend_activity(S.backendid) AS current_query FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS S ) AS S WHERE current_query <> '<IDLE>' ORDER BY lap DESC;Then, I found a SQL that has run for some days (and still running):procpid | 32638start | 2019-11-25 16:29:29.529318+08lap | 21 days 18:24:54.707369current_query | DEALLOCATE pdo_stmt_00000388I tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no effects.What's this query and what shall I do for it?Best WishesKaijiang",
"msg_date": "Tue, 17 Dec 2019 10:58:17 +0800",
"msg_from": "Kaijiang Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "weird long time query"
},
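For reference, a sketch of the same check written against the pg_stat_activity view, which is the interface the replies below recommend; on 9.2 and later there is no '<IDLE>' marker, and idle sessions are filtered out by state instead. The `-d postgres` database name is a placeholder.

```bash
psql -d postgres -c "
SELECT pid,
       query_start,
       now() - query_start AS lap,
       state,
       query
FROM   pg_stat_activity
WHERE  state <> 'idle'
ORDER  BY lap DESC NULLS LAST;
"
```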
{
"msg_contents": "út 17. 12. 2019 v 11:45 odesílatel Kaijiang Chen <[email protected]>\nnapsal:\n\n> I'm using postgres 9.4.17 on centos 7.\n> I check the running queries with the following SQL:\n> SELECT\n> procpid,\n> start,\n> now() - start AS lap,\n> current_query\n> FROM\n> (SELECT\n> backendid,\n> pg_stat_get_backend_pid(S.backendid) AS procpid,\n> pg_stat_get_backend_activity_start(S.backendid) AS start,\n> pg_stat_get_backend_activity(S.backendid) AS current_query\n> FROM\n> (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n> ) AS S\n> WHERE\n> current_query <> '<IDLE>'\n> ORDER BY\n> lap DESC;\n>\n\nI think so this query is weird - probably this query was finished\n\nyou should to use constraint\n\nWHERE state <> 'idle';\n\nRegards\n\nPavel\n\n\n> Then, I found a SQL that has run for some days (and still running):\n> procpid | 32638\n> start | 2019-11-25 16:29:29.529318+08\n> lap | 21 days 18:24:54.707369\n> current_query | DEALLOCATE pdo_stmt_00000388\n>\n> I tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no\n> effects.\n>\n> What's this query and what shall I do for it?\n>\n> Best Wishes\n> Kaijiang\n>\n>\n\nút 17. 12. 2019 v 11:45 odesílatel Kaijiang Chen <[email protected]> napsal:I'm using postgres 9.4.17 on centos 7.I check the running queries with the following SQL:SELECT procpid, start, now() - start AS lap, current_query FROM (SELECT backendid, pg_stat_get_backend_pid(S.backendid) AS procpid, pg_stat_get_backend_activity_start(S.backendid) AS start, pg_stat_get_backend_activity(S.backendid) AS current_query FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS S ) AS S WHERE current_query <> '<IDLE>' ORDER BY lap DESC;I think so this query is weird - probably this query was finished you should to use constraint WHERE state <> 'idle';RegardsPavelThen, I found a SQL that has run for some days (and still running):procpid | 32638start | 2019-11-25 16:29:29.529318+08lap | 21 days 18:24:54.707369current_query | DEALLOCATE pdo_stmt_00000388I tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no effects.What's this query and what shall I do for it?Best WishesKaijiang",
"msg_date": "Tue, 17 Dec 2019 12:08:33 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird long time query"
},
{
"msg_contents": "Kaijiang Chen <[email protected]> writes:\n> I'm using postgres 9.4.17 on centos 7.\n> I check the running queries with the following SQL:\n> SELECT\n> procpid,\n> start,\n> now() - start AS lap,\n> current_query\n> FROM\n> (SELECT\n> backendid,\n> pg_stat_get_backend_pid(S.backendid) AS procpid,\n> pg_stat_get_backend_activity_start(S.backendid) AS start,\n> pg_stat_get_backend_activity(S.backendid) AS current_query\n> FROM\n> (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n> ) AS S\n> WHERE\n> current_query <> '<IDLE>'\n> ORDER BY\n> lap DESC;\n\nDon't know where you got this query from, but it's wrong for any PG\nversion more recent than (I think) 9.1. We don't use \"<IDLE>\" as an\nindicator of idle sessions anymore; rather, those can be identified\nby having state = 'idle'. What's in the query column for such a session\nis its last query.\n\n> Then, I found a SQL that has run for some days (and still running):\n> procpid | 32638\n> start | 2019-11-25 16:29:29.529318+08\n> lap | 21 days 18:24:54.707369\n> current_query | DEALLOCATE pdo_stmt_00000388\n\nIt's not running. That was the last query it ran, back in November :-(\nYou could zap the session with pg_terminate_backend(), but\npg_cancel_backend() is not going to have any effect because there's\nno active query.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Dec 2019 12:04:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird long time query"
},
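Spelled out as a sketch: pg_cancel_backend() only interrupts a query in progress, so it does nothing to an idle session, while pg_terminate_backend() closes the session itself. The pid and the extra state filter here are illustrative.

```bash
psql -d postgres -c "
SELECT pid, state, pg_terminate_backend(pid) AS terminated
FROM   pg_stat_activity
WHERE  pid = 32638        -- the session from the message above
  AND  state = 'idle';    -- only zap it if it really is idle
"
```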
{
"msg_contents": "I think I should also report it as a bug since logically, it couldn't exist.\n\nOn Wed, Dec 18, 2019 at 1:04 AM Tom Lane <[email protected]> wrote:\n\n> Kaijiang Chen <[email protected]> writes:\n> > I'm using postgres 9.4.17 on centos 7.\n> > I check the running queries with the following SQL:\n> > SELECT\n> > procpid,\n> > start,\n> > now() - start AS lap,\n> > current_query\n> > FROM\n> > (SELECT\n> > backendid,\n> > pg_stat_get_backend_pid(S.backendid) AS procpid,\n> > pg_stat_get_backend_activity_start(S.backendid) AS start,\n> > pg_stat_get_backend_activity(S.backendid) AS current_query\n> > FROM\n> > (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n> > ) AS S\n> > WHERE\n> > current_query <> '<IDLE>'\n> > ORDER BY\n> > lap DESC;\n>\n> Don't know where you got this query from, but it's wrong for any PG\n> version more recent than (I think) 9.1. We don't use \"<IDLE>\" as an\n> indicator of idle sessions anymore; rather, those can be identified\n> by having state = 'idle'. What's in the query column for such a session\n> is its last query.\n>\n> > Then, I found a SQL that has run for some days (and still running):\n> > procpid | 32638\n> > start | 2019-11-25 16:29:29.529318+08\n> > lap | 21 days 18:24:54.707369\n> > current_query | DEALLOCATE pdo_stmt_00000388\n>\n> It's not running. That was the last query it ran, back in November :-(\n> You could zap the session with pg_terminate_backend(), but\n> pg_cancel_backend() is not going to have any effect because there's\n> no active query.\n>\n> regards, tom lane\n>\n\nI think I should also report it as a bug since logically, it couldn't exist.On Wed, Dec 18, 2019 at 1:04 AM Tom Lane <[email protected]> wrote:Kaijiang Chen <[email protected]> writes:\n> I'm using postgres 9.4.17 on centos 7.\n> I check the running queries with the following SQL:\n> SELECT\n> procpid,\n> start,\n> now() - start AS lap,\n> current_query\n> FROM\n> (SELECT\n> backendid,\n> pg_stat_get_backend_pid(S.backendid) AS procpid,\n> pg_stat_get_backend_activity_start(S.backendid) AS start,\n> pg_stat_get_backend_activity(S.backendid) AS current_query\n> FROM\n> (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n> ) AS S\n> WHERE\n> current_query <> '<IDLE>'\n> ORDER BY\n> lap DESC;\n\nDon't know where you got this query from, but it's wrong for any PG\nversion more recent than (I think) 9.1. We don't use \"<IDLE>\" as an\nindicator of idle sessions anymore; rather, those can be identified\nby having state = 'idle'. What's in the query column for such a session\nis its last query.\n\n> Then, I found a SQL that has run for some days (and still running):\n> procpid | 32638\n> start | 2019-11-25 16:29:29.529318+08\n> lap | 21 days 18:24:54.707369\n> current_query | DEALLOCATE pdo_stmt_00000388\n\nIt's not running. That was the last query it ran, back in November :-(\nYou could zap the session with pg_terminate_backend(), but\npg_cancel_backend() is not going to have any effect because there's\nno active query.\n\n regards, tom lane",
"msg_date": "Wed, 18 Dec 2019 11:23:37 +0800",
"msg_from": "Kaijiang Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird long time query"
},
{
"msg_contents": "I'm using postgres 9.4.17 on centos 7.\nI check the running queries with the following SQL:\nSELECT\n procpid,\n start,\n now() - start AS lap,\n current_query\nFROM\n (SELECT\n backendid,\n pg_stat_get_backend_pid(S.backendid) AS procpid,\n pg_stat_get_backend_activity_start(S.backendid) AS start,\n pg_stat_get_backend_activity(S.backendid) AS current_query\n FROM\n (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n ) AS S\nWHERE\n current_query <> '<IDLE>'\nORDER BY\n lap DESC;\n\nThen, I found a SQL that has run for some days (and still running):\nprocpid | 32638\nstart | 2019-11-25 16:29:29.529318+08\nlap | 21 days 18:24:54.707369\ncurrent_query | DEALLOCATE pdo_stmt_00000388\n\nI tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no\neffects.\n\nWhat's this query and what shall I do for it?\n\nI think it is a bug since logically, this query should be gone.\n\nBest Wishes\nKaijiang\n\nI'm using postgres 9.4.17 on centos 7.I check the running queries with the following SQL:SELECT procpid, start, now() - start AS lap, current_query FROM (SELECT backendid, pg_stat_get_backend_pid(S.backendid) AS procpid, pg_stat_get_backend_activity_start(S.backendid) AS start, pg_stat_get_backend_activity(S.backendid) AS current_query FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS S ) AS S WHERE current_query <> '<IDLE>' ORDER BY lap DESC;Then, I found a SQL that has run for some days (and still running):procpid | 32638start | 2019-11-25 16:29:29.529318+08lap | 21 days 18:24:54.707369current_query | DEALLOCATE pdo_stmt_00000388I tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no effects.What's this query and what shall I do for it?I think it is a bug since logically, this query should be gone.Best WishesKaijiang",
"msg_date": "Wed, 18 Dec 2019 11:25:32 +0800",
"msg_from": "Kaijiang Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: weird long time query"
},
{
"msg_contents": "Hello,I encountered into this kernel message, and I cannot login into the Linux system anymore:\r\n\r\nDec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 120 seconds.Dec 17 23:02:30 hq-pg kernel: \"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\" disables this message.Dec 17 23:02:30 hq-pg kernel: sync D ffff965ebabd1040 0 6573 6572 0x00000080Dec 17 23:02:30 hq-pg kernel: Call Trace:Dec 17 23:02:30 hq-pg kernel: [<ffffffffa48760a0>] ? generic_write_sync+0x70/0x70\r\nAfter some google I guess it's the problem that IO speed is low, while the insert requests are coming too much quickly.So PG put these into cache first then kernel called sync.I know I can queue the requests, so that POSTGRES will not accept these requests which will result in an increase in system cache.But is there any way I can tell POSTGRES, that you can only handle 20000 records per second, or 4M per second, please don't accept inserts more than that speed.For me, POSTGRES just waiting is much better than current behavior.\r\nAny help will be much appreciated.\r\n\r\nThanks,James\nHello,I encountered into this kernel message, and I cannot login into the Linux system anymore:Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 120 seconds.Dec 17 23:02:30 hq-pg kernel: \"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\" disables this message.Dec 17 23:02:30 hq-pg kernel: sync D ffff965ebabd1040 0 6573 6572 0x00000080Dec 17 23:02:30 hq-pg kernel: Call Trace:Dec 17 23:02:30 hq-pg kernel: [<ffffffffa48760a0>] ? generic_write_sync+0x70/0x70After some google I guess it's the problem that IO speed is low, while the insert requests are coming too much quickly.So PG put these into cache first then kernel called sync.I know I can queue the requests, so that POSTGRES will not accept these requests which will result in an increase in system cache.But is there any way I can tell POSTGRES, that you can only handle 20000 records per second, or 4M per second, please don't accept inserts more than that speed.For me, POSTGRES just waiting is much better than current behavior.Any help will be much appreciated.Thanks,James",
"msg_date": "Wed, 18 Dec 2019 17:53:26 +0800",
"msg_from": "\"=?utf-8?B?SmFtZXMo546L5petKQ==?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "How to prevent POSTGRES killing linux system from accepting too much\n inserts?"
},
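A diagnostic sketch for this situation, assuming a Linux host and a privileged shell (none of these commands appear in the original report): it shows how much dirty data the kernel is sitting on, how the writeback thresholds are configured, and whether PostgreSQL checkpoints are being forced by WAL volume rather than by the timer.

```bash
# Kernel side: large, persistent Dirty/Writeback numbers on slow storage
# are what eventually make a sync(2) call hang the way the log above shows.
grep -E 'Dirty|Writeback' /proc/meminfo
sysctl vm.dirty_background_ratio vm.dirty_ratio \
       vm.dirty_background_bytes vm.dirty_bytes

# PostgreSQL side: requested (WAL-driven) checkpoints vs. timed ones,
# and how many buffers backends had to write out themselves.
psql -d postgres -c "SELECT checkpoints_timed, checkpoints_req,
                            buffers_checkpoint, buffers_backend
                     FROM   pg_stat_bgwriter;"
```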
{
"msg_contents": "On Wed, Dec 18, 2019 at 11:25:32AM +0800, Kaijiang Chen wrote:\n> I'm using postgres 9.4.17 on centos 7.\n> I check the running queries with the following SQL:\n> SELECT\n> procpid,\n> start,\n> now() - start AS lap,\n> current_query\n> FROM\n> (SELECT\n> backendid,\n> pg_stat_get_backend_pid(S.backendid) AS procpid,\n> pg_stat_get_backend_activity_start(S.backendid) AS start,\n> pg_stat_get_backend_activity(S.backendid) AS current_query\n> FROM\n> (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n> ) AS S\n> WHERE\n> current_query <> '<IDLE>'\n> ORDER BY\n> lap DESC;\n> \n> Then, I found a SQL that has run for some days (and still running):\n> procpid | 32638\n> start | 2019-11-25 16:29:29.529318+08\n> lap | 21 days 18:24:54.707369\n> current_query | DEALLOCATE pdo_stmt_00000388\n> \n> I tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no\n> effects.\n> \n> What's this query and what shall I do for it?\n> \n> I think it is a bug since logically, this query should be gone.\n\nIt's not a bug. Most likely this backend is not doing anything.\n\nYou're using old way to check if backend is working - current_query <>\n'<IDLE>';\n\nCheck: select * from pg_stat_activity where pid = 32638 \n\nMost likely you'll see state = 'idle'\n\nIn such cases, query just shows last executed query, not currently\nrunning one.\n\nAlso - WHY are you calling internal pg* functions directly, instead of\nusing pg_stat_activity view?\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Wed, 18 Dec 2019 15:06:31 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: weird long time query"
},
{
"msg_contents": "Thanks!\nI learn the SQL from the web. pg views should be better.\nBTW, I got the similar result (still see that proc) with \"select * from\npg_stat_activity\":\n\nbackend_start | 2019-11-25 16:27:05.103901+08\nxact_start |\nquery_start | 2019-11-25 16:29:29.529318+08\nstate_change | 2019-11-25 16:29:29.529344+08\nwaiting | f\nstate | idle\nbackend_xid |\nbackend_xmin |\nquery | DEALLOCATE pdo_stmt_00000388\n\nLooks not very nice :-)\n\nOn Wed, Dec 18, 2019 at 10:06 PM hubert depesz lubaczewski <\[email protected]> wrote:\n\n> On Wed, Dec 18, 2019 at 11:25:32AM +0800, Kaijiang Chen wrote:\n> > I'm using postgres 9.4.17 on centos 7.\n> > I check the running queries with the following SQL:\n> > SELECT\n> > procpid,\n> > start,\n> > now() - start AS lap,\n> > current_query\n> > FROM\n> > (SELECT\n> > backendid,\n> > pg_stat_get_backend_pid(S.backendid) AS procpid,\n> > pg_stat_get_backend_activity_start(S.backendid) AS start,\n> > pg_stat_get_backend_activity(S.backendid) AS current_query\n> > FROM\n> > (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n> > ) AS S\n> > WHERE\n> > current_query <> '<IDLE>'\n> > ORDER BY\n> > lap DESC;\n> >\n> > Then, I found a SQL that has run for some days (and still running):\n> > procpid | 32638\n> > start | 2019-11-25 16:29:29.529318+08\n> > lap | 21 days 18:24:54.707369\n> > current_query | DEALLOCATE pdo_stmt_00000388\n> >\n> > I tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no\n> > effects.\n> >\n> > What's this query and what shall I do for it?\n> >\n> > I think it is a bug since logically, this query should be gone.\n>\n> It's not a bug. Most likely this backend is not doing anything.\n>\n> You're using old way to check if backend is working - current_query <>\n> '<IDLE>';\n>\n> Check: select * from pg_stat_activity where pid = 32638\n>\n> Most likely you'll see state = 'idle'\n>\n> In such cases, query just shows last executed query, not currently\n> running one.\n>\n> Also - WHY are you calling internal pg* functions directly, instead of\n> using pg_stat_activity view?\n>\n> Best regards,\n>\n> depesz\n>\n>\n\nThanks!I learn the SQL from the web. 
pg views should be better.BTW, I got the similar result (still see that proc) with \"select * from pg_stat_activity\":backend_start | 2019-11-25 16:27:05.103901+08xact_start | query_start | 2019-11-25 16:29:29.529318+08state_change | 2019-11-25 16:29:29.529344+08waiting | fstate | idlebackend_xid | backend_xmin | query | DEALLOCATE pdo_stmt_00000388Looks not very nice :-)On Wed, Dec 18, 2019 at 10:06 PM hubert depesz lubaczewski <[email protected]> wrote:On Wed, Dec 18, 2019 at 11:25:32AM +0800, Kaijiang Chen wrote:\n> I'm using postgres 9.4.17 on centos 7.\n> I check the running queries with the following SQL:\n> SELECT\n> procpid,\n> start,\n> now() - start AS lap,\n> current_query\n> FROM\n> (SELECT\n> backendid,\n> pg_stat_get_backend_pid(S.backendid) AS procpid,\n> pg_stat_get_backend_activity_start(S.backendid) AS start,\n> pg_stat_get_backend_activity(S.backendid) AS current_query\n> FROM\n> (SELECT pg_stat_get_backend_idset() AS backendid) AS S\n> ) AS S\n> WHERE\n> current_query <> '<IDLE>'\n> ORDER BY\n> lap DESC;\n> \n> Then, I found a SQL that has run for some days (and still running):\n> procpid | 32638\n> start | 2019-11-25 16:29:29.529318+08\n> lap | 21 days 18:24:54.707369\n> current_query | DEALLOCATE pdo_stmt_00000388\n> \n> I tried to kill it with: SELECT pg_cancel_backend(32638) but it takes no\n> effects.\n> \n> What's this query and what shall I do for it?\n> \n> I think it is a bug since logically, this query should be gone.\n\nIt's not a bug. Most likely this backend is not doing anything.\n\nYou're using old way to check if backend is working - current_query <>\n'<IDLE>';\n\nCheck: select * from pg_stat_activity where pid = 32638 \n\nMost likely you'll see state = 'idle'\n\nIn such cases, query just shows last executed query, not currently\nrunning one.\n\nAlso - WHY are you calling internal pg* functions directly, instead of\nusing pg_stat_activity view?\n\nBest regards,\n\ndepesz",
"msg_date": "Thu, 19 Dec 2019 00:14:26 +0800",
"msg_from": "Kaijiang Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: weird long time query"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 12:14:26AM +0800, Kaijiang Chen wrote:\n> Thanks!\n> I learn the SQL from the web. pg views should be better.\n> BTW, I got the similar result (still see that proc) with \"select * from\n> pg_stat_activity\":\n> \n> backend_start | 2019-11-25 16:27:05.103901+08\n> xact_start |\n> query_start | 2019-11-25 16:29:29.529318+08\n> state_change | 2019-11-25 16:29:29.529344+08\n> waiting | f\n> state | idle\n> backend_xid |\n> backend_xmin |\n> query | DEALLOCATE pdo_stmt_00000388\n> \n> Looks not very nice :-)\n\nnot sure what you mean by not nice.\n\nAs you can clearly see the backend is *NOT* running anything (state is\nidle).\n\nValue in \"query\" column is simply last query that it ran. It *finished*\nrunning this query at 2019-11-25 16:29:29.529344+08.\n\nSo your app is keeping connection open. It's not Pg problem or a bug.\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Wed, 18 Dec 2019 17:23:49 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: weird long time query"
},
{
"msg_contents": "hubert depesz lubaczewski <[email protected]> writes:\n> On Thu, Dec 19, 2019 at 12:14:26AM +0800, Kaijiang Chen wrote:\n>> BTW, I got the similar result (still see that proc) with \"select * from\n>> pg_stat_activity\":\n>> ...\n>> state | idle\n>> ...\n>> query | DEALLOCATE pdo_stmt_00000388\n>> \n>> Looks not very nice :-)\n\n> not sure what you mean by not nice.\n\nThat's a feature not a bug (and yes, the behavior is documented).\nPeople requested that the view continue to display the last query\nof an idle session. IIRC, the main argument was that otherwise\nit's hard to tell apart a bunch of idle sessions.\n\nIf you don't like it, you can always do something like\n\ncase when state = idle then null else query end\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 11:30:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: weird long time query"
},
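As an executable sketch of the expression quoted above (in an actual statement the state value has to be a quoted string literal, 'idle'):

```bash
psql -d postgres -c "
SELECT pid,
       state,
       CASE WHEN state = 'idle' THEN NULL ELSE query END AS active_query
FROM   pg_stat_activity;
"
```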
{
"msg_contents": "On Wed, 18 Dec 2019 17:53:26 +0800\n\"James(王旭)\" <[email protected]> wrote:\n\n> Hello,I encountered into this kernel message, and I cannot login into\n> the Linux system anymore:\n> \n> Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1Dec 17\n> 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 120\n> seconds.Dec 17 23:02:30 hq-pg kernel: \"echo 0\n> > /proc/sys/kernel/hung_task_timeout_secs\" disables this\n> message.Dec 17 23:02:30 hq-pg kernel: sync \n> D ffff965ebabd1040 0 \n> 6573 6572 0x00000080Dec 17 23:02:30 hq-pg kernel: Call\n> Trace:Dec 17 23:02:30 hq-pg kernel: [<ffffffffa48760a0>] ?\n> generic_write_sync+0x70/0x70 After some google I guess it's the\n> problem that IO speed is low, while the insert requests are coming\n> too much quickly.So PG put these into cache first then kernel called\n> sync.I know I can queue the requests, so that POSTGRES will not\n> accept these requests which will result in an increase in system\n> cache.But is there any way I can tell POSTGRES, that you can only\n> handle 20000 records per second, or 4M per second, please don't\n> accept inserts more than that speed.For me, POSTGRES just waiting is\n> much better than current behavior. Any help will be much appreciated.\n\nThere isn't one magic-bullet solution for this. It may be that you can \ntune Linux, PG, or the filesystem to handle the load more \ngracefully; or that you just need more hardware. Streaming inserts might\nbe better batched and handled via synchronous ETL than pushed in at\nrandom, at that point you can control the resources.\n\nOne approach might be tighter timeouts on the server or client, which\nwill leave the queries failing when the queue gets too high. That\nfrees up resources on the server, at the obvious expense of having\ntransactions roll back. On the other hand, you can end up with \ntimeouts so tight that you end up thrashing, which doesn't help the\nproblem.\n\nCatch from this end is that without more informaton on the system\nyou are dealing with there isn't any quick-and-dirty fix.\n\nI'd suggest looking over:\n\n<https://duckduckgo.com/?q=linux+postgres+tuning&t=ffab&ia=web>\n\nfor suggestions and seeing which ones work or don't. If you have\nmore specific questions on the parameters or how to evaluate the\nstats PG is keeping feel free to ask them here, but you will need\nto be specific as to the stats and situation in which they were\nacquired so that people have enough context to give you a reasonable\nanswer.\n\n-- \nSteven Lembark 3646 Flora Place\nWorkhorse Computing St. Louis, MO 63110\[email protected] +1 888 359 3508\n\n\n",
"msg_date": "Wed, 18 Dec 2019 12:07:28 -0600",
"msg_from": "Steven Lembark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to prevent POSTGRES killing linux system from accepting too\n much inserts?"
},
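One concrete form the "tighter timeouts" idea above could take, as a sketch only: cap statement and lock wait time for the role doing the bulk inserts, so a backed-up write fails fast instead of piling onto the overloaded I/O subsystem. The role name and thresholds are placeholders, not values from the thread.

```bash
psql -d postgres <<'SQL'
-- "loader" is a hypothetical role used only for the ingest connections.
ALTER ROLE loader SET statement_timeout = '5s';
ALTER ROLE loader SET lock_timeout = '2s';
SQL
```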
{
"msg_contents": "On Wed, Dec 18, 2019 at 3:53 AM James(王旭) <[email protected]> wrote:\n>\n> Hello,\n>>\n>> I encountered into this kernel message, and I cannot login into the Linux system anymore:\n>>\n>>\n>>\n>>> Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1\n>>>\n>>> Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 120 seconds.\n>>>\n>>> Dec 17 23:02:30 hq-pg kernel: \"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\" disables this message.\n>>>\n>>> Dec 17 23:02:30 hq-pg kernel: sync D ffff965ebabd1040 0 6573 6572 0x00000080\n>>>\n>>> Dec 17 23:02:30 hq-pg kernel: Call Trace:\n>>>\n>>> Dec 17 23:02:30 hq-pg kernel: [<ffffffffa48760a0>] ? generic_write_sync+0x70/0x70\n>>\n>>\n>> After some google I guess it's the problem that IO speed is low, while the insert requests are coming too much quickly.So PG put these into cache first then kernel called sync.\n>>\n>> I know I can queue the requests, so that POSTGRES will not accept these requests which will result in an increase in system cache.\n>>\n>> But is there any way I can tell POSTGRES, that you can only handle 20000 records per second, or 4M per second, please don't accept inserts more than that speed.\n>>\n>> For me, POSTGRES just waiting is much better than current behavior.\n>>\n>>\n>> Any help will be much appreciated.\n\nThis is more a problem with the o/s than with postgres itself.\n\nsynchronous_commit is one influential parameter that can possibly help\nmitigate the issue with some safety tradeoffs (read the docs). For\nlinux, one possible place to look is tuning dirty_background_ratio and\nrelated parameters. The idea is you want the o/s to be more\naggressive about syncing to reduce the impact of i/o storm; basically\nyou are trading off some burst performance for consistency of\nperformance. Another place to look is checkpoint behavior. Do some\nsearches, there is tons of information about this on the net.\n\nmerlin\n\n\n",
"msg_date": "Wed, 18 Dec 2019 12:34:36 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to prevent POSTGRES killing linux system from accepting too\n much inserts?"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 4:53 AM James(王旭) <[email protected]> wrote:\n\n> Hello,\n>>\n>> I encountered into this kernel message, and I cannot login into the Linux\n>> system anymore:\n>\n>\n>>\n>> Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1\n>>\n>> Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than\n>>> 120 seconds.\n>>\n>> Dec 17 23:02:30 hq-pg kernel: \"echo 0 >\n>>> /proc/sys/kernel/hung_task_timeout_secs\" disables this message.\n>>\n>> Dec 17 23:02:30 hq-pg kernel: sync D ffff965ebabd1040 0\n>>> 6573 6572 0x00000080\n>>\n>> Dec 17 23:02:30 hq-pg kernel: Call Trace:\n>>\n>> Dec 17 23:02:30 hq-pg kernel: [<ffffffffa48760a0>] ?\n>>> generic_write_sync+0x70/0x70\n>>\n>>\n>> After some google I guess it's the problem that IO speed is low, while\n>> the insert requests are coming too much quickly.So PG put these into cache\n>> first then kernel called sync\n>\n>\nCould you expand on what you found in the googling, with links? I've never\nseen these in my kernel log, and I don't know what they mean other than the\nobvious that it is something to do with IO. Also, what kernel and file\nsystem are you using?\n\n\n> .\n>\n> I know I can queue the requests, so that POSTGRES will not accept these\n>> requests which will result in an increase in system cache.\n>\n> But is there any way I can tell POSTGRES, that you can only handle 20000\n>> records per second, or 4M per second, please don't accept inserts more than\n>> that speed.\n>\n> For me, POSTGRES just waiting is much better than current behavior.\n>\n>\nI don't believe there is a setting from within PostgreSQL to do this.\n\nThere was a proposal for a throttle on WAL generation back in February, but\nwith no recent discussion or (visible) progress:\n\nhttps://www.postgresql.org/message-id/flat/2B42AB02-03FC-406B-B92B-18DED2D8D491%40anarazel.de#b63131617e84d3a0ac29da956e6b8c5f\n\n\nI think the real answer here to get a better IO system, or maybe a better\nkernel. Otherwise, once you find a painful workaround for one symptom you\nwill just smack into another one.\n\nCheers,\n\nJeff\n\n>\n\nOn Wed, Dec 18, 2019 at 4:53 AM James(王旭) <[email protected]> wrote:Hello,I encountered into this kernel message, and I cannot login into the Linux system anymore:Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 120 seconds.Dec 17 23:02:30 hq-pg kernel: \"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\" disables this message.Dec 17 23:02:30 hq-pg kernel: sync D ffff965ebabd1040 0 6573 6572 0x00000080Dec 17 23:02:30 hq-pg kernel: Call Trace:Dec 17 23:02:30 hq-pg kernel: [<ffffffffa48760a0>] ? generic_write_sync+0x70/0x70After some google I guess it's the problem that IO speed is low, while the insert requests are coming too much quickly.So PG put these into cache first then kernel called syncCould you expand on what you found in the googling, with links? I've never seen these in my kernel log, and I don't know what they mean other than the obvious that it is something to do with IO. Also, what kernel and file system are you using? 
.I know I can queue the requests, so that POSTGRES will not accept these requests which will result in an increase in system cache.But is there any way I can tell POSTGRES, that you can only handle 20000 records per second, or 4M per second, please don't accept inserts more than that speed.For me, POSTGRES just waiting is much better than current behavior.I don't believe there is a setting from within PostgreSQL to do this.There was a proposal for a throttle on WAL generation back in February, but with no recent discussion or (visible) progress:https://www.postgresql.org/message-id/flat/2B42AB02-03FC-406B-B92B-18DED2D8D491%40anarazel.de#b63131617e84d3a0ac29da956e6b8c5f I think the real answer here to get a better IO system, or maybe a better kernel. Otherwise, once you find a painful workaround for one symptom you will just smack into another one.Cheers,Jeff",
"msg_date": "Wed, 18 Dec 2019 14:09:25 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to prevent POSTGRES killing linux system from accepting too\n much inserts?"
}
] |
[
{
"msg_contents": "Hello,\n\nCurrently we're working on PSQL 11.5 and we're trying upgrade to 12.1.\n\nDuring that we have a problem:\n\ncommand: \"/usr/pgsql-12/bin/pg_dump\" --host /cluster/postgresql --port 50432\n--username postgres --schema-only --quote-all-identifiers --binary-upgrade\n--format=custom --file=\"pg_upgrade_dump_281535902.custom\" 'dbname=sprint'\n>> \"pg_upgrade_dump_281535902.log\" 2>&1\npg_dump: error: query failed: ERROR: out of shared memory\nHINT: You might need to increase max_locks_per_transaction.\npg_dump: error: query was: LOCK TABLE\n\"some_schemaa\".\"table_part_80000000_2018q3\" IN ACCESS SHARE MODE\n\nOn current instance we have about one thousand of partitions, partitioned in\ntwo levels: first by id_product, and second level by quarter of the year, as\nyou can see on above log.\n\nHow have we to calculate shared memory, and (eventually\nmax_locks_per_transaction) to be fit to the limits during upgrade?",
"msg_date": "Tue, 17 Dec 2019 20:03:41 +0000",
"msg_from": "=?iso-8859-2?Q?Piotr_W=B3odarczyk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared memory size during upgrade pgsql with partitions"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 08:03:41PM +0000, Piotr Włodarczyk wrote:\n> Currently we're working on PSQL 11.5 and we're trying upgrade to 12.1.\n> \n> During that we have a problem:\n> \n> command: \"/usr/pgsql-12/bin/pg_dump\" --host /cluster/postgresql --port 50432\n> --username postgres --schema-only --quote-all-identifiers --binary-upgrade\n> --format=custom --file=\"pg_upgrade_dump_281535902.custom\" 'dbname=sprint'\n> >> \"pg_upgrade_dump_281535902.log\" 2>&1\n> pg_dump: error: query failed: ERROR: out of shared memory\n> HINT: You might need to increase max_locks_per_transaction.\n> pg_dump: error: query was: LOCK TABLE\n> \"some_schemaa\".\"table_part_80000000_2018q3\" IN ACCESS SHARE MODE\n> \n> On current instance we have about one thousand of partitions, partitioned in\n> two levels: first by id_product, and second level by quarter of the year, as\n> you can see on above log.\n> \n> How have we to calculate shared memory, and (eventually\n> max_locks_per_transaction) to be fit to the limits during upgrade? \n\nGreat question. Clearly, if you can run that (or similar) pg_dump command,\nthen you can pg_upgrade. I think you could also do pg_upgrade --check,\n\nThe query looks like\n\t\tFROM pg_class c...\n\t\tWHERE c.relkind in ('%c', '%c', '%c', '%c', '%c', '%c', '%c') \"\n\n..and then does:\n\n if (tblinfo[i].dobj.dump &&\n (tblinfo[i].relkind == RELKIND_RELATION ||\n tblinfo->relkind == RELKIND_PARTITIONED_TABLE) &&\n (tblinfo[i].dobj.dump & DUMP_COMPONENTS_REQUIRING_LOCK))\n {\n resetPQExpBuffer(query);\n appendPQExpBuffer(query,\n \"LOCK TABLE %s IN ACCESS SHARE MODE\",\n fmtQualifiedDumpable(&tblinfo[i]));\n ExecuteSqlStatement(fout, query->data);\n }\n\n..then filters by -N/-n/-t/-T (which doesn't apply to pg_upgrade):\n selectDumpableTable(&tblinfo[i], fout);\n\nSo it looks like COUNT(1) FROM pg_class WHERE relkind IN ('r','p') should do it.\n\nBut actually, during pg_upgrade, since nothing else is running, you actually\nhave max_connections*max_locks_per_transaction total locks.\n\nSaid differently, I think you could set max_locks_per_transaction to:\nSELECT (SELECT COUNT(1) FROM pg_class WHERE relkind IN ('r','p'))/current_setting('max_connections')::int;\n\n..probably with a fudge factor of +10 for any system process (and due to\ninteger truncation).\n\nSomeone might say that pg_upgrade or pg_dump could check for that specifically..\n\nJustin\n\n\n",
"msg_date": "Tue, 17 Dec 2019 22:01:19 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared memory size during upgrade pgsql with partitions\n (max_locks_per_transaction)"
}
] |
[
{
"msg_contents": ">> On Tue, Dec 17, 2019 at 08:03:41PM +0000, Piotr Włodarczyk wrote:\n>> Currently we're working on PSQL 11.5 and we're trying upgrade to 12.1.\n>> \n>> During that we have a problem:\n>> \n>> command: \"/usr/pgsql-12/bin/pg_dump\" --host /cluster/postgresql --port\n50432\n>> --username postgres --schema-only --quote-all-identifiers\n--binary-upgrade\n>> --format=custom --file=\"pg_upgrade_dump_281535902.custom\"\n'dbname=sprint'\n>> >> \"pg_upgrade_dump_281535902.log\" 2> &1\n>> pg_dump: error: query failed: ERROR: out of shared memory\n>> HINT: You might need to increase max_locks_per_transaction.\n>> pg_dump: error: query was: LOCK TABLE\n>> \"some_schemaa\".\"table_part_80000000_2018q3\" IN ACCESS SHARE MODE\n>> \n>> On current instance we have about one thousand of partitions, partitioned\nin\n>> two levels: first by id_product, and second level by quarter of the year,\nas\n>> you can see on above log.\n>> \n>> How have we to calculate shared memory, and (eventually\n>> max_locks_per_transaction) to be fit to the limits during upgrade? \n> \n> \n> \n> Great question. Clearly, if you can run that (or similar) pg_dump\ncommand,\n> then you can pg_upgrade. I think you could also do pg_upgrade --check,\n\npg_upgrade --check doesn't prompt any error or warning \n\n> \n> \n> \n> The query looks like\n> \t\tFROM pg_class c...\n> \t\tWHERE c.relkind in ('%c', '%c', '%c', '%c', '%c', '%c',\n'%c') \"\n> \n> \n> \n> ..and then does:\n> \n> \n> \n> if (tblinfo[i].dobj.dump &&\n> (tblinfo[i].relkind == RELKIND_RELATION ||\n> tblinfo-> relkind == RELKIND_PARTITIONED_TABLE) &&\n> (tblinfo[i].dobj.dump &\nDUMP_COMPONENTS_REQUIRING_LOCK))\n> {\n> resetPQExpBuffer(query);\n> appendPQExpBuffer(query,\n> \"LOCK TABLE %s IN\nACCESS SHARE MODE\",\n>\nfmtQualifiedDumpable(&tblinfo[i]));\n> ExecuteSqlStatement(fout, query-> data);\n> }\n> \n> \n> \n> ..then filters by -N/-n/-t/-T (which doesn't apply to pg_upgrade):\n> selectDumpableTable(&tblinfo[i], fout);\n> \n> \n> \n> So it looks like COUNT(1) FROM pg_class WHERE relkind IN ('r','p') should\ndo it.\n> \n> \n> \n> But actually, during pg_upgrade, since nothing else is running, you\nactually\n> have max_connections*max_locks_per_transaction total locks.\n> \n> \n> \n> Said differently, I think you could set max_locks_per_transaction to:\n> SELECT (SELECT COUNT(1) FROM pg_class WHERE relkind IN\n('r','p'))/current_setting('max_connections')::int;\n> \n> \n> \n> ..probably with a fudge factor of +10 for any system process (and due to\n> integer truncation).\n> \n> \n> \n> Someone might say that pg_upgrade or pg_dump could check for that\nspecifically..\n\nYes, and temporarily increase, or HINT how to calculate proper value.\n\n> \n> \n> \n> Justin\n> \n> \n\nWe realized that the problem is with pg_dump doing during pg_upgreade.\n\nNow we're after upgrade and we can't check Yours calculation. We simply\nincreased max_connections until migration passed :) \n\nI'll try to check it on empty, fake database.",
"msg_date": "Wed, 18 Dec 2019 09:16:11 +0000",
"msg_from": "=?iso-8859-2?Q?Piotr_W=B3odarczyk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared memory size during upgrade pgsql with partitions\n (max_locks_per_transaction)"
}
] |
[
{
"msg_contents": "The docs for parallel_tuple_cost are quite terse, as the reference section\nof the docs usually are:\n\n\"Sets the planner's estimate of the cost of transferring one tuple from a\nparallel worker process to another process. The default is 0.1.\"\n\nUsually you can find more extensive discussion of such settings in\ninformal resources like blog posts or mailing lists, but Googling the name\nI don't find much for this setting. Is there good information out there\nsomewhere?\n\nIf you were take the doc description literally, then the default value\nseems much too high, as it doesn't take 10x the (default) cpu_tuple_cost to\ntransfer a tuple up from a parallel worker. On the other hand, you\nprobably don't want a query which consumes 8x the CPU resources just to\nfinish only 5% faster (on an otherwise idle server with 8 CPUs). Maybe\nthis Amdahl factor is what inspired the high default value?\n\nCheers,\n\nJeff\n\nThe docs for parallel_tuple_cost are quite terse, as the reference section of the docs usually are:\"Sets the planner's estimate of the cost of transferring one tuple from a parallel worker process to another process. The default is 0.1.\"Usually you can find more extensive discussion of such settings in informal resources like blog posts or mailing lists, but Googling the name I don't find much for this setting. Is there good information out there somewhere?If you were take the doc description literally, then the default value seems much too high, as it doesn't take 10x the (default) cpu_tuple_cost to transfer a tuple up from a parallel worker. On the other hand, you probably don't want a query which consumes 8x the CPU resources just to finish only 5% faster (on an otherwise idle server with 8 CPUs). Maybe this Amdahl factor is what inspired the high default value?Cheers,Jeff",
"msg_date": "Fri, 20 Dec 2019 13:03:29 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to set parallel_tuple_cost"
},
{
"msg_contents": "On Fri, 2019-12-20 at 13:03 -0500, Jeff Janes wrote:\n> The docs for parallel_tuple_cost are quite terse, as the reference section of the docs usually are:\n> \n> \"Sets the planner's estimate of the cost of transferring one tuple from a parallel worker process to another process. The default is 0.1.\"\n> \n> Usually you can find more extensive discussion of such settings in informal resources like blog posts or mailing lists,\n> but Googling the name I don't find much for this setting. Is there good information out there somewhere?\n> \n> If you were take the doc description literally, then the default value seems much too high, as it doesn't take\n> 10x the (default) cpu_tuple_cost to transfer a tuple up from a parallel worker. On the other hand, you probably\n> don't want a query which consumes 8x the CPU resources just to finish only 5% faster (on an otherwise idle server with 8 CPUs).\n> Maybe this Amdahl factor is what inspired the high default value?\n\nHmm. The parameter was introduced into the discussion here:\nhttps://www.postgresql.org/message-id/CAA4eK1L0dk9D3hARoAb84v2pGvUw4B5YoS4x18ORQREwR%2B1VCg%40mail.gmail.com\nand while the name was changed from \"cpu_tuple_comm_cost\" to \"parallel_tuple_cost\"\nlater, the default value seems not to have been the subject of discussion.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 20 Dec 2019 19:42:19 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to set parallel_tuple_cost"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> The docs for parallel_tuple_cost are quite terse, as the reference section\n> of the docs usually are:\n> \"Sets the planner's estimate of the cost of transferring one tuple from a\n> parallel worker process to another process. The default is 0.1.\"\n\n> If you were take the doc description literally, then the default value\n> seems much too high, as it doesn't take 10x the (default) cpu_tuple_cost to\n> transfer a tuple up from a parallel worker.\n\nReally? If anything, I'd have thought it might be worse than 10x.\nCross-process communication isn't cheap, at least not according to\nmy instincts.\n\n> On the other hand, you\n> probably don't want a query which consumes 8x the CPU resources just to\n> finish only 5% faster (on an otherwise idle server with 8 CPUs). Maybe\n> this Amdahl factor is what inspired the high default value?\n\nI think the large value of parallel_setup_cost is what's meant to\ndiscourage that scenario.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 13:58:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to set parallel_tuple_cost"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-20 13:58:57 -0500, Tom Lane wrote:\n> Jeff Janes <[email protected]> writes:\n> > The docs for parallel_tuple_cost are quite terse, as the reference section\n> > of the docs usually are:\n> > \"Sets the planner's estimate of the cost of transferring one tuple from a\n> > parallel worker process to another process. The default is 0.1.\"\n> \n> > If you were take the doc description literally, then the default value\n> > seems much too high, as it doesn't take 10x the (default) cpu_tuple_cost to\n> > transfer a tuple up from a parallel worker.\n> \n> Really? If anything, I'd have thought it might be worse than 10x.\n> Cross-process communication isn't cheap, at least not according to\n> my instincts.\n\n+1. I did at some point measure the cost of transferring through a\ntuplequeue, and it's quite expensive, compared to local tuple\nhandoff. Some of that is not intrinsic, and could be fixed - e.g. by\njust putting pointers to tuples into the queue, instead of the whole\ntuple (but that's hard due to our process model leading to dynamic shm\nhaving differing addresses). What's worse, putting a tuple into a\ntuplequeue requires the input slot to be materialized into a HeapTuple\n(should probably be MinimalTuple....), which often the input will not\nyet be. So I think it'll often be much worse than 10x.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:24:53 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to set parallel_tuple_cost"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 1:58 PM Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > The docs for parallel_tuple_cost are quite terse, as the reference\n> section\n> > of the docs usually are:\n> > \"Sets the planner's estimate of the cost of transferring one tuple from a\n> > parallel worker process to another process. The default is 0.1.\"\n>\n> > If you were take the doc description literally, then the default value\n> > seems much too high, as it doesn't take 10x the (default) cpu_tuple_cost\n> to\n> > transfer a tuple up from a parallel worker.\n>\n> Really? If anything, I'd have thought it might be worse than 10x.\n> Cross-process communication isn't cheap, at least not according to\n> my instincts.\n>\n\nI was a bit surprised. I set it up so that there was a fine-tunable filter\nwhich can be applied in the parallel workers, and then only the surviving\ntuples get passed up to the leader. The use of a parallel seq scan didn't\nbecome slower than the non-parallel version until over 95% of the tuples\nwere surviving the filter. If I wanted to make the estimated cost\ncross-over point match the measured time cross-over point, I had to mark\nthe parallel_tuple_cost down to about 0.011. This was an 8 CPU machine, an\nAWS m5.4xlarge, with max_parallel_workers_per_gather=7. (On my crummy\n2-CPU Windows 10 laptop running ubuntu via VirtualBox, the cross-over point\nwas closer to 40% of the tuples surviving, and the parallel_tuple_cost to\nmatch cross-over point would be about 0.016, but I don't have enough RAM to\nmake a large enough all-in-shared-buffer table to really get a good\nassessments).\n\nMy method was to make shared_buffers be a large fraction of RAM (55GB, out\nof 64GB), then make a table slightly smaller than that and forced it into\nshared_buffers with pg_prewarm. I set seq_page_cost = random_age_cost = 0,\nto accurately reflect the fact that no IO is occuring.\n\ncreate table para_seq as select floor(random()*10000)::int as id, random()\nas x, md5(random()::text)||md5(random()::text) t from\ngenerate_series(1,8000000*55);\nvacuum ANALYZE para_seq ;\nselect pg_prewarm('para_seq');\n\nexplain (analyze, buffers, settings, timing off) select * from para_seq\nwhere id<9500;\n\nWhere you can change the 9500 to tune the selectivity of the filter. Is\nthis the correct way to try to isolate just the overhead of transferring of\na tuple away from other considerations so it can be measured?\n\nI don't think the fact that EXPLAIN ANALYZE throws away the result set\nwithout reading it should change anything. Reading it should add the same\nfixed overhead to both parallel and non-parallel, so would dilute out\npercentage difference without change absolute differences.\n\nI tried it with wider tuples as well, but not so wide they would activate\nTOAST, and didn't really see a difference in the conclusion.\n\n\n> > On the other hand, you\n> > probably don't want a query which consumes 8x the CPU resources just to\n> > finish only 5% faster (on an otherwise idle server with 8 CPUs). Maybe\n> > this Amdahl factor is what inspired the high default value?\n>\n> I think the large value of parallel_setup_cost is what's meant to\n> discourage that scenario.\n>\n\nI think that can only account for overhead like forking and setting up\nmemory segments. 
The overhead of moving around\ntuples (more than single-threaded execution already moves them around) would need to scale\nwith the number of tuples moved around.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 20 Dec 2019 20:37:15 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to set parallel_tuple_cost"
}
] |
[
{
"msg_contents": "Hi all,\n\nQuery plan quick link: https://explain.depesz.com/s/JVxn\nVersion: PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit\n\ntbl: ~780 million rows, bigint primary key (keytbl), col1 is smallint and\nthere is an index on (col1, col2)\ntmp_tbl: ~10 million rows; columns are identical to tbl, no indices or\nprimary key\nBoth tables are analyzed.\n\nI'm doing an update of tbl from tmp_tbl joining on the primary key column\nof tbl. The planner picks a merge join which takes about 4 hours. If I\nforce it to use a hash join instead (set local enable_mergejoin=false) it\ntakes about 2.5 minutes (see https://explain.depesz.com/s/vtLe for\nresulting plan, it's my current workaround). The index scan done for the\nmerge join below [*] is what eats up the time and planner knows it's\nexpensive [*], but I think it expects it to stop early [**], however it\nappears to index scan the entire table based on the rows removed by the\nfilter [***] and the time taken. I don't think the planner's wrong here,\nthe merge join should break early, the max value of keytbl in tmp_tbl is\nless than all but a small portion of tbl (see **** below). So I think it\nshould have to only go ~12.5 million rows into the index scan before\nstopping. Continued below...\n\nexplain analyze:\nUpdate on schema.tbl t (cost=5831491.55..7428707.35 rows=120536 width=523)\n(actual time=12422900.337..12422900.337 rows=0 loops=1)\n -> Merge Join (cost=5831491.55..*7428707.35[**]* rows=120536 width=523)\n(actual time=121944.122..12406202.383 rows=86663 loops=1)\n Merge Cond: (t.keytbl = tt.keytbl)\n Join Filter: <removed, see link>\n Rows Removed by Join Filter: 9431176\n -> Index Scan using tbl_pkey on schema.tbl t (cost=0.57..\n*302404355.44[*]* rows=9680745 width=273) (actual\ntime=99112.377..12354593.205 rows=9517839 loops=1)\n Filter: (t.col1 = ANY ('{13,14}'::integer[]))\n Rows Removed by Filter: *769791484[***]*\n -> Materialize (cost=2807219.47..2855692.46 rows=9694598\nwidth=489) (actual time=19432.549..31462.007 rows=9616269 loops=1)\n -> Sort (cost=2807219.47..2831455.96 rows=9694598\nwidth=489) (actual time=19432.537..23493.665 rows=9616269 loops=1)\n Sort Key: tt.keytbl\n Sort Method: quicksort Memory: 3487473kB\n -> Seq Scan on schema.tmp_tbl tt\n (cost=0.00..389923.98 rows=9694598 width=489) (actual time=0.023..8791.086\nrows=9692217 loops=1)\nPlanning time: 4.454 ms\nExecution time: 12438992.607 ms\n\nselect max(keytbl) from tmp_tbl;\n max 3940649685073901\nselect count(*) from tbl where keytbl <= 3940649685073901;\n count *12454354 [*****]\nselect max(keytbl) from tbl;\n max 147211412825225362\n\n\nSo far I've been unable to create a smaller / toy example that exhibits the\nsame behavior. Some things that may be unusual about the situation: keytbl\nis bigint and the values are large (all are > 2^48) and sparse/dense (big\nchunks where the id advances by 1 separated by large (> 2^48) regions with\nno rows), the top 200k or so rows of tmp_table by keytbl don't have a\ncorresponding row in tbl, and this is a bit of an older dot release\n(10.4). 
I have a workaround (disabling merge join for the query) so I'm\nmostly trying to figure out what's going on and if I'm understanding the\nsituation correctly.\n\nIt's interesting that even if it worked as expected, the merge join plan\nseems a lot riskier in that if the analyze didn't catch a single large\noutlier value of keytbl in tmp_tbl or a row with a large value for keytbl\nwas inserted into tmp_tbl since the last analyze it could be forced to walk\nthe entire index of the tbl (which based on the filter count looks like it\ninvolves touching each row of this large table for the filter even if it\ndoesn't have a corresponding row to merge to).\n\nAdditional info:\nTable schema (for tbl and tmp_tbl)\n Column | Type | Collation |\nNullable | Default\n-------------------------------+-----------------------------+-----------+----------+-------------------\n keytbl | bigint | |\nnot null |\n | smallint | |\nnot null |\n col1 | smallint | |\nnot null |\n col2 | integer | |\nnot null |\n | integer | |\n |\n | integer | |\n |\n | character varying(100) | |\n |\n | character varying(100) | |\n |\n | date | |\nnot null |\n | integer | |\n |\n | smallint | |\n |\n | smallint | |\n |\n | integer | |\n |\n | smallint | |\n |\n | bigint | |\n |\n | smallint | |\n |\n | character varying(2) | |\n |\n | text | |\n |\n | character varying(3) | |\n |\n | numeric(14,2) | |\n |\n | smallint | |\n |\n | numeric(13,3) | |\n |\n | smallint | |\n |\n | boolean | |\n |\n | smallint | |\n |\n | character varying(2) | |\n |\n | smallint | |\n |\n | character varying(2) | |\n |\n | smallint | |\n |\n | integer | |\n |\n | integer | |\n |\n | bigint | |\n |\n | smallint | |\nnot null | 0\n | timestamp without time zone | |\nnot null | CURRENT_TIMESTAMP\n | timestamp without time zone | |\nnot null | CURRENT_TIMESTAMP\n | timestamp without time zone | |\nnot null | CURRENT_TIMESTAMP\n | timestamp without time zone | |\nnot null | CURRENT_TIMESTAMP\n | integer | |\n |\n | integer | |\n |\n | integer | |\n |\nIndexes (tbl only, not on tmp_tbl):\n \"xxxx\" PRIMARY KEY, btree (keytbl)\n \"index_1\" UNIQUE, btree (col1, col2)\n \"index_2\" btree (x, z DESC) WHERE x IS NOT NULL\n \"index_3\" btree (y, z DESC) WHERE y IS NOT NULL\n\nQuery:\n explain analyze verbose\n UPDATE \"tbl\" t\n SET xxxxx\n FROM \"tmp_tbl\" tt\n WHERE t.\"keytbl\" = tt.\"keytbl\" AND t.col1 IN (13,14) AND (xxx)\n\nThanks,\nTim\n\nHi all,Query plan quick link: https://explain.depesz.com/s/JVxnVersion: PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bittbl: ~780 million rows, bigint primary key (keytbl), col1 is smallint and there is an index on (col1, col2)tmp_tbl: ~10 million rows; columns are identical to tbl, no indices or primary keyBoth tables are analyzed.I'm doing an update of tbl from tmp_tbl joining on the primary key column of tbl. The planner picks a merge join which takes about 4 hours. If I force it to use a hash join instead (set local enable_mergejoin=false) it takes about 2.5 minutes (see https://explain.depesz.com/s/vtLe for resulting plan, it's my current workaround). The index scan done for the merge join below [*] is what eats up the time and planner knows it's expensive [*], but I think it expects it to stop early [**], however it appears to index scan the entire table based on the rows removed by the filter [***] and the time taken. 
I don't think the planner's wrong here, the merge join should break early, the max value of keytbl in tmp_tbl is less than all but a small portion of tbl (see **** below). So I think it should have to only go ~12.5 million rows into the index scan before stopping. Continued below...explain analyze:Update on schema.tbl t (cost=5831491.55..7428707.35 rows=120536 width=523) (actual time=12422900.337..12422900.337 rows=0 loops=1) -> Merge Join (cost=5831491.55..7428707.35[**] rows=120536 width=523) (actual time=121944.122..12406202.383 rows=86663 loops=1) Merge Cond: (t.keytbl = tt.keytbl) Join Filter: <removed, see link> Rows Removed by Join Filter: 9431176 -> Index Scan using tbl_pkey on schema.tbl t (cost=0.57..302404355.44[*] rows=9680745 width=273) (actual time=99112.377..12354593.205 rows=9517839 loops=1) Filter: (t.col1 = ANY ('{13,14}'::integer[])) Rows Removed by Filter: 769791484[***] -> Materialize (cost=2807219.47..2855692.46 rows=9694598 width=489) (actual time=19432.549..31462.007 rows=9616269 loops=1) -> Sort (cost=2807219.47..2831455.96 rows=9694598 width=489) (actual time=19432.537..23493.665 rows=9616269 loops=1) Sort Key: tt.keytbl Sort Method: quicksort Memory: 3487473kB -> Seq Scan on schema.tmp_tbl tt (cost=0.00..389923.98 rows=9694598 width=489) (actual time=0.023..8791.086 rows=9692217 loops=1)Planning time: 4.454 msExecution time: 12438992.607 msselect max(keytbl) from tmp_tbl; max 3940649685073901select count(*) from tbl where keytbl <= 3940649685073901; count 12454354 [****]select max(keytbl) from tbl; max 147211412825225362So far I've been unable to create a smaller / toy example that exhibits the same behavior. Some things that may be unusual about the situation: keytbl is bigint and the values are large (all are > 2^48) and sparse/dense (big chunks where the id advances by 1 separated by large (> 2^48) regions with no rows), the top 200k or so rows of tmp_table by keytbl don't have a corresponding row in tbl, and this is a bit of an older dot release (10.4). 
I have a workaround (disabling merge join for the query) so I'm mostly trying to figure out what's going on and if I'm understanding the situation correctly.It's interesting that even if it worked as expected, the merge join plan seems a lot riskier in that if the analyze didn't catch a single large outlier value of keytbl in tmp_tbl or a row with a large value for keytbl was inserted into tmp_tbl since the last analyze it could be forced to walk the entire index of the tbl (which based on the filter count looks like it involves touching each row of this large table for the filter even if it doesn't have a corresponding row to merge to).Additional info:Table schema (for tbl and tmp_tbl) Column | Type | Collation | Nullable | Default -------------------------------+-----------------------------+-----------+----------+------------------- keytbl | bigint | | not null | | smallint | | not null | col1 | smallint | | not null | col2 | integer | | not null | | integer | | | | integer | | | | character varying(100) | | | | character varying(100) | | | | date | | not null | | integer | | | | smallint | | | | smallint | | | | integer | | | | smallint | | | | bigint | | | | smallint | | | | character varying(2) | | | | text | | | | character varying(3) | | | | numeric(14,2) | | | | smallint | | | | numeric(13,3) | | | | smallint | | | | boolean | | | | smallint | | | | character varying(2) | | | | smallint | | | | character varying(2) | | | | smallint | | | | integer | | | | integer | | | | bigint | | | | smallint | | not null | 0 | timestamp without time zone | | not null | CURRENT_TIMESTAMP | timestamp without time zone | | not null | CURRENT_TIMESTAMP | timestamp without time zone | | not null | CURRENT_TIMESTAMP | timestamp without time zone | | not null | CURRENT_TIMESTAMP | integer | | | | integer | | | | integer | | | Indexes (tbl only, not on tmp_tbl): \"xxxx\" PRIMARY KEY, btree (keytbl) \"index_1\" UNIQUE, btree (col1, col2) \"index_2\" btree (x, z DESC) WHERE x IS NOT NULL \"index_3\" btree (y, z DESC) WHERE y IS NOT NULLQuery: explain analyze verbose UPDATE \"tbl\" t SET xxxxx FROM \"tmp_tbl\" tt WHERE t.\"keytbl\" = tt.\"keytbl\" AND t.col1 IN (13,14) AND (xxx)Thanks,Tim",
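The workaround mentioned above can be scoped to a single transaction so other queries on the connection are unaffected; a sketch using the anonymized names from the post (the SET list is a placeholder for the real column assignments):

    BEGIN;
    SET LOCAL enable_mergejoin = off;   -- planner falls back to the hash join plan
    UPDATE tbl t
       SET col2 = tt.col2               -- placeholder assignment
      FROM tmp_tbl tt
     WHERE t.keytbl = tt.keytbl AND t.col1 IN (13, 14);
    COMMIT;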
"msg_date": "Thu, 26 Dec 2019 15:07:41 -0500",
"msg_from": "Timothy Garnett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Merge join doesn't seem to break early when I (and planner) think it\n should - 10.4"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 6:57 PM Timothy Garnett <[email protected]>\nwrote:\n\n>\n> So far I've been unable to create a smaller / toy example that exhibits\n> the same behavior. Some things that may be unusual about the situation:\n> keytbl is bigint and the values are large (all are > 2^48) and sparse/dense\n> (big chunks where the id advances by 1 separated by large (> 2^48) regions\n> with no rows), the top 200k or so rows of tmp_table by keytbl don't have a\n> corresponding row in tbl, and this is a bit of an older dot release\n> (10.4). I have a workaround (disabling merge join for the query) so I'm\n> mostly trying to figure out what's going on and if I'm understanding the\n> situation correctly.\n>\n\nCan you share the toy example, using things like random() and\ngenerate_series() to populate it? Preferably scaled down to 10 million\nrows or so in the larger table.\n\nDoes it reproduce in 10.11? If not, then there is really nothing worth\nlooking into. Any fix that can be done would certainly not be re-released\ninto 10.4. And does it reproduce in 12.1 or 13dev? Because chances are\nany improvement wouldn't even be back-patches into any minor release at all.\n\n\n> It's interesting that even if it worked as expected, the merge join plan\n> seems a lot riskier in that if the analyze didn't catch a single large\n> outlier value of keytbl in tmp_tbl or a row with a large value for keytbl\n> was inserted into tmp_tbl since the last analyze it could be forced to walk\n> the entire index of the tbl (which based on the filter count looks like it\n> involves touching each row of this large table for the filter even if it\n> doesn't have a corresponding row to merge to).\n>\n\nThere has been discussion of building a riskiness factor into the planner,\nbut it has never gone anywhere. Everything has its own risk (with Hash\nJoins, for example, the data could be pathological and everything might\nhash to a few buckets, or 32 bits of hashcode might not be enough bits).\nBy the time you can adequately analyze all the risks, you would probably\nlearn enough to just make the planner better absolutely, without adding\nanother dimension to all the things it considers.\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, Dec 26, 2019 at 6:57 PM Timothy Garnett <[email protected]> wrote:So far I've been unable to create a smaller / toy example that exhibits the same behavior. Some things that may be unusual about the situation: keytbl is bigint and the values are large (all are > 2^48) and sparse/dense (big chunks where the id advances by 1 separated by large (> 2^48) regions with no rows), the top 200k or so rows of tmp_table by keytbl don't have a corresponding row in tbl, and this is a bit of an older dot release (10.4). I have a workaround (disabling merge join for the query) so I'm mostly trying to figure out what's going on and if I'm understanding the situation correctly.Can you share the toy example, using things like random() and generate_series() to populate it? Preferably scaled down to 10 million rows or so in the larger table.Does it reproduce in 10.11? If not, then there is really nothing worth looking into. Any fix that can be done would certainly not be re-released into 10.4. And does it reproduce in 12.1 or 13dev? Because chances are any improvement wouldn't even be back-patches into any minor release at all. 
It's interesting that even if it worked as expected, the merge join plan seems a lot riskier in that if the analyze didn't catch a single large outlier value of keytbl in tmp_tbl or a row with a large value for keytbl was inserted into tmp_tbl since the last analyze it could be forced to walk the entire index of the tbl (which based on the filter count looks like it involves touching each row of this large table for the filter even if it doesn't have a corresponding row to merge to).There has been discussion of building a riskiness factor into the planner, but it has never gone anywhere. Everything has its own risk (with Hash Joins, for example, the data could be pathological and everything might hash to a few buckets, or 32 bits of hashcode might not be enough bits). By the time you can adequately analyze all the risks, you would probably learn enough to just make the planner better absolutely, without adding another dimension to all the things it considers. Cheers,Jeff",
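A sketch of the kind of self-contained setup being asked for here, with bigint keys above 2^48 laid out in dense chunks separated by huge gaps; it is scaled down and has not been verified to actually reproduce the mis-estimate:

    -- ten dense chunks of 1,000,000 keys, each chunk offset by 2^48
    CREATE TABLE tbl AS
    SELECT (281474976710656 + g / 1000000 * 281474976710656 + g % 1000000)::bigint AS keytbl,
           (CASE WHEN random() < 0.01 THEN 13 ELSE 1 END)::smallint AS col1,
           g::int AS col2
    FROM generate_series(1, 10000000) g;
    ALTER TABLE tbl ADD PRIMARY KEY (keytbl);

    -- tmp_tbl only covers the low end of the key range, like in the original post
    CREATE TABLE tmp_tbl AS
    SELECT * FROM tbl WHERE keytbl < 3 * 281474976710656 ORDER BY random() LIMIT 100000;

    VACUUM ANALYZE tbl;
    ANALYZE tmp_tbl;

    EXPLAIN ANALYZE
    UPDATE tbl t SET col2 = tt.col2
      FROM tmp_tbl tt
     WHERE t.keytbl = tt.keytbl AND t.col1 IN (13, 14);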
"msg_date": "Fri, 27 Dec 2019 11:52:18 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge join doesn't seem to break early when I (and planner) think\n it should - 10.4"
}
] |
[
{
"msg_contents": "Is it possible to tell what component of the cost estimate of an index scan is\nfrom the index reads vs heap ?\n\nIt would help to be able to set enable_bitmapscan=FORCE (to make all index\nscans go through a bitmap). Adding OR conditions can sometimes do this. That\nincludes cost of bitmap manipulation, but it's good enough for me.\n\nOr maybe explain should report it.\n\n\n",
"msg_date": "Fri, 3 Jan 2020 08:14:27 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "distinguish index cost component from table component"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 9:14 AM Justin Pryzby <[email protected]> wrote:\n\n> Is it possible to tell what component of the cost estimate of an index\n> scan is\n> from the index reads vs heap ?\n>\n\nNot that I have found, other than through sprinkling elog statements\nthroughout the costing code. Which is horrible, because then you get\nestimates for all the considered but rejected index scans as well, but\nwithout the context to know what they are for. So it only works for toy\nqueries where there are few possible indexes to consider.\n\nIt would help to be able to set enable_bitmapscan=FORCE (to make all index\n> scans go through a bitmap).\n\n\nDoesn't enable_indexscan=off accomplish this already? It is possible but\nnot terribly likely to switch from index to seq, rather than from index to\nbitmap. (Unless the index scan was being used to obtain an ordered result,\nbut a hypothetical enable_bitmapscan=FORCE can't fix that).\n\nOf course this doesn't really answer your question, as the\nseparately-reported costs of a bitmap heap and bitmap index scan are\nunlikely to match what the costs would be of a regular index scan, if they\nwere reported separately.\n\nOr maybe explain should report it.\n>\n\nI wouldn't be optimistic about getting such a backwards-incompatible change\naccepted (plus it would surely add some small accounting overhead, which\nagain would probably not be acceptable). But if you do enough tuning work,\nperhaps it would be worth carrying an out-of-tree patch to implement that.\nI wouldn't be so interested in writing such a patch, but would be\ninterested in using one were it available somewhere.\n\nCheers,\n\nJeff\n\nOn Fri, Jan 3, 2020 at 9:14 AM Justin Pryzby <[email protected]> wrote:Is it possible to tell what component of the cost estimate of an index scan is\nfrom the index reads vs heap ?Not that I have found, other than through sprinkling elog statements throughout the costing code. Which is horrible, because then you get estimates for all the considered but rejected index scans as well, but without the context to know what they are for. So it only works for toy queries where there are few possible indexes to consider.\nIt would help to be able to set enable_bitmapscan=FORCE (to make all index\nscans go through a bitmap). Doesn't enable_indexscan=off accomplish this already? It is possible but not terribly likely to switch from index to seq, rather than from index to bitmap. (Unless the index scan was being used to obtain an ordered result, but a hypothetical \n\nenable_bitmapscan=FORCE can't fix that).Of course this doesn't really answer your question, as the separately-reported costs of a bitmap heap and bitmap index scan are unlikely to match what the costs would be of a regular index scan, if they were reported separately.\nOr maybe explain should report it.I wouldn't be optimistic about getting such a backwards-incompatible change accepted \n\n(plus it would surely add some small accounting overhead, which again would probably not be acceptable). But if you do enough tuning work, perhaps it would be worth carrying an out-of-tree patch to implement that. I wouldn't be so interested in writing such a patch, but would be interested in using one were it available somewhere.Cheers,Jeff",
"msg_date": "Fri, 3 Jan 2020 09:33:35 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: distinguish index cost component from table component"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 09:33:35AM -0500, Jeff Janes wrote:\n> Of course this doesn't really answer your question, as the\n> separately-reported costs of a bitmap heap and bitmap index scan are\n> unlikely to match what the costs would be of a regular index scan, if they\n> were reported separately.\n\nI think the cost of index component of bitmap scan would be exactly the same\nas the cost of the original indexscan.\n\n>> Or maybe explain should report it.\n> \n> I wouldn't be optimistic about getting such a backwards-incompatible change\n> accepted (plus it would surely add some small accounting overhead, which\n> again would probably not be acceptable). But if you do enough tuning work,\n> perhaps it would be worth carrying an out-of-tree patch to implement that.\n> I wouldn't be so interested in writing such a patch, but would be\n> interested in using one were it available somewhere.\n\nI did the attached in the simplest possible way. If it's somehow possible get\nthe path's index_total_cost from the plan, then there'd be no additional\noverhead.\n\nJustin",
"msg_date": "Fri, 3 Jan 2020 10:03:15 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: distinguish index cost component from table component"
},
{
"msg_contents": "Moving to -hackers\n\nI was asking about how to distinguish the index cost component of an indexscan\nfrom the cost of heap.\nhttps://www.postgresql.org/message-id/20200103141427.GK12066%40telsasoft.com\n\nOn Fri, Jan 03, 2020 at 09:33:35AM -0500, Jeff Janes wrote:\n> > It would help to be able to set enable_bitmapscan=FORCE (to make all index\n> > scans go through a bitmap).\n> \n> Doesn't enable_indexscan=off accomplish this already? It is possible but\n> not terribly likely to switch from index to seq, rather than from index to\n> bitmap. (Unless the index scan was being used to obtain an ordered result,\n> but a hypothetical enable_bitmapscan=FORCE can't fix that).\n\nNo, enable_indexscan=off implicitly disables bitmap index scans, since it does:\n\ncost_bitmap_heap_scan():\n|startup_cost += indexTotalCost;\n\nBut maybe it shouldn't (?) Or maybe it should take a third value, like\nenable_indexscan=bitmaponly, which means what it says. Actually the name is\nconfusable with indexonly, so maybe enable_indexscan=bitmap.\n\nA third value isn't really needed anyway; its only utility is that someone\nupgrading from v12 who uses enable_indexscan=off (presumably in limited scope)\nwouldn't have to also set enable_bitmapscan=off - not a big benefit.\n\nThat doesn't affect regress tests at all.\n\nNote, when I tested it, the cost of \"bitmap heap scan\" was several times higher\nthan the total cost of indexscan (including heap), even with CPU costs at 0. I\napplied my \"bitmap correlation\" patch, which seems to gives more reasonable\nresult. In any case, the purpose of this patch was primarily diagnostic, and\nthe heap cost of index scan would be its total cost minus the cost of the\nbitmap indexscan node when enable_indexscan=off. The high cost attributed to\nbitmap heapscan is topic for the other patch.\n\nJustin",
"msg_date": "Sat, 4 Jan 2020 10:50:47 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "allow disabling indexscans without disabling bitmapscans"
},
{
"msg_contents": "On Sat, Jan 04, 2020 at 10:50:47AM -0600, Justin Pryzby wrote:\n> > Doesn't enable_indexscan=off accomplish this already? It is possible but\n> > not terribly likely to switch from index to seq, rather than from index to\n> > bitmap. (Unless the index scan was being used to obtain an ordered result,\n> > but a hypothetical enable_bitmapscan=FORCE can't fix that).\n> \n> No, enable_indexscan=off implicitly disables bitmap index scans, since it does:\n\nI don't know how I went wrong, but the regress tests clued me in..it's as Jeff\nsaid.\n\nSorry for the noise.\n\nJustin\n\n\n",
"msg_date": "Sat, 4 Jan 2020 13:34:16 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: allow disabling indexscans without disabling bitmapscans"
},
{
"msg_contents": "commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER TABLE lock levels of storage parms\nAuthor: Simon Riggs <[email protected]>\nDate: Mon Mar 6 16:48:12 2017 +0530\n\n <varlistentry>\n <term><literal>SET ( <replaceable class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n...\n- Changing fillfactor and autovacuum storage parameters acquires a <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n+ <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for \n+ fillfactor and autovacuum storage parameters, as well as the\n+ following planner related parameters:\n+ effective_io_concurrency, parallel_workers, seq_page_cost\n+ random_page_cost, n_distinct and n_distinct_inherited.\n\neffective_io_concurrency, seq_page_cost and random_page_cost cannot be set for\na table - reloptions.c shows that they've always been RELOPT_KIND_TABLESPACE.\n\nn_distinct lock mode seems to have been changed and documented at e5550d5f ;\n21d4e2e2 claimed to do the same, but the LOCKMODE is never used.\n\nSee also:\n\ncommit 21d4e2e20656381b4652eb675af4f6d65053607f Reduce lock levels for table storage params related to planning\nAuthor: Simon Riggs <[email protected]>\nDate: Mon Mar 6 16:04:31 2017 +0530\n\ncommit 47167b7907a802ed39b179c8780b76359468f076 Reduce lock levels for ALTER TABLE SET autovacuum storage options\nAuthor: Simon Riggs <[email protected]>\nDate: Fri Aug 14 14:19:28 2015 +0100\n\ncommit e5550d5fec66aa74caad1f79b79826ec64898688 Reduce lock levels of some ALTER TABLE cmds\nAuthor: Simon Riggs <[email protected]>\nDate: Sun Apr 6 11:13:43 2014 -0400\n\ncommit 2dbbda02e7e688311e161a912a0ce00cde9bb6fc Reduce lock levels of CREATE TRIGGER and some ALTER TABLE, CREATE RULE actions.\nAuthor: Simon Riggs <[email protected]>\nDate: Wed Jul 28 05:22:24 2010 +0000\n\ncommit d86d51a95810caebcea587498068ff32fe28293e Support ALTER TABLESPACE name SET/RESET ( tablespace_options ).\nAuthor: Robert Haas <[email protected]>\nDate: Tue Jan 5 21:54:00 2010 +0000\n\nJustin",
"msg_date": "Sun, 5 Jan 2020 20:56:23 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "doc: alter table references bogus table-specific planner parameters"
},
{
"msg_contents": "On Mon, 6 Jan 2020 at 02:56, Justin Pryzby <[email protected]> wrote:\n\n> commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER\n> TABLE lock levels of storage parms\n> Author: Simon Riggs <[email protected]>\n> Date: Mon Mar 6 16:48:12 2017 +0530\n>\n> <varlistentry>\n> <term><literal>SET ( <replaceable\n> class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable\n> class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n> ...\n> - Changing fillfactor and autovacuum storage parameters acquires a\n> <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n> + <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for\n> + fillfactor and autovacuum storage parameters, as well as the\n> + following planner related parameters:\n> + effective_io_concurrency, parallel_workers, seq_page_cost\n> + random_page_cost, n_distinct and n_distinct_inherited.\n>\n> effective_io_concurrency, seq_page_cost and random_page_cost cannot be set\n> for\n> a table - reloptions.c shows that they've always been\n> RELOPT_KIND_TABLESPACE.\n>\n\nRight, but if they were settable at table-level, the lock levels shown\nwould be accurate.\n\nI agree with the sentiment of the third doc change, but your patch removes\nthe mention of n_distinct, which isn't appropriate.\n\nThe second change in your patch alters the meaning of the sentence in a way\nthat is counter to the first change. The name of these parameters is\n\"Storage Parameters\" (in various places); I might agree with describing\nthem in text as \"storage or planner parameters\", but if you do that you\ncan't then just refer to \"storage parameters\" later, because if you do it\nimplies that planner parameters operate differently to storage parameters,\nwhich they don't.\n\n\n> n_distinct lock mode seems to have been changed and documented at e5550d5f\n> ;\n> 21d4e2e2 claimed to do the same, but the LOCKMODE is never used.\n>\n\nBut neither does it need to because we don't lock tablespaces.\n\nThanks for your comments.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Mon, 6 Jan 2020 at 02:56, Justin Pryzby <[email protected]> wrote:commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER TABLE lock levels of storage parms\nAuthor: Simon Riggs <[email protected]>\nDate: Mon Mar 6 16:48:12 2017 +0530\n\n <varlistentry>\n <term><literal>SET ( <replaceable class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n...\n- Changing fillfactor and autovacuum storage parameters acquires a <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n+ <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for \n+ fillfactor and autovacuum storage parameters, as well as the\n+ following planner related parameters:\n+ effective_io_concurrency, parallel_workers, seq_page_cost\n+ random_page_cost, n_distinct and n_distinct_inherited.\n\neffective_io_concurrency, seq_page_cost and random_page_cost cannot be set for\na table - reloptions.c shows that they've always been RELOPT_KIND_TABLESPACE.Right, but if they were settable at table-level, the lock levels shown would be accurate.I agree with the sentiment of the third doc change, but your patch removes the mention of n_distinct, which isn't appropriate.The second change in your patch alters the meaning of the sentence in a way that is counter to the first change. 
The name of these parameters is \"Storage Parameters\" (in various places); I might agree with describing them in text as \"storage or planner parameters\", but if you do that you can't then just refer to \"storage parameters\" later, because if you do it implies that planner parameters operate differently to storage parameters, which they don't. \nn_distinct lock mode seems to have been changed and documented at e5550d5f ;\n21d4e2e2 claimed to do the same, but the LOCKMODE is never used.But neither does it need to because we don't lock tablespaces.Thanks for your comments.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 6 Jan 2020 03:48:52 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
},
{
"msg_contents": "On Mon, Jan 06, 2020 at 03:48:52AM +0000, Simon Riggs wrote:\n> On Mon, 6 Jan 2020 at 02:56, Justin Pryzby <[email protected]> wrote:\n> \n> > commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER TABLE lock levels of storage parms\n> > Author: Simon Riggs <[email protected]>\n> > Date: Mon Mar 6 16:48:12 2017 +0530\n> >\n> > <varlistentry>\n> > <term><literal>SET ( <replaceable\n> > class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable\n> > class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n> > ...\n> > - Changing fillfactor and autovacuum storage parameters acquires a\n> > <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n> > + <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for\n> > + fillfactor and autovacuum storage parameters, as well as the\n> > + following planner related parameters:\n> > + effective_io_concurrency, parallel_workers, seq_page_cost\n> > + random_page_cost, n_distinct and n_distinct_inherited.\n> >\n> > effective_io_concurrency, seq_page_cost and random_page_cost cannot be set\n> > for\n> > a table - reloptions.c shows that they've always been\n> > RELOPT_KIND_TABLESPACE.\n> \n> I agree with the sentiment of the third doc change, but your patch removes\n> the mention of n_distinct, which isn't appropriate.\n\nI think it's correct to remove n_distinct there, as it's documented previously,\nsince e5550d5f. That's a per-attribute option (not storage) and can't be\nspecified there.\n\n <varlistentry>\n <term><literal>SET ( <replaceable class=\"PARAMETER\">attribute_option</replaceable> = <replaceable class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n <term><literal>RESET ( <replaceable class=\"PARAMETER\">attribute_option</replaceable> [, ... ] )</literal></term>\n <listitem>\n <para>\n This form sets or resets per-attribute options. Currently, the only\n...\n+ <para>\n+ Changing per-attribute options acquires a\n+ <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n+ </para>\n\n> The second change in your patch alters the meaning of the sentence in a way\n> that is counter to the first change. The name of these parameters is\n> \"Storage Parameters\" (in various places); I might agree with describing\n> them in text as \"storage or planner parameters\", but if you do that you\n> can't then just refer to \"storage parameters\" later, because if you do it\n> implies that planner parameters operate differently to storage parameters,\n> which they don't.\n\nThe 2nd change is:\n\n for details on the available parameters. Note that the table contents\n- will not be modified immediately by this command; depending on the\n+ will not be modified immediately by setting its storage parameters; depending on the\n parameter you might need to rewrite the table to get the desired effects.\n\nI deliberately qualified that as referring only to \"storage params\" rather than\n\"this command\", since planner params never \"modify the table contents\".\nPossibly other instances in the document (and createtable) should be changed\nfor consistency.\n\nJustin\n\n\n",
"msg_date": "Sun, 5 Jan 2020 22:13:14 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
},
{
"msg_contents": "On Mon, 6 Jan 2020 at 04:13, Justin Pryzby <[email protected]> wrote:\n\n>\n> > I agree with the sentiment of the third doc change, but your patch\n> removes\n> > the mention of n_distinct, which isn't appropriate.\n>\n> I think it's correct to remove n_distinct there, as it's documented\n> previously,\n> since e5550d5f. That's a per-attribute option (not storage) and can't be\n> specified there.\n>\n\nOK, then agreed.\n\n> The second change in your patch alters the meaning of the sentence in a\n> way\n> > that is counter to the first change. The name of these parameters is\n> > \"Storage Parameters\" (in various places); I might agree with describing\n> > them in text as \"storage or planner parameters\", but if you do that you\n> > can't then just refer to \"storage parameters\" later, because if you do it\n> > implies that planner parameters operate differently to storage\n> parameters,\n> > which they don't.\n>\n> The 2nd change is:\n>\n> for details on the available parameters. Note that the table\n> contents\n> - will not be modified immediately by this command; depending on the\n> + will not be modified immediately by setting its storage parameters;\n> depending on the\n> parameter you might need to rewrite the table to get the desired\n> effects.\n>\n> I deliberately qualified that as referring only to \"storage params\" rather\n> than\n> \"this command\", since planner params never \"modify the table contents\".\n> Possibly other instances in the document (and createtable) should be\n> changed\n> for consistency.\n>\n\nYes, but it's not a correction, just a different preference of wording.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Mon, 6 Jan 2020 at 04:13, Justin Pryzby <[email protected]> wrote:\n> I agree with the sentiment of the third doc change, but your patch removes\n> the mention of n_distinct, which isn't appropriate.\n\nI think it's correct to remove n_distinct there, as it's documented previously,\nsince e5550d5f. That's a per-attribute option (not storage) and can't be\nspecified there.OK, then agreed.\n> The second change in your patch alters the meaning of the sentence in a way\n> that is counter to the first change. The name of these parameters is\n> \"Storage Parameters\" (in various places); I might agree with describing\n> them in text as \"storage or planner parameters\", but if you do that you\n> can't then just refer to \"storage parameters\" later, because if you do it\n> implies that planner parameters operate differently to storage parameters,\n> which they don't.\n\nThe 2nd change is:\n\n for details on the available parameters. Note that the table contents\n- will not be modified immediately by this command; depending on the\n+ will not be modified immediately by setting its storage parameters; depending on the\n parameter you might need to rewrite the table to get the desired effects.\n\nI deliberately qualified that as referring only to \"storage params\" rather than\n\"this command\", since planner params never \"modify the table contents\".\nPossibly other instances in the document (and createtable) should be changed\nfor consistency.Yes, but it's not a correction, just a different preference of wording.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 6 Jan 2020 04:33:46 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
}
] |
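The doc thread above turns on which parameters are relation-level storage parameters, which are per-attribute options, and which planner parameters exist only at the tablespace level. A minimal SQL sketch of that distinction follows; the table, column and tablespace names (demo_tbl, demo_col, fast_ssd) are illustrative assumptions, not objects from the thread.

-- Relation-level storage parameters: taken with SHARE UPDATE EXCLUSIVE,
-- and existing rows are not rewritten immediately.
ALTER TABLE demo_tbl SET (fillfactor = 70, autovacuum_vacuum_scale_factor = 0.05);

-- n_distinct / n_distinct_inherited are per-attribute options, set on a
-- column rather than as table storage parameters.
ALTER TABLE demo_tbl ALTER COLUMN demo_col SET (n_distinct = 5000);

-- seq_page_cost, random_page_cost and effective_io_concurrency are
-- tablespace options and cannot be set as table storage parameters.
ALTER TABLESPACE fast_ssd SET (random_page_cost = 1.1, effective_io_concurrency = 200);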
[
{
"msg_contents": "Hello!\n\nI have a query on a large table that is very fast (0s):\nhttps://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n\nBasically the query matches the rows that have a tag1 OR tag2 OR tag3 OR\ntag4 OR tag5...\n\nHowever if you increase the number of OR at some point PostgreSQL makes the\nbad decision to change its query plan! And the new plan makes the query\nterribly slow:\nhttps://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-slow_query-txt\n\nInstead of this (which is fast):\n Bitmap Index Scan on index_subscriptions_on_project_id_and_tags\nIt starts using this (which is slow):\n Parallel Index Scan using index_subscriptions_on_project_id_and_created_at\nThe choice seems quite stupid since it doesn't have the tags on the new\nindex... and indeed the query takes about 1 minute instead of a few\nmilliseconds. Here's a list of the available indexes:\nhttps://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-_indexes-txt\n\nHow can I encourage PostgreSQL to use the Bitmap Index Scan even when there\nare many OR conditions? I have tried with VACUUM ANALYZE subscriptions but\nit doesn't help.\n\nNote: the query is generated dynamically by customers of a SaaS, so I don't\nhave full control on it\n\n\nThank you very much for any advice!\nMarco Colli\n\nHello!I have a query on a large table that is very fast (0s):https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txtBasically the query matches the rows that have a tag1 OR tag2 OR tag3 OR tag4 OR tag5... However if you increase the number of OR at some point PostgreSQL makes the bad decision to change its query plan! And the new plan makes the query terribly slow:https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-slow_query-txtInstead of this (which is fast): Bitmap Index Scan on index_subscriptions_on_project_id_and_tagsIt starts using this (which is slow): Parallel Index Scan using index_subscriptions_on_project_id_and_created_atThe choice seems quite stupid since it doesn't have the tags on the new index... and indeed the query takes about 1 minute instead of a few milliseconds. Here's a list of the available indexes:https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-_indexes-txtHow can I encourage PostgreSQL to use the Bitmap Index Scan even when there are many OR conditions? I have tried with VACUUM ANALYZE subscriptions but it doesn't help.Note: the query is generated dynamically by customers of a SaaS, so I don't have full control on itThank you very much for any advice!Marco Colli",
"msg_date": "Fri, 10 Jan 2020 02:11:14 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad query plan when you add many OR conditions"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 02:11:14AM +0100, Marco Colli wrote:\n> I have a query on a large table that is very fast (0s):\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n\nORDER BY + LIMIT is a query which sometimes has issues, you can probably find\nmore by searching. The planner thinks it'll hit the LIMIT pretty soon and only\nrun a fraction of the index scan - but then it turns out to be wrong.\n\nYou might have poor statistics on project_id and/or tags. This *might* help:\nALTER TABLE subscriptions ALTER project_id SET STATISTICS 2000; ANALYZE subscriptions;\n\nBut I'm guessing there's correlation between the two, which the planner doesn't\nknow. If you're running at least v10, I'm guessing it would help to CREATE\nSTATISTICS on those columns (and analyze).\n\nSee one similar problem here (not involving LIMIT).\nhttps://www.postgresql.org/message-id/flat/CABFxtPedz4zL%2BaPWut4%2B%3Dum4av1aAXr6OVRfRB_6K7mJKMbEcw%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 9 Jan 2020 21:06:02 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "I am trying different solutions and what I have found is even more\nsurprising to me...\n\nThe query is always this:\nhttps://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-slow_query-txt\n\nI have added this index which would allow an index only scan:\n\"index_subscriptions_on_project_id_and_created_at_and_tags\" btree\n(project_id, created_at DESC, tags) WHERE trashed_at IS NULL\n\nBut Postgresql continues to use this index (which has less information and\nthen requires slow access to disk):\n\"index_subscriptions_on_project_id_and_created_at\" btree (project_id,\ncreated_at DESC)\n\n\n\nOn Fri, Jan 10, 2020 at 4:06 AM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Jan 10, 2020 at 02:11:14AM +0100, Marco Colli wrote:\n> > I have a query on a large table that is very fast (0s):\n> >\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n>\n> ORDER BY + LIMIT is a query which sometimes has issues, you can probably\n> find\n> more by searching. The planner thinks it'll hit the LIMIT pretty soon and\n> only\n> run a fraction of the index scan - but then it turns out to be wrong.\n>\n> You might have poor statistics on project_id and/or tags. This *might*\n> help:\n> ALTER TABLE subscriptions ALTER project_id SET STATISTICS 2000; ANALYZE\n> subscriptions;\n>\n> But I'm guessing there's correlation between the two, which the planner\n> doesn't\n> know. If you're running at least v10, I'm guessing it would help to CREATE\n> STATISTICS on those columns (and analyze).\n>\n> See one similar problem here (not involving LIMIT).\n>\n> https://www.postgresql.org/message-id/flat/CABFxtPedz4zL%2BaPWut4%2B%3Dum4av1aAXr6OVRfRB_6K7mJKMbEcw%40mail.gmail.com\n>\n\nI am trying different solutions and what I have found is even more surprising to me...The query is always this:https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-slow_query-txtI have added this index which would allow an index only scan:\"index_subscriptions_on_project_id_and_created_at_and_tags\" btree (project_id, created_at DESC, tags) WHERE trashed_at IS NULLBut Postgresql continues to use this index (which has less information and then requires slow access to disk):\"index_subscriptions_on_project_id_and_created_at\" btree (project_id, created_at DESC)\nOn Fri, Jan 10, 2020 at 4:06 AM Justin Pryzby <[email protected]> wrote:On Fri, Jan 10, 2020 at 02:11:14AM +0100, Marco Colli wrote:\n> I have a query on a large table that is very fast (0s):\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n\nORDER BY + LIMIT is a query which sometimes has issues, you can probably find\nmore by searching. The planner thinks it'll hit the LIMIT pretty soon and only\nrun a fraction of the index scan - but then it turns out to be wrong.\n\nYou might have poor statistics on project_id and/or tags. This *might* help:\nALTER TABLE subscriptions ALTER project_id SET STATISTICS 2000; ANALYZE subscriptions;\n\nBut I'm guessing there's correlation between the two, which the planner doesn't\nknow. If you're running at least v10, I'm guessing it would help to CREATE\nSTATISTICS on those columns (and analyze).\n\nSee one similar problem here (not involving LIMIT).\nhttps://www.postgresql.org/message-id/flat/CABFxtPedz4zL%2BaPWut4%2B%3Dum4av1aAXr6OVRfRB_6K7mJKMbEcw%40mail.gmail.com",
"msg_date": "Fri, 10 Jan 2020 12:03:39 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "@Justin Pryzby I have tried this as you suggested:\n\nCREATE STATISTICS statistics_on_subscriptions_project_id_and_tags ON\nproject_id, tags FROM subscriptions;\nVACUUM ANALYZE subscriptions;\n\nUnfortunately nothing changes and Postgresql continues to use the wrong\nplan (maybe stats don't work well on array fields like tags??).\n\nOn Fri, Jan 10, 2020 at 4:06 AM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Jan 10, 2020 at 02:11:14AM +0100, Marco Colli wrote:\n> > I have a query on a large table that is very fast (0s):\n> >\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n>\n> ORDER BY + LIMIT is a query which sometimes has issues, you can probably\n> find\n> more by searching. The planner thinks it'll hit the LIMIT pretty soon and\n> only\n> run a fraction of the index scan - but then it turns out to be wrong.\n>\n> You might have poor statistics on project_id and/or tags. This *might*\n> help:\n> ALTER TABLE subscriptions ALTER project_id SET STATISTICS 2000; ANALYZE\n> subscriptions;\n>\n> But I'm guessing there's correlation between the two, which the planner\n> doesn't\n> know. If you're running at least v10, I'm guessing it would help to CREATE\n> STATISTICS on those columns (and analyze).\n>\n> See one similar problem here (not involving LIMIT).\n>\n> https://www.postgresql.org/message-id/flat/CABFxtPedz4zL%2BaPWut4%2B%3Dum4av1aAXr6OVRfRB_6K7mJKMbEcw%40mail.gmail.com\n>\n\n@Justin Pryzby I have tried this as you suggested:CREATE STATISTICS statistics_on_subscriptions_project_id_and_tags ON project_id, tags FROM subscriptions;VACUUM ANALYZE subscriptions;Unfortunately nothing changes and Postgresql continues to use the wrong plan (maybe stats don't work well on array fields like tags??).On Fri, Jan 10, 2020 at 4:06 AM Justin Pryzby <[email protected]> wrote:On Fri, Jan 10, 2020 at 02:11:14AM +0100, Marco Colli wrote:\n> I have a query on a large table that is very fast (0s):\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n\nORDER BY + LIMIT is a query which sometimes has issues, you can probably find\nmore by searching. The planner thinks it'll hit the LIMIT pretty soon and only\nrun a fraction of the index scan - but then it turns out to be wrong.\n\nYou might have poor statistics on project_id and/or tags. This *might* help:\nALTER TABLE subscriptions ALTER project_id SET STATISTICS 2000; ANALYZE subscriptions;\n\nBut I'm guessing there's correlation between the two, which the planner doesn't\nknow. If you're running at least v10, I'm guessing it would help to CREATE\nSTATISTICS on those columns (and analyze).\n\nSee one similar problem here (not involving LIMIT).\nhttps://www.postgresql.org/message-id/flat/CABFxtPedz4zL%2BaPWut4%2B%3Dum4av1aAXr6OVRfRB_6K7mJKMbEcw%40mail.gmail.com",
"msg_date": "Fri, 10 Jan 2020 14:30:27 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 12:03:39PM +0100, Marco Colli wrote:\n> I have added this index which would allow an index only scan:\n> \"index_subscriptions_on_project_id_and_created_at_and_tags\" btree\n> (project_id, created_at DESC, tags) WHERE trashed_at IS NULL\n\nAre those the only columns in subscriptions ?\n\n> But Postgresql continues to use this index (which has less information and\n> then requires slow access to disk):\n> \"index_subscriptions_on_project_id_and_created_at\" btree (project_id,\n> created_at DESC)\n\nDid you vacuum the table ?\nDid you try to \"explain\" the query after dropping the 1st index (like: begin;\nDROP INDEX..; explain analyze..; rollback).\n\nAlso, is the first (other) index btree_gin (you can \\dx to show extensions) ?\n\nI think it needs to be a gin index to search tags ?\n\nOn Fri, Jan 10, 2020 at 01:42:24PM +0100, Marco Colli wrote:\n> I would like to try your solution but I read that ALTER TABLE... SET\n> STATISTICS locks the table... Since it is just an experiment and we don't\n> know if it actually works it would be greate to avoid locking a large table\n> (50M) in production.\n\nI suggest to CREATE TABLE test_subscriptions (LIKE subscriptions INCLUDING\nALL); INSERT INTO test_subscriptions SELECT * FROM subscriptions; ANALYZE test_subscriptions;\n\nAnyway, ALTER..SET STATS requires a strong lock but for only a brief moment\n(assuming it doesn't have to wait). Possibly you'd be ok doing SET\nstatement_timeout='1s'; ALTER TABLE.... \n\n> Does CREATE STATISTICS lock the table too?\n\nYou can check by SET client_min_messages=debug; SET lock_timeout=333; SET log_lock_waits=on;\nLooks like it needs ShareUpdateExclusiveLock.\n\n> Does statistics work on an array field like tags? (I can't find any\n> information)\n\nIt think it'd be data type agnostic. And seems to work with arrays.\n\nOn Fri, Jan 10, 2020 at 02:30:27PM +0100, Marco Colli wrote:\n> @Justin Pryzby I have tried this as you suggested:\n> \n> CREATE STATISTICS statistics_on_subscriptions_project_id_and_tags ON\n> project_id, tags FROM subscriptions;\n> VACUUM ANALYZE subscriptions;\n> \n> Unfortunately nothing changes and Postgresql continues to use the wrong\n> plan (maybe stats don't work well on array fields like tags??).\n\nIt'd help to see SELECT stxddependencies FROM pg_statistic_ext WHERE\nstxoid='subscriptions'::regclass\n\n\n",
"msg_date": "Fri, 10 Jan 2020 07:34:47 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "Before trying other solutions I would like to make PG use an index-only\nscan (it should be fast enough for our purpose).\n\nI have tried to disable the other indexes and forced PG to use this index\n(which includes all the fields of the query):\nindex_subscriptions_on_project_id_and_created_at_and_tags\n\nThe problem is that the query plan is this:\nhttps://gist.github.com/collimarco/03f3dde372f001485518b8deca2f3b24#file-index_scan_instead_of_index_only-txt\n\nAs you can see it is a *index scan* and not an *index only* scan... I don't\nunderstand why. The index includes all the fields used by the query... so\nan index only scan should be possible.\n\n\nOn Fri, Jan 10, 2020 at 2:34 PM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Jan 10, 2020 at 12:03:39PM +0100, Marco Colli wrote:\n> > I have added this index which would allow an index only scan:\n> > \"index_subscriptions_on_project_id_and_created_at_and_tags\" btree\n> > (project_id, created_at DESC, tags) WHERE trashed_at IS NULL\n>\n> Are those the only columns in subscriptions ?\n>\n> > But Postgresql continues to use this index (which has less information\n> and\n> > then requires slow access to disk):\n> > \"index_subscriptions_on_project_id_and_created_at\" btree (project_id,\n> > created_at DESC)\n>\n> Did you vacuum the table ?\n> Did you try to \"explain\" the query after dropping the 1st index (like:\n> begin;\n> DROP INDEX..; explain analyze..; rollback).\n>\n> Also, is the first (other) index btree_gin (you can \\dx to show\n> extensions) ?\n>\n> I think it needs to be a gin index to search tags ?\n>\n> On Fri, Jan 10, 2020 at 01:42:24PM +0100, Marco Colli wrote:\n> > I would like to try your solution but I read that ALTER TABLE... SET\n> > STATISTICS locks the table... Since it is just an experiment and we\n> don't\n> > know if it actually works it would be greate to avoid locking a large\n> table\n> > (50M) in production.\n>\n> I suggest to CREATE TABLE test_subscriptions (LIKE subscriptions INCLUDING\n> ALL); INSERT INTO test_subscriptions SELECT * FROM subscriptions; ANALYZE\n> test_subscriptions;\n>\n> Anyway, ALTER..SET STATS requires a strong lock but for only a brief moment\n> (assuming it doesn't have to wait). Possibly you'd be ok doing SET\n> statement_timeout='1s'; ALTER TABLE....\n>\n> > Does CREATE STATISTICS lock the table too?\n>\n> You can check by SET client_min_messages=debug; SET lock_timeout=333; SET\n> log_lock_waits=on;\n> Looks like it needs ShareUpdateExclusiveLock.\n>\n> > Does statistics work on an array field like tags? (I can't find any\n> > information)\n>\n> It think it'd be data type agnostic. 
And seems to work with arrays.\n>\n> On Fri, Jan 10, 2020 at 02:30:27PM +0100, Marco Colli wrote:\n> > @Justin Pryzby I have tried this as you suggested:\n> >\n> > CREATE STATISTICS statistics_on_subscriptions_project_id_and_tags ON\n> > project_id, tags FROM subscriptions;\n> > VACUUM ANALYZE subscriptions;\n> >\n> > Unfortunately nothing changes and Postgresql continues to use the wrong\n> > plan (maybe stats don't work well on array fields like tags??).\n>\n> It'd help to see SELECT stxddependencies FROM pg_statistic_ext WHERE\n> stxoid='subscriptions'::regclass\n>\n\nBefore trying other solutions I would like to make PG use an index-only scan (it should be fast enough for our purpose).I have tried to disable the other indexes and forced PG to use this index (which includes all the fields of the query):index_subscriptions_on_project_id_and_created_at_and_tagsThe problem is that the query plan is this:https://gist.github.com/collimarco/03f3dde372f001485518b8deca2f3b24#file-index_scan_instead_of_index_only-txtAs you can see it is a *index scan* and not an *index only* scan... I don't understand why. The index includes all the fields used by the query... so an index only scan should be possible.On Fri, Jan 10, 2020 at 2:34 PM Justin Pryzby <[email protected]> wrote:On Fri, Jan 10, 2020 at 12:03:39PM +0100, Marco Colli wrote:\n> I have added this index which would allow an index only scan:\n> \"index_subscriptions_on_project_id_and_created_at_and_tags\" btree\n> (project_id, created_at DESC, tags) WHERE trashed_at IS NULL\n\nAre those the only columns in subscriptions ?\n\n> But Postgresql continues to use this index (which has less information and\n> then requires slow access to disk):\n> \"index_subscriptions_on_project_id_and_created_at\" btree (project_id,\n> created_at DESC)\n\nDid you vacuum the table ?\nDid you try to \"explain\" the query after dropping the 1st index (like: begin;\nDROP INDEX..; explain analyze..; rollback).\n\nAlso, is the first (other) index btree_gin (you can \\dx to show extensions) ?\n\nI think it needs to be a gin index to search tags ?\n\nOn Fri, Jan 10, 2020 at 01:42:24PM +0100, Marco Colli wrote:\n> I would like to try your solution but I read that ALTER TABLE... SET\n> STATISTICS locks the table... Since it is just an experiment and we don't\n> know if it actually works it would be greate to avoid locking a large table\n> (50M) in production.\n\nI suggest to CREATE TABLE test_subscriptions (LIKE subscriptions INCLUDING\nALL); INSERT INTO test_subscriptions SELECT * FROM subscriptions; ANALYZE test_subscriptions;\n\nAnyway, ALTER..SET STATS requires a strong lock but for only a brief moment\n(assuming it doesn't have to wait). Possibly you'd be ok doing SET\nstatement_timeout='1s'; ALTER TABLE.... \n\n> Does CREATE STATISTICS lock the table too?\n\nYou can check by SET client_min_messages=debug; SET lock_timeout=333; SET log_lock_waits=on;\nLooks like it needs ShareUpdateExclusiveLock.\n\n> Does statistics work on an array field like tags? (I can't find any\n> information)\n\nIt think it'd be data type agnostic. 
And seems to work with arrays.\n\nOn Fri, Jan 10, 2020 at 02:30:27PM +0100, Marco Colli wrote:\n> @Justin Pryzby I have tried this as you suggested:\n> \n> CREATE STATISTICS statistics_on_subscriptions_project_id_and_tags ON\n> project_id, tags FROM subscriptions;\n> VACUUM ANALYZE subscriptions;\n> \n> Unfortunately nothing changes and Postgresql continues to use the wrong\n> plan (maybe stats don't work well on array fields like tags??).\n\nIt'd help to see SELECT stxddependencies FROM pg_statistic_ext WHERE\nstxoid='subscriptions'::regclass",
"msg_date": "Fri, 10 Jan 2020 15:53:22 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "Marco Colli <[email protected]> writes:\n> As you can see it is a *index scan* and not an *index only* scan... I don't\n> understand why. The index includes all the fields used by the query... so\n> an index only scan should be possible.\n\nHuh? The query is \"select * from ...\", so it retrieves *all* columns\nof the table.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 10:18:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "Sorry, I didn't notice the SELECT * and I said something stupid...\nHowever my reasoning should be still valid: I mean, PG could find the few\nrelevant rows (there's a LIMIT 30) using ONLY the index. It has all the\ninformation required inside the index! Then it can simply access to that\nrows on disk... It cannot take ~1 minute to access a few rows on disk (max\n30 rows, actual 0 rows).\n\n\n\nOn Fri, Jan 10, 2020 at 4:18 PM Tom Lane <[email protected]> wrote:\n\n> Marco Colli <[email protected]> writes:\n> > As you can see it is a *index scan* and not an *index only* scan... I\n> don't\n> > understand why. The index includes all the fields used by the query... so\n> > an index only scan should be possible.\n>\n> Huh? The query is \"select * from ...\", so it retrieves *all* columns\n> of the table.\n>\n> regards, tom lane\n>\n\nSorry, I didn't notice the SELECT * and I said something stupid... However my reasoning should be still valid: I mean, PG could find the few relevant rows (there's a LIMIT 30) using ONLY the index. It has all the information required inside the index! Then it can simply access to that rows on disk... It cannot take ~1 minute to access a few rows on disk (max 30 rows, actual 0 rows).On Fri, Jan 10, 2020 at 4:18 PM Tom Lane <[email protected]> wrote:Marco Colli <[email protected]> writes:\n> As you can see it is a *index scan* and not an *index only* scan... I don't\n> understand why. The index includes all the fields used by the query... so\n> an index only scan should be possible.\n\nHuh? The query is \"select * from ...\", so it retrieves *all* columns\nof the table.\n\n regards, tom lane",
"msg_date": "Fri, 10 Jan 2020 17:03:41 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 8:11 PM Marco Colli <[email protected]> wrote:\n\n> Hello!\n>\n> I have a query on a large table that is very fast (0s):\n>\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n>\n> Basically the query matches the rows that have a tag1 OR tag2 OR tag3 OR\n> tag4 OR tag5...\n>\n\nEach branch of the OR is anticipated to return 400 rows, but it actually\nreturns 0. If it actually were to return 400 rows per branch, than\neventually the plan switch actually would make sense.\n\nWhy is the estimate off by so much? If you run a simple select, what the\nactual and expected number of rows WHERE project_id = 12345? WHERE tags @>\n'{crt:2018_11}'? Is one of those estimates way off reality, or is it only\nthe conjunction which is deranged?\n\n\n> How can I encourage PostgreSQL to use the Bitmap Index Scan even when\n> there are many OR conditions? I have tried with VACUUM ANALYZE\n> subscriptions but it doesn't help.\n>\n> Note: the query is generated dynamically by customers of a SaaS, so I\n> don't have full control on it\n>\n\nDo you have enough control to change the ORDER BY to:\n\nORDER BY (\"subscriptions\".\"created_at\" + interval '0 days') DESC\n\nCheers,\n\nJeff\n\nOn Thu, Jan 9, 2020 at 8:11 PM Marco Colli <[email protected]> wrote:Hello!I have a query on a large table that is very fast (0s):https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txtBasically the query matches the rows that have a tag1 OR tag2 OR tag3 OR tag4 OR tag5... Each branch of the OR is anticipated to return 400 rows, but it actually returns 0. If it actually were to return 400 rows per branch, than eventually the plan switch actually would make sense.Why is the estimate off by so much? If you run a simple select, what the actual and expected number of rows WHERE project_id = 12345? WHERE tags @> '{crt:2018_11}'? Is one of those estimates way off reality, or is it only the conjunction which is deranged? How can I encourage PostgreSQL to use the Bitmap Index Scan even when there are many OR conditions? I have tried with VACUUM ANALYZE subscriptions but it doesn't help.Note: the query is generated dynamically by customers of a SaaS, so I don't have full control on itDo you have enough control to change the ORDER BY to:ORDER BY (\"subscriptions\".\"created_at\" + interval '0 days') DESC Cheers,Jeff",
"msg_date": "Fri, 10 Jan 2020 12:12:52 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 02:30:27PM +0100, Marco Colli wrote:\n>@Justin Pryzby I have tried this as you suggested:\n>\n>CREATE STATISTICS statistics_on_subscriptions_project_id_and_tags ON\n>project_id, tags FROM subscriptions;\n>VACUUM ANALYZE subscriptions;\n>\n>Unfortunately nothing changes and Postgresql continues to use the wrong\n>plan (maybe stats don't work well on array fields like tags??).\n>\n\nWe support this type of clause for extended statistics (yet).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 13 Jan 2020 23:14:11 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
},
{
"msg_contents": "Marco Colli schrieb am 10.01.2020 um 02:11:\n> I have a query on a large table that is very fast (0s):\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt\n> \n> Basically the query matches the rows that have a tag1 OR tag2 OR tag3 OR tag4 OR tag5... \n> \n> However if you increase the number of OR at some point PostgreSQL makes the bad decision to change its query plan! And the new plan makes the query terribly slow:\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-slow_query-txt\n> \n> Instead of this (which is fast):\n> Bitmap Index Scan on index_subscriptions_on_project_id_and_tags\n> It starts using this (which is slow):\n> Parallel Index Scan using index_subscriptions_on_project_id_and_created_at\n> The choice seems quite stupid since it doesn't have the tags on the new index... and indeed the query takes about 1 minute instead of a few milliseconds. Here's a list of the available indexes:\n> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-_indexes-txt\n> \n> How can I encourage PostgreSQL to use the Bitmap Index Scan even when there are many OR conditions? I have tried with VACUUM ANALYZE subscriptions but it doesn't help.\n> \n> Note: the query is generated dynamically by customers of a SaaS, so I don't have full control on it\n\nCan you replace the many ORs with a single \"overlaps\" comparison?\n\nThis \n\n (tags @> ARRAY['crt:2018_04']::varchar[]) OR (tags @> ARRAY['crt:2018_05']::varchar[]) OR (tags @> ARRAY['crt:2018_06']::varchar[])\n\nis equivalent to \n\n tags && array['crt:2018_04','crt:2018_05','crt:2018_06', ...]\n\nThe && operator can make use of a GIN index so maybe that uses the index_subscriptions_on_project_id_and_tags regardless of the number of elements.\n\n\n\n \n\n\n\n\n",
"msg_date": "Tue, 14 Jan 2020 09:01:30 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan when you add many OR conditions"
}
] |
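A hedged sketch of the rewrite Thomas Kellerer proposes at the end of the thread above: collapse the chain of OR'ed @> tests into a single && ("overlaps") test, backed by a GIN index that can serve both operators. The thread only names index_subscriptions_on_project_id_and_tags, so the btree_gin extension and the exact index definition below are assumptions; project_id, tags, trashed_at, created_at and the tag values come from the quoted query.

-- btree_gin lets the scalar project_id column sit in the same GIN index
-- as the tags array (assumed to mirror the existing tags index).
CREATE EXTENSION IF NOT EXISTS btree_gin;

CREATE INDEX index_subscriptions_on_project_id_and_tags_gin
    ON subscriptions USING gin (project_id, tags)
    WHERE trashed_at IS NULL;

-- One && test replaces the long OR chain, so the plan no longer flips to
-- the (project_id, created_at) index as customers add more tags.
SELECT *
  FROM subscriptions
 WHERE project_id = 12345
   AND trashed_at IS NULL
   AND tags && ARRAY['crt:2018_04', 'crt:2018_05', 'crt:2018_06']::varchar[]
 ORDER BY created_at DESC
 LIMIT 30;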
[
{
"msg_contents": "Hi All,\n\nI'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a\nsignificant performance gain in one specific query. This is really great,\nbut I'm just looking to understand why. Reading through the release notes\nacross all the new versions (10, 11, 12) hasn't yielded an obvious cause,\nbut maybe I missed something. Also, I realize it could be related to other\nfactors (config parameters, physical hosts, etc), but the systems are\npretty similar so just wondering about Postgres changes.\n\nThe query is the following:\n\nSELECT pvc.value, SUM(pvc.count) AS sum\nFROM\n(SELECT (ST_ValueCount(cv.rast, 1)).*\nFROM calveg_whrtype_20m AS cv) AS pvc\nGROUP BY pvc.value\n\nHere is the EXPLAIN (ANALYZE ON, BUFFERS ON) output from both systems:\n\n9.6 plan <https://explain.depesz.com/s/W8HN>\n12.1 plan <https://explain.depesz.com/s/lIRS>\n\nIn the 9.6 plan, the Seq Scan node produced 15,812 rows.\nIn the 12 plan, the Seq Scan produced 2,502 rows, and then the ProjectSet\nnode produced 15,812 rows.\n\nNote that the table (calveg_whrtype_20m) in the two databases have the same\nnumber of rows (2,502).\n\nSo it seems something about the introduction of the ProjectSet node between\nthe Seq Scan and HashAggregate is optimizing things...? Is this the right\nconclusion to draw and if so, why might this be happening? Is there\nsomething that was changed/improved in either 10, 11 or 12 that this\nbehavior can be attributed to?\n\nTwo more notes --\n\n1. If I run the inner subquery without the outer sum/group by, the plans\nbetween the two systems are identical.\n\n2. As the calgeg_whrtype_20m table is a raster, I started my question on\nthe PostGIS list, but there was no obvious answer that the gain is related\nto a change in the PostGIS code so I'm now turning to this list.\n\nThank you,\nShira\n\nHi All,I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a significant performance gain in one specific query. This is really great, but I'm just looking to understand why. Reading through the release notes across all the new versions (10, 11, 12) hasn't yielded an obvious cause, but maybe I missed something. Also, I realize it could be related to other factors (config parameters, physical hosts, etc), but the systems are pretty similar so just wondering about Postgres changes.The query is the following:SELECT pvc.value, SUM(pvc.count) AS sumFROM(SELECT (ST_ValueCount(cv.rast, 1)).*FROM calveg_whrtype_20m AS cv) AS pvcGROUP BY pvc.value Here is the EXPLAIN (ANALYZE ON, BUFFERS ON) output from both systems:9.6 plan12.1 planIn the 9.6 plan, the \n\nSeq Scan \n\nnode produced 15,812 rows. In the 12 plan, the \n\nSeq Scan produced 2,502 rows, and then the ProjectSet node produced 15,812 rows. Note that the \n\n\n\ntable (calveg_whrtype_20m) in the two databases have the same number of rows (2,502).So it seems something about the introduction of the ProjectSet node between the Seq Scan and HashAggregate is optimizing things...? Is this the right conclusion to draw and if so, why might this be happening? Is there something that was changed/improved in either 10, 11 or 12 that this behavior can be attributed to? Two more notes -- 1. If I run the inner subquery without the outer sum/group by, the plans between the two systems are identical.2. As the calgeg_whrtype_20m table is a raster, I started my question on the \n\nPostGIS \n\nlist, but there was no obvious answer that the gain is related to a change in the PostGIS code so I'm now turning to this list. \n\nThank you,Shira",
"msg_date": "Mon, 13 Jan 2020 08:29:05 -0800",
"msg_from": "Shira Bezalel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 08:29:05AM -0800, Shira Bezalel wrote:\n> Here is the EXPLAIN (ANALYZE ON, BUFFERS ON) output from both systems:\n> \n> 9.6 plan <https://explain.depesz.com/s/W8HN>\n> 12.1 plan <https://explain.depesz.com/s/lIRS>\n\n> Is there something that was changed/improved in either 10, 11 or 12 that this\n> behavior can be attributed to?\n\nv12 has JIT enabled by default.\nYou can test if that's significant with SET jit=off.\n\n\n",
"msg_date": "Mon, 13 Jan 2020 10:42:20 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "Hi Justin,\n\nI'm seeing no difference in the query plan with JIT disabled in 12.1.\n\nThanks,\nShira\n\nOn Mon, Jan 13, 2020 at 8:42 AM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Jan 13, 2020 at 08:29:05AM -0800, Shira Bezalel wrote:\n> > Here is the EXPLAIN (ANALYZE ON, BUFFERS ON) output from both systems:\n> >\n> > 9.6 plan <https://explain.depesz.com/s/W8HN>\n> > 12.1 plan <https://explain.depesz.com/s/lIRS>\n>\n> > Is there something that was changed/improved in either 10, 11 or 12 that\n> this\n> > behavior can be attributed to?\n>\n> v12 has JIT enabled by default.\n> You can test if that's significant with SET jit=off.\n>\n\n\n-- \nShira Bezalel\nDatabase Administrator & Desktop Support Manager\nSan Francisco Estuary Institute\nwww.sfei.org\nPh: 510-746-7304\n\nHi Justin,I'm seeing no difference in the query plan with JIT disabled in 12.1. Thanks,ShiraOn Mon, Jan 13, 2020 at 8:42 AM Justin Pryzby <[email protected]> wrote:On Mon, Jan 13, 2020 at 08:29:05AM -0800, Shira Bezalel wrote:\n> Here is the EXPLAIN (ANALYZE ON, BUFFERS ON) output from both systems:\n> \n> 9.6 plan <https://explain.depesz.com/s/W8HN>\n> 12.1 plan <https://explain.depesz.com/s/lIRS>\n\n> Is there something that was changed/improved in either 10, 11 or 12 that this\n> behavior can be attributed to?\n\nv12 has JIT enabled by default.\nYou can test if that's significant with SET jit=off.\n-- Shira Bezalel Database Administrator & Desktop Support ManagerSan Francisco Estuary Institutewww.sfei.orgPh: 510-746-7304",
"msg_date": "Mon, 13 Jan 2020 09:34:04 -0800",
"msg_from": "Shira Bezalel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "I am not at all familiar with PostGIS so perhaps this is a silly question,\nis bloat an issue on the older instance? Correlation isn't causation, but\nhalf the buffers scanned and half the runtime in the v12 plan has me\ncurious why that might be.\n\n>\n\nI am not at all familiar with PostGIS so perhaps this is a silly question, is bloat an issue on the older instance? Correlation isn't causation, but half the buffers scanned and half the runtime in the v12 plan has me curious why that might be.",
"msg_date": "Mon, 13 Jan 2020 11:07:16 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "Hi Michael,\n\nI appreciate your question. I ran a vacuum analyze on the 9.6 table and it\nyielded no difference. Same number of buffers were read, same query plan.\n\nThanks,\nShira\n\nOn Mon, Jan 13, 2020 at 10:07 AM Michael Lewis <[email protected]> wrote:\n\n> I am not at all familiar with PostGIS so perhaps this is a silly question,\n> is bloat an issue on the older instance? Correlation isn't causation, but\n> half the buffers scanned and half the runtime in the v12 plan has me\n> curious why that might be.\n>\n>>\n\n-- \nShira Bezalel\nDatabase Administrator & Desktop Support Manager\nSan Francisco Estuary Institute\nwww.sfei.org\nPh: 510-746-7304\n\nHi Michael,I appreciate your question. I ran a vacuum analyze on the 9.6 table and it yielded no difference. Same number of buffers were read, same query plan.Thanks,ShiraOn Mon, Jan 13, 2020 at 10:07 AM Michael Lewis <[email protected]> wrote:I am not at all familiar with PostGIS so perhaps this is a silly question, is bloat an issue on the older instance? Correlation isn't causation, but half the buffers scanned and half the runtime in the v12 plan has me curious why that might be.\n\n-- Shira Bezalel Database Administrator & Desktop Support ManagerSan Francisco Estuary Institutewww.sfei.orgPh: 510-746-7304",
"msg_date": "Mon, 13 Jan 2020 12:44:14 -0800",
"msg_from": "Shira Bezalel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 12:44:14PM -0800, Shira Bezalel wrote:\n>Hi Michael,\n>\n>I appreciate your question. I ran a vacuum analyze on the 9.6 table and it\n>yielded no difference. Same number of buffers were read, same query plan.\n>\n\nVACUUM ANALYZE won't shrink the table - the number of buffers will be\nexactly the same. You need to do VACUUM FULL, but be careful as that\nacquires exclusive lock on the table.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 13 Jan 2020 22:11:56 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "Thanks Tomas. I ran a vacuum full on the 9.6 table -- still no difference\nin the query plan. The shared buffers hit went up slightly to 36069.\n\nShira\n\nOn Mon, Jan 13, 2020 at 1:12 PM Tomas Vondra <[email protected]>\nwrote:\n\n> On Mon, Jan 13, 2020 at 12:44:14PM -0800, Shira Bezalel wrote:\n> >Hi Michael,\n> >\n> >I appreciate your question. I ran a vacuum analyze on the 9.6 table and it\n> >yielded no difference. Same number of buffers were read, same query plan.\n> >\n>\n> VACUUM ANALYZE won't shrink the table - the number of buffers will be\n> exactly the same. You need to do VACUUM FULL, but be careful as that\n> acquires exclusive lock on the table.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nShira Bezalel\nDatabase Administrator & Desktop Support Manager\nSan Francisco Estuary Institute\nwww.sfei.org\nPh: 510-746-7304\n\nThanks Tomas. I ran a vacuum full on the 9.6 table -- still no difference in the query plan. The shared buffers hit went up slightly to 36069. ShiraOn Mon, Jan 13, 2020 at 1:12 PM Tomas Vondra <[email protected]> wrote:On Mon, Jan 13, 2020 at 12:44:14PM -0800, Shira Bezalel wrote:\n>Hi Michael,\n>\n>I appreciate your question. I ran a vacuum analyze on the 9.6 table and it\n>yielded no difference. Same number of buffers were read, same query plan.\n>\n\nVACUUM ANALYZE won't shrink the table - the number of buffers will be\nexactly the same. You need to do VACUUM FULL, but be careful as that\nacquires exclusive lock on the table.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n-- Shira Bezalel Database Administrator & Desktop Support ManagerSan Francisco Estuary Institutewww.sfei.orgPh: 510-746-7304",
"msg_date": "Mon, 13 Jan 2020 13:45:55 -0800",
"msg_from": "Shira Bezalel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "On 2020-Jan-13, Shira Bezalel wrote:\n\n> Hi All,\n> \n> I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a\n> significant performance gain in one specific query. This is really great,\n> but I'm just looking to understand why.\n\npg12 reads half the number of buffers. I bet it's because of this change:\n\ncommit 4d0e994eed83c845a05da6e9a417b4efec67efaf\nAuthor: Stephen Frost <[email protected]>\nAuthorDate: Tue Apr 2 12:35:32 2019 -0400\nCommitDate: Tue Apr 2 12:35:32 2019 -0400\n\n Add support for partial TOAST decompression\n \n When asked for a slice of a TOAST entry, decompress enough to return the\n slice instead of decompressing the entire object.\n \n For use cases where the slice is at, or near, the beginning of the entry,\n this avoids a lot of unnecessary decompression work.\n \n This changes the signature of pglz_decompress() by adding a boolean to\n indicate if it's ok for the call to finish before consuming all of the\n source or destination buffers.\n \n Author: Paul Ramsey\n Reviewed-By: Rafia Sabih, Darafei Praliaskouski, Regina Obe\n Discussion: https://postgr.es/m/CACowWR07EDm7Y4m2kbhN_jnys%3DBBf9A6768RyQdKm_%3DNpkcaWg%40mail.gmail.com\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Jan 2020 19:15:11 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 2:15 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2020-Jan-13, Shira Bezalel wrote:\n>\n> > Hi All,\n> >\n> > I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a\n> > significant performance gain in one specific query. This is really great,\n> > but I'm just looking to understand why.\n>\n> pg12 reads half the number of buffers. I bet it's because of this change:\n>\n> commit 4d0e994eed83c845a05da6e9a417b4efec67efaf\n> Author: Stephen Frost <[email protected]>\n> AuthorDate: Tue Apr 2 12:35:32 2019 -0400\n> CommitDate: Tue Apr 2 12:35:32 2019 -0400\n>\n> Add support for partial TOAST decompression\n>\n> When asked for a slice of a TOAST entry, decompress enough to return\n> the\n> slice instead of decompressing the entire object.\n>\n> For use cases where the slice is at, or near, the beginning of the\n> entry,\n> this avoids a lot of unnecessary decompression work.\n>\n> This changes the signature of pglz_decompress() by adding a boolean to\n> indicate if it's ok for the call to finish before consuming all of the\n> source or destination buffers.\n>\n> Author: Paul Ramsey\n> Reviewed-By: Rafia Sabih, Darafei Praliaskouski, Regina Obe\n> Discussion:\n> https://postgr.es/m/CACowWR07EDm7Y4m2kbhN_jnys%3DBBf9A6768RyQdKm_%3DNpkcaWg%40mail.gmail.com\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nThat sounds like a possibility. Thanks Alvaro.\n\nShira\n\nOn Mon, Jan 13, 2020 at 2:15 PM Alvaro Herrera <[email protected]> wrote:On 2020-Jan-13, Shira Bezalel wrote:\n\n> Hi All,\n> \n> I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a\n> significant performance gain in one specific query. This is really great,\n> but I'm just looking to understand why.\n\npg12 reads half the number of buffers. I bet it's because of this change:\n\ncommit 4d0e994eed83c845a05da6e9a417b4efec67efaf\nAuthor: Stephen Frost <[email protected]>\nAuthorDate: Tue Apr 2 12:35:32 2019 -0400\nCommitDate: Tue Apr 2 12:35:32 2019 -0400\n\n Add support for partial TOAST decompression\n\n When asked for a slice of a TOAST entry, decompress enough to return the\n slice instead of decompressing the entire object.\n\n For use cases where the slice is at, or near, the beginning of the entry,\n this avoids a lot of unnecessary decompression work.\n\n This changes the signature of pglz_decompress() by adding a boolean to\n indicate if it's ok for the call to finish before consuming all of the\n source or destination buffers.\n\n Author: Paul Ramsey\n Reviewed-By: Rafia Sabih, Darafei Praliaskouski, Regina Obe\n Discussion: https://postgr.es/m/CACowWR07EDm7Y4m2kbhN_jnys%3DBBf9A6768RyQdKm_%3DNpkcaWg%40mail.gmail.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\nThat sounds like a possibility. Thanks Alvaro.Shira",
"msg_date": "Mon, 13 Jan 2020 16:11:48 -0800",
"msg_from": "Shira Bezalel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seeking reason behind performance gain in 12 with HashAggregate"
}
] |
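Two quick checks that follow from the replies above, for anyone reproducing the 9.6-versus-12 comparison: rule out JIT (on by default in v12), and see how much of the raster table actually lives in TOAST, where the partial-decompression change in commit 4d0e994e would matter. The table and query are the ones from the thread; ST_ValueCount is the PostGIS function already used there.

-- Re-run the comparison with JIT disabled on v12.
SET jit = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT pvc.value, SUM(pvc.count) AS sum
FROM (SELECT (ST_ValueCount(cv.rast, 1)).* FROM calveg_whrtype_20m AS cv) AS pvc
GROUP BY pvc.value;

-- Compare total size (heap + TOAST + indexes) with the bare heap size;
-- a large gap means most reads go through TOAST decompression.
SELECT pg_size_pretty(pg_total_relation_size('calveg_whrtype_20m')) AS total_size,
       pg_size_pretty(pg_relation_size('calveg_whrtype_20m'))       AS heap_only;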
[
{
"msg_contents": "Hi,\nI have a master slave setup with streaming replication.\nI have a lots of wal files that were moved from master but not applied yet\n, a big delay in the replica.\nThe replica is not working on hotstandby mode , no conflicts to delay\nwals apply on it.\n\nI would like to know if increasing the amount of shared-buffers could help\nthe startup process applying the wals. I would like to know if in\nthe process of reading the wals and applying them, blocks that should be\nwritten will be brought to shared buffer or not?? If yes, having a bigger\nshared buffer will keep as much as possible the amount of pages there and\nincrease the startup process's speed avoiding pages's replacement and going\nto the OS cache and maybe to the disk .\nDoes it make sense?\n\nThanks in advanced.\n\nRegards,\nJoao\n\nHi, I have a master slave setup with streaming replication.I have a lots of wal files that were moved from master but not applied yet , a big delay in the replica.The replica is not working on hotstandby mode , no conflicts to delay wals apply on it.I would like to know if increasing the amount of shared-buffers could help the startup process applying the wals. I would like to know if in the process of reading the wals and applying them, blocks that should be written will be brought to shared buffer or not?? If yes, having a bigger shared buffer will keep as much as possible the amount of pages there and increase the startup process's speed avoiding pages's replacement and going to the OS cache and maybe to the disk .Does it make sense?Thanks in advanced.Regards,Joao",
"msg_date": "Tue, 14 Jan 2020 16:29:51 +0100",
"msg_from": "Joao Junior <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared buffers and startup process"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 04:29:51PM +0100, Joao Junior wrote:\n> I would like to know if increasing the amount of shared-buffers could help\n> the startup process applying the wals. I would like to know if in\n> the process of reading the wals and applying them, blocks that should be\n> written will be brought to shared buffer or not??\n\nPlease feel free to look at XLogReadBufferForRedo() in xlogutils.c and\ncheck what the routine does, and when/where it gets called. The code\nis well-documented, so you will find your answer easily.\n\n> If yes, having a bigger\n> shared buffer will keep as much as possible the amount of pages there and\n> increase the startup process's speed avoiding pages's replacement and going\n> to the OS cache and maybe to the disk .\n> Does it make sense?\n\nIt does. Even if relying on the OS cache would be enough in most\ncases, it is good to keep a certain of pages hot enough, and you need\nto be careful with not setting shared_buffers too high either.\n--\nMichael",
"msg_date": "Wed, 15 Jan 2020 11:48:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared buffers and startup process"
}
] |
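Alongside the shared_buffers question discussed above, the replay backlog itself can be tracked from SQL on the standby; a minimal sketch using functions that exist in PostgreSQL 10 (nothing here is specific to the poster's setup):

-- Bytes of WAL received but not yet applied, and the age of the last
-- replayed transaction; both grow while the startup process lags.
SELECT pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                       pg_last_wal_replay_lsn())   AS replay_lag_bytes,
       now() - pg_last_xact_replay_timestamp()     AS replay_delay;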
[
{
"msg_contents": "After migrating to a partitioned table, I noticed that a\nperformance-critical plpgsql function is a few times slower.\nBasically, the function takes a key as an argument, and performs SELECT,\nUPDATE and DELETE operations on tables partitioned by the key.\nI narrowed down the problem to the following: let's have an empty table\n\"demo\" with column \"key\", and two plpgsql functions that run \"DELETE FROM\ndemo WHERE key = XYZ\" 10000 times in two flavours: one takes the key by\nargument, and in the other the key hardcoded.\n\nHere are the running times:\n- delete by hardcoded value from non-partitioned table: 39.807 ms\n- delete by argument from non-partitioned table: 45.734 ms\n- delete by hardcoded value from partitioned table: 47.101 ms\n- delete by argument from partitioned table: 295.748 ms\n\nDeleting by argument from an empty partitioned table is 6 times slower!\nWhy is it so? The number of partitions doesn't seem to be important. And\ndeleting is just an example, SELECT behaves in the same way.\n\n\nSample code:\n\n-- partioned table\n\nDROP TABLE IF EXISTS demo_partitioned;\nCREATE TABLE demo_partitioned(key BIGINT, val BIGINT) PARTITION BY LIST\n(key);\nDO $$\nDECLARE\n i BIGINT;\nBEGIN\n FOR i IN SELECT * FROM generate_series(1, 15)\n LOOP\n EXECUTE 'CREATE TABLE demo_partitioned_key_'|| i ||' PARTITION OF\ndemo_partitioned FOR VALUES IN (' || i || ');';\n END LOOP;\nEND$$;\n\n\nCREATE OR REPLACE FUNCTION del_from_partitioned_by_arg(k BIGINT)\n RETURNS VOID AS $$\nDECLARE\n i BIGINT;\nBEGIN\n FOR i IN SELECT * FROM generate_series(1, 10000)\n LOOP\n DELETE FROM demo_partitioned WHERE key = k;\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION del_from_partitioned_hardcoded()\n RETURNS VOID AS $$\nDECLARE\n i BIGINT;\nBEGIN\n FOR i IN SELECT * FROM generate_series(1, 10000)\n LOOP\n DELETE FROM demo_partitioned WHERE key = 3;\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nANALYZE demo_partitioned;\n\nEXPLAIN ANALYZE DELETE FROM demo_partitioned WHERE key = 3;\nEXPLAIN ANALYZE SELECT * FROM del_from_partitioned_hardcoded();\nEXPLAIN ANALYZE SELECT * FROM del_from_partitioned_by_arg(3);\n\n\n-- non-partitioned table\n\n\nDROP TABLE IF EXISTS demo_non_partitioned;\nCREATE TABLE demo_non_partitioned(key BIGINT, val BIGINT);\nANALYZE demo_non_partitioned;\n\n\nCREATE OR REPLACE FUNCTION del_from_non_partitioned_by_arg(k BIGINT)\n RETURNS VOID AS $$\nDECLARE\n i BIGINT;\nBEGIN\n FOR i IN SELECT * FROM generate_series(1, 10000)\n LOOP\n DELETE FROM demo_non_partitioned WHERE key = k;\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION del_from_non_partitioned_hardcoded()\n RETURNS VOID AS $$\nDECLARE\n i BIGINT;\nBEGIN\n FOR i IN SELECT * FROM generate_series(1, 10000)\n LOOP\n DELETE FROM demo_non_partitioned WHERE key = 3;\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nEXPLAIN ANALYZE DELETE FROM demo_non_partitioned WHERE key = 3;\nEXPLAIN ANALYZE SELECT * FROM del_from_non_partitioned_hardcoded();\nEXPLAIN ANALYZE SELECT * FROM del_from_non_partitioned_by_arg(3);\n\n\nOutput:\n\n\nDROP TABLE\nCREATE TABLE\nDO\nCREATE FUNCTION\nCREATE FUNCTION\nANALYZE\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------\n Delete on demo_partitioned (cost=0.00..29.43 rows=9 width=6) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Delete on demo_partitioned_key_3\n -> Seq Scan on demo_partitioned_key_3 (cost=0.00..29.43 rows=9\nwidth=6) (actual time=0.001..0.001 rows=0 
loops=1)\n Filter: (key = 3)\n Planning Time: 0.180 ms\n Execution Time: 0.069 ms\n(6 rows)\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------\n Function Scan on del_from_partitioned_hardcoded (cost=0.05..0.06 rows=1\nwidth=4) (actual time=47.030..47.030 rows=1 loops=1)\n Planning Time: 0.020 ms\n Execution Time: 47.101 ms\n(3 rows)\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------\n Function Scan on del_from_partitioned_by_arg (cost=0.05..0.06 rows=1\nwidth=4) (actual time=295.737..295.737 rows=1 loops=1)\n Planning Time: 0.023 ms\n Execution Time: 295.748 ms\n(3 rows)\n\nDROP TABLE\nCREATE TABLE\nANALYZE\nCREATE FUNCTION\nCREATE FUNCTION\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------\n Delete on demo_non_partitioned (cost=0.00..29.43 rows=9 width=6) (actual\ntime=0.002..0.003 rows=0 loops=1)\n -> Seq Scan on demo_non_partitioned (cost=0.00..29.43 rows=9 width=6)\n(actual time=0.002..0.002 rows=0 loops=1)\n Filter: (key = 3)\n Planning Time: 0.046 ms\n Execution Time: 0.028 ms\n(5 rows)\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------\n Function Scan on del_from_non_partitioned_hardcoded (cost=0.05..0.06\nrows=1 width=4) (actual time=39.796..39.796 rows=1 loops=1)\n Planning Time: 0.010 ms\n Execution Time: 39.807 ms\n(3 rows)\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------\n Function Scan on del_from_non_partitioned_by_arg (cost=0.05..0.06 rows=1\nwidth=4) (actual time=45.723..45.723 rows=1 loops=1)\n Planning Time: 0.024 ms\n Execution Time: 45.734 ms\n(3 rows)\n\nAfter migrating to a partitioned table, I noticed that a performance-critical plpgsql function is a few times slower.Basically, the function takes a key as an argument, and performs SELECT, UPDATE and DELETE operations on tables partitioned by the key.I narrowed down the problem to the following: let's have an empty table \"demo\" with column \"key\", and two plpgsql functions that run \"DELETE FROM demo WHERE key = XYZ\" 10000 times in two flavours: one takes the key by argument, and in the other the key hardcoded. Here are the running times:- delete by hardcoded value from non-partitioned table: 39.807 ms- delete by argument from non-partitioned table: 45.734 ms- delete by hardcoded value from partitioned table: 47.101 ms- delete by argument from partitioned table: 295.748 msDeleting by argument from an empty partitioned table is 6 times slower! Why is it so? The number of partitions doesn't seem to be important. 
And deleting is just an example, SELECT behaves in the same way.

Sample code:

-- partitioned table
DROP TABLE IF EXISTS demo_partitioned;
CREATE TABLE demo_partitioned(key BIGINT, val BIGINT) PARTITION BY LIST (key);
DO $$
DECLARE i BIGINT;
BEGIN
  FOR i IN SELECT * FROM generate_series(1, 15) LOOP
    EXECUTE 'CREATE TABLE demo_partitioned_key_'|| i ||' PARTITION OF demo_partitioned FOR VALUES IN (' || i || ');';
  END LOOP;
END$$;
CREATE OR REPLACE FUNCTION del_from_partitioned_by_arg(k BIGINT) RETURNS VOID AS $$
DECLARE i BIGINT;
BEGIN
  FOR i IN SELECT * FROM generate_series(1, 10000) LOOP
    DELETE FROM demo_partitioned WHERE key = k;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION del_from_partitioned_hardcoded() RETURNS VOID AS $$
DECLARE i BIGINT;
BEGIN
  FOR i IN SELECT * FROM generate_series(1, 10000) LOOP
    DELETE FROM demo_partitioned WHERE key = 3;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
ANALYZE demo_partitioned;
EXPLAIN ANALYZE DELETE FROM demo_partitioned WHERE key = 3;
EXPLAIN ANALYZE SELECT * FROM del_from_partitioned_hardcoded();
EXPLAIN ANALYZE SELECT * FROM del_from_partitioned_by_arg(3);

-- non-partitioned table
DROP TABLE IF EXISTS demo_non_partitioned;
CREATE TABLE demo_non_partitioned(key BIGINT, val BIGINT);
ANALYZE demo_non_partitioned;
CREATE OR REPLACE FUNCTION del_from_non_partitioned_by_arg(k BIGINT) RETURNS VOID AS $$
DECLARE i BIGINT;
BEGIN
  FOR i IN SELECT * FROM generate_series(1, 10000) LOOP
    DELETE FROM demo_non_partitioned WHERE key = k;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION del_from_non_partitioned_hardcoded() RETURNS VOID AS $$
DECLARE i BIGINT;
BEGIN
  FOR i IN SELECT * FROM generate_series(1, 10000) LOOP
    DELETE FROM demo_non_partitioned WHERE key = 3;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
EXPLAIN ANALYZE DELETE FROM demo_non_partitioned WHERE key = 3;
EXPLAIN ANALYZE SELECT * FROM del_from_non_partitioned_hardcoded();
EXPLAIN ANALYZE SELECT * FROM del_from_non_partitioned_by_arg(3);

Output:

DROP TABLE
CREATE TABLE
DO
CREATE FUNCTION
CREATE FUNCTION
ANALYZE
                                                       QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 Delete on demo_partitioned  (cost=0.00..29.43 rows=9 width=6) (actual time=0.002..0.002 rows=0 loops=1)
   Delete on demo_partitioned_key_3
   ->  Seq Scan on demo_partitioned_key_3  (cost=0.00..29.43 rows=9 width=6) (actual time=0.001..0.001 rows=0 loops=1)
         Filter: (key = 3)
 Planning Time: 0.180 ms
 Execution Time: 0.069 ms
(6 rows)

                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Function Scan on del_from_partitioned_hardcoded  (cost=0.05..0.06 rows=1 width=4) (actual time=47.030..47.030 rows=1 loops=1)
 Planning Time: 0.020 ms
 Execution Time: 47.101 ms
(3 rows)

                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Function Scan on del_from_partitioned_by_arg  (cost=0.05..0.06 rows=1 width=4) (actual time=295.737..295.737 rows=1 loops=1)
 Planning Time: 0.023 ms
 Execution Time: 295.748 ms
(3 rows)

DROP TABLE
CREATE TABLE
ANALYZE
CREATE FUNCTION
CREATE FUNCTION
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Delete on demo_non_partitioned  (cost=0.00..29.43 rows=9 width=6) (actual time=0.002..0.003 rows=0 loops=1)
   ->  Seq Scan on demo_non_partitioned  (cost=0.00..29.43 rows=9 width=6) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (key = 3)
 Planning Time: 0.046 ms
 Execution Time: 0.028 ms
(5 rows)

                                                             QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Function Scan on del_from_non_partitioned_hardcoded  (cost=0.05..0.06 rows=1 width=4) (actual time=39.796..39.796 rows=1 loops=1)
 Planning Time: 0.010 ms
 Execution Time: 39.807 ms
(3 rows)

                                                            QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Function Scan on del_from_non_partitioned_by_arg  (cost=0.05..0.06 rows=1 width=4) (actual time=45.723..45.723 rows=1 loops=1)
 Planning Time: 0.024 ms
 Execution Time: 45.734 ms
(3 rows)",
"msg_date": "Thu, 16 Jan 2020 14:21:57 +0100",
"msg_from": "=?UTF-8?Q?Marcin_Barczy=C5=84ski?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries in plpgsql are 6 times slower on partitioned tables"
}
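
A minimal sketch of one commonly suggested workaround for the slowdown shown above: run the statement through dynamic SQL so each call is planned with the actual key value and only the matching partition ends up in the plan, instead of reusing a cached generic plan. The function name is invented here, the approach is not taken from the thread itself, and it assumes the demo_partitioned table from the post.

CREATE OR REPLACE FUNCTION del_from_partitioned_dynamic(k BIGINT) RETURNS VOID AS $$
DECLARE i BIGINT;
BEGIN
  FOR i IN SELECT * FROM generate_series(1, 10000) LOOP
    -- EXECUTE ... USING plans the statement for this call only, with the real
    -- value of k, so partition pruning can happen at plan time.
    EXECUTE 'DELETE FROM demo_partitioned WHERE key = $1' USING k;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

Whether this wins depends on how expensive re-planning is relative to the per-call overhead, so it is worth measuring with the same EXPLAIN ANALYZE harness as above. On PostgreSQL 12 and later, SET plan_cache_mode = force_custom_plan has a similar effect without rewriting the function.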
] |
[
{
"msg_contents": "Hello List, I'm Cosmin. This is my first post and I'll get right down to\nthe problem. I'm using Postgresql 10 (because that's what's installed by\ndefault on Ubuntu 18.04):\n\nexplain analyze\n select R, C, V from LBD\n where Ver = 92 and Id in (10,11)\n\nIndex Scan using \"IX_LBD_Ver_Id\" on \"LBD\" (cost=0.56..2.37 rows=1\nwidth=13) (actual time=0.063..857.725 rows=2 loops=1)\n Index Cond: (\"Ver\" = 92)\n Filter: (\"Id\" = ANY ('{10,11}'::integer[]))\n Rows Removed by Filter: 1869178\nPlanning time: 0.170 ms\nExecution time: 857.767 ms\n\nThe IX_LBD_Ver_Id index is on two columns (Ver, Id) - it's not in \"Ver\"\nalone!\n\nSomehow the query planner thinks that scanning the index on \"Ver\" alone\nshould only return 1 record. The problem is that there are, on average,\nmillions of records for each \"Ver\"!\nThe current query is not my real query: the original problem was with a\nJOIN. I boiled it down to this simple query because it shows the same\nproblem: when dealing with more then one \"Id\" the planner scans on \"Ver\"\nand filters on \"Id\". Running the query with a single \"Id\" does use the\nindex on both columns and the query finishes in only 0.7 ms (one thousand\ntimes faster)\nThe planner doesn't always get it this bad. The original JOIN usually runs\ninstantaneously. Unless the planner gets into it's current funk and then\nthe original JOIN never finishes.\n\n- I've reindexed the whole database\n- I ran ANALYZE on all tables\n- I checked \"pg_stats\", here are the stats:\n\nselect attname, null_frac, avg_width, n_distinct, correlation from pg_stats\nwhere tablename = 'LBD' and (attname in ('Id', 'Ver'))\nattname null_frac acg_width n_distinct correlation\nId 0 4 2029846 0.0631249\nVer 0 2 22 0.624823\n\nAccording to data from \"pg_stats\" the query planner should know that\nscanning the \"LBD\" table has on average millions of records per \"Ver\".\nThe fact that this works right most of the time tells me I'm dealing with\nsome kind of statistical information (something along the lines of\nn_distinct from pg_stat) and gives me hope. Once I know why the planner\ngets this wrong I should be able to make it right.\n\nPlease point me in the right direction. Where should I look, what should I\ntry?\n\nThank you,\nCosmin\n\nHello List, I'm Cosmin. This is my first post and I'll get right down to the problem. I'm using Postgresql 10 (because that's what's installed by default on Ubuntu 18.04):explain analyze select R, C, V from LBD where Ver = 92 and Id in (10,11)Index Scan using \"IX_LBD_Ver_Id\" on \"LBD\" (cost=0.56..2.37 rows=1 width=13) (actual time=0.063..857.725 rows=2 loops=1) Index Cond: (\"Ver\" = 92) Filter: (\"Id\" = ANY ('{10,11}'::integer[])) Rows Removed by Filter: 1869178Planning time: 0.170 msExecution time: 857.767 msThe \n\nIX_LBD_Ver_Id index is on two columns (Ver, Id) - it's not in \"Ver\" alone!Somehow the query planner thinks that scanning the index on \"Ver\" alone should only return 1 record. The problem is that there are, on average, millions of records for each \"Ver\"!The current query is not my real query: the original problem was with a JOIN. I boiled it down to this simple query because it shows the same problem: when dealing with more then one \"Id\" the planner scans on \"Ver\" and filters on \"Id\". Running the query with a single \"Id\" does use the index on both columns and the query finishes in only 0.7 ms (one thousand times faster)The planner doesn't always get it this bad. The original JOIN usually runs instantaneously. 
Unless the planner gets into it's current funk and then the original JOIN never finishes.- I've reindexed the whole database- I ran ANALYZE on all tables- I checked \"pg_stats\", here are the stats:select attname, null_frac, avg_width, n_distinct, correlation from pg_stats where tablename = 'LBD' and (attname in ('Id', 'Ver'))attname null_frac acg_width n_distinct correlationId 0 4 2029846 0.0631249Ver 0 2 22 0.624823According to data from \"pg_stats\" the query planner should know that scanning the \"LBD\" table has on average millions of records per \"Ver\".The fact that this works right most of the time tells me I'm dealing with some kind of statistical information (something along the lines of n_distinct from pg_stat) and gives me hope. Once I know why the planner gets this wrong I should be able to make it right.Please point me in the right direction. Where should I look, what should I try?Thank you,Cosmin",
"msg_date": "Thu, 16 Jan 2020 16:06:06 +0200",
"msg_from": "Cosmin Prund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad query plan decision when using multiple column index - postgresql\n uses only first column then filters"
},
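
The plan above estimates rows=1 for the Ver = 92 condition while the filter actually removes 1869178 rows, so one quick check is to compare the planner's estimate for the Ver predicate on its own against a real count. A sketch using the redacted names from the post:

EXPLAIN SELECT R, C, V FROM LBD WHERE Ver = 92;   -- compare the estimated rows here...
SELECT count(*) FROM LBD WHERE Ver = 92;          -- ...with the actual number of rows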
{
"msg_contents": "Cosmin Prund <[email protected]> writes:\n> explain analyze\n> select R, C, V from LBD\n> where Ver = 92 and Id in (10,11)\n\n> Index Scan using \"IX_LBD_Ver_Id\" on \"LBD\" (cost=0.56..2.37 rows=1\n> width=13) (actual time=0.063..857.725 rows=2 loops=1)\n> Index Cond: (\"Ver\" = 92)\n> Filter: (\"Id\" = ANY ('{10,11}'::integer[]))\n> Rows Removed by Filter: 1869178\n> Planning time: 0.170 ms\n> Execution time: 857.767 ms\n\n> The IX_LBD_Ver_Id index is on two columns (Ver, Id) - it's not in \"Ver\"\n> alone!\n\nSeems like an odd choice of plan, then, but you haven't provided any\ndetail that would let anyone guess why it's not using the second index\ncolumn. For starters it would be good to show the exact table and\nindex schema (eg via \\d+ in psql). Also, does explicitly ANALYZE'ing\nthe table change anything?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 10:11:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
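
For readers following along, the information Tom asks for can be gathered in psql roughly as below (again using the redacted names from the first post):

\d+ LBD
ANALYZE LBD;
EXPLAIN (ANALYZE, BUFFERS) SELECT R, C, V FROM LBD WHERE Ver = 92 AND Id IN (10, 11);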
{
"msg_contents": "Does the behavior change with different values of Ver column? I'd be\ncurious of the fraction in the MCVs frequency list in stats indicates that\nrows with Ver = 92 are rare and therefore the index on only Ver column is\nsufficient to find the rows quickly. What is reltuples for this table by\nthe way?\n\nI also wonder if the situation may be helped by re-indexing the \"index on\nboth columns\" to remove any chance of issues on bloat in the index. Which\norder are the columns by the way? If Ver is first, is there also an index\non only id column?. Since you aren't on v12, you don't get to re-index\nconcurrently but I assume you know the work around of create concurrently\n(different name), drop concurrently (old one), and finally rename new index.\n\nDoes the behavior change with different values of Ver column? I'd be curious of the fraction in the MCVs frequency list in stats indicates that rows with Ver = 92 are rare and therefore the index on only Ver column is sufficient to find the rows quickly. What is reltuples for this table by the way?I also wonder if the situation may be helped by re-indexing the \"index on both columns\" to remove any chance of issues on bloat in the index. Which order are the columns by the way? If Ver is first, is there also an index on only id column?. Since you aren't on v12, you don't get to re-index concurrently but I assume you know the work around of create concurrently (different name), drop concurrently (old one), and finally rename new index.",
"msg_date": "Thu, 16 Jan 2020 09:59:40 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
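
The pre-v12 re-index workaround Michael describes, spelled out as a sketch; the new index name is invented and the identifiers are the redacted ones from the first post, so adjust to the real schema:

CREATE INDEX CONCURRENTLY "IX_LBD_Ver_Id_new" ON LBD (Ver, Id);
DROP INDEX CONCURRENTLY "IX_LBD_Ver_Id";
ALTER INDEX "IX_LBD_Ver_Id_new" RENAME TO "IX_LBD_Ver_Id";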
{
"msg_contents": "Hi Tom, and thanks.\n\nRunning ANALYZE doesn't change a thing. REINDEXING doesn't change a thing.\nI know it's an odd choice of plan - that's why I'm here!\n\nI thought I'd just post what felt relevant, hoping it's not something out\nof the ordinary and I'm just missing something obvious.\nHere's lots of data:\n\n===========================================================================\n\nSELECT version()\nPostgreSQL 10.10 (Ubuntu 10.10-0ubuntu0.18.04.1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit\n\n===========================================================================\n\n \\d \"LucrareBugetDate\"\n Table\n\"public.LucrareBugetDate\"\n Column | Type | Collation | Nullable\n| Default\n----------------------------+-----------------------+-----------+----------+------------------------------------------------------------------\n OrdonatorPrincipalId | uuid | | |\n UnitateSubordonataId | uuid | | |\n CentralizatorSelectiv | text | | |\n IdRand | character varying(32) | | |\n IdColoana | character varying(32) | | |\n ClasEc | character varying(32) | | |\n CodSector | character varying(4) | | |\n CodSursa | character varying(4) | | |\n Paragraf | character varying(16) | | |\n Venit | character varying(16) | | |\n FelValoare | integer | | not null |\n Valoare | numeric | | not null |\n RangOperator | integer | | not null |\n OrdineCalcul | integer | | not null |\n ConflictFormuleAlternative | boolean | | not null\n| false\n Sectiune | integer | | |\n RefColoana | text | | |\n RefDocument | text | | |\n RefLinie | text | | |\n SeqModificare | integer | | not null\n| 0\n LucrareBugetDateId | integer | | not null\n| nextval('\"LucrareBugetDate_LucrareBugetDateIdV2_seq\"'::regclass)\n LucrareBugetVersiuneId | smallint | | not null |\n CentralizatorSelectivId | uuid | | |\n Stil | text | | |\n ValoareArhivata | boolean | | |\nIndexes:\n \"PK_LucrareBugetDate\" PRIMARY KEY, btree (\"LucrareBugetVersiuneId\",\n\"LucrareBugetDateId\")\n \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" btree\n(\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\")\nForeign-key constraints:\n \"FK_LucrareBugetDate_LucrareBugetVersiune_LucrareBugetVersiuneId\"\nFOREIGN KEY (\"LucrareBugetVersiuneId\") REFERENCES\n\"LucrareBugetVersiune\"(\"LucrareBugetVersiuneId\") ON DELETE CASCADE\n\n===========================================================================\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\nrelname='LucrareBugetDate';\n relname | relpages | reltuples | relallvisible | relkind |\nrelnatts | relhassubclass | reloptions | pg_table_size\n------------------+----------+-------------+---------------+---------+----------+----------------+-----------------+---------------\n LucrareBugetDate | 2659660 | 4.17124e+07 | 671510 | r |\n 25 | f | {fillfactor=50} | 21793775616\n(1 row)\n\n===========================================================================\n\nDoes the table have anything unusual about it?\n\n - contains large objects: NO\n - has a large proportion of NULLs in several columns: NO\n - receives a large number of UPDATEs or DELETEs regularly: YES - Lots of\n UPDATES but no UPDATES to indexed columns. No DELETE's.\n - is growing rapidly: I'm inserting millions of records at once but not\n very often. 
Have manually done ANALYZE and REINDEX\n - has many indexes on it: NO\n - uses triggers that may be executing database functions, or is calling\n functions directly: NO\n\n\n\n===========================================================================\nEXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from\n\"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 92) and\n(\"LucrareBugetDateId\" in (10,11));\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using\n\"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" on\n\"LucrareBugetDate\" (cost=0.56..2.37 rows=1 width=13) (actual\ntime=0.096..978.398 rows=2 loops=1)\n Index Cond: (\"LucrareBugetVersiuneId\" = 92)\n Filter: (\"LucrareBugetDateId\" = ANY ('{10,11}'::integer[]))\n Rows Removed by Filter: 1869178\n Buffers: shared hit=161178\n Planning time: 0.699 ms\n Execution time: 978.433 ms\n\n===========================================================================\nWas this query always slow, or has it gotten slower over time? If the\nplan/execution of the query used to be different, do you have copies of\nthose query plans? Has anything changed in your database other than the\naccumulation of data?\nThe query is usually instantaneous.\nHere's the same query ran o a different server running the same database\nwith comparable data (COLD server, frist run! The second run has execution\ntime = 0.040ms):\n\n EXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from\n\"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 92) and\n(\"LucrareBugetDateId\" in (10,11));\n\nIndex Scan using \"PK_LucrareBugetDate\" on \"LucrareBugetDate\"\n (cost=0.56..4.85 rows=2 width=13) (actual time=22.922..23.123 rows=2\nloops=1)\n Index Cond: ((\"LucrareBugetVersiuneId\" = 92) AND (\"LucrareBugetDateId\" =\nANY ('{10,11}'::integer[])))\n Buffers: shared hit=12 read=4\nPlanning time: 66.743 ms\nExecution time: 23.190 ms\n\n===========================================================================\n\nHardware:\n\n2 x Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz, 128 GBhz, Local ZFS-based\nstorage built from 4 x NVME SSD drives.\nI doubt it's hardware related.\n\n===========================================================================\n\nSELECT * FROM pg_stat_user_tables WHERE relname='table_name';\n relid | schemaname | relname | seq_scan | seq_tup_read |\nidx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del |\nn_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze |\n last_vacuum | last_autovacuum | last_analyze |\n last_autoanalyze | vacuum_count | autovacuum_count |\nanalyze_count | autoanalyze_count\n-------+------------+------------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+-------------------------------+-----------------+-------------------------------+-------------------------------+--------------+------------------+---------------+-------------------\n 20655 | public | LucrareBugetDate | 306 | 7765749768 | 8398680\n| 983464378904 | 58388025 | 2944618 | 16675590 | 2887093 |\n41712435 | 61524 | 2588381 | 2019-11-03 19:15:58.765546+00\n| | 2020-01-15 16:11:26.301756+00 | 2019-12-20\n10:12:53.737619+00 | 1 | 0 | 40 |\n 
12\n\n\n===========================================================================\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\ntablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE attname='LucrareBugetDateId' AND\ntablename='LucrareBugetDate' ORDER BY 1 DESC;\n\n frac_mcv | tablename | attname | inherited | null_frac\n| n_distinct | n_mcv | n_hist | correlation\n------------+------------------+--------------------+-----------+-----------+-------------+-------+--------+-------------\n 0.00666667 | LucrareBugetDate | LucrareBugetDateId | f | 0\n| 2.02985e+06 | 100 | 101 | 0.0631249\n\n\n===========================================================================\n\n\n SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\ntablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE attname='LucrareBugetVersiuneId'\nAND tablename='LucrareBugetDate' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | inherited |\nnull_frac | n_distinct | n_mcv | n_hist | correlation\n----------+------------------+------------------------+-----------+-----------+------------+-------+--------+-------------\n 1 | LucrareBugetDate | LucrareBugetVersiuneId | f |\n0 | 22 | 22 | | 0.624823\n(1 row)\n\n\n ===========================================================================\n\nI think I went through most of\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\nI can provide more information if helpful.\n\nOn Thu, 16 Jan 2020 at 17:11, Tom Lane <[email protected]> wrote:\n\n> Cosmin Prund <[email protected]> writes:\n> > explain analyze\n> > select R, C, V from LBD\n> > where Ver = 92 and Id in (10,11)\n>\n> > Index Scan using \"IX_LBD_Ver_Id\" on \"LBD\" (cost=0.56..2.37 rows=1\n> > width=13) (actual time=0.063..857.725 rows=2 loops=1)\n> > Index Cond: (\"Ver\" = 92)\n> > Filter: (\"Id\" = ANY ('{10,11}'::integer[]))\n> > Rows Removed by Filter: 1869178\n> > Planning time: 0.170 ms\n> > Execution time: 857.767 ms\n>\n> > The IX_LBD_Ver_Id index is on two columns (Ver, Id) - it's not in \"Ver\"\n> > alone!\n>\n> Seems like an odd choice of plan, then, but you haven't provided any\n> detail that would let anyone guess why it's not using the second index\n> column. For starters it would be good to show the exact table and\n> index schema (eg via \\d+ in psql). Also, does explicitly ANALYZE'ing\n> the table change anything?\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> regards, tom lane\n>\n\nHi Tom, and thanks.Running ANALYZE doesn't change a thing. REINDEXING doesn't change a thing. 
I know it's an odd choice of plan - that's why I'm here!I thought I'd just post what felt relevant, hoping it's not something out of the ordinary and I'm just missing something obvious.Here's lots of data:===========================================================================SELECT version()PostgreSQL 10.10 (Ubuntu 10.10-0ubuntu0.18.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit\n\n===========================================================================\n\n \\d \"LucrareBugetDate\" Table \"public.LucrareBugetDate\" Column | Type | Collation | Nullable | Default----------------------------+-----------------------+-----------+----------+------------------------------------------------------------------ OrdonatorPrincipalId | uuid | | | UnitateSubordonataId | uuid | | | CentralizatorSelectiv | text | | | IdRand | character varying(32) | | | IdColoana | character varying(32) | | | ClasEc | character varying(32) | | | CodSector | character varying(4) | | | CodSursa | character varying(4) | | | Paragraf | character varying(16) | | | Venit | character varying(16) | | | FelValoare | integer | | not null | Valoare | numeric | | not null | RangOperator | integer | | not null | OrdineCalcul | integer | | not null | ConflictFormuleAlternative | boolean | | not null | false Sectiune | integer | | | RefColoana | text | | | RefDocument | text | | | RefLinie | text | | | SeqModificare | integer | | not null | 0 LucrareBugetDateId | integer | | not null | nextval('\"LucrareBugetDate_LucrareBugetDateIdV2_seq\"'::regclass) LucrareBugetVersiuneId | smallint | | not null | CentralizatorSelectivId | uuid | | | Stil | text | | | ValoareArhivata | boolean | | |Indexes: \"PK_LucrareBugetDate\" PRIMARY KEY, btree (\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\") \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" btree (\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\")Foreign-key constraints: \"FK_LucrareBugetDate_LucrareBugetVersiune_LucrareBugetVersiuneId\" FOREIGN KEY (\"LucrareBugetVersiuneId\") REFERENCES \"LucrareBugetVersiune\"(\"LucrareBugetVersiuneId\") ON DELETE CASCADE\n\n===========================================================================\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='LucrareBugetDate'; relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size------------------+----------+-------------+---------------+---------+----------+----------------+-----------------+--------------- LucrareBugetDate | 2659660 | 4.17124e+07 | 671510 | r | 25 | f | {fillfactor=50} | 21793775616(1 row)\n\n===========================================================================\n\nDoes the table have anything unusual about it?contains large objects: NOhas a large proportion of NULLs in several columns: NOreceives a large number of UPDATEs or DELETEs regularly: YES - Lots of UPDATES but no UPDATES to indexed columns. No DELETE's.is growing rapidly: I'm inserting millions of records at once but not very often. 
Have manually done ANALYZE and REINDEXhas many indexes on it: NOuses triggers that may be executing database functions, or is calling functions directly: NO =========================================================================== EXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from \"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 92) and (\"LucrareBugetDateId\" in (10,11)); QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" on \"LucrareBugetDate\" (cost=0.56..2.37 rows=1 width=13) (actual time=0.096..978.398 rows=2 loops=1) Index Cond: (\"LucrareBugetVersiuneId\" = 92) Filter: (\"LucrareBugetDateId\" = ANY ('{10,11}'::integer[])) Rows Removed by Filter: 1869178 Buffers: shared hit=161178 Planning time: 0.699 ms Execution time: 978.433 ms===========================================================================Was this query always slow, or has it gotten slower over time? If the plan/execution of the query used to be different, do you have copies of those query plans? Has anything changed in your database other than the accumulation of data? The query is usually instantaneous. Here's the same query ran o a different server running the same database with comparable data (COLD server, frist run! The second run has execution time = 0.040ms): EXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from \"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 92) and (\"LucrareBugetDateId\" in (10,11)); Index Scan using \"PK_LucrareBugetDate\" on \"LucrareBugetDate\" (cost=0.56..4.85 rows=2 width=13) (actual time=22.922..23.123 rows=2 loops=1) Index Cond: ((\"LucrareBugetVersiuneId\" = 92) AND (\"LucrareBugetDateId\" = ANY ('{10,11}'::integer[]))) Buffers: shared hit=12 read=4Planning time: 66.743 msExecution time: 23.190 ms===========================================================================Hardware: 2 x Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz, 128 GBhz, Local ZFS-based storage built from 4 x NVME SSD drives.I doubt it's hardware related.===========================================================================SELECT * FROM pg_stat_user_tables WHERE relname='table_name'; relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count-------+------------+------------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+-------------------------------+-----------------+-------------------------------+-------------------------------+--------------+------------------+---------------+------------------- 20655 | public | LucrareBugetDate | 306 | 7765749768 | 8398680 | 983464378904 | 58388025 | 2944618 | 16675590 | 2887093 | 41712435 | 61524 | 2588381 | 2019-11-03 19:15:58.765546+00 | | 2020-01-15 16:11:26.301756+00 | 2019-12-20 10:12:53.737619+00 | 1 | 0 | 40 | 12 =========================================================================== SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, 
array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='LucrareBugetDateId' AND tablename='LucrareBugetDate' ORDER BY 1 DESC; frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation------------+------------------+--------------------+-----------+-----------+-------------+-------+--------+------------- 0.00666667 | LucrareBugetDate | LucrareBugetDateId | f | 0 | 2.02985e+06 | 100 | 101 | 0.0631249 \n\n =========================================================================== \n\n SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='LucrareBugetVersiuneId' AND tablename='LucrareBugetDate' ORDER BY 1 DESC; frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation----------+------------------+------------------------+-----------+-----------+------------+-------+--------+------------- 1 | LucrareBugetDate | LucrareBugetVersiuneId | f | 0 | 22 | 22 | | 0.624823(1 row) =========================================================================== I think I went through most of https://wiki.postgresql.org/wiki/SlowQueryQuestionsI can provide more information if helpful. On Thu, 16 Jan 2020 at 17:11, Tom Lane <[email protected]> wrote:Cosmin Prund <[email protected]> writes:\n> explain analyze\n> select R, C, V from LBD\n> where Ver = 92 and Id in (10,11)\n\n> Index Scan using \"IX_LBD_Ver_Id\" on \"LBD\" (cost=0.56..2.37 rows=1\n> width=13) (actual time=0.063..857.725 rows=2 loops=1)\n> Index Cond: (\"Ver\" = 92)\n> Filter: (\"Id\" = ANY ('{10,11}'::integer[]))\n> Rows Removed by Filter: 1869178\n> Planning time: 0.170 ms\n> Execution time: 857.767 ms\n\n> The IX_LBD_Ver_Id index is on two columns (Ver, Id) - it's not in \"Ver\"\n> alone!\n\nSeems like an odd choice of plan, then, but you haven't provided any\ndetail that would let anyone guess why it's not using the second index\ncolumn. For starters it would be good to show the exact table and\nindex schema (eg via \\d+ in psql). Also, does explicitly ANALYZE'ing\nthe table change anything?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n regards, tom lane",
"msg_date": "Thu, 16 Jan 2020 19:18:24 +0200",
"msg_from": "Cosmin Prund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
{
"msg_contents": "Michael Lewis <[email protected]> writes:\n> Does the behavior change with different values of Ver column?\n\nBy and large, indxpath.c will just add all qual clauses that match\nan index to the indexscan's conditions --- there's no attempt to\ndecide that some of them might not be worth it on cost grounds.\nSo I'd be pretty surprised if altering the Ver constant made any\ndifference. My money is on there being some reason why the IN\nclause doesn't match the index, perhaps a type mismatch. Without\nseeing the table schema, and the exact query, it's hard to say what\nthat reason is. (I'll not insult your intelligence by saying how\nI know that the OP didn't just copy-and-paste that query.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 12:27:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
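
As an illustration of the kind of type mismatch Tom has in mind (a self-contained toy case, not the OP's schema): when the IN list is resolved at a different type than the column, the cast lands on the column and the clause can no longer be matched to the btree index.

CREATE TABLE tm_demo (ver int NOT NULL, id numeric NOT NULL, PRIMARY KEY (ver, id));
INSERT INTO tm_demo SELECT v, i FROM generate_series(1, 100) v, generate_series(1, 1000) i;
ANALYZE tm_demo;
-- The constants resolve as numeric, so both clauses can become index conditions.
EXPLAIN SELECT * FROM tm_demo WHERE ver = 92 AND id IN (10, 11);
-- The float8 array forces id to be cast to float8, so only ver = 92 is usable as
-- an index condition and the ANY(...) clause is applied as a filter.
EXPLAIN SELECT * FROM tm_demo WHERE ver = 92 AND id = ANY ('{10,11}'::float8[]);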
{
"msg_contents": "Hello Michael and hello again Tom, sorry for mailing you directly. I just\nhit Reply in gmail - I expected the emails to have a reply-to=Pgsql.\nApparently they do not.\n\nRunning the same query with a different \"Ver\" produces a proper plan.\nHere's a non-redacted example (Ver=91):\n\nEXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from\n\"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 91) and\n(\"LucrareBugetDateId\" in (10,11));\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using\n\"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" on\n\"LucrareBugetDate\" (cost=0.56..4.95 rows=2 width=13) (actual\ntime=3.617..3.631 rows=2 loops=1)\n Index Cond: ((\"LucrareBugetVersiuneId\" = 91) AND (\"LucrareBugetDateId\" =\nANY ('{10,11}'::integer[])))\n Buffers: shared hit=9 read=3\n Planning time: 0.223 ms\n Execution time: 3.663 ms\n(5 rows)\n\nI have reindex everything, not just this INDEX.\n\n\"reltuples\" for this table is 41712436.\n\n> I'd be curious of the fraction in the MCVs frequency list in stats\nindicates that rows with Ver = 92 are rare and therefore the index on\nonly Ver column is sufficient to find the rows quickly.\n\nThere are 25 valid values for \"Ver\" in this database. I ran the query for\nall of them. The only one miss-behaving is \"92\". I ran the query with\nrandom values for Ver (invalid values), the query plan always attempts to\nuse the index using both values.\nI looked into \"most_common_values\" in pg_stats, this value (92) is not in\nthat list.\nFinally I ran \"ANALYZE\" again and now the problem went away. Running the\nquery with Ver=92 uses the proper plan. I'm not happy with this - I know I\nhaven't solved the problem (I've ran ANALYZE multiple times before).\n\n\nOn Thu, 16 Jan 2020 at 19:00, Michael Lewis <[email protected]> wrote:\n\n> Does the behavior change with different values of Ver column? I'd be\n> curious of the fraction in the MCVs frequency list in stats indicates that\n> rows with Ver = 92 are rare and therefore the index on only Ver column is\n> sufficient to find the rows quickly. What is reltuples for this table by\n> the way?\n>\n> I also wonder if the situation may be helped by re-indexing the \"index on\n> both columns\" to remove any chance of issues on bloat in the index. Which\n> order are the columns by the way? If Ver is first, is there also an index\n> on only id column?. Since you aren't on v12, you don't get to re-index\n> concurrently but I assume you know the work around of create concurrently\n> (different name), drop concurrently (old one), and finally rename new index.\n>\n\nHello Michael and hello again Tom, sorry for mailing you directly. I just hit Reply in gmail - I expected the emails to have a reply-to=Pgsql. Apparently they do not.Running the same query with a different \"Ver\" produces a proper plan. 
Here's a non-redacted example (Ver=91):EXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from \"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 91) and (\"LucrareBugetDateId\" in (10,11)); QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" on \"LucrareBugetDate\" (cost=0.56..4.95 rows=2 width=13) (actual time=3.617..3.631 rows=2 loops=1) Index Cond: ((\"LucrareBugetVersiuneId\" = 91) AND (\"LucrareBugetDateId\" = ANY ('{10,11}'::integer[]))) Buffers: shared hit=9 read=3 Planning time: 0.223 ms Execution time: 3.663 ms(5 rows)I have reindex everything, not just this INDEX.\"reltuples\" for this table is 41712436.> I'd be curious of the fraction in the MCVs frequency list in stats indicates that rows with Ver = 92 are rare and therefore the index on only Ver column is sufficient to find the rows quickly.There are 25 valid values for \"Ver\" in this database. I ran the query for all of them. The only one miss-behaving is \"92\". I ran the query with random values for Ver (invalid values), the query plan always attempts to use the index using both values.I looked into \"most_common_values\" in pg_stats, this value (92) is not in that list.Finally I ran \"ANALYZE\" again and now the problem went away. Running the query with Ver=92 uses the proper plan. I'm not happy with this - I know I haven't solved the problem (I've ran ANALYZE multiple times before).On Thu, 16 Jan 2020 at 19:00, Michael Lewis <[email protected]> wrote:Does the behavior change with different values of Ver column? I'd be curious of the fraction in the MCVs frequency list in stats indicates that rows with Ver = 92 are rare and therefore the index on only Ver column is sufficient to find the rows quickly. What is reltuples for this table by the way?I also wonder if the situation may be helped by re-indexing the \"index on both columns\" to remove any chance of issues on bloat in the index. Which order are the columns by the way? If Ver is first, is there also an index on only id column?. Since you aren't on v12, you don't get to re-index concurrently but I assume you know the work around of create concurrently (different name), drop concurrently (old one), and finally rename new index.",
"msg_date": "Thu, 16 Jan 2020 20:15:09 +0200",
"msg_from": "Cosmin Prund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
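
To see how common version 92 really is, compared with what the statistics sampled, the actual distribution can be counted directly; a sketch using the real names from Cosmin's earlier reply (the GROUP BY is a full scan, so it will be slow on a 40M-row table):

SELECT "LucrareBugetVersiuneId", count(*)
FROM "LucrareBugetDate"
GROUP BY 1
ORDER BY 2 DESC;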
{
"msg_contents": "On Thu, 2020-01-16 at 19:18 +0200, Cosmin Prund wrote:\n> Indexes:\n> \"PK_LucrareBugetDate\" PRIMARY KEY, btree (\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\")\n> \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" btree (\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\")\n> Foreign-key constraints:\n> \"FK_LucrareBugetDate_LucrareBugetVersiune_LucrareBugetVersiuneId\" FOREIGN KEY (\"LucrareBugetVersiuneId\") REFERENCES \"LucrareBugetVersiune\"(\"LucrareBugetVersiuneId\") ON DELETE CASCADE\n> \n> EXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from \"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 92) and (\"LucrareBugetDateId\" in (10,11));\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" on \"LucrareBugetDate\" (cost=0.56..2.37 rows=1 width=13) (actual time=0.096..978.398 rows=2 loops=1)\n> Index Cond: (\"LucrareBugetVersiuneId\" = 92)\n> Filter: (\"LucrareBugetDateId\" = ANY ('{10,11}'::integer[]))\n> Rows Removed by Filter: 1869178\n> Buffers: shared hit=161178\n> Planning time: 0.699 ms\n> Execution time: 978.433 ms\n\nWell, what should the poor thing do?\nThere is no index on \"LucrareBugetDateId\".\n\nRather, you have two indexes on (\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\"),\none of which should be dropped.\n\nTry with an index on \"LucrareBugetDateId\".\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 16 Jan 2020 19:20:12 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
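
Spelled out, Laurenz's suggestion looks roughly like this; the new index name is invented here, and CONCURRENTLY is used so writers are not blocked on a 40M-row table:

DROP INDEX "IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId";  -- duplicate of the primary key
CREATE INDEX CONCURRENTLY "IX_LucrareBugetDate_LucrareBugetDateId"
    ON "LucrareBugetDate" ("LucrareBugetDateId");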
{
"msg_contents": "Cosmin Prund <[email protected]> writes:\n> I know it's an odd choice of plan - that's why I'm here!\n\nIndeed. I cannot reproduce it here on 10.11:\n\nregression=# create table bb(f1 smallint, f2 serial, primary key(f1,f2));\nCREATE TABLE\nregression=# explain select * from bb where f1 = 92 and f2 in (10,11);\n QUERY PLAN \n-----------------------------------------------------------------------\n Index Only Scan using bb_pkey on bb (cost=0.15..8.34 rows=1 width=6)\n Index Cond: ((f1 = 92) AND (f2 = ANY ('{10,11}'::integer[])))\n(2 rows)\n\nAs I said before, as long as it chooses an indexscan at all, I wouldn't\nexpect variation in what clauses it chooses to use with the index.\nSo I don't see why this trivial example doesn't replicate your result.\n\nIf you try exactly the above on your database, do you get my result,\nor a plan more like yours?\n\nI wonder if you have some extension installed that's causing the\noperators to be interpreted differently.\n\nBTW, why do you have two identical indexes on the table?\n\n> Indexes:\n> \"PK_LucrareBugetDate\" PRIMARY KEY, btree (\"LucrareBugetVersiuneId\",\n> \"LucrareBugetDateId\")\n> \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" btree\n> (\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\")\n\nThat shouldn't be affecting this either, but it seems wasteful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 13:23:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
{
"msg_contents": "\"Finally I ran \"ANALYZE\" again and now the problem went away. Running the\nquery with Ver=92 uses the proper plan. I'm not happy with this - I know I\nhaven't solved the problem (I've ran ANALYZE multiple times before).\"\n\nDoes 92 appear in MCVs list with that new sampling? I wonder if\ndefault_statistics_target should be increased a bit to help ensure a\nthorough sample of the data in this table. Note- don't go too high (maybe\n250, not 1000) or planning time can increase significantly. Also, perhaps\nonly increase on this Ver column.\n\nWhat is the real frequency of value 92? With default_statistics_target =\n100, analyze takes 100*300 rows as sample, and if it is missed in that 30k\nrows set, or very very small when in fact it has equal weight with other\nvalues, then during planning time it is expected to be very very rare when\nin fact it is only slightly less common than the others in the list. If the\nothers in the list are expected to be 100% of the table as you showed with\nthe query to compute \"frac_MCV\" from pg_stats for that column, then perhaps\nthe optimizer is wise to scan only the LucrareBugetVersiuneId column of the\ncomposite index and filter in memory.\n\nCurious, when you get bad plans (re-analyze the table repeatedly to get new\nsamples until the wrong plan is chosen), what does PG estimate for total\nrows returned with ONLY LucrareBugetVersiuneId = 92 as the where condition?\n\nNote- Tom & Laurenz are real experts. I might have no idea what I am doing\nyet. It is too early to say.\n\nOn Thu, Jan 16, 2020 at 11:15 AM Cosmin Prund <[email protected]> wrote:\n\n> Hello Michael and hello again Tom, sorry for mailing you directly. I just\n> hit Reply in gmail - I expected the emails to have a reply-to=Pgsql.\n> Apparently they do not.\n>\n> Running the same query with a different \"Ver\" produces a proper plan.\n> Here's a non-redacted example (Ver=91):\n>\n> EXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from\n> \"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 91) and\n> (\"LucrareBugetDateId\" in (10,11));\n>\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using\n> \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" on\n> \"LucrareBugetDate\" (cost=0.56..4.95 rows=2 width=13) (actual\n> time=3.617..3.631 rows=2 loops=1)\n> Index Cond: ((\"LucrareBugetVersiuneId\" = 91) AND (\"LucrareBugetDateId\"\n> = ANY ('{10,11}'::integer[])))\n> Buffers: shared hit=9 read=3\n> Planning time: 0.223 ms\n> Execution time: 3.663 ms\n> (5 rows)\n>\n> I have reindex everything, not just this INDEX.\n>\n> \"reltuples\" for this table is 41712436.\n>\n> > I'd be curious of the fraction in the MCVs frequency list in stats\n> indicates that rows with Ver = 92 are rare and therefore the index on\n> only Ver column is sufficient to find the rows quickly.\n>\n> There are 25 valid values for \"Ver\" in this database. I ran the query for\n> all of them. The only one miss-behaving is \"92\". I ran the query with\n> random values for Ver (invalid values), the query plan always attempts to\n> use the index using both values.\n> I looked into \"most_common_values\" in pg_stats, this value (92) is not in\n> that list.\n> Finally I ran \"ANALYZE\" again and now the problem went away. Running the\n> query with Ver=92 uses the proper plan. 
I'm not happy with this - I know I\n> haven't solved the problem (I've ran ANALYZE multiple times before).\n>\n>\n> On Thu, 16 Jan 2020 at 19:00, Michael Lewis <[email protected]> wrote:\n>\n>> Does the behavior change with different values of Ver column? I'd be\n>> curious of the fraction in the MCVs frequency list in stats indicates that\n>> rows with Ver = 92 are rare and therefore the index on only Ver column is\n>> sufficient to find the rows quickly. What is reltuples for this table by\n>> the way?\n>>\n>> I also wonder if the situation may be helped by re-indexing the \"index on\n>> both columns\" to remove any chance of issues on bloat in the index. Which\n>> order are the columns by the way? If Ver is first, is there also an index\n>> on only id column?. Since you aren't on v12, you don't get to re-index\n>> concurrently but I assume you know the work around of create concurrently\n>> (different name), drop concurrently (old one), and finally rename new index.\n>>\n>\n\n\"Finally I ran \"ANALYZE\" again and now the problem went away. Running the query with Ver=92 uses the proper plan. I'm not happy with this - I know I haven't solved the problem (I've ran ANALYZE multiple times before).\"Does 92 appear in MCVs list with that new sampling? I wonder if default_statistics_target should be increased a bit to help ensure a thorough sample of the data in this table. Note- don't go too high (maybe 250, not 1000) or planning time can increase significantly. Also, perhaps only increase on this Ver column.What is the real frequency of value 92? With default_statistics_target = 100, analyze takes 100*300 rows as sample, and if it is missed in that 30k rows set, or very very small when in fact it has equal weight with other values, then during planning time it is expected to be very very rare when in fact it is only slightly less common than the others in the list. If the others in the list are expected to be 100% of the table as you showed with the query to compute \"frac_MCV\" from pg_stats for that column, then perhaps the optimizer is wise to scan only the LucrareBugetVersiuneId column of the composite index and filter in memory.Curious, when you get bad plans (re-analyze the table repeatedly to get new samples until the wrong plan is chosen), what does PG estimate for total rows returned with ONLY LucrareBugetVersiuneId = 92 as the where condition?Note- Tom & Laurenz are real experts. I might have no idea what I am doing yet. It is too early to say.On Thu, Jan 16, 2020 at 11:15 AM Cosmin Prund <[email protected]> wrote:Hello Michael and hello again Tom, sorry for mailing you directly. I just hit Reply in gmail - I expected the emails to have a reply-to=Pgsql. Apparently they do not.Running the same query with a different \"Ver\" produces a proper plan. 
Here's a non-redacted example (Ver=91):EXPLAIN (ANALYZE, BUFFERS) select \"IdRand\", \"IdColoana\", \"Valoare\" from \"LucrareBugetDate\" where (\"LucrareBugetVersiuneId\" = 91) and (\"LucrareBugetDateId\" in (10,11)); QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using \"IX_LucrareBugetDate_LucrareBugetVersiuneId_LucrareBugetDateId\" on \"LucrareBugetDate\" (cost=0.56..4.95 rows=2 width=13) (actual time=3.617..3.631 rows=2 loops=1) Index Cond: ((\"LucrareBugetVersiuneId\" = 91) AND (\"LucrareBugetDateId\" = ANY ('{10,11}'::integer[]))) Buffers: shared hit=9 read=3 Planning time: 0.223 ms Execution time: 3.663 ms(5 rows)I have reindex everything, not just this INDEX.\"reltuples\" for this table is 41712436.> I'd be curious of the fraction in the MCVs frequency list in stats indicates that rows with Ver = 92 are rare and therefore the index on only Ver column is sufficient to find the rows quickly.There are 25 valid values for \"Ver\" in this database. I ran the query for all of them. The only one miss-behaving is \"92\". I ran the query with random values for Ver (invalid values), the query plan always attempts to use the index using both values.I looked into \"most_common_values\" in pg_stats, this value (92) is not in that list.Finally I ran \"ANALYZE\" again and now the problem went away. Running the query with Ver=92 uses the proper plan. I'm not happy with this - I know I haven't solved the problem (I've ran ANALYZE multiple times before).On Thu, 16 Jan 2020 at 19:00, Michael Lewis <[email protected]> wrote:Does the behavior change with different values of Ver column? I'd be curious of the fraction in the MCVs frequency list in stats indicates that rows with Ver = 92 are rare and therefore the index on only Ver column is sufficient to find the rows quickly. What is reltuples for this table by the way?I also wonder if the situation may be helped by re-indexing the \"index on both columns\" to remove any chance of issues on bloat in the index. Which order are the columns by the way? If Ver is first, is there also an index on only id column?. Since you aren't on v12, you don't get to re-index concurrently but I assume you know the work around of create concurrently (different name), drop concurrently (old one), and finally rename new index.",
"msg_date": "Thu, 16 Jan 2020 11:52:48 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
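
A per-column variant of Michael's suggestion, so the larger sample only affects this one column; 250 is his ballpark figure, not a tested value:

ALTER TABLE "LucrareBugetDate"
    ALTER COLUMN "LucrareBugetVersiuneId" SET STATISTICS 250;
ANALYZE "LucrareBugetDate";
-- Michael's follow-up question: the planner's estimate for the version predicate alone.
EXPLAIN SELECT * FROM "LucrareBugetDate" WHERE "LucrareBugetVersiuneId" = 92;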
{
"msg_contents": "Cosmin Prund <[email protected]> writes:\n> Running the same query with a different \"Ver\" produces a proper plan.\n\nOh, that *is* interesting.\n\nAfter studying the code a bit more I see why this is possible when I\noriginally thought not. The case that you are interested in is one that\nhas special handling -- it's a \"lower-order ScalarArrayOpExpr\" in the\nterms of the code. This means that get_index_paths will actually produce\ntwo index paths, one with the IN clause as an indexqual and one without,\nbecause it expects that they have different sort behaviors [1]. So then\nwe do have a chance for a cost-based choice, making it possible for the\nestimated selectivity of the higher-order clause to affect the outcome.\n\nI'm still a bit surprised that it wouldn't choose the alternative with\nthe IN ... but if the estimated number of rows matching just the first\ncolumn is small enough, it might see the paths as having indistinguishable\ncosts, and then it's down to luck which it chooses.\n\n> There are 25 valid values for \"Ver\" in this database. I ran the query for\n> all of them. The only one miss-behaving is \"92\". I ran the query with\n> random values for Ver (invalid values), the query plan always attempts to\n> use the index using both values.\n> I looked into \"most_common_values\" in pg_stats, this value (92) is not in\n> that list.\n\nAre the other 24 all in the list?\n\n> Finally I ran \"ANALYZE\" again and now the problem went away. Running the\n> query with Ver=92 uses the proper plan. I'm not happy with this - I know I\n> haven't solved the problem (I've ran ANALYZE multiple times before).\n\nMaybe increasing the stats target for the \"Ver\" column would help. It\nsounds like you want to get to a point where all the valid values are\ngiven in the MCV list, so that the estimates for them will be accurate.\n\n\t\t\tregards, tom lane\n\n[1] Right at the moment, it seems like that's wrong and we could just\ngenerate one path. Need to study this.\n\n\n",
"msg_date": "Thu, 16 Jan 2020 13:55:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
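
Whether the statistics have reached the state Tom describes, with every live version present in the MCV list at a sensible frequency, can be checked straight from pg_stats:

SELECT most_common_vals, most_common_freqs, n_distinct
FROM pg_stats
WHERE tablename = 'LucrareBugetDate'
  AND attname = 'LucrareBugetVersiuneId';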
{
"msg_contents": "On Thu, 16 Jan 2020 at 20:20, Laurenz Albe <[email protected]> wrote:\n\n> Well, what should the poor thing do?\n> There is no index on \"LucrareBugetDateId\".\n>\n\nI did add an index on \"LucrareBugetDateId\" (before accidentally \"fixing\"\nthe problem with ANALYZE) and it didn't help.\n\n\n> Rather, you have two indexes on (\"LucrareBugetVersiuneId\",\n> \"LucrareBugetDateId\"),\n> one of which should be dropped.\n>\n\nOne will be dropped. The second one was added out of desperation (because\nit wasn't using the first one).\n\n\n> Try with an index on \"LucrareBugetDateId\".\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nOn Thu, 16 Jan 2020 at 20:20, Laurenz Albe <[email protected]> wrote:Well, what should the poor thing do?\nThere is no index on \"LucrareBugetDateId\".I did add an index on \"LucrareBugetDateId\" (before accidentally \"fixing\" the problem with ANALYZE) and it didn't help. \nRather, you have two indexes on (\"LucrareBugetVersiuneId\", \"LucrareBugetDateId\"),\none of which should be dropped.One will be dropped. The second one was added out of desperation (because it wasn't using the first one). \nTry with an index on \"LucrareBugetDateId\".\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Thu, 16 Jan 2020 21:06:38 +0200",
"msg_from": "Cosmin Prund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
},
{
"msg_contents": "most_common_values before the ANALYZE had 22 values. After ANALYZE it has\n23 values.\nAfter ANALYZE I get an entry for \"92\" with 0.0441333 frequency (the\nfrequency is about right).\n\nThe stats target for the \"Ver\" column is already at 10000. I'm going to\nhave to bring the stats target back on everything, but I'm not sure about\nthis. The life-cycle of this table is a bit special. Once-in-a-while a new\n\"Version\" is created: 1 to 3 million records are inserted at once, all with\nthe same Version and with sequential Id-s (re-starting from 1 with each\nversion). The unfortunate side-effect is that I get huge clusters of\nrecords with the same \"Ver\". I created a script that calculates the correct\n\"n_distinct\" value for the column and repeatedly runs ANALYZE until the\nreported \"n_distinct\" value is grater then 75% of the correct number; on\neach loop of the script the stats target is increased by 5%. I thought this\nwould help me find a good value for the stats target but it invariably\nbrings the stats target all the way up to 10000.\n\nFinally I have one last option: take \"stats\" into my own hands. Since\ninserting anything into those tables is such a big (but rare and well\ndefined) event, I could simply set the stats target to ZERO and compute\ncorrect values on my own after pushing a new version. The issue here is\nthat I don't understand the system well-enough to make this work.\n\nHopefully I'll be able to reproduce this on a backup of the database so I\ncan safely experiment. Until I manage to reproduce this I don't think I can\nmake any more progress, so thank you everyone for the help.\n\nOn Thu, 16 Jan 2020 at 20:55, Tom Lane <[email protected]> wrote:\n\n> Cosmin Prund <[email protected]> writes:\n> > Running the same query with a different \"Ver\" produces a proper plan.\n>\n> Oh, that *is* interesting.\n>\n> After studying the code a bit more I see why this is possible when I\n> originally thought not. The case that you are interested in is one that\n> has special handling -- it's a \"lower-order ScalarArrayOpExpr\" in the\n> terms of the code. This means that get_index_paths will actually produce\n> two index paths, one with the IN clause as an indexqual and one without,\n> because it expects that they have different sort behaviors [1]. So then\n> we do have a chance for a cost-based choice, making it possible for the\n> estimated selectivity of the higher-order clause to affect the outcome.\n>\n> I'm still a bit surprised that it wouldn't choose the alternative with\n> the IN ... but if the estimated number of rows matching just the first\n> column is small enough, it might see the paths as having indistinguishable\n> costs, and then it's down to luck which it chooses.\n>\n> > There are 25 valid values for \"Ver\" in this database. I ran the query for\n> > all of them. The only one miss-behaving is \"92\". I ran the query with\n> > random values for Ver (invalid values), the query plan always attempts to\n> > use the index using both values.\n> > I looked into \"most_common_values\" in pg_stats, this value (92) is not in\n> > that list.\n>\n> Are the other 24 all in the list?\n>\n> > Finally I ran \"ANALYZE\" again and now the problem went away. Running the\n> > query with Ver=92 uses the proper plan. I'm not happy with this - I know\n> I\n> > haven't solved the problem (I've ran ANALYZE multiple times before).\n>\n> Maybe increasing the stats target for the \"Ver\" column would help. 
It\n> sounds like you want to get to a point where all the valid values are\n> given in the MCV list, so that the estimates for them will be accurate.\n>\n> regards, tom lane\n>\n> [1] Right at the moment, it seems like that's wrong and we could just\n> generate one path. Need to study this.\n>\n\nmost_common_values before the ANALYZE had 22 values. After ANALYZE it has 23 values.After ANALYZE I get an entry for \"92\" with 0.0441333 frequency (the frequency is about right).The stats target for the \"Ver\" column is already at 10000. I'm going to have to bring the stats target back on everything, but I'm not sure about this. The life-cycle of this table is a bit special. Once-in-a-while a new \"Version\" is created: 1 to 3 million records are inserted at once, all with the same Version and with sequential Id-s (re-starting from 1 with each version). The unfortunate side-effect is that I get huge clusters of records with the same \"Ver\". I created a script that calculates the correct \"n_distinct\" value for the column and repeatedly runs ANALYZE until the reported \"n_distinct\" value is grater then 75% of the correct number; on each loop of the script the stats target is increased by 5%. I thought this would help me find a good value for the stats target but it invariably brings the stats target all the way up to 10000.Finally I have one last option: take \"stats\" into my own hands. Since inserting anything into those tables is such a big (but rare and well defined) event, I could simply set the stats target to ZERO and compute correct values on my own after pushing a new version. The issue here is that I don't understand the system well-enough to make this work.Hopefully I'll be able to reproduce this on a backup of the database so I can safely experiment. Until I manage to reproduce this I don't think I can make any more progress, so thank you everyone for the help.On Thu, 16 Jan 2020 at 20:55, Tom Lane <[email protected]> wrote:Cosmin Prund <[email protected]> writes:\n> Running the same query with a different \"Ver\" produces a proper plan.\n\nOh, that *is* interesting.\n\nAfter studying the code a bit more I see why this is possible when I\noriginally thought not. The case that you are interested in is one that\nhas special handling -- it's a \"lower-order ScalarArrayOpExpr\" in the\nterms of the code. This means that get_index_paths will actually produce\ntwo index paths, one with the IN clause as an indexqual and one without,\nbecause it expects that they have different sort behaviors [1]. So then\nwe do have a chance for a cost-based choice, making it possible for the\nestimated selectivity of the higher-order clause to affect the outcome.\n\nI'm still a bit surprised that it wouldn't choose the alternative with\nthe IN ... but if the estimated number of rows matching just the first\ncolumn is small enough, it might see the paths as having indistinguishable\ncosts, and then it's down to luck which it chooses.\n\n> There are 25 valid values for \"Ver\" in this database. I ran the query for\n> all of them. The only one miss-behaving is \"92\". I ran the query with\n> random values for Ver (invalid values), the query plan always attempts to\n> use the index using both values.\n> I looked into \"most_common_values\" in pg_stats, this value (92) is not in\n> that list.\n\nAre the other 24 all in the list?\n\n> Finally I ran \"ANALYZE\" again and now the problem went away. Running the\n> query with Ver=92 uses the proper plan. 
I'm not happy with this - I know I\n> haven't solved the problem (I've ran ANALYZE multiple times before).\n\nMaybe increasing the stats target for the \"Ver\" column would help. It\nsounds like you want to get to a point where all the valid values are\ngiven in the MCV list, so that the estimates for them will be accurate.\n\n regards, tom lane\n\n[1] Right at the moment, it seems like that's wrong and we could just\ngenerate one path. Need to study this.",
"msg_date": "Thu, 16 Jan 2020 21:52:15 +0200",
"msg_from": "Cosmin Prund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan decision when using multiple column index -\n postgresql uses only first column then filters"
}
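A minimal sketch of the "manual statistics" idea discussed in the last message above. The table name versioned_data is hypothetical (the thread never names the table); only the "Ver" column comes from the discussion. PostgreSQL does not support writing pg_statistic directly, but two per-column knobs cover most of what is wanted here: pinning n_distinct so ANALYZE stops estimating it, and raising the per-column statistics target so every valid "Ver" value lands in the most-common-values list.

-- Pin the distinct count; ANALYZE will use this value instead of estimating it.
-- (A positive value is an absolute count, a negative value is a fraction of rows.)
ALTER TABLE versioned_data ALTER COLUMN "Ver" SET (n_distinct = 25);

-- Make room for all valid versions in the MCV list.
ALTER TABLE versioned_data ALTER COLUMN "Ver" SET STATISTICS 1000;

-- Re-collect statistics right after each bulk load of a new version.
ANALYZE versioned_data;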
] |
[
{
"msg_contents": "Hello mail group members,\n\nI started a new job as PostgreSQL DBA. This is my first mail, I hope the mail I sent meets the rules.\n\nThere is a query that runs slowly when I look at the logs of the database. When I check the resources of the system, there is no problem in the resources, but this query running slowly. There is no \"Seq Scan\" in the queries, so the tables are already indexed. But I did not fully understand if the indexes were made correctly. When I analyze the query result on explain.depesz, it seems that the query is taking too long.\n\nHow should I fix the query below? How should I read the output of explain.depesz?\n\nThank you in advance for your help.\n\n\nselect pro.id as pro_id\n , pro.code\n , coalesce(s.is_pick, false)\n , coalesce(sum(sb.quantity), 0) as pick_quantity\nfrom mainproduct_productmetaproduction pro, order_basketitemdetail bid\nleft join shelf_shelvedproductbatch sb on sb.basketitem_detail_id = bid.id\nleft join shelf_shelvedproducts sp on sp.id = sb.shelved_product_id\nleft join shelf_shelf s on s.id = sp.shelf_id\nwhere pro.id = bid.production_id\nand (\n select coalesce(sum(bid.quantity), 0)\n from order_basketitem bi\n , order_basketitemdetail bid\n , order_order o\n where o.type in (2,7,9) and o.id = bi.order_id\n and o.is_cancelled = false\n and bi.is_cancelled = false\n and o.is_closed = false\n and o.is_picked = false\n and o.is_invoiced = false\n and o.is_sent = false\n and bi.id = bid.basketitem_id\n and bid.quantity > (\n select coalesce(sum(picked_quantity),0)\n from order_basketitembatch bib\n where bib.detail_id=bid.id\n )\n and bid.code = pro.code\n ) > 0\ngroup by 1,2,3 --,bid.pallet_item_quantity\nhaving coalesce(s.is_pick, false)\nand round((coalesce(sum(sb.quantity), 0) / GREATEST(MAX(bid.pallet_item_quantity), 1)::float)::numeric, 2) <= 0.15\n\nhttps://explain.depesz.com/s/G4vq\n\nYours truly,\nKemal Ortanca\n\n\n\n\n\n\n\n\nHello mail group members, \n\n\n\nI started a new job as PostgreSQL DBA. This is my first mail, I hope the mail I sent meets the rules. \n\nThere is a query that runs slowly when I look at the logs of the database. When I check the resources of the system, there is no problem in the resources, but this query running slowly. There is no \"Seq Scan\" in the queries, so the tables are already indexed. But\n I did not fully understand if the indexes were made correctly. When I analyze the query result on explain.depesz, it seems that the query is taking too long. \n\nHow should I fix the query below? How should I read the output of explain.depesz? 
\n\nThank you in advance for your help.\n\nselect pro.id as pro_id , pro.code , coalesce(s.is_pick, false) , coalesce(sum(sb.quantity), 0) as pick_quantityfrom mainproduct_productmetaproduction pro, order_basketitemdetail bidleft join shelf_shelvedproductbatch sb on sb.basketitem_detail_id = bid.idleft join shelf_shelvedproducts sp on sp.id = sb.shelved_product_idleft join shelf_shelf s on s.id = sp.shelf_idwhere pro.id = bid.production_idand (\t\t select coalesce(sum(bid.quantity), 0)\t\t from order_basketitem bi\t\t\t , order_basketitemdetail bid\t\t\t , order_order o\t\t where o.type in (2,7,9) and o.id = bi.order_id\t\t and o.is_cancelled = false\t\t and bi.is_cancelled = false\t\t and o.is_closed = false and o.is_picked = false and o.is_invoiced = false and o.is_sent = false\t\t and bi.id = bid.basketitem_id\t\t and bid.quantity > (\t\t\t\t\t\t\t\tselect coalesce(sum(picked_quantity),0)\t\t\t\t\t\t\t\tfrom order_basketitembatch bib\t\t\t\t\t\t\t\twhere bib.detail_id=bid.id\t\t\t\t\t\t\t )\t\t and bid.code = pro.code\t ) > 0group by 1,2,3 --,bid.pallet_item_quantityhaving coalesce(s.is_pick, false)and round((coalesce(sum(sb.quantity), 0) / GREATEST(MAX(bid.pallet_item_quantity), 1)::float)::numeric, 2) <= 0.15\n\nhttps://explain.depesz.com/s/G4vq\n\nYours truly,\nKemal Ortanca",
"msg_date": "Mon, 27 Jan 2020 13:15:59 +0000",
"msg_from": "Kemal Ortanca <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query optimization advice for beginners"
},
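For context on reading explain.depesz.com output: the site only reformats the plan that is pasted in, adding per-node exclusive/inclusive timings and row-estimate factors, so the slowest nodes and the worst misestimates stand out. The plan itself is usually produced with something like the following; BUFFERS is optional but shows how much I/O each node did. The literal 'ABC-123' is a placeholder, not a value from the thread.

EXPLAIN (ANALYZE, BUFFERS)
SELECT pro.id, pro.code
FROM mainproduct_productmetaproduction pro
WHERE pro.code = 'ABC-123';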
{
"msg_contents": "\n\nAm 27.01.20 um 14:15 schrieb Kemal Ortanca:\n>\n> https://explain.depesz.com/s/G4vq\n>\n>\n\nthe estimates and the real values are very different, seems like \nproblems with autoanalyze.\n\nwhich version?\n\n\n\nAndreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n",
"msg_date": "Mon, 27 Jan 2020 14:57:11 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization advice for beginners"
},
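One quick way to check the autoanalyze suspicion raised here; this is a generic catalog query, not specific to the schema in the thread. Tables with a large n_mod_since_analyze relative to n_live_tup are candidates for a manual ANALYZE or for a lower autovacuum_analyze_scale_factor.

SELECT relname,
       last_autoanalyze,
       last_analyze,
       n_mod_since_analyze,
       n_live_tup
FROM   pg_stat_user_tables
ORDER  BY n_mod_since_analyze DESC
LIMIT  20;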
{
"msg_contents": "Firstly, thank you for coming back.\n\nPostgreSQL version = 11.5\n\nIs there a resource or postgresql configuration you want me to check in addition?\n\n\n________________________________\nFrom: Andreas Kretschmer <[email protected]>\nSent: Monday, January 27, 2020 3:57 PM\nTo: [email protected] <[email protected]>\nSubject: Re: Query optimization advice for beginners\n\n\n\nAm 27.01.20 um 14:15 schrieb Kemal Ortanca:\n>\n> https://explain.depesz.com/s/G4vq\n>\n>\n\nthe estimates and the real values are very different, seems like\nproblems with autoanalyze.\n\nwhich version?\n\n\n\nAndreas\n\n--\n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com<http://www.2ndQuadrant.com>\n\n\n\n\n\n\n\n\n\n\n\nFirstly, thank you for coming back.\n\nPostgreSQL version = 11.5\n\nIs there a resource or postgresql configuration you want me to check in addition?\n\n\n\n\n\n\n\nFrom: Andreas Kretschmer <[email protected]>\nSent: Monday, January 27, 2020 3:57 PM\nTo: [email protected] <[email protected]>\nSubject: Re: Query optimization advice for beginners\n \n\n\n\n\nAm 27.01.20 um 14:15 schrieb Kemal Ortanca:\n>\n> https://explain.depesz.com/s/G4vq\n>\n>\n\nthe estimates and the real values are very different, seems like \nproblems with autoanalyze.\n\nwhich version?\n\n\n\nAndreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com",
"msg_date": "Mon, 27 Jan 2020 15:33:29 +0000",
"msg_from": "Kemal Ortanca <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query optimization advice for beginners"
},
{
"msg_contents": "You've got two references to order_basketitemdetail both aliased to bid and\nALSO a table called order_basketitembatch aliased to bib. I assume that\nconfuses the planner, but even if it doesn't it certainly confuses any new\ndevelopers trying to understand the query's intention.\n\nThe biggest thing that leaps out at me on the explain plan is the 822\nthousand loops on index order_basketitembatch_detail_id_9268ccff. That\nseems to be the subquery in the where clause of the subquery in the main\nwhere clause. I never get great results when I nest sub-queries multiple\nlevels. Without knowing your data, we can only make guesses about\nrestructuring the query so it performs better.\n\nselect bi.id AS basketitem_id --coalesce(sum(bid.quantity), 0)\n\t\t from order_basketitem bi\n\t\t\t --, order_basketitemdetail bid\n\t\t\t , order_order o\n\t\t where o.type in (2,7,9) and o.id = bi.order_id\n\t\t and o.is_cancelled = false\n\t\t and bi.is_cancelled = false\n\t\t and o.is_closed = false\n and o.is_picked = false\n and o.is_invoiced = false\n and o.is_sent = false\n\t\t --and bi.id = bid.basketitem_id\n\nFor a query like the above, how restrictive is it? That is, of ALL the\nrecords in order_basketitem table, how many are returned by the above\ncondition? I would think that the number of orders that have been picked or\ninvoiced or sent or closed or cancelled would be LARGE and so this query\nmay eliminate most of the orders from being considered. Not to mention the\norder type id restriction.\n\nIf I found that the above query resulted in 1% of the table being returned\nperhaps, there are a number of ways to influence the planner to do this\nwork first such as-\n\n1) put this in a sub-query as the FROM and include OFFSET 0 hack to prevent\nin-lining\n2) put in a CTE using the WITH keyword (note- need to use MATERIALIZED\noption once on PG12 since default behavior changes)\n3) if the number of records returned is large (10 thousand maybe?) and the\noptimizer is making bad choices on the rest of the query that uses this\nresult set, put this query into a temp table, analyze it, and then use it.\n\n>\n\nYou've got two references to order_basketitemdetail both aliased to bid and ALSO a table called order_basketitembatch aliased to bib. I assume that confuses the planner, but even if it doesn't it certainly confuses any new developers trying to understand the query's intention.The biggest thing that leaps out at me on the explain plan is the 822 thousand loops on index order_basketitembatch_detail_id_9268ccff. That seems to be the subquery in the where clause of the subquery in the main where clause. I never get great results when I nest sub-queries multiple levels. Without knowing your data, we can only make guesses about restructuring the query so it performs better.select bi.id AS basketitem_id --coalesce(sum(bid.quantity), 0)\t\t from order_basketitem bi\t\t\t --, order_basketitemdetail bid\t\t\t , order_order o\t\t where o.type in (2,7,9) and o.id = bi.order_id\t\t and o.is_cancelled = false\t\t and bi.is_cancelled = false\t\t and o.is_closed = false and o.is_picked = false and o.is_invoiced = false and o.is_sent = false\t\t --and bi.id = bid.basketitem_idFor a query like the above, how restrictive is it? That is, of ALL the records in order_basketitem table, how many are returned by the above condition? 
I would think that the number of orders that have been picked or invoiced or sent or closed or cancelled would be LARGE and so this query may eliminate most of the orders from being considered. Not to mention the order type id restriction.If I found that the above query resulted in 1% of the table being returned perhaps, there are a number of ways to influence the planner to do this work first such as-1) put this in a sub-query as the FROM and include OFFSET 0 hack to prevent in-lining2) put in a CTE using the WITH keyword (note- need to use MATERIALIZED option once on PG12 since default behavior changes)3) if the number of records returned is large (10 thousand maybe?) and the optimizer is making bad choices on the rest of the query that uses this result set, put this query into a temp table, analyze it, and then use it.",
"msg_date": "Mon, 27 Jan 2020 09:46:18 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization advice for beginners"
},
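A sketch of suggestions 1 and 2 above, using table and column names from the original query. MATERIALIZED is only needed (and only accepted) from PostgreSQL 12 on; on 11, a plain WITH already acts as an optimization fence, and the OFFSET 0 trick noted in the comment achieves the same fencing inside a subquery.

WITH open_basketitems AS (          -- add MATERIALIZED here on PostgreSQL 12+
    SELECT bi.id AS basketitem_id
    FROM   order_basketitem bi
    JOIN   order_order o ON o.id = bi.order_id
    WHERE  o.type IN (2, 7, 9)
      AND  o.is_cancelled = false
      AND  bi.is_cancelled = false
      AND  o.is_closed   = false
      AND  o.is_picked   = false
      AND  o.is_invoiced = false
      AND  o.is_sent     = false
)
SELECT bid.*
FROM   order_basketitemdetail bid
JOIN   open_basketitems ob ON ob.basketitem_id = bid.basketitem_id;
-- Alternative fence without a CTE: wrap the same SELECT in a subquery ending in OFFSET 0.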
{
"msg_contents": "On Mon, 2020-01-27 at 13:15 +0000, Kemal Ortanca wrote:\n> There is a query that runs slowly when I look at the logs of the database. When I check the\n> resources of the system, there is no problem in the resources, but this query running slowly.\n> There is no \"Seq Scan\" in the queries, so the tables are already indexed. But I did not\n> fully understand if the indexes were made correctly. When I analyze the query result on\n> explain.depesz, it seems that the query is taking too long. \n> \n> How should I fix the query below? How should I read the output of explain.depesz? \n> \n> https://explain.depesz.com/s/G4vq\n\nNormally you focus on where the time is spent and the mis-estimates.\n\nThe mis-estimates are notable, but this time not the reason for a\nwrong choice of join strategy: evern though there are overestimates,\na nested loop join is chosen.\n\nThe time is spent in the 16979 executions of the outer subquery,\nparticularly in the inner subquery.\n\nBecause the query uses correlated subqueries, PostgreSQL has to execute\nthese conditions in the fashion of a nested loop, that is, the subquery\nis executed for every row found.\n\nIf you manage to rewrite the query so that it uses (outer) joins instead\nof correlated subqueries, the optimizer can use different strategies\nthat may be more efficient.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 09:37:42 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query optimization advice for beginners"
}
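One concrete place to start on that rewrite is the innermost correlated subquery, which the plan executes several hundred thousand times: aggregate order_basketitembatch once and join the result in, instead of re-running the SUM for every detail row. This is only a sketch of the technique; the remaining filters from the full query still have to be folded around it.

SELECT bid.id,
       bid.quantity,
       COALESCE(b.picked_quantity, 0) AS picked_quantity
FROM   order_basketitemdetail bid
LEFT JOIN (
    SELECT detail_id, SUM(picked_quantity) AS picked_quantity
    FROM   order_basketitembatch
    GROUP  BY detail_id
) b ON b.detail_id = bid.id
WHERE  bid.quantity > COALESCE(b.picked_quantity, 0);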
] |
[
{
"msg_contents": "Hello!\n\nLet's say that you have a simple query like the following on a large table\n(for a multi-tenant application):\nSELECT \"subscribers\".* FROM \"subscribers\" WHERE \"subscribers\".\"project_id\"\n= 123 AND (tags @> ARRAY['de']::varchar[]);\n\nIf you run EXPLAIN ANALYZE you can see that stats are completely wrong.\nFor example I get an expected count of 3,500 rows whereas the actual\nresult is 20 rows. This also results in bad query plans...\n\nIn a previous discussion someone said that this wrong estimate is because\n@> uses a fixed selectivity of 0.001, **regardless of actual data**!!\nIs that true? Is there any solution or any plan to improve this in future\nversions of PostgreSQL?\n\nFinally it would be useful to have the ability to CREATE STATISTICS, to\nshow PostgreSQL that there's a correlation between project_id and tag\nvalues... but this is a further step. Currently I can create statistics,\nhowever it seems to have no positive effect on the estimates for the\nabove case\n\n\nMarco Colli\n\nHello!Let's say that you have a simple query like the following on a large table (for a multi-tenant application):SELECT \"subscribers\".* FROM \"subscribers\" WHERE \"subscribers\".\"project_id\" = 123 AND (tags @> ARRAY['de']::varchar[]);If you run EXPLAIN ANALYZE you can see that stats are completely wrong. For example I get an expected count of 3,500 rows whereas the actual result is 20 rows. This also results in bad query plans...In a previous discussion someone said that this wrong estimate is because @> uses a fixed selectivity of 0.001, **regardless of actual data**!!Is that true? Is there any solution or any plan to improve this in future versions of PostgreSQL?Finally it would be useful to have the ability to CREATE STATISTICS, to show PostgreSQL that there's a correlation between project_id and tag values... but this is a further step. Currently I can create statistics, however it seems to have no positive effect on the estimates for the above caseMarco Colli",
"msg_date": "Sun, 2 Feb 2020 15:18:19 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Statistics on array values"
},
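For reference, the extended-statistics object mentioned in the last paragraph would look roughly like this on PostgreSQL 10/11. As the poster already observed, it does not help this particular query: functional-dependency statistics are only applied to plain equality clauses, not to array containment with @>.

CREATE STATISTICS subscribers_project_tags_stats (dependencies, ndistinct)
    ON project_id, tags
    FROM subscribers;

ANALYZE subscribers;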
{
"msg_contents": "On Sun, Feb 02, 2020 at 03:18:19PM +0100, Marco Colli wrote:\n> Hello!\n> \n> Let's say that you have a simple query like the following on a large table\n> (for a multi-tenant application):\n> SELECT \"subscribers\".* FROM \"subscribers\" WHERE \"subscribers\".\"project_id\"\n> = 123 AND (tags @> ARRAY['de']::varchar[]);\n> \n> If you run EXPLAIN ANALYZE you can see that stats are completely wrong.\n> For example I get an expected count of 3,500 rows whereas the actual\n> result is 20 rows. This also results in bad query plans...\n\nhttps://www.postgresql.org/message-id/CAMkU%3D1z%2BQijUWAYgeqeyw%2BAvD7adPgOmEnY%2BOcTw6qDVFtD7cQ%40mail.gmail.com\nOn Fri, Jan 10, 2020 at 12:12:52PM -0500, Jeff Janes wrote:\n> Why is the estimate off by so much? If you run a simple select, what the\n> actual and expected number of rows WHERE project_id = 12345? WHERE tags @>\n> '{crt:2018_11}'? Is one of those estimates way off reality, or is it only\n> the conjunction which is deranged?\n\nCould you respond to Jeff's inquiry ?\n\nJustin\n\n\n",
"msg_date": "Sun, 2 Feb 2020 08:23:21 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics on array values"
},
{
"msg_contents": "> Is one of those estimates way off reality, or is it only the conjunction\nwhich is deranged?\n\nThe estimate is wrong *even with a single tag*, without the conjunction\n(e.g. expected 3500, actual 20). Then the conjunction can make the bias\neven worse...\n\nOn Sun, Feb 2, 2020 at 3:23 PM Justin Pryzby <[email protected]> wrote:\n\n> On Sun, Feb 02, 2020 at 03:18:19PM +0100, Marco Colli wrote:\n> > Hello!\n> >\n> > Let's say that you have a simple query like the following on a large\n> table\n> > (for a multi-tenant application):\n> > SELECT \"subscribers\".* FROM \"subscribers\" WHERE\n> \"subscribers\".\"project_id\"\n> > = 123 AND (tags @> ARRAY['de']::varchar[]);\n> >\n> > If you run EXPLAIN ANALYZE you can see that stats are completely wrong.\n> > For example I get an expected count of 3,500 rows whereas the actual\n> > result is 20 rows. This also results in bad query plans...\n>\n>\n> https://www.postgresql.org/message-id/CAMkU%3D1z%2BQijUWAYgeqeyw%2BAvD7adPgOmEnY%2BOcTw6qDVFtD7cQ%40mail.gmail.com\n> On Fri, Jan 10, 2020 at 12:12:52PM -0500, Jeff Janes wrote:\n> > Why is the estimate off by so much? If you run a simple select, what the\n> > actual and expected number of rows WHERE project_id = 12345? WHERE tags\n> @>\n> > '{crt:2018_11}'? Is one of those estimates way off reality, or is it\n> only\n> > the conjunction which is deranged?\n>\n> Could you respond to Jeff's inquiry ?\n>\n> Justin\n>\n\n> Is one of those estimates way off reality, or is it only the conjunction which is deranged?The estimate is wrong *even with a single tag*, without the conjunction (e.g. expected 3500, actual 20). Then the conjunction can make the bias even worse...On Sun, Feb 2, 2020 at 3:23 PM Justin Pryzby <[email protected]> wrote:On Sun, Feb 02, 2020 at 03:18:19PM +0100, Marco Colli wrote:\n> Hello!\n> \n> Let's say that you have a simple query like the following on a large table\n> (for a multi-tenant application):\n> SELECT \"subscribers\".* FROM \"subscribers\" WHERE \"subscribers\".\"project_id\"\n> = 123 AND (tags @> ARRAY['de']::varchar[]);\n> \n> If you run EXPLAIN ANALYZE you can see that stats are completely wrong.\n> For example I get an expected count of 3,500 rows whereas the actual\n> result is 20 rows. This also results in bad query plans...\n\nhttps://www.postgresql.org/message-id/CAMkU%3D1z%2BQijUWAYgeqeyw%2BAvD7adPgOmEnY%2BOcTw6qDVFtD7cQ%40mail.gmail.com\nOn Fri, Jan 10, 2020 at 12:12:52PM -0500, Jeff Janes wrote:\n> Why is the estimate off by so much? If you run a simple select, what the\n> actual and expected number of rows WHERE project_id = 12345? WHERE tags @>\n> '{crt:2018_11}'? Is one of those estimates way off reality, or is it only\n> the conjunction which is deranged?\n\nCould you respond to Jeff's inquiry ?\n\nJustin",
"msg_date": "Sun, 2 Feb 2020 15:38:57 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics on array values"
},
{
"msg_contents": "Marco Colli <[email protected]> writes:\n> Let's say that you have a simple query like the following on a large table\n> (for a multi-tenant application):\n> SELECT \"subscribers\".* FROM \"subscribers\" WHERE \"subscribers\".\"project_id\"\n> = 123 AND (tags @> ARRAY['de']::varchar[]);\n\n> If you run EXPLAIN ANALYZE you can see that stats are completely wrong.\n> For example I get an expected count of 3,500 rows whereas the actual\n> result is 20 rows. This also results in bad query plans...\n\n> In a previous discussion someone said that this wrong estimate is because\n> @> uses a fixed selectivity of 0.001, **regardless of actual data**!!\n> Is that true?\n\nHasn't been true since 9.2.\n\nYou might get some insight from looking into the most_common_elems,\nmost_common_elem_freqs, and elem_count_histogram fields of the pg_stats\nview.\n\nIt seems likely to me that increasing the statistics target for this array\ncolumn would help. IIRC, estimates for values that don't show up in\nmost_common_elems are going to depend on the lowest frequency that *does*\nshow up there ... so if you want better resolution for non-common values,\nyou need more entries.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Feb 2020 12:11:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics on array values"
},
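The pg_stats fields Tom mentions can be inspected directly. A sketch, using the table name from the first message (later messages in the thread call the table subscriptions, so adjust accordingly):

SELECT most_common_elems,
       most_common_elem_freqs,
       elem_count_histogram
FROM   pg_stats
WHERE  tablename = 'subscribers'
  AND  attname   = 'tags';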
{
"msg_contents": "Thanks Tom for the clear explanation.\nUnfortunately I don't get actual improvements. I use PG 11 and I run the\nfollowing commands:\n\nALTER TABLE subscriptions ALTER tags SET STATISTICS 1000;\nANALYZE subscriptions;\n\nHowever the bias remains pretty much the same (slightly worse after). Any\nidea?\n\nOn Sun, Feb 2, 2020 at 6:11 PM Tom Lane <[email protected]> wrote:\n\n> Marco Colli <[email protected]> writes:\n> > Let's say that you have a simple query like the following on a large\n> table\n> > (for a multi-tenant application):\n> > SELECT \"subscribers\".* FROM \"subscribers\" WHERE\n> \"subscribers\".\"project_id\"\n> > = 123 AND (tags @> ARRAY['de']::varchar[]);\n>\n> > If you run EXPLAIN ANALYZE you can see that stats are completely wrong.\n> > For example I get an expected count of 3,500 rows whereas the actual\n> > result is 20 rows. This also results in bad query plans...\n>\n> > In a previous discussion someone said that this wrong estimate is because\n> > @> uses a fixed selectivity of 0.001, **regardless of actual data**!!\n> > Is that true?\n>\n> Hasn't been true since 9.2.\n>\n> You might get some insight from looking into the most_common_elems,\n> most_common_elem_freqs, and elem_count_histogram fields of the pg_stats\n> view.\n>\n> It seems likely to me that increasing the statistics target for this array\n> column would help. IIRC, estimates for values that don't show up in\n> most_common_elems are going to depend on the lowest frequency that *does*\n> show up there ... so if you want better resolution for non-common values,\n> you need more entries.\n>\n> regards, tom lane\n>\n\nThanks Tom for the clear explanation. Unfortunately I don't get actual improvements. I use PG 11 and I run the following commands:ALTER TABLE subscriptions ALTER tags SET STATISTICS 1000; ANALYZE subscriptions;However the bias remains pretty much the same (slightly worse after). Any idea?On Sun, Feb 2, 2020 at 6:11 PM Tom Lane <[email protected]> wrote:Marco Colli <[email protected]> writes:\n> Let's say that you have a simple query like the following on a large table\n> (for a multi-tenant application):\n> SELECT \"subscribers\".* FROM \"subscribers\" WHERE \"subscribers\".\"project_id\"\n> = 123 AND (tags @> ARRAY['de']::varchar[]);\n\n> If you run EXPLAIN ANALYZE you can see that stats are completely wrong.\n> For example I get an expected count of 3,500 rows whereas the actual\n> result is 20 rows. This also results in bad query plans...\n\n> In a previous discussion someone said that this wrong estimate is because\n> @> uses a fixed selectivity of 0.001, **regardless of actual data**!!\n> Is that true?\n\nHasn't been true since 9.2.\n\nYou might get some insight from looking into the most_common_elems,\nmost_common_elem_freqs, and elem_count_histogram fields of the pg_stats\nview.\n\nIt seems likely to me that increasing the statistics target for this array\ncolumn would help. IIRC, estimates for values that don't show up in\nmost_common_elems are going to depend on the lowest frequency that *does*\nshow up there ... so if you want better resolution for non-common values,\nyou need more entries.\n\n regards, tom lane",
"msg_date": "Sun, 2 Feb 2020 19:26:19 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics on array values"
},
{
"msg_contents": "Marco Colli <[email protected]> writes:\n> Unfortunately I don't get actual improvements. I use PG 11 and I run the\n> following commands:\n> ALTER TABLE subscriptions ALTER tags SET STATISTICS 1000;\n> ANALYZE subscriptions;\n> However the bias remains pretty much the same (slightly worse after). Any\n> idea?\n\nSo what have you got in the pg_stats fields I asked about?\nHow big is this table anyway (how many rows, how many different tag\nvalues)?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Feb 2020 13:32:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics on array values"
},
{
"msg_contents": "Sorry, I don't understand your exact question about pg_stats. In any case I\ncannot make strict assumptions about data, because that greatly varies from\nproject to project (it's a SaaS) and over time. To give an idea the table\nhas some tens of millions of rows, each project has from a few thousands to\na few millions of rows and each project has its own tags that the customer\ncan define (unlimited tags for each row, but usually only 1 - 10 actually\nused)\n\nIl Dom 2 Feb 2020, 19:32 Tom Lane <[email protected]> ha scritto:\n\n> Marco Colli <[email protected]> writes:\n> > Unfortunately I don't get actual improvements. I use PG 11 and I run the\n> > following commands:\n> > ALTER TABLE subscriptions ALTER tags SET STATISTICS 1000;\n> > ANALYZE subscriptions;\n> > However the bias remains pretty much the same (slightly worse after). Any\n> > idea?\n>\n> So what have you got in the pg_stats fields I asked about?\n> How big is this table anyway (how many rows, how many different tag\n> values)?\n>\n> regards, tom lane\n>\n\nSorry, I don't understand your exact question about pg_stats. In any case I cannot make strict assumptions about data, because that greatly varies from project to project (it's a SaaS) and over time. To give an idea the table has some tens of millions of rows, each project has from a few thousands to a few millions of rows and each project has its own tags that the customer can define (unlimited tags for each row, but usually only 1 - 10 actually used)Il Dom 2 Feb 2020, 19:32 Tom Lane <[email protected]> ha scritto:Marco Colli <[email protected]> writes:\n> Unfortunately I don't get actual improvements. I use PG 11 and I run the\n> following commands:\n> ALTER TABLE subscriptions ALTER tags SET STATISTICS 1000;\n> ANALYZE subscriptions;\n> However the bias remains pretty much the same (slightly worse after). Any\n> idea?\n\nSo what have you got in the pg_stats fields I asked about?\nHow big is this table anyway (how many rows, how many different tag\nvalues)?\n\n regards, tom lane",
"msg_date": "Sun, 2 Feb 2020 19:55:59 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics on array values"
}
] |
[
{
"msg_contents": "Hello,\n\nI am experiencing slow performance when joining a table against itself on its\nprimary key column.\n\nI expect the query plan to be identical for both of the below queries (and I\nexpect the performance to also be identical). But the second one is much slower:\n\nThe FAST QUERY has a planning time of 0.110 ms and execution time of 3.836 ms\nThe SLOW QUERY has a planning time of 0.296 ms and execution time of 22.969 ms\n\nThe reason I believe that they should be the same is because the postgres query\nplanner should notice that I am joining a table against itself on its primary\nkey column (which is not null + unique) and therefore it should realize that it\ndoesn't actually have to do any additional work and can simply directly access\nthe existing columns.\n\nI've tested this on PostgreSQL 10, 11, 12, 12.1 and 13devel (source snapshot\nfrom 2020-02-03, git commit f1f10a1ba9e17e606a7b217ccccdd3cc4d8cb771)\n\nHere is a full example session:\n\n\n---------\n-- SETUP\n---------\n\nCREATE TABLE test_data (\n id SERIAL4 PRIMARY KEY,\n value TEXT\n);\n\nINSERT INTO test_data (value)\nSELECT value FROM (\n SELECT\n generate_series(1, 100000) AS id,\n md5(random()::TEXT) AS value\n) q;\n\n--------------\n-- FAST QUERY\n--------------\n\nEXPLAIN ANALYZE SELECT\n test_data.id,\n md5(test_data.value) AS x,\n md5(md5(md5(md5(md5(test_data.value))))) AS y\nFROM\n test_data\nWHERE TRUE\n AND test_data.id BETWEEN 3000 AND 4000;\n\n--------------\n-- SLOW QUERY\n--------------\n\nEXPLAIN ANALYZE SELECT\n test_data.id,\n md5(test_data.value) AS x,\n md5(md5(md5(md5(md5(t2.value))))) AS y\nFROM\n test_data,\n test_data AS t2\nWHERE TRUE\n AND t2.id = test_data.id\n AND test_data.id BETWEEN 3000 AND 4000;\n\n--- END ---\n\n\nHere is the query plan of the FAST QUERY:\n\n Index Scan using test_data_pkey on test_data (cost=0.29..60.17\nrows=1025 width=68) (actual time=0.047..3.747 rows=1001 loops=1)\n Index Cond: ((id >= 3000) AND (id <= 4000))\n Planning Time: 0.110 ms\n Execution Time: 3.836 ms\n(4 rows)\n\n\nHere is the query plan of the SLOW QUERY:\n\n Hash Join (cost=57.60..2169.49 rows=1025 width=68) (actual\ntime=1.372..22.876 rows=1001 loops=1)\n Hash Cond: (t2.id = test_data.id)\n -> Seq Scan on test_data t2 (cost=0.00..1834.00 rows=100000\nwidth=37) (actual time=0.010..8.800 rows=100000 loops=1)\n -> Hash (cost=44.79..44.79 rows=1025 width=37) (actual\ntime=0.499..0.499 rows=1001 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage: 84kB\n -> Index Scan using test_data_pkey on test_data\n(cost=0.29..44.79 rows=1025 width=37) (actual time=0.023..0.287\nrows=1001 loops=1)\n Index Cond: ((id >= 3000) AND (id <= 4000))\n Planning Time: 0.296 ms\n Execution Time: 22.969 ms\n(9 rows)\n\n\n",
"msg_date": "Mon, 3 Feb 2020 15:38:15 -0500",
"msg_from": "Benny Kramek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow performance with trivial self-joins"
},
{
"msg_contents": "Benny Kramek <[email protected]> writes:\n> I expect the query plan to be identical for both of the below queries (and I\n> expect the performance to also be identical).\n\n[ shrug... ] Your expectation is mistaken. There is no code in Postgres\nto eliminate useless self-joins. People have been fooling around with\na patch to do so [1], but I'm unsure whether it'll ever get committed,\nor whether we even want the feature. It seems not unlikely that the\ntime wasted trying to identify useless self-joins (in queries where\nthe optimization doesn't actually apply) would outweigh the win when\nit does apply. So there's a limit to how much effort the server should\nspend trying to clean up after poorly-written queries.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/[email protected]\n\n\n",
"msg_date": "Mon, 03 Feb 2020 16:10:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance with trivial self-joins"
},
{
"msg_contents": "Thank you for your response. I have tested out the patch in the linked\nthread and it works very well on a bunch of complex queries that I\nhave tested, improving both the planning time significantly and the\nexecution time drastically.\n\nI have also read through the entire linked discussion thread as well\nas a few other large threads linked from it, and found the discussion\nvery interesting.\n\nI don't believe that all such queries are \"poorly-written\". As was\ndiscussed in the other threads, the reason these types of self-joins\ncan occur is when you use SQL views. You can create a library of\nreusable views that are small, easy-to-understand and readable. Then\nyou build them up into bigger views, and finally query from them. But\nthen you end up with lots of (hidden) self-joins. The alternative is\nto copy&paste the shared logic from the views into all of the queries.\n\nI understand the need to be conservative about which optimizations to\napply in order to not waste time looking for opportunities that don't\nexist. One idea I had that I didn't see mentioned is the following\nheuristic: Only if a query references an SQL view (or multiple views),\nthen try to apply the self_join_removal optimization. This should be\nenough, because as you say, no human would intentionally write such a\nquery. Queries generated by ORMs were also discussed, so I believe it\nmight also be beneficial to consider queries that contain inner\nSELECTs.\n\n\n",
"msg_date": "Wed, 5 Feb 2020 16:55:49 -0500",
"msg_from": "Benny Kramek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow performance with trivial self-joins"
},
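To make the view scenario concrete, here is a small illustration built on the test_data table defined in the first message of this thread. Each view is trivial on its own, but querying them together reintroduces exactly the self-join from the original "slow query":

CREATE VIEW test_data_hashed AS
    SELECT id, md5(value) AS x
    FROM test_data;

CREATE VIEW test_data_deep_hashed AS
    SELECT id, md5(md5(md5(md5(md5(value))))) AS y
    FROM test_data;

-- Combining the two views hides a self-join on test_data.id:
SELECT h.id, h.x, d.y
FROM test_data_hashed h
JOIN test_data_deep_hashed d ON d.id = h.id
WHERE h.id BETWEEN 3000 AND 4000;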
{
"msg_contents": "> You can create a library of\n> reusable views that are small, easy-to-understand and readable. Then\n> you build them up into bigger views, and finally query from them. But\n> then you end up with lots of (hidden) self-joins.\n\nI will concur with this use case being pretty common, but also something I\nhave actively avoided anywhere performance is important because of the\nlack of this optimization.\n\nEven still, I have 20+ views like that in my database.\n\n> You can create a library of> reusable views that are small, easy-to-understand and readable. Then> you build them up into bigger views, and finally query from them. But> then you end up with lots of (hidden) self-joins. I will concur with this use case being pretty common, but also something I have actively avoided anywhere performance is important because of thelack of this optimization.Even still, I have 20+ views like that in my database.",
"msg_date": "Wed, 5 Feb 2020 17:12:21 -0500",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance with trivial self-joins"
},
{
"msg_contents": "On Thu, 6 Feb 2020 at 11:12, Adam Brusselback <[email protected]> wrote:\n>\n> > You can create a library of\n> > reusable views that are small, easy-to-understand and readable. Then\n> > you build them up into bigger views, and finally query from them. But\n> > then you end up with lots of (hidden) self-joins.\n>\n> I will concur with this use case being pretty common, but also something I\n> have actively avoided anywhere performance is important because of the\n> lack of this optimization.\n>\n> Even still, I have 20+ views like that in my database.\n\nI think the best direction to move in to push that forward would be to\ngo and benchmark the proposed patch and see if the overhead of\ndetecting the self joined relations is measurable with various queries\nwith varying numbers of joins.\n\nIt does not sound too like it would be a great deal of effort to look\nthrough the rangetable for duplicate Oids and only do further\nprocessing to attempt self-join removal if there are. However, if that\neffort happened to slow down all queries by say 5%, then perhaps it\nwould be a bad idea. People's opinions don't really have much\ntraction for arguments on this. Unbiased and reproducible benchmarks\nshould be used as evidence to support discussion. Doing worst-case and\naverage-case benchmarks initially will save you time, as someone will\nalmost certainly ask if you don't do it.\n\n(I've not been following the thread for the patch)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 6 Feb 2020 12:47:11 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance with trivial self-joins"
}
] |
[
{
"msg_contents": "I'm looking to write about 1100 rows per second to tables up to 100 million\nrows. I'm trying to come up with a design that I can do all the writes to a\ndatabase with no indexes. When having indexes the write performance slows\ndown dramatically after the table gets bigger than 30 million rows.\n\n\nI was thinking of having a server dedicated for all the writes and have\nanother server for reads that has indexes and use logical replication to\nupdate the read only server.\n\n\nWould that work? Or any recommendations how I can achieve good performance\nfor a lot of writes?\n\nThank you\n\nI'm looking to write about 1100 rows per second to tables up to 100 million rows. I'm trying to come up with a design that I can do all the writes to a database with no indexes. When having indexes the write performance slows down dramatically after the table gets bigger than 30 million rows.I was thinking of having a server dedicated for all the writes and have another server for reads that has indexes and use logical replication to update the read only server.Would that work? Or any recommendations how I can achieve good performance for a lot of writes?Thank you",
"msg_date": "Wed, 5 Feb 2020 12:03:52 -0500",
"msg_from": "Arya F <[email protected]>",
"msg_from_op": true,
"msg_subject": "Writing 1100 rows per second"
},
{
"msg_contents": "On Wed, 2020-02-05 at 12:03 -0500, Arya F wrote:\n> I'm looking to write about 1100 rows per second to tables up to 100 million rows. I'm trying to\n> come up with a design that I can do all the writes to a database with no indexes. When having\n> indexes the write performance slows down dramatically after the table gets bigger than 30 million rows.\n> \n> I was thinking of having a server dedicated for all the writes and have another server for reads\n> that has indexes and use logical replication to update the read only server.\n> \n> Would that work? Or any recommendations how I can achieve good performance for a lot of writes?\n\nLogical replication wouldn't make a difference, because with many indexes, replay of the\ninserts would be slow as well, and replication would lag more and more.\n\nNo matter what you do, there will be no magic way to have your tables indexed and\nhave fast inserts at the same time.\n\nOne idea I can come up with is a table that is partitioned by a column that appears\nin a selective search condition, but have no indexes on the table, so that you always get\naway with a sequential scan of a single partition.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 05 Feb 2020 18:12:34 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writing 1100 rows per second"
},
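A hypothetical sketch of that idea; the real partitioning key has to come from the application's selective search condition, which the thread does not spell out, so the table and column names below are illustrative only. Declarative partitioning as shown requires PostgreSQL 10 or later.

CREATE TABLE readings (
    device_id   integer     NOT NULL,
    recorded_at timestamptz NOT NULL,
    payload     jsonb
) PARTITION BY RANGE (recorded_at);

-- One partition per month; a query constrained to a time range then
-- sequentially scans only the partitions it needs, with no indexes at all.
CREATE TABLE readings_2020_02 PARTITION OF readings
    FOR VALUES FROM ('2020-02-01') TO ('2020-03-01');
CREATE TABLE readings_2020_03 PARTITION OF readings
    FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');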
{
"msg_contents": "On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:\n> I'm looking to write about 1100 rows per second to tables up to 100 million\n> rows. I'm trying to come up with a design that I can do all the writes to a\n> database with no indexes. When having indexes the write performance slows\n> down dramatically after the table gets bigger than 30 million rows.\n> \n> I was thinking of having a server dedicated for all the writes and have\n> another server for reads that has indexes and use logical replication to\n> update the read only server.\n\nWouldn't the readonly server still have bad performance for all the wites being\nreplicated to it ?\n\n> Would that work? Or any recommendations how I can achieve good performance\n> for a lot of writes?\n\nCan you use partitioning so the updates are mostly affecting only one table at\nonce, and its indices are of reasonable size, such that they can fit easily in\nshared_buffers.\n\nbrin indices may help for some, but likely not for all your indices.\n\nJustin\n\n\n",
"msg_date": "Wed, 5 Feb 2020 11:15:49 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writing 1100 rows per second"
},
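If some indexes are unavoidable, a BRIN index is the cheapest to maintain under heavy insertion, provided the indexed column correlates with physical row order (an append-only timestamp, for example). A sketch against the hypothetical readings table from the note above:

CREATE INDEX readings_recorded_at_brin
    ON readings USING brin (recorded_at)
    WITH (pages_per_range = 64);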
{
"msg_contents": "If I run the database on a server that has enough ram to load all the\nindexes and tables into ram. And then it would update the index on the HDD\nevery x seconds. Would that work to increase performance dramatically?\n\nOn Wed, Feb 5, 2020, 12:15 PM Justin Pryzby <[email protected]> wrote:\n\n> On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:\n> > I'm looking to write about 1100 rows per second to tables up to 100\n> million\n> > rows. I'm trying to come up with a design that I can do all the writes\n> to a\n> > database with no indexes. When having indexes the write performance slows\n> > down dramatically after the table gets bigger than 30 million rows.\n> >\n> > I was thinking of having a server dedicated for all the writes and have\n> > another server for reads that has indexes and use logical replication to\n> > update the read only server.\n>\n> Wouldn't the readonly server still have bad performance for all the wites\n> being\n> replicated to it ?\n>\n> > Would that work? Or any recommendations how I can achieve good\n> performance\n> > for a lot of writes?\n>\n> Can you use partitioning so the updates are mostly affecting only one\n> table at\n> once, and its indices are of reasonable size, such that they can fit\n> easily in\n> shared_buffers.\n>\n> brin indices may help for some, but likely not for all your indices.\n>\n> Justin\n>\n\nIf I run the database on a server that has enough ram to load all the indexes and tables into ram. And then it would update the index on the HDD every x seconds. Would that work to increase performance dramatically?On Wed, Feb 5, 2020, 12:15 PM Justin Pryzby <[email protected]> wrote:On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:\n> I'm looking to write about 1100 rows per second to tables up to 100 million\n> rows. I'm trying to come up with a design that I can do all the writes to a\n> database with no indexes. When having indexes the write performance slows\n> down dramatically after the table gets bigger than 30 million rows.\n> \n> I was thinking of having a server dedicated for all the writes and have\n> another server for reads that has indexes and use logical replication to\n> update the read only server.\n\nWouldn't the readonly server still have bad performance for all the wites being\nreplicated to it ?\n\n> Would that work? Or any recommendations how I can achieve good performance\n> for a lot of writes?\n\nCan you use partitioning so the updates are mostly affecting only one table at\nonce, and its indices are of reasonable size, such that they can fit easily in\nshared_buffers.\n\nbrin indices may help for some, but likely not for all your indices.\n\nJustin",
"msg_date": "Wed, 5 Feb 2020 12:25:30 -0500",
"msg_from": "Arya F <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Writing 1100 rows per second"
},
{
"msg_contents": "Arya,\nWe ran into the issue of decreasing insert performance for tables of\nhundreds of millions of rows and they are indeed due to index updates.\nWe tested TimescaleDB (a pgsql plugin) with success in all use cases that\nwe have. It does a \"behind the scenes\" single-level partitioning that is\nindeed very efficient.\nNot sure about the 1100 inserts/s as it is hardware dependent, but we got\nthe flat response curve (inserts per second stayed stable with hundreds of\nmillions of rows, regardless of indexes).\nMy suggestion: have a look at\nhttps://blog.timescale.com/timescaledb-vs-6a696248104e/ , and do some PoCs.\n\nRegards,\nHaroldo Kerry\n\nOn Wed, Feb 5, 2020 at 2:25 PM Arya F <[email protected]> wrote:\n\n> If I run the database on a server that has enough ram to load all the\n> indexes and tables into ram. And then it would update the index on the HDD\n> every x seconds. Would that work to increase performance dramatically?\n>\n> On Wed, Feb 5, 2020, 12:15 PM Justin Pryzby <[email protected]> wrote:\n>\n>> On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:\n>> > I'm looking to write about 1100 rows per second to tables up to 100\n>> million\n>> > rows. I'm trying to come up with a design that I can do all the writes\n>> to a\n>> > database with no indexes. When having indexes the write performance\n>> slows\n>> > down dramatically after the table gets bigger than 30 million rows.\n>> >\n>> > I was thinking of having a server dedicated for all the writes and have\n>> > another server for reads that has indexes and use logical replication to\n>> > update the read only server.\n>>\n>> Wouldn't the readonly server still have bad performance for all the wites\n>> being\n>> replicated to it ?\n>>\n>> > Would that work? Or any recommendations how I can achieve good\n>> performance\n>> > for a lot of writes?\n>>\n>> Can you use partitioning so the updates are mostly affecting only one\n>> table at\n>> once, and its indices are of reasonable size, such that they can fit\n>> easily in\n>> shared_buffers.\n>>\n>> brin indices may help for some, but likely not for all your indices.\n>>\n>> Justin\n>>\n>\n\n-- \n\nHaroldo Kerry\n\nCTO/COO\n\nRua do Rócio, 220, 7° andar, conjunto 72\n\nSão Paulo – SP / CEP 04552-000\n\[email protected]\n\nwww.callix.com.br\n\nArya,We ran into the issue of decreasing insert performance for tables of hundreds of millions of rows and they are indeed due to index updates.We tested TimescaleDB (a pgsql plugin) with success in all use cases that we have. It does a \"behind the scenes\" single-level partitioning that is indeed very efficient.Not sure about the 1100 inserts/s as it is hardware dependent, but we got the flat response curve (inserts per second stayed stable with hundreds of millions of rows, regardless of indexes).My suggestion: have a look at https://blog.timescale.com/timescaledb-vs-6a696248104e/ , and do some PoCs.Regards,Haroldo KerryOn Wed, Feb 5, 2020 at 2:25 PM Arya F <[email protected]> wrote:If I run the database on a server that has enough ram to load all the indexes and tables into ram. And then it would update the index on the HDD every x seconds. Would that work to increase performance dramatically?On Wed, Feb 5, 2020, 12:15 PM Justin Pryzby <[email protected]> wrote:On Wed, Feb 05, 2020 at 12:03:52PM -0500, Arya F wrote:\n> I'm looking to write about 1100 rows per second to tables up to 100 million\n> rows. I'm trying to come up with a design that I can do all the writes to a\n> database with no indexes. 
When having indexes the write performance slows\n> down dramatically after the table gets bigger than 30 million rows.\n> \n> I was thinking of having a server dedicated for all the writes and have\n> another server for reads that has indexes and use logical replication to\n> update the read only server.\n\nWouldn't the readonly server still have bad performance for all the wites being\nreplicated to it ?\n\n> Would that work? Or any recommendations how I can achieve good performance\n> for a lot of writes?\n\nCan you use partitioning so the updates are mostly affecting only one table at\nonce, and its indices are of reasonable size, such that they can fit easily in\nshared_buffers.\n\nbrin indices may help for some, but likely not for all your indices.\n\nJustin\n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]",
"msg_date": "Wed, 5 Feb 2020 14:46:58 -0300",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writing 1100 rows per second"
},
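A sketch of the TimescaleDB setup being described, assuming the extension is installed; create_hypertable converts a plain table into a time-partitioned hypertable behind the scenes. Table and column names here are illustrative, not taken from the thread.

CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
    device_id   integer     NOT NULL,
    recorded_at timestamptz NOT NULL,
    value       double precision
);

SELECT create_hypertable('metrics', 'recorded_at');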
{
"msg_contents": "On Wed, Feb 5, 2020 at 9:12 AM Laurenz Albe <[email protected]>\nwrote:\n\n> One idea I can come up with is a table that is partitioned by a column\n> that appears\n> in a selective search condition, but have no indexes on the table, so that\n> you always get\n> away with a sequential scan of a single partition.\n>\n>\nThis is an approach that I am currently using successfully. We have a large\ndataset that continues to be computed and so insert speed is of importance\nto us. The DB currently has about 45 billion rows. There are three columns\nthat are involved in all searches of the data. We have separate tables for\nall unique combination of those 3 values (which gives us about 2000\ntables). Thus, we were able to save the space for having to store those\ncolumns (since the name of the table defines what those 3 columns are in\nthat table). We don't have any indices on those tables (except for the\ndefault one which gets created for the pk serial number). As a result all\nsearches only involve 1 table and a sequential scan of that table. The\nlogic to choose the correct tables for insertionse or searches lives in our\napplication code and not in SQL.\n\nThe size of the 2000 tables forms a gaussian distirbution, so our largest\ntable is about a billion rows and there are many tables that have hundreds\nof millions of rows. The ongoing insertions form the same distribution, so\nthe bulk of insertions is happening into the largest tables. It is not a\nspeed demon and I have not run tests recently but back of the envelope\ncalculations give me confidence that we are definitely inserting more than\n1100 per second. And that is running the server on an old puny i5 processor\nwith regular HDDs and only 32Gb of memory.\n\nOn Wed, Feb 5, 2020 at 9:12 AM Laurenz Albe <[email protected]> wrote:One idea I can come up with is a table that is partitioned by a column that appears\nin a selective search condition, but have no indexes on the table, so that you always get\naway with a sequential scan of a single partition.This is an approach that I am currently using successfully. We have a large dataset that continues to be computed and so insert speed is of importance to us. The DB currently has about 45 billion rows. There are three columns that are involved in all searches of the data. We have separate tables for all unique combination of those 3 values (which gives us about 2000 tables). Thus, we were able to save the space for having to store those columns (since the name of the table defines what those 3 columns are in that table). We don't have any indices on those tables (except for the default one which gets created for the pk serial number). As a result all searches only involve 1 table and a sequential scan of that table. The logic to choose the correct tables for insertionse or searches lives in our application code and not in SQL.The size of the 2000 tables forms a gaussian distirbution, so our largest table is about a billion rows and there are many tables that have hundreds of millions of rows. The ongoing insertions form the same distribution, so the bulk of insertions is happening into the largest tables. It is not a speed demon and I have not run tests recently but back of the envelope calculations give me confidence that we are definitely inserting more than 1100 per second. And that is running the server on an old puny i5 processor with regular HDDs and only 32Gb of memory.",
"msg_date": "Thu, 6 Feb 2020 10:15:23 -0800",
"msg_from": "Ogden Brash <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writing 1100 rows per second"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 12:25 PM Arya F <[email protected]> wrote:\n\n> If I run the database on a server that has enough ram to load all the\n> indexes and tables into ram. And then it would update the index on the HDD\n> every x seconds. Would that work to increase performance dramatically?\n>\n\nPerhaps. Probably not dramatically though. If x seconds (called a\ncheckpoint) is not long enough for the entire index to have been dirtied,\nthen my finding is that writing half of the pages (randomly interspersed)\nof a file, even in block order, still has the horrid performance of a long\nsequence of random writes, not the much better performance of a handful of\nsequential writes. Although this probably depends strongly on your RAID\ncontroller and OS version and such, so you should try it for yourself on\nyour own hardware.\n\nCheers,\n\nJeff\n\nOn Wed, Feb 5, 2020 at 12:25 PM Arya F <[email protected]> wrote:If I run the database on a server that has enough ram to load all the indexes and tables into ram. And then it would update the index on the HDD every x seconds. Would that work to increase performance dramatically?Perhaps. Probably not dramatically though. If x seconds (called a checkpoint) is not long enough for the entire index to have been dirtied, then my finding is that writing half of the pages (randomly interspersed) of a file, even in block order, still has the horrid performance of a long sequence of random writes, not the much better performance of a handful of sequential writes. Although this probably depends strongly on your RAID controller and OS version and such, so you should try it for yourself on your own hardware.Cheers,Jeff",
"msg_date": "Sun, 9 Feb 2020 14:30:32 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writing 1100 rows per second"
},
{
"msg_contents": "I am using npgsql in my c# program,\nI have a table in which we need to dump data from a flat file approx. 5.5\nmillion rows once in every 24 hours.\nSoon it will grow to 10 million rows.\nWe are using npgsql binary import and all this is done (5.5 million)\ninserts in < 3 minutes.\nThen we create two indexes on the table which takes 25-30 seconds.\n\nWith Warm Regards,\n\nAmol P. Tarte\nProject Manager,\nRajdeep Info Techno Pvt. Ltd.\nVisit us at http://it.rajdeepgroup.com\n\n\nOn Mon, Feb 10, 2020 at 1:00 AM Jeff Janes <[email protected]> wrote:\n\n> On Wed, Feb 5, 2020 at 12:25 PM Arya F <[email protected]> wrote:\n>\n>> If I run the database on a server that has enough ram to load all the\n>> indexes and tables into ram. And then it would update the index on the HDD\n>> every x seconds. Would that work to increase performance dramatically?\n>>\n>\n> Perhaps. Probably not dramatically though. If x seconds (called a\n> checkpoint) is not long enough for the entire index to have been dirtied,\n> then my finding is that writing half of the pages (randomly interspersed)\n> of a file, even in block order, still has the horrid performance of a long\n> sequence of random writes, not the much better performance of a handful of\n> sequential writes. Although this probably depends strongly on your RAID\n> controller and OS version and such, so you should try it for yourself on\n> your own hardware.\n>\n> Cheers,\n>\n> Jeff\n>\n\nI am using npgsql in my c# program,I have a table in which we need to dump data from a flat file approx. 5.5 million rows once in every 24 hours.Soon it will grow to 10 million rows.We are using npgsql binary import and all this is done (5.5 million) inserts in < 3 minutes.Then we create two indexes on the table which takes 25-30 seconds.With Warm Regards,Amol P. TarteProject Manager,Rajdeep Info Techno Pvt. Ltd.Visit us at http://it.rajdeepgroup.comOn Mon, Feb 10, 2020 at 1:00 AM Jeff Janes <[email protected]> wrote:On Wed, Feb 5, 2020 at 12:25 PM Arya F <[email protected]> wrote:If I run the database on a server that has enough ram to load all the indexes and tables into ram. And then it would update the index on the HDD every x seconds. Would that work to increase performance dramatically?Perhaps. Probably not dramatically though. If x seconds (called a checkpoint) is not long enough for the entire index to have been dirtied, then my finding is that writing half of the pages (randomly interspersed) of a file, even in block order, still has the horrid performance of a long sequence of random writes, not the much better performance of a handful of sequential writes. Although this probably depends strongly on your RAID controller and OS version and such, so you should try it for yourself on your own hardware.Cheers,Jeff",
"msg_date": "Wed, 12 Feb 2020 10:58:15 +0530",
"msg_from": "Amol Tarte <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writing 1100 rows per second"
}
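The server-side shape of that load pattern, for reference: npgsql's binary import drives a COPY ... FROM STDIN (FORMAT BINARY) under the covers, and the indexes are created only after the data is in. Everything below (table name, file path, column and index names) is illustrative, not taken from the thread.

DROP INDEX IF EXISTS daily_feed_code_idx, daily_feed_loaded_at_idx;

TRUNCATE daily_feed;

-- Server-side equivalent of the client-driven binary import.
COPY daily_feed FROM '/data/daily_feed.bin' WITH (FORMAT binary);

-- Rebuilding the indexes afterwards corresponds to the 25-30 seconds reported above.
CREATE INDEX daily_feed_code_idx      ON daily_feed (code);
CREATE INDEX daily_feed_loaded_at_idx ON daily_feed (loaded_at);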
] |