[
{
"msg_contents": "Hi,\n\nwe are seeing cases of EXPLAIN INSERT INTO foo SELECT ... taking over an\nhour, with disk I/O utilization (percent of time device is busy) at 100%\nthe whole time, although I/O bandwidth is not saturated. This is on\nPostgreSQL 9.1.13.\n\nWhat could cause this? Note that there is no ANALYZE. Is it possible that\nthe SELECT is actually executed, in planning the INSERT?\n\nWhen executing the INSERT itself (not EXPLAIN) immediately afterwards, that\nlogs a \"temporary file\" message, but the EXPLAIN invocation does not\n(though the disk I/O suggests that a large on-disk sort is occurring):\n\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp6016.0\", size 744103936\nSTATEMENT: INSERT INTO f_foo SELECT\n[...]\n\nDuring that actual execution, there's a lower disk I/O utilization (though\na higher MB/s rate).\n\nCharts of disk I/O utilization and rate are at\nhttp://postimg.org/image/628h6jmn3/ ... the solid 100% span is the EXPLAIN\nstatement, ending at 6:13:30pm, followed by the INSERT statement ending at\n6:32pm. 
Metrics are collected by New Relic; their definition of I/O\nutilization is at https://discuss.newrelic.com/t/disk-i-o-metrics/2900\n\nHere's the EXPLAIN statement:\n\nLOG: duration: 3928778.823 ms statement: EXPLAIN INSERT INTO f_foo SELECT\n t_foo.fk_d1,\n t_foo.fk_d2,\n t_foo.fk_d3,\n t_foo.fk_d4,\n t_foo.fk_d5,\n t_foo.fk_d6,\n t_foo.value\nFROM t_foo\nWHERE NOT (EXISTS\n (SELECT *\n FROM f_foo\n WHERE f_foo.fk_d2 = t_foo.fk_d2\n AND f_foo.fk_d3 = t_foo.fk_d3\n AND f_foo.fk_d4 = t_foo.fk_d4\n AND f_foo.fk_d5 = t_foo.fk_d5\n AND f_foo.fk_d6 = t_foo.fk_d6\n AND f_foo.fk_d1 = t_foo.fk_d1))\n\n(where t_foo is a temp table previously populated using COPY, and the NOT\nEXISTS subquery refers to the same table we are inserting into)\n\nHere's the EXPLAIN output:\n\nInsert on f_foo (cost=8098210.50..9354519.69 rows=1 width=16)\n -> Merge Anti Join (cost=8098210.50..9354519.69 rows=1 width=16)\n Merge Cond: ((t_foo.fk_d2 = public.f_foo.fk_d2) AND\n (t_foo.fk_d3 = public.f_foo.fk_d3) AND\n (t_foo.fk_d4 = public.f_foo.fk_d4) AND\n (t_foo.fk_d5 = public.f_foo.fk_d5) AND\n (t_foo.fk_d6 = public.f_foo.fk_d6) AND\n (t_foo.fk_d1 = public.f_foo.fk_d1))\n -> Sort (cost=3981372.25..4052850.70 rows=28591380 width=16)\n Sort Key: t_foo.fk_d2, t_foo.fk_d3, t_foo.fk_d4, t_foo.fk_d5,\n t_foo.fk_d6, t_foo.fk_d1\n -> Seq Scan on t_foo (cost=0.00..440461.80 rows=28591380\n width=16)\n -> Sort (cost=4116838.25..4188025.36 rows=28474842 width=12)\n Sort Key: public.f_foo.fk_d2, public.f_foo.fk_d3,\n public.f_foo.fk_d4, public.f_foo.fk_d5,\n public.f_foo.fk_d6, public.f_foo.fk_d1\n -> Seq Scan on f_foo (cost=0.00..591199.42 rows=28474842\n width=12)\n\nThe INSERT is indeed rather large (which is why we're issuing an EXPLAIN\nahead of it to log the plan). So its long execution time is expected. 
But I\nwant to understand why the EXPLAIN takes even longer.\n\nThe table looks like this:\n\n\\d f_foo\nTable \"public.f_foo\"\n Column | Type | Modifiers\n--------+----------+-----------\n fk_d1 | smallint | not null\n fk_d2 | smallint | not null\n fk_d3 | smallint | not null\n fk_d4 | smallint | not null\n fk_d5 | smallint | not null\n fk_d6 | smallint | not null\n value | integer |\nIndexes:\n \"f_foo_pkey\" PRIMARY KEY, btree (fk_d2, fk_d6, fk_d4, fk_d3, fk_d5,\nfk_d1) CLUSTER\n \"ix_f_foo_d4\" btree (fk_d4)\n \"ix_f_foo_d3\" btree (fk_d3)\n \"ix_f_foo_d5\" btree (fk_d5)\n \"ix_f_foo_d6\" btree (fk_d6)\nForeign-key constraints:\n \"f_foo_d2_fkey\" FOREIGN KEY (fk_d2) REFERENCES d2(id) DEFERRABLE\n \"f_foo_d6_fkey\" FOREIGN KEY (fk_d6) REFERENCES d6(id) DEFERRABLE\n \"f_foo_d5_fkey\" FOREIGN KEY (fk_d5) REFERENCES d5(id) DEFERRABLE\n \"f_foo_d4_fkey\" FOREIGN KEY (fk_d4) REFERENCES d4(id) DEFERRABLE\n \"f_foo_d3_fkey\" FOREIGN KEY (fk_d3) REFERENCES d3(id) DEFERRABLE\n\nConceivably relevant (though I don't know how): this database has a very\nlarge number of table objects (1.3 million rows in pg_class). But other\nEXPLAINs are not taking anywhere near this long in this DB; the heavy\nEXPLAIN is only seen on INSERT into this and a couple of other tables with\ntens of millions of rows.\n\nAny ideas?\n\nThanks, best regards,\n\n- Gulli",
"msg_date": "Wed, 4 Mar 2015 19:03:12 +0000",
"msg_from": "Gunnlaugur Thor Briem <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
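The dedup pattern in the opening message — insert only those staging rows whose full key combination is absent from the target table — can be sketched as follows. This is a minimal illustration, not from the thread: Python's `sqlite3` with an in-memory database stands in for PostgreSQL (the SQL shape is identical), and only two of the six key columns are used; table names follow the post's `f_foo`/`t_foo`.

```python
# Sketch of the INSERT ... SELECT ... WHERE NOT EXISTS pattern from the post.
# SQLite stands in for PostgreSQL here; two key columns stand in for six.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE f_foo (fk_d1 INTEGER, fk_d2 INTEGER, value INTEGER)")
cur.execute("CREATE TEMP TABLE t_foo (fk_d1 INTEGER, fk_d2 INTEGER, value INTEGER)")

# Existing fact rows, plus a staging batch that partially overlaps them.
cur.executemany("INSERT INTO f_foo VALUES (?, ?, ?)", [(1, 1, 10), (1, 2, 20)])
cur.executemany("INSERT INTO t_foo VALUES (?, ?, ?)", [(1, 2, 20), (1, 3, 30)])

# Insert only staging rows whose key combination is not already present --
# the anti-join that PostgreSQL planned as a Merge Anti Join in the thread.
cur.execute("""
    INSERT INTO f_foo
    SELECT t_foo.fk_d1, t_foo.fk_d2, t_foo.value
    FROM t_foo
    WHERE NOT EXISTS (
        SELECT * FROM f_foo
        WHERE f_foo.fk_d1 = t_foo.fk_d1
          AND f_foo.fk_d2 = t_foo.fk_d2)
""")
print(cur.execute("SELECT COUNT(*) FROM f_foo").fetchone()[0])  # prints 3: only (1, 3, 30) was new
```

The key point for the thread: planning this statement only requires estimating the anti-join, not executing it, which is why an EXPLAIN driving an hour of disk I/O is surprising.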
{
"msg_contents": ">Hi,\n>we are seeing cases of EXPLAIN INSERT INTO foo SELECT ... taking over an hour, with disk I/O utilization (percent of time device is busy) at 100% the whole time, although I/O bandwidth is not saturated. This is on PostgreSQL 9.1.13.\n>What could cause this? Note that there is no ANALYZE. Is it possible that the SELECT is actually executed, in planning the INSERT?\n>When executing the INSERT itself (not EXPLAIN) immediately afterwards, that logs a \"temporary file\" message, but the EXPLAIN invocation does not (though the disk I/O suggests that a large on-disk sort is occurring):\n>LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp6016.0\", size 744103936\n>STATEMENT: INSERT INTO f_foo SELECT\n>[...]\n>During that actual execution, there's a lower disk I/O utilization (though a higher MB/s rate).\n>Charts of disk I/O utilization and rate are at http://postimg.org/image/628h6jmn3/ ... the solid 100% span is the EXPLAIN statement, ending at 6:13:30pm, followed by the INSERT statement ending at 6:32pm. 
Metrics are collected by New Relic; their definition of I/O utilization is at https://discuss.newrelic.com/t/disk-i-o-metrics/2900\n>Here's the EXPLAIN statement:\n>LOG: duration: 3928778.823 ms statement: EXPLAIN INSERT INTO f_foo SELECT\n> t_foo.fk_d1,\n> t_foo.fk_d2,\n> t_foo.fk_d3,\n> t_foo.fk_d4,\n> t_foo.fk_d5,\n> t_foo.fk_d6,\n> t_foo.value\n>FROM t_foo\n>WHERE NOT (EXISTS\n> (SELECT *\n> FROM f_foo\n> WHERE f_foo.fk_d2 = t_foo.fk_d2\n> AND f_foo.fk_d3 = t_foo.fk_d3\n> AND f_foo.fk_d4 = t_foo.fk_d4\n> AND f_foo.fk_d5 = t_foo.fk_d5\n> AND f_foo.fk_d6 = t_foo.fk_d6\n> AND f_foo.fk_d1 = t_foo.fk_d1))\n>(where t_foo is a temp table previously populated using COPY, and the NOT EXISTS subquery refers to the same table we are inserting into)\n>Here's the EXPLAIN output:\n>Insert on f_foo (cost=8098210.50..9354519.69 rows=1 width=16)\n> -> Merge Anti Join (cost=8098210.50..9354519.69 rows=1 width=16)\n> Merge Cond: ((t_foo.fk_d2 = public.f_foo.fk_d2) AND\n> (t_foo.fk_d3 = public.f_foo.fk_d3) AND\n> (t_foo.fk_d4 = public.f_foo.fk_d4) AND\n> (t_foo.fk_d5 = public.f_foo.fk_d5) AND\n> (t_foo.fk_d6 = public.f_foo.fk_d6) AND\n> (t_foo.fk_d1 = public.f_foo.fk_d1))\n> -> Sort (cost=3981372.25..4052850.70 rows=28591380 width=16)\n> Sort Key: t_foo.fk_d2, t_foo.fk_d3, t_foo.fk_d4, t_foo.fk_d5,\n> t_foo.fk_d6, t_foo.fk_d1\n> -> Seq Scan on t_foo (cost=0.00..440461.80 rows=28591380\n> width=16)\n> -> Sort (cost=4116838.25..4188025.36 rows=28474842 width=12)\n> Sort Key: public.f_foo.fk_d2, public.f_foo.fk_d3,\n> public.f_foo.fk_d4, public.f_foo.fk_d5,\n> public.f_foo.fk_d6, public.f_foo.fk_d1\n> -> Seq Scan on f_foo (cost=0.00..591199.42 rows=28474842\n> width=12)\n>The INSERT is indeed rather large (which is why we're issuing an EXPLAIN ahead of it to log the plan). So its long execution time is expected. 
But I want to understand why the EXPLAIN takes even longer.\n>The table looks like this:\n>\\d f_foo\n>Table \"public.f_foo\"\n> Column | Type | Modifiers\n>--------+----------+-----------\n> fk_d1 | smallint | not null\n> fk_d2 | smallint | not null\n> fk_d3 | smallint | not null\n> fk_d4 | smallint | not null\n> fk_d5 | smallint | not null\n> fk_d6 | smallint | not null\n> value | integer |\n>Indexes:\n> \"f_foo_pkey\" PRIMARY KEY, btree (fk_d2, fk_d6, fk_d4, fk_d3, fk_d5, fk_d1) CLUSTER\n> \"ix_f_foo_d4\" btree (fk_d4)\n> \"ix_f_foo_d3\" btree (fk_d3)\n> \"ix_f_foo_d5\" btree (fk_d5)\n> \"ix_f_foo_d6\" btree (fk_d6)\n>Foreign-key constraints:\n> \"f_foo_d2_fkey\" FOREIGN KEY (fk_d2) REFERENCES d2(id) DEFERRABLE\n> \"f_foo_d6_fkey\" FOREIGN KEY (fk_d6) REFERENCES d6(id) DEFERRABLE\n> \"f_foo_d5_fkey\" FOREIGN KEY (fk_d5) REFERENCES d5(id) DEFERRABLE\n> \"f_foo_d4_fkey\" FOREIGN KEY (fk_d4) REFERENCES d4(id) DEFERRABLE\n> \"f_foo_d3_fkey\" FOREIGN KEY (fk_d3) REFERENCES d3(id) DEFERRABLE\n>Conceivably relevant (though I don't know how): this database has a very large number of table objects (1.3 million rows in pg_class). But other EXPLAINs are not taking anywhere near this long in this DB; the heavy EXPLAIN is only seen on INSERT into this and a couple of other tables with tens of millions of rows.\n>Any ideas?\n>Thanks, best regards,\n>- Gulli\n>\n\nHi,\nI've no clue for the time required by EXPLAIN\nbut some more information are probably relevant to find an explanation:\n\n- postgres version\n- number of rows inserted by the query\n- how clean is your catalog in regard to vacuum\n ( can you run vacuum full verbose & analyze it, and then retry the analyze statement ?)\n- any other process that may interfere, e.g. 
while locking some catalog tables ?\n- statistic target ?\n- is your temp table analyzed?\n- any index on it ?\n\nWe have about 300'000 entries in our pg_class tables, and I've never seen such an issue.\n\nregards,\nMarc Mamin",
"msg_date": "Wed, 4 Mar 2015 20:10:56 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
"msg_contents": "Hi, thanks for your follow-up questions.\n\n- postgres version is 9.1.13\n- the number of rows (in this latest instance) is 28,474,842\n- I've clustered and vacuum-full-ed and analyzed this table frequently,\nattempting to troubleshoot this. (Running vacuum full on the whole catalog\nseems a little excessive, and unlikely to help.)\n- no other processes are likely to be interfering; nothing other than\nPostgreSQL runs on this machine (except for normal OS processes and New\nRelic server monitoring service); concurrent activity in PostgreSQL is\nlow-level and unrelated, and this effect is observed systematically\nwhenever this kind of operation is performed on this table\n- no override for this table; the system default_statistics_target is 100\n(the default)\n- yes, an ANALYZE is performed on the temp table after the COPY and before\nthe INSERT\n- no index on the temp table (but I'm scanning the whole thing anyway).\nThere are indexes on f_foo as detailed in my original post, and I expect\nthe PK to make the WHERE NOT EXISTS filtering efficient (as it filters on\nexactly all columns of the PK) ... but even if it didn't, I would expect\nthat to only slow down the actual insert execution, not the EXPLAIN.\n\nCheers,\n\nGulli\n\n\nOn Wed, Mar 4, 2015 at 8:10 PM, Marc Mamin <[email protected]> wrote:\n\n> >Hi,\n> >we are seeing cases of EXPLAIN INSERT INTO foo SELECT ... taking over an\n> hour, with disk I/O utilization (percent of time device is busy) at 100%\n> the whole time, although I/O bandwidth is not saturated. This is on\n> PostgreSQL 9.1.13.\n> >What could cause this? Note that there is no ANALYZE. 
Is it possible that\n> the SELECT is actually executed, in planning the INSERT?\n> >When executing the INSERT itself (not EXPLAIN) immediately afterwards,\n> that logs a \"temporary file\" message, but the EXPLAIN invocation does not\n> (though the disk I/O suggests that a large on-disk sort is occurring):\n> >LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp6016.0\", size\n> 744103936\n> >STATEMENT: INSERT INTO f_foo SELECT\n> >[...]\n> >During that actual execution, there's a lower disk I/O utilization\n> (though a higher MB/s rate).\n> >Charts of disk I/O utilization and rate are at\n> http://postimg.org/image/628h6jmn3/ ... the solid 100% span is the\n> EXPLAIN statement, ending at 6:13:30pm, followed by the INSERT statement\n> ending at 6:32pm. Metrics are collected by New Relic; their definition of\n> I/O utilization is at https://discuss.newrelic.com/t/disk-i-o-metrics/2900\n> >Here's the EXPLAIN statement:\n> >LOG: duration: 3928778.823 ms statement: EXPLAIN INSERT INTO f_foo\n> SELECT\n> > t_foo.fk_d1,\n> > t_foo.fk_d2,\n> > t_foo.fk_d3,\n> > t_foo.fk_d4,\n> > t_foo.fk_d5,\n> > t_foo.fk_d6,\n> > t_foo.value\n> >FROM t_foo\n> >WHERE NOT (EXISTS\n> > (SELECT *\n> > FROM f_foo\n> > WHERE f_foo.fk_d2 = t_foo.fk_d2\n> > AND f_foo.fk_d3 = t_foo.fk_d3\n> > AND f_foo.fk_d4 = t_foo.fk_d4\n> > AND f_foo.fk_d5 = t_foo.fk_d5\n> > AND f_foo.fk_d6 = t_foo.fk_d6\n> > AND f_foo.fk_d1 = t_foo.fk_d1))\n> >(where t_foo is a temp table previously populated using COPY, and the NOT\n> EXISTS subquery refers to the same table we are inserting into)\n> >Here's the EXPLAIN output:\n> >Insert on f_foo (cost=8098210.50..9354519.69 rows=1 width=16)\n> > -> Merge Anti Join (cost=8098210.50..9354519.69 rows=1 width=16)\n> > Merge Cond: ((t_foo.fk_d2 = public.f_foo.fk_d2) AND\n> > (t_foo.fk_d3 = public.f_foo.fk_d3) AND\n> > (t_foo.fk_d4 = public.f_foo.fk_d4) AND\n> > (t_foo.fk_d5 = public.f_foo.fk_d5) AND\n> > (t_foo.fk_d6 = public.f_foo.fk_d6) AND\n> > (t_foo.fk_d1 = 
public.f_foo.fk_d1))\n> > -> Sort (cost=3981372.25..4052850.70 rows=28591380 width=16)\n> > Sort Key: t_foo.fk_d2, t_foo.fk_d3, t_foo.fk_d4,\n> t_foo.fk_d5,\n> > t_foo.fk_d6, t_foo.fk_d1\n> > -> Seq Scan on t_foo (cost=0.00..440461.80 rows=28591380\n> > width=16)\n> > -> Sort (cost=4116838.25..4188025.36 rows=28474842 width=12)\n> > Sort Key: public.f_foo.fk_d2, public.f_foo.fk_d3,\n> > public.f_foo.fk_d4, public.f_foo.fk_d5,\n> > public.f_foo.fk_d6, public.f_foo.fk_d1\n> > -> Seq Scan on f_foo (cost=0.00..591199.42 rows=28474842\n> > width=12)\n> >The INSERT is indeed rather large (which is why we're issuing an EXPLAIN\n> ahead of it to log the plan). So its long execution time is expected. But I\n> want to understand why the EXPLAIN takes even longer.\n> >The table looks like this:\n> >\\d f_foo\n> >Table \"public.f_foo\"\n> > Column | Type | Modifiers\n> >--------+----------+-----------\n> > fk_d1 | smallint | not null\n> > fk_d2 | smallint | not null\n> > fk_d3 | smallint | not null\n> > fk_d4 | smallint | not null\n> > fk_d5 | smallint | not null\n> > fk_d6 | smallint | not null\n> > value | integer |\n> >Indexes:\n> > \"f_foo_pkey\" PRIMARY KEY, btree (fk_d2, fk_d6, fk_d4, fk_d3, fk_d5,\n> fk_d1) CLUSTER\n> > \"ix_f_foo_d4\" btree (fk_d4)\n> > \"ix_f_foo_d3\" btree (fk_d3)\n> > \"ix_f_foo_d5\" btree (fk_d5)\n> > \"ix_f_foo_d6\" btree (fk_d6)\n> >Foreign-key constraints:\n> > \"f_foo_d2_fkey\" FOREIGN KEY (fk_d2) REFERENCES d2(id) DEFERRABLE\n> > \"f_foo_d6_fkey\" FOREIGN KEY (fk_d6) REFERENCES d6(id) DEFERRABLE\n> > \"f_foo_d5_fkey\" FOREIGN KEY (fk_d5) REFERENCES d5(id) DEFERRABLE\n> > \"f_foo_d4_fkey\" FOREIGN KEY (fk_d4) REFERENCES d4(id) DEFERRABLE\n> > \"f_foo_d3_fkey\" FOREIGN KEY (fk_d3) REFERENCES d3(id) DEFERRABLE\n> >Conceivably relevant (though I don't know how): this database has a very\n> large number of table objects (1.3 million rows in pg_class). 
But other\n> EXPLAINs are not taking anywhere near this long in this DB; the heavy\n> EXPLAIN is only seen on INSERT into this and a couple of other tables with\n> tens of millions of rows.\n> >Any ideas?\n> >Thanks, best regards,\n> >- Gulli\n> >\n>\n> Hi,\n> I've no clue for the time required by EXPLAIN\n> but some more information are probably relevant to find an explanation:\n>\n> - postgres version\n> - number of rows inserted by the query\n> - how clean is your catalog in regard to vacuum\n> ( can you run vacuum full verbose & analyze it, and then retry the\n> analyze statement ?)\n> - any other process that may interfere, e.g. while locking some catalog\n> tables ?\n> - statistic target ?\n> - is your temp table analyzed?\n> - any index on it ?\n>\n> We have about 300'000 entries in our pg_class tables, and I've never seen\n> such an issue.\n>\n> regards,\n> Marc Mamin\n>\n>",
"msg_date": "Thu, 5 Mar 2015 15:01:58 +0000",
"msg_from": "Gunnlaugur Thor Briem <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
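Gulli notes above that the NOT EXISTS filter covers exactly the primary-key columns of f_foo. As a side note (not raised in the thread): when the dedup key is backed by a unique index like this, PostgreSQL 9.5 and later can express the same load as INSERT ... ON CONFLICT DO NOTHING — not available on the 9.1 under discussion. The sketch below uses SQLite's INSERT OR IGNORE, which behaves equivalently against a unique key, again with two key columns standing in for six.

```python
# Alternative dedup via the unique key itself rather than an anti-join.
# SQLite's INSERT OR IGNORE stands in for PostgreSQL 9.5+'s
# INSERT ... ON CONFLICT DO NOTHING (not available on 9.1).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE f_foo (fk_d1 INTEGER, fk_d2 INTEGER, value INTEGER, "
            "PRIMARY KEY (fk_d1, fk_d2))")
cur.executemany("INSERT INTO f_foo VALUES (?, ?, ?)", [(1, 1, 10), (1, 2, 20)])

# Staging batch: one duplicate key (1, 2) and one new key (1, 3).
rows = [(1, 2, 99), (1, 3, 30)]
cur.executemany("INSERT OR IGNORE INTO f_foo VALUES (?, ?, ?)", rows)

print(cur.execute("SELECT COUNT(*) FROM f_foo").fetchone()[0])  # prints 3: the duplicate key was skipped
```

This pushes the duplicate check into the index probe done at insert time, so no separate sort or anti-join of both tables is planned at all.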
{
"msg_contents": ">What could cause this? Note that there is no ANALYZE.\n\nCan you capture pstack and/or perf report while explain hangs?\nI think it should shed light on the activity of PostgreSQL.\n\nVladimir\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Mar 2015 19:25:08 +0300",
"msg_from": "Vladimir Sitnikov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
"msg_contents": ">Hi, thanks for your follow-up questions.\n>- postgres version is 9.1.13\n>- the number of rows (in this latest instance) is 28,474,842\n>- I've clustered and vacuum-full-ed and analyzed this table frequently, attempting to troubleshoot this. (Running vacuum full on the whole catalog seems a little excessive, and unlikely to help.)\n\nHi,\n\nI mean the pg_* tables. When working with temp objects and a high number of classes, regular vacuum may not clean them efficiently.\nIt is not a bad idea to run a vacuum full verbose manually on the largest of those from time to time to verify that they don't grow out of control.\nAnd this normally requires a few seconds only.\nThe verbose output of vacuum full sometimes returns interesting information...\nFor the ANALYZE performance, I guess that these are the most relevant ones:\n pg_statistic;\n pg_class;\n pg_attribute;\n pg_index;\n pg_constraint;\n\nregards,\n\nMarc Mamin\n\n\n\n>- no other processes are likely to be interfering; nothing other than PostgreSQL runs on this machine (except for normal OS processes and New Relic server monitoring service); concurrent activity in PostgreSQL is low-level and unrelated, and this effect is observed systematically whenever this kind of operation is performed on this table\n>- no override for this table; the system default_statistics_target is 100 (the default)\n>- yes, an ANALYZE is performed on the temp table after the COPY and before the INSERT\n>- no index on the temp table (but I'm scanning the whole thing anyway). There are indexes on f_foo as detailed in my original post, and I expect the PK to make the WHERE NOT EXISTS filtering efficient (as it filters on exactly all columns of the PK) ... but even if it didn't, I would expect that to only slow down the actual insert execution, not the EXPLAIN.\n>Cheers,\n>Gulli\n>On Wed, Mar 4, 2015 at 8:10 PM, Marc Mamin <[email protected]> wrote:\n>\n> >Hi,\n> >we are seeing cases of EXPLAIN INSERT INTO foo SELECT ...
taking over an hour, with disk I/O utilization (percent of time device is busy) at 100% the whole time, although I/O bandwidth is not saturated. This is on PostgreSQL 9.1.13.\n> >What could cause this? Note that there is no ANALYZE. Is it possible that the SELECT is actually executed, in planning the INSERT?\n> >When executing the INSERT itself (not EXPLAIN) immediately afterwards, that logs a \"temporary file\" message, but the EXPLAIN invocation does not (though the disk I/O suggests that a large on-disk sort is occurring):\n> >LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp6016.0\", size 744103936\n> >STATEMENT: INSERT INTO f_foo SELECT\n> >[...]\n> >During that actual execution, there's a lower disk I/O utilization (though a higher MB/s rate).\n> >Charts of disk I/O utilization and rate are at http://postimg.org/image/628h6jmn3/ ... the solid 100% span is the EXPLAIN statement, ending at 6:13:30pm, followed by the INSERT statement ending at 6:32pm. Metrics are collected by New Relic; their definition of I/O utilization is at https://discuss.newrelic.com/t/disk-i-o-metrics/2900\n> >Here's the EXPLAIN statement:\n> >LOG: duration: 3928778.823 ms statement: EXPLAIN INSERT INTO f_foo SELECT\n> > t_foo.fk_d1,\n> > t_foo.fk_d2,\n> > t_foo.fk_d3,\n> > t_foo.fk_d4,\n> > t_foo.fk_d5,\n> > t_foo.fk_d6,\n> > t_foo.value\n> >FROM t_foo\n> >WHERE NOT (EXISTS\n> > (SELECT *\n> > FROM f_foo\n> > WHERE f_foo.fk_d2 = t_foo.fk_d2\n> > AND f_foo.fk_d3 = t_foo.fk_d3\n> > AND f_foo.fk_d4 = t_foo.fk_d4\n> > AND f_foo.fk_d5 = t_foo.fk_d5\n> > AND f_foo.fk_d6 = t_foo.fk_d6\n> > AND f_foo.fk_d1 = t_foo.fk_d1))\n> >(where t_foo is a temp table previously populated using COPY, and the NOT EXISTS subquery refers to the same table we are inserting into)\n> >Here's the EXPLAIN output:\n> >Insert on f_foo (cost=8098210.50..9354519.69 rows=1 width=16)\n> > -> Merge Anti Join (cost=8098210.50..9354519.69 rows=1 width=16)\n> > Merge Cond: ((t_foo.fk_d2 = public.f_foo.fk_d2) AND\n> > 
(t_foo.fk_d3 = public.f_foo.fk_d3) AND\n> > (t_foo.fk_d4 = public.f_foo.fk_d4) AND\n> > (t_foo.fk_d5 = public.f_foo.fk_d5) AND\n> > (t_foo.fk_d6 = public.f_foo.fk_d6) AND\n> > (t_foo.fk_d1 = public.f_foo.fk_d1))\n> > -> Sort (cost=3981372.25..4052850.70 rows=28591380 width=16)\n> > Sort Key: t_foo.fk_d2, t_foo.fk_d3, t_foo.fk_d4, t_foo.fk_d5,\n> > t_foo.fk_d6, t_foo.fk_d1\n> > -> Seq Scan on t_foo (cost=0.00..440461.80 rows=28591380\n> > width=16)\n> > -> Sort (cost=4116838.25..4188025.36 rows=28474842 width=12)\n> > Sort Key: public.f_foo.fk_d2, public.f_foo.fk_d3,\n> > public.f_foo.fk_d4, public.f_foo.fk_d5,\n> > public.f_foo.fk_d6, public.f_foo.fk_d1\n> > -> Seq Scan on f_foo (cost=0.00..591199.42 rows=28474842\n> > width=12)\n> >The INSERT is indeed rather large (which is why we're issuing an EXPLAIN ahead of it to log the plan). So its long execution time is expected. But I want to understand why the EXPLAIN takes even longer.\n> >The table looks like this:\n> >\\d f_foo\n> >Table \"public.f_foo\"\n> > Column | Type | Modifiers\n> >--------+----------+-----------\n> > fk_d1 | smallint | not null\n> > fk_d2 | smallint | not null\n> > fk_d3 | smallint | not null\n> > fk_d4 | smallint | not null\n> > fk_d5 | smallint | not null\n> > fk_d6 | smallint | not null\n> > value | integer |\n> >Indexes:\n> > \"f_foo_pkey\" PRIMARY KEY, btree (fk_d2, fk_d6, fk_d4, fk_d3, fk_d5, fk_d1) CLUSTER\n> > \"ix_f_foo_d4\" btree (fk_d4)\n> > \"ix_f_foo_d3\" btree (fk_d3)\n> > \"ix_f_foo_d5\" btree (fk_d5)\n> > \"ix_f_foo_d6\" btree (fk_d6)\n> >Foreign-key constraints:\n> > \"f_foo_d2_fkey\" FOREIGN KEY (fk_d2) REFERENCES d2(id) DEFERRABLE\n> > \"f_foo_d6_fkey\" FOREIGN KEY (fk_d6) REFERENCES d6(id) DEFERRABLE\n> > \"f_foo_d5_fkey\" FOREIGN KEY (fk_d5) REFERENCES d5(id) DEFERRABLE\n> > \"f_foo_d4_fkey\" FOREIGN KEY (fk_d4) REFERENCES d4(id) DEFERRABLE\n> > \"f_foo_d3_fkey\" FOREIGN KEY (fk_d3) REFERENCES d3(id) DEFERRABLE\n> >Conceivably relevant (though I don't know how): this 
database has a very large number of table objects (1.3 million rows in pg_class). But other EXPLAINs are not taking anywhere near this long in this DB; the heavy EXPLAIN is only seen on INSERT into this and a couple of other tables with tens of millions of rows.\n> >Any ideas?\n> >Thanks, best regards,\n> >- Gulli\n> >\n> Hi,\n> I've no clue for the time required by EXPLAIN\n> but some more information are probably relevant to find an explanation:\n>\n> - postgres version\n> - number of rows inserted by the query\n> - how clean is your catalog in regard to vacuum\n> ( can you run vacuum full verbose & analyze it, and then retry the analyze statement ?)\n> - any other process that may interfere, e.g. while locking some catalog tables ?\n> - statistic target ?\n> - is your temp table analyzed?\n> - any index on it ?\n>\n> We have about 300'000 entries in our pg_class tables, and I've never seen such an issue.\n>\n> regards,\n> Marc Mamin\n>\n>",
"msg_date": "Thu, 5 Mar 2015 21:13:47 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
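An aside on the plan quoted above: the planner chose a merge anti join over all six key columns, which is why both inputs are sorted first. A hypothetical Python sketch of what that node does (purely illustrative, not PostgreSQL's actual executor):

```python
# Hypothetical sketch of the Merge Anti Join node in the plan above:
# sort both inputs on the join key, walk them in lockstep, and emit
# only the t_foo rows that have no partner in f_foo.
def merge_anti_join(t_rows, f_keys, key):
    t_sorted = sorted(t_rows, key=key)      # the Sort node over t_foo
    f_sorted = sorted(f_keys)               # the Sort node over f_foo
    out, i = [], 0
    for row in t_sorted:
        k = key(row)
        # advance the f_foo cursor past keys smaller than the current one
        while i < len(f_sorted) and f_sorted[i] < k:
            i += 1
        if i >= len(f_sorted) or f_sorted[i] != k:
            out.append(row)                 # anti join: emit only non-matches
    return out

# toy rows: (six-column key collapsed to a tuple, value)
t_foo = [((1, 1), 10), ((2, 5), 20), ((3, 7), 30)]
f_foo_keys = [(1, 1), (3, 7)]
new_rows = merge_anti_join(t_foo, f_foo_keys, key=lambda r: r[0])
```

The two `sorted()` calls correspond to the plan's two Sort nodes; at roughly 28 million rows per side, those sorts are what spill the ~700 MB temporary file seen in the log.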
{
"msg_contents": "Hi,\n\nOn 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:\n> Hi, thanks for your follow-up questions.\n> \n> - postgres version is 9.1.13\n> - the number of rows (in this latest instance) is 28,474,842\n> - I've clustered and vacuum-full-ed and analyzed this table frequently,\n> attempting to troubleshoot this. (Running vacuum full on the whole\n> catalog seems a little excessive, and unlikely to help.)\n> - no other processes are likely to be interfering; nothing other than\n> PostgreSQL runs on this machine (except for normal OS processes and New\n> Relic server monitoring service); concurrent activity in PostgreSQL is\n> low-level and unrelated, and this effect is observed systematically\n> whenever this kind of operation is performed on this table\n> - no override for this table; the system default_statistics_target is\n> 100 (the default)\n> - yes, an ANALYZE is performed on the temp table after the COPY and\n> before the INSERT\n> - no index on the temp table (but I'm scanning the whole thing anyway).\n> There are indexes on f_foo as detailed in my original post, and I expect\n> the PK to make the WHERE NOT EXISTS filtering efficient (as it filters\n> on exactly all columns of the PK) ... but even if it didn't, I would\n> expect that to only slow down the actual insert execution, not the EXPLAIN.\n\nThe only thing I can think of is some sort of memory exhaustion,\nresulting in swapping out large amounts of memory. That'd explain the\nI/O load. Can you run something like vmstat to see if this really is swap?\n\nThe fact that plain INSERT does not do that contradicts that, as it\nshould be able to plan either both queries (plain and EXPLAIN), or none\nof them.\n\nCan you prepare a self-contained test case? I.e. a script that\ndemonstrates the issue? 
I tried to reproduce the issue using the\ninformation provided so far, but unsuccessfully :-(\n\nEven if you could reproduce the problem on another machine (because of\nkeeping the data internal) on a server with debug symbols and see where\nmost of the time is spent (e.g. using 'perf top'), that'd be useful.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 06 Mar 2015 01:20:11 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:\n>> - postgres version is 9.1.13\n\n> The only thing I can think of is some sort of memory exhaustion,\n> resulting in swapping out large amounts of memory.\n\nI'm wondering about the issue addressed by commit fccebe421 (\"Use\nSnapshotDirty rather than an active snapshot to probe index endpoints\").\nNow, that was allegedly fixed in 9.1.13 ... but if the OP were confused\nand this server were running, say, 9.1.12, that could be a viable\nexplanation. Another possibly viable explanation for seeing the issue\nin 9.1.13 would be if I fat-fingered the back-patch somehow :-(.\n\nIn any case, I concur with your advice:\n\n> Even if you could reproduce the problem on another machine (because of\n> keeping the data internal) on a server with debug symbols and see where\n> most of the time is spent (e.g. using 'perf top'), that'd be useful.\n\nWithout some more-precise idea of where the time is going, we're really\njust guessing as to the cause.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 05 Mar 2015 19:44:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
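For readers unfamiliar with fccebe421: when the planner needs the current minimum or maximum of an indexed column, it walks the index from one end until it finds a tuple the snapshot can see, visiting the heap for each candidate. A toy model of why the snapshot choice matters (hypothetical Python, purely illustrative):

```python
# Each index entry is (key, state); state is 'live', 'dead' (aborted or
# deleted, not yet vacuumed), or 'in_progress' (uncommitted transaction).
# Every entry examined costs a heap visit to check visibility.
def probe_endpoint(index_entries, snapshot):
    """Return (first visible key, heap visits made)."""
    visits = 0
    for key, state in index_entries:
        visits += 1
        if state == 'live':
            return key, visits
        if state == 'in_progress' and snapshot == 'dirty':
            # SnapshotDirty (commit fccebe421) accepts in-progress tuples,
            # so a huge uncommitted insert terminates the walk immediately.
            return key, visits
    return None, visits

# a million uncommitted rows sitting at the index endpoint:
entries = [(k, 'in_progress') for k in range(1_000_000)] + [(1_000_000, 'live')]
_, mvcc_visits = probe_endpoint(entries, 'mvcc')    # steps over all of them
_, dirty_visits = probe_endpoint(entries, 'dirty')  # stops at the first

# dead-but-unvacuumed entries still have to be stepped over either way,
# which is where the hint-bit discussion later in the thread comes in:
dead = [(k, 'dead') for k in range(500_000)] + [(500_000, 'live')]
_, dead_visits = probe_endpoint(dead, 'dirty')
```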
{
"msg_contents": "On 6.3.2015 01:44, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:\n>>> - postgres version is 9.1.13\n> \n>> The only thing I can think of is some sort of memory exhaustion,\n>> resulting in swapping out large amounts of memory.\n> \n> I'm wondering about the issue addressed by commit fccebe421 (\"Use \n> SnapshotDirty rather than an active snapshot to probe index\n> endpoints\"). Now, that was allegedly fixed in 9.1.13 ... but if the\n> OP were confused and this server were running, say, 9.1.12, that\n> could be a viable explanation. Another possibly viable explanation\n> for seeing the issue in 9.1.13 would be if I fat-fingered the\n> back-patch somehow :-(.\n\nHow would fccebe421 explain the large amount of random writes (~4MB/s\nfor more than an hour), reported in the initial post? And why would that\nonly affect the EXPLAIN and not the bare query?\n\nI guess there might be two sessions, one keeping uncommitted changes\n(thus invisible tuples), and the other one doing the explain. And the\nactual query might be executed after the first session does a commit.\n\nBut the random writes don't really match in this scenario ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 06 Mar 2015 18:18:49 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
{
"msg_contents": "On Thu, Mar 5, 2015 at 4:44 PM, Tom Lane <[email protected]> wrote:\n\n> Tomas Vondra <[email protected]> writes:\n> > On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:\n> >> - postgres version is 9.1.13\n>\n> > The only thing I can think of is some sort of memory exhaustion,\n> > resulting in swapping out large amounts of memory.\n>\n> I'm wondering about the issue addressed by commit fccebe421 (\"Use\n> SnapshotDirty rather than an active snapshot to probe index endpoints\").\n> Now, that was allegedly fixed in 9.1.13 ... but if the OP were confused\n> and this server were running, say, 9.1.12, that could be a viable\n> explanation. Another possibly viable explanation for seeing the issue\n> in 9.1.13 would be if I fat-fingered the back-patch somehow :-(.\n>\n\n\nThe back patch into 9.1.13 seems to have worked.\n\npsql -c 'create table foo (x integer ); create index on foo(x);insert into\nfoo select * from generate_series(1,10000); analyze foo;'\n\nperl -le 'use DBI; my $dbh=DBI->connect(\"DBi:Pg:\"); $dbh->begin_work();\nforeach (1..1e6) {$dbh->do(\"insert into foo values ($_)\") or die; };\n$dbh->rollback()' &\n\nwhile (true);\n do pgbench -T5 -c4 -j4 -n -f <(echo \"explain select count(*) from foo a\njoin foo b using (x);\");\ndone\n\non 9.1.12 this slows down dramatically and on 9.1.13 it does not.\n\nCheers,\n\nJeff\n\nOn Thu, Mar 5, 2015 at 4:44 PM, Tom Lane <[email protected]> wrote:Tomas Vondra <[email protected]> writes:\n> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:\n>> - postgres version is 9.1.13\n\n> The only thing I can think of is some sort of memory exhaustion,\n> resulting in swapping out large amounts of memory.\n\nI'm wondering about the issue addressed by commit fccebe421 (\"Use\nSnapshotDirty rather than an active snapshot to probe index endpoints\").\nNow, that was allegedly fixed in 9.1.13 ... but if the OP were confused\nand this server were running, say, 9.1.12, that could be a viable\nexplanation. 
Another possibly viable explanation for seeing the issue\nin 9.1.13 would be if I fat-fingered the back-patch somehow :-(.The back patch into 9.1.13 seems to have worked.psql -c 'create table foo (x integer ); create index on foo(x);insert into foo select * from generate_series(1,10000); analyze foo;'perl -le 'use DBI; my $dbh=DBI->connect(\"DBi:Pg:\"); $dbh->begin_work(); foreach (1..1e6) {$dbh->do(\"insert into foo values ($_)\") or die; }; $dbh->rollback()' & while (true); do pgbench -T5 -c4 -j4 -n -f <(echo \"explain select count(*) from foo a join foo b using (x);\"); doneon 9.1.12 this slows down dramatically and on 9.1.13 it does not.Cheers,Jeff",
"msg_date": "Fri, 6 Mar 2015 10:18:31 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 6.3.2015 01:44, Tom Lane wrote:\n>> I'm wondering about the issue addressed by commit fccebe421 (\"Use \n>> SnapshotDirty rather than an active snapshot to probe index\n>> endpoints\").\n\n> How would fccebe421 explain the large amount of random writes (~4MB/s\n> for more than an hour), reported in the initial post? And why would that\n> only affect the EXPLAIN and not the bare query?\n\nHard to say. There's probably additional factors involved, and this\nmay ultimately turn out to be unrelated. But it seemed like it might\nbe an issue, particularly since the plan had a mergejoin with lots of\nclauses ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 06 Mar 2015 13:36:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
"msg_contents": "Tomas Vondra <[email protected]> wrote:\n> On 6.3.2015 01:44, Tom Lane wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:\n>>>> - postgres version is 9.1.13\n>>>\n>>> The only thing I can think of is some sort of memory exhaustion,\n>>> resulting in swapping out large amounts of memory.\n>>\n>> I'm wondering about the issue addressed by commit fccebe421 (\"Use\n>> SnapshotDirty rather than an active snapshot to probe index\n>> endpoints\"). Now, that was allegedly fixed in 9.1.13 ... but if the\n>> OP were confused and this server were running, say, 9.1.12, that\n>> could be a viable explanation. Another possibly viable explanation\n>> for seeing the issue in 9.1.13 would be if I fat-fingered the\n>> back-patch somehow :-(.\n>\n> How would fccebe421 explain the large amount of random writes (~4MB/s\n> for more than an hour), reported in the initial post? And why would that\n> only affect the EXPLAIN and not the bare query?\n>\n> I guess there might be two sessions, one keeping uncommitted changes\n> (thus invisible tuples), and the other one doing the explain. And the\n> actual query might be executed after the first session does a commit.\n>\n> But the random writes don't really match in this scenario ...\n\nSure they do -- both the index and heap pages may be rewritten with\nhints that the rows are dead. In this email:\n\nhttp://www.postgresql.org/message-id/[email protected]\n\n... Tom said:\n\n| The other thing we might consider doing is using SnapshotAny, which\n| would amount to just taking the extremal index entry at face value.\n| This would be cheap all right (in fact, we might later be able to optimize\n| it to not even visit the heap). However, I don't like the implications\n| for the accuracy of the approximation. 
It seems quite likely that an\n| erroneous transaction could insert a wildly out-of-range index entry\n| and then roll back --- or even if it committed, somebody might soon come\n| along and clean up the bogus entry in a separate transaction. If we use\n| SnapshotAny then we'd believe that bogus entry until it got vacuumed, not\n| just till it got deleted. This is a fairly scary proposition, because\n| the wackier that extremal value is, the more impact it will have on the\n| selectivity estimates.\n|\n| If it's demonstrated that SnapshotDirty doesn't reduce the estimation\n| costs enough to solve the performance problem the complainants are facing,\n| I'd be willing to consider using SnapshotAny here. But I don't want to\n| take that step until it's proven that the more conservative approach\n| doesn't solve the performance problem.\n\nI wonder whether this isn't an example of a case where SnapshotAny\nwould do better. I have definitely seen cases where BIND time on a\nbloated database has exceeded all other PostgreSQL time combined --\nin this planner probe of the index. I haven't seen two hours on a\nsingle EXPLAIN, but I have seen it be several times the execution\ntime.\n\nRegarding the concerns about the accuracy of the estimate and the\neffects on the planner that an incorrect selectivity estimate could\nhave -- clearly scanning a large number of index tuples and chasing\nthem to the heap just to find that the tuples are not visible is\nabout as expensive as if they were visible *for the index scan step\nitself*. 
I wonder whether there would be any way to allow the\nindex scan cost to be based on the work it has to do while somehow\ngenerating a reasonable number for the rows it will find to be\nvisible, so that estimation of other nodes can be based on that.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 7 Mar 2015 01:24:42 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
{
"msg_contents": "Kevin Grittner <[email protected]> writes:\n> Tomas Vondra <[email protected]> wrote:\n>> How would fccebe421 explain the large amount of random writes (~4MB/s\n>> for more than an hour), reported in the initial post? And why would that\n>> only affect the EXPLAIN and not the bare query?\n>> But the random writes don't really match in this scenario ...\n\n> Sure they do -- both the index and heap pages may be rewritten with\n> hints that the rows are dead.\n\nHm ... yeah, this idea could match the symptoms: if the database has built\nup a large debt of dead-but-unhinted rows, then the first pass through the\nindex would cause a write storm but the next would not, which would\nexplain why doing EXPLAIN immediately followed by the real query would put\nall the hint-update burden on the EXPLAIN even though the planning phase\nof the real query would inspect the same index entries.\n\nBut if that's the situation, those hint-updating writes would get done\nsooner or later --- probably sooner, by actual execution of the query\nitself. So I'm not convinced that moving to SnapshotAny would fix\nanything much, only change where the problem manifests.\n\nAlso, it's less than clear why only this particular query is showing\nany stress. Dead rows should be a hazard for anything, especially if\nthere are enough of them to require hours to re-hint. And why wouldn't\nautovacuum get to them first?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 06 Mar 2015 20:38:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
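A rough way to see the random-write point in the exchange above: re-hinting a given set of dead rows dirties the same heap pages either way, but the endpoint probe visits rows in index order, which, for an index uncorrelated with heap position, means the pages are touched in essentially random order, while a seq scan rewrites each page once, in sequence. A hypothetical Python sketch, counting non-adjacent page transitions as a crude proxy for random I/O:

```python
import random

def page_jumps(row_pages):
    """Count transitions that land on a non-adjacent page (random-ish I/O)."""
    jumps = 0
    for prev, cur in zip(row_pages, row_pages[1:]):
        if cur not in (prev, prev + 1):
            jumps += 1
    return jumps

random.seed(42)
heap_order = [r // 100 for r in range(100_000)]  # 1000 pages, 100 rows each
index_order = heap_order[:]
random.shuffle(index_order)      # index key uncorrelated with heap position

seq_jumps = page_jumps(heap_order)    # sequential scan order: never jumps
rand_jumps = page_jumps(index_order)  # nearly every step hits a far-away page
```

Both orders dirty the same 1000 pages, but the shuffled visit order scatters the writes, matching the sustained random-write pattern reported for the EXPLAIN.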
{
"msg_contents": "On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane <[email protected]> wrote:\n\n> Kevin Grittner <[email protected]> writes:\n> > Tomas Vondra <[email protected]> wrote:\n> >> How would fccebe421 explain the large amount of random writes (~4MB/s\n> >> for more than an hour), reported in the initial post? And why would that\n> >> only affect the EXPLAIN and not the bare query?\n> >> But the random writes don't really match in this scenario ...\n>\n> > Sure they do -- both the index and heap pages may be rewritten with\n> > hints that the rows are dead.\n>\n> Hm ... yeah, this idea could match the symptoms: if the database has built\n> up a large debt of dead-but-unhinted rows, then the first pass through the\n> index would cause a write storm but the next would not, which would\n> explain why doing EXPLAIN immediately followed by the real query would put\n> all the hint-update burden on the EXPLAIN even though the planning phase\n> of the real query would inspect the same index entries.\n>\n> But if that's the situation, those hint-updating writes would get done\n> sooner or later --- probably sooner, by actual execution of the query\n> itself. So I'm not convinced that moving to SnapshotAny would fix\n> anything much, only change where the problem manifests.\n>\n\nBut the actual query is using a seq scan, and so it would hint the table in\nefficient sequential order, rather than hinting the table haphazardly in\nindex order like probing the endpoint does.\n\n\n\n\n> Also, it's less than clear why only this particular query is showing\n> any stress. Dead rows should be a hazard for anything, especially if\n> there are enough of them to require hours to re-hint. And why wouldn't\n> autovacuum get to them first?\n>\n\nSay the timing of this query is such that 10% of the parent turns over\nbetween invocations of this query, and that this 10% is all at the end of\nsome index but random over the table heap. 
If autovac kicks in at 20%\nturnover, then half the time autovac does get to them first, and half the time\nit doesn't. It would be interesting to know if this query is bad every\ntime it is planned, or just sometimes.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 6 Mar 2015 18:26:48 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
    "msg_contents": "On 7.3.2015 03:26, Jeff Janes wrote:\n> On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> But the actual query is using a seq scan, and so it would hint the\n> table in efficient sequential order, rather than hinting the table\n> haphazardly in index order like probing the endpoint does.\n\nI think this has nothing to do with the plan itself, but with the\nestimation in the optimizer - that looks up the range from the index in some\ncases, and that may generate random I/O to the table.\n\n> \n>> Also, it's less than clear why only this particular query is\n>> showing any stress. Dead rows should be a hazard for anything,\n>> especially if there are enough of them to require hours to re-hint.\n>> And why wouldn't autovacuum get to them first?\n> \n> \n> Say the timing of this query is such that 10% of the parent turns\n> over between invocations of this query, and that this 10% is all at\n> the end of some index but random over the table heap. If autovac\n> kicks in at 20% turn over, then half the time autovac does get to\n> them first, and half the time it doesn't. It would be interesting to\n> know if this query is bad every time it is planned, or just\n> sometimes.\n\nYeah, this might be the reason. Another possibility is that this is part\nof some large batch, and autovacuum simply did not have a chance to do the\nwork.\n\nregards\n\n-- \nTomas Vondra                http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 07 Mar 2015 16:44:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
{
    "msg_contents": "On Sat, Mar 7, 2015 at 3:44 PM, Tomas Vondra <[email protected]>\nwrote:\n\n> Another possibility is that this is part\n> of some large batch, and autovacuum simply did not have a chance to do the\n> work.\n>\n\nYes, I think that's it: I've just realized that immediately prior to the\nINSERT, in the same transaction, an unfiltered DELETE has been issued; i.e.\nthe whole table is being rewritten. Then the INSERT is issued ... with a\nWHERE clause on non-existence in the (now empty) table.\n\nIn that case of course the WHERE clause is unnecessary, as it will always\nevaluate as true (and we've locked the whole table for writes). Looks like\nit is a lot worse than unnecessary, though, if it triggers this performance\nsnafu in EXPLAIN INSERT.\n\nThis seems very likely to be the explanation here. So our workaround will\nbe to simply omit the WHERE clause in those cases where the full DELETE has\nbeen issued. (And then vacuum afterwards.)\n\n(Even better, just make the new table not temporary, and have it replace\nthe former table altogether. But that's for later; requires some broader\nchanges in our application.)\n\nI'll report back if I *do* see the problem come up again despite this\nchange.\n\nThanks all for your help figuring this out!\n\nBest regards,\n\nGulli\n",
"msg_date": "Wed, 11 Mar 2015 15:54:40 +0000",
"msg_from": "Gunnlaugur Thor Briem <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
"msg_contents": "Gunnlaugur Thor Briem <[email protected]> writes:\n> Yes, I think that's it: I've just realized that immediately prior to the\n> INSERT, in the same transaction, an unfiltered DELETE has been issued; i.e.\n> the whole table is being rewritten. Then the INSERT is issued ... with a\n> WHERE clause on non-existence in the (now empty) table.\n\n> In that case of course the WHERE clause is unnecessary, as it will always\n> evaluate as true (and we've locked the whole table for writes). Looks like\n> it is a lot worse than unnecessary, though, if it triggers this performance\n> snafu in EXPLAIN INSERT.\n\nAh-hah. So what's happening is that the planner is doing an indexscan\nover the entire table of now-dead rows, looking vainly for an undeleted\nmaximal row. Ouch.\n\nI wonder how hard it would be to make the indexscan give up after hitting\nN consecutive dead rows, for some suitable N, maybe ~1000. From the\nplanner's viewpoint it'd be easy enough to fall back to using whatever\nit had in the histogram after all. But that's all happening down inside\nindex_getnext, and I'm hesitant to stick some kind of wart into that\nmachinery for this purpose.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Mar 2015 12:15:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
    "msg_contents": "On Sat, Mar 7, 2015 at 7:44 AM, Tomas Vondra <[email protected]>\nwrote:\n\n> On 7.3.2015 03:26, Jeff Janes wrote:\n> > On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > But the actual query is using a seq scan, and so it would hint the\n> > table in efficient sequential order, rather than hinting the table\n> > haphazardly in index order like probing the endpoint does.\n>\n> I think this has nothing to do with the plan itself, but with the\n> estimation in optimizer - that looks up the range from the index in some\n> cases, and that may generate random I/O to the table.\n>\n\nRight. Tom was saying that the work needs to be done anyway, but it is\njust that some ways of doing the work are far more efficient than others.\nIt just happens that the executed plan in this case would do it more\nefficiently (but in general you aren't going to get any less efficient\nthan having the planner do it in index order).\n\nIn other similar cases I've looked at (for a different reason), the\nexecutor wouldn't do that work at all because the plan it actually chooses\nonly touches a handful of rows. So it is planning a merge join, only to\nrealize how ridiculous one would be and so not use one. But it still pays\nthe price. In that case, the thing that would do the needful, in lieu of\nthe planner, would be a vacuum process. Which is optimal both because it\nis in the background, and is optimized for efficient IO.\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 11 Mar 2015 10:30:01 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
},
{
    "msg_contents": "On 11.3.2015 18:30, Jeff Janes wrote:\n> On Sat, Mar 7, 2015 at 7:44 AM, Tomas Vondra\n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On 7.3.2015 03:26, Jeff Janes wrote:\n> > On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane <[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>> wrote:\n> >\n> > But the actual query is using a seq scan, and so it would hint the\n> > table in efficient sequential order, rather than hinting the table\n> > haphazardly in index order like probing the endpoint does.\n> \n> I think this has nothing to do with the plan itself, but with the\n> estimation in optimizer - that looks up the range from the index in some\n> cases, and that may generate random I/O to the table.\n> \n> \n> Right. Tom was saying that the work needs to be done anyway, but it is\n> just that some ways of doing the work are far more efficient than\n> others. It just happens that the executed plan in this case would do it\n> more efficiently, (but in general you aren't going to get any less\n> efficient than having the planner do it in index order).\n\nOh! Now I see what you meant. I parsed it as if you were suggesting that\nthe theory does not match the symptoms because the plan contains a\nsequential scan yet there's a lot of random I/O. But now I see that's\nnot what you claimed, so sorry for the noise.\n\n\n\n-- \nTomas Vondra                http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Mar 2015 18:55:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
{
"msg_contents": "On 3/11/15 11:15 AM, Tom Lane wrote:\n> Gunnlaugur Thor Briem <[email protected]> writes:\n>> Yes, I think that's it: I've just realized that immediately prior to the\n>> INSERT, in the same transaction, an unfiltered DELETE has been issued; i.e.\n>> the whole table is being rewritten. Then the INSERT is issued ... with a\n>> WHERE clause on non-existence in the (now empty) table.\n>\n>> In that case of course the WHERE clause is unnecessary, as it will always\n>> evaluate as true (and we've locked the whole table for writes). Looks like\n>> it is a lot worse than unnecessary, though, if it triggers this performance\n>> snafu in EXPLAIN INSERT.\n>\n> Ah-hah. So what's happening is that the planner is doing an indexscan\n> over the entire table of now-dead rows, looking vainly for an undeleted\n> maximal row. Ouch.\n>\n> I wonder how hard it would be to make the indexscan give up after hitting\n> N consecutive dead rows, for some suitable N, maybe ~1000. From the\n> planner's viewpoint it'd be easy enough to fall back to using whatever\n> it had in the histogram after all. But that's all happening down inside\n> index_getnext, and I'm hesitant to stick some kind of wart into that\n> machinery for this purpose.\n\nISTM what we really want here is a time-based behavior, not number of \nrows. Given that, could we do the index probe in a subtransaction, set \nan alarm for X ms, and simply abort the subtransaction if the alarm fires?\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:23:11 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
{
"msg_contents": "On 3/11/15 10:54 AM, Gunnlaugur Thor Briem wrote:\n> (Even better, just make the new table not temporary, and have it replace\n> the former table altogether. But that's for later; requires some broader\n> changes in our application.)\n\nThe other thing you should consider is using TRUNCATE instead of an \nun-filtered DELETE. It will both be much faster to perform and won't \nleave any dead rows behind.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:24:22 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM\n SELECT"
},
{
    "msg_contents": "On Mon, Mar 16, 2015 at 7:24 PM, Jim Nasby <[email protected]> wrote:\n\n> The other thing you should consider is using TRUNCATE instead of an\n> un-filtered DELETE. It will both be much faster to perform and won't leave\n> any dead rows behind.\n\n\nYep, but it does take an ACCESS EXCLUSIVE lock. We want the old table\ncontents to be readable to other sessions while the new table contents are\nbeing populated (which can take quite a while), hence we don't use TRUNCATE.\n\nBest of both worlds is to just populate a new table, flip over to that when\nit's ready, and drop the old one once nobody's referring to it anymore.\nThat way we don't pay the DELETE scan penalty and don't leave dead rows,\nand also don't lock reads out while we repopulate.\n\nGulli\n",
"msg_date": "Mon, 16 Mar 2015 21:08:32 +0000",
"msg_from": "Gunnlaugur Thor Briem <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT"
}
] |
[
{
    "msg_contents": "Hello All,\n\nMaster db size 1.5 TB\nAll postgres 9.1.13 installed from RHEL package.\nIt has a streaming replica and a slony replica to other servers.\n\nServer performance is slower than usual; before that, a big query\nwas cancelled, and then performance got slow.\n\nNo sign of IO wait.\n\non sar, %user and %system dominate the cpu usage\n01:25:04 PM     CPU     %user     %nice   %system   %iowait    %steal\n%idle\nAverage:        all     51.91      0.00     12.03      0.66      0.00\n35.39\n\non perf top, i saw\n 18.93%  postgres                 [.] s_lock\n 10.72%  postgres                 [.] _bt_checkkeys\nalmost always at top.\n\nI don't have any idea what's causing it or how to resolve it.\n\nAny answer is very appreciated.\n\n-- \nRegards,\n\nSoni Maula Harriz\n",
"msg_date": "Thu, 5 Mar 2015 02:31:23 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow server : s_lock and _bt_checkkeys on perf top"
},
{
"msg_contents": "On 3/4/15 1:31 PM, Soni M wrote:\n> Hello All,\n>\n> Master db size 1.5 TB\n> All postgres 9.1.13 installed from RHEL package.\n> It has streaming replica and slony replica to another servers.\n>\n> Server performance is slower than usual, before that, there's a big\n> query got cancelled and then performance get slow.\n>\n> No sign of IO wait.\n>\n> on sar, it's %user and %system dominate the cpu usage\n> 01:25:04 PM CPU %user %nice %system %iowait %steal\n> %idle\n> Average: all 51.91 0.00 12.03 0.66 0.00\n> 35.39\n>\n> on perf top, i saw\n> 18.93% postgres [.] s_lock\n> 10.72% postgres [.] _bt_checkkeys\n> almost always at top.\n\n_bt_checkkeys is the function that compares a row in a btree index to a \ncondition. s_lock is a spinlock; the high CPU usage in there indicates \nthere's heavy lock contention somewhere.\n\nIs there one PG process that's clearly using more CPU than the others? \nWhat else is running in the database? Are there any unusual data types \ninvolved?\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Mar 2015 19:29:26 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow server : s_lock and _bt_checkkeys on perf top"
},
{
"msg_contents": "On Wed, Mar 4, 2015 at 1:31 PM, Soni M <[email protected]> wrote:\n> Hello All,\n>\n> Master db size 1.5 TB\n> All postgres 9.1.13 installed from RHEL package.\n> It has streaming replica and slony replica to another servers.\n>\n> Server performance is slower than usual, before that, there's a big query\n> got cancelled and then performance get slow.\n>\n> No sign of IO wait.\n>\n> on sar, it's %user and %system dominate the cpu usage\n> 01:25:04 PM CPU %user %nice %system %iowait %steal\n> %idle\n> Average: all 51.91 0.00 12.03 0.66 0.00\n> 35.39\n>\n> on perf top, i saw\n> 18.93% postgres [.] s_lock\n> 10.72% postgres [.] _bt_checkkeys\n> almost always at top.\n>\n> I don't have any idea, what's causing it or how to resolve it ?\n\nCan you post the entire 'perf top'? do you see (specifically I'm\nwondering if you are bumping against the RecoveryInProgress s_lock\nissue). If so, upgrading postgres might be the best way to resolve\nthe issue.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Mar 2015 08:31:14 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow server : s_lock and _bt_checkkeys on perf top"
},
{
    "msg_contents": "Thanks all for the response; finally we figured it out. The slowness was due to\na high number of dead rows on the main table; repacking these tables wiped out\nthe issue.\nOn Mar 6, 2015 9:31 PM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On Wed, Mar 4, 2015 at 1:31 PM, Soni M <[email protected]> wrote:\n> > Hello All,\n> >\n> > Master db size 1.5 TB\n> > All postgres 9.1.13 installed from RHEL package.\n> > It has streaming replica and slony replica to another servers.\n> >\n> > Server performance is slower than usual, before that, there's a big query\n> > got cancelled and then performance get slow.\n> >\n> > No sign of IO wait.\n> >\n> > on sar, it's %user and %system dominate the cpu usage\n> > 01:25:04 PM     CPU     %user     %nice   %system   %iowait    %steal\n> > %idle\n> > Average:        all     51.91      0.00     12.03      0.66      0.00\n> > 35.39\n> >\n> > on perf top, i saw\n> >  18.93%  postgres                 [.] s_lock\n> >  10.72%  postgres                 [.] _bt_checkkeys\n> > almost always at top.\n> >\n> > I don't have any idea, what's causing it or how to resolve it ?\n>\n> Can you post the entire 'perf top'?  do you see (specifically I'm\n> wondering if you are bumping against the RecoveryInProgress s_lock\n> issue).   If so, upgrading postgres might be the best way to resolve\n> the issue.\n>\n> merlin\n>\n",
"msg_date": "Fri, 6 Mar 2015 23:04:09 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow server : s_lock and _bt_checkkeys on perf top"
}
] |
[
{
    "msg_contents": "Hello,\n\nI wonder if the process of index creation can benefit from other indexes.\n\nE.g.: will creating a partial index with a predicate based on a boolean\ncolumn use a hypothetical index on that boolean column, or always use a seq\nscan on all rows?\n\nThe goal is to create partial indexes faster.\n\nThe Explain command does not work with Create index.\n\nThanks in advance\n\nNicolas PARIS\n",
"msg_date": "Sat, 7 Mar 2015 11:30:23 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE INDEX uses INDEX ?"
},
{
    "msg_contents": "On 7 March 2015 at 11:32, \"Nicolas Paris\" <[email protected]> wrote:\n>\n> Hello,\n>\n> I wonder if the process of index creation can benefit from other indexes.\n>\n\nIt cannot.\n\n> EG: Creating a partial index with predicat based on a boolean column,\nwill use an hypothetic index on that boolean column or always use a seq\nscan on all rows ?\n>\n\nNope, it always does a seqscan.\n\n> Goal is to create partial indexes faster.\n>\n> Explain command does not work with Create index.\n>\n\nYou cannot use EXPLAIN on most DDL commands.\n",
"msg_date": "Sat, 7 Mar 2015 12:56:11 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX uses INDEX ?"
},
{
    "msg_contents": "Thanks.\n\nThen,\nIs it a good idea to run multiple instances of \"create index on tableX\" at\nthe same time? Or is it better to do it sequentially?\nIn other words: can one seq scan on a table benefit multiple create\nindex statements on that table?\n\nNicolas PARIS\n\n2015-03-07 12:56 GMT+01:00 Guillaume Lelarge <[email protected]>:\n\n> On 7 March 2015 at 11:32, \"Nicolas Paris\" <[email protected]> wrote:\n> >\n> > Hello,\n> >\n> > I wonder if the process of index creation can benefit from other indexes.\n> >\n>\n> It cannot.\n>\n> > EG: Creating a partial index with predicat based on a boolean column,\n> will use an hypothetic index on that boolean column or always use a seq\n> scan on all rows ?\n> >\n>\n> Nope, it always does a seqscan.\n>\n> > Goal is to create partial indexes faster.\n> >\n> > Explain command does not work with Create index.\n> >\n>\n> You cannot use EXPLAIN on most DDL commands.\n>\n",
"msg_date": "Sun, 8 Mar 2015 10:04:29 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX uses INDEX ?"
},
{
    "msg_contents": "2015-03-08 10:04 GMT+01:00 Nicolas Paris <[email protected]>:\n\n> Thanks.\n>\n> Then,\n> Is it a good idea to run multiple instance of \"create index on tableX\" on\n> the same time ? Or is it better to do it sequentially ?\n> In other words : Can one seq scan on a table benefit to multiple create\n> index stmt on that table ?\n>\n>\nIt usually is a good idea to parallelize index creation. That's one of the\ngood things that pg_restore has done since the 8.4 release.\n\nNicolas PARIS\n>\n> 2015-03-07 12:56 GMT+01:00 Guillaume Lelarge <[email protected]>:\n>\n>> On 7 March 2015 at 11:32, \"Nicolas Paris\" <[email protected]> wrote:\n>> >\n>> > Hello,\n>> >\n>> > I wonder if the process of index creation can benefit from other\n>> indexes.\n>> >\n>>\n>> It cannot.\n>>\n>> > EG: Creating a partial index with predicat based on a boolean column,\n>> will use an hypothetic index on that boolean column or always use a seq\n>> scan on all rows ?\n>> >\n>>\n>> Nope, it always does a seqscan.\n>>\n>> > Goal is to create partial indexes faster.\n>> >\n>> > Explain command does not work with Create index.\n>> >\n>>\n>> You cannot use EXPLAIN on most DDL commands.\n>>\n>\n>\n\n\n-- \nGuillaume.\n  http://blog.guillaume.lelarge.info\n  http://www.dalibo.com\n",
"msg_date": "Sun, 8 Mar 2015 10:09:15 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX uses INDEX ?"
}
] |
[
{
    "msg_contents": "Currently seeing massive increase in performance when optimizer chooses Hash\nJoin over Nested Loops. I achieve this by temporarily setting nested loops\noff. I'd like to setup some database variables where the optimizer prefers\nhash joins. Any suggestions?\n\n*Query in question:*\nexplain analyze\nselect dp.market_day, dp.hour_ending, dp.repeated_hour_flag,\ndp.settlement_point, sum(dp.mw) dp_mw \nfrom dp_hist_gen_actual dp \n     Inner Join api_settlement_points sp on \n           sp.settlement_point = dp.settlement_point and \n           sp.settlement_point_rdfid =\n'#_{09F3A628-3B9D-481A-AC90-72AF8EAB64CA}' and \n           sp.start_date <= '2015-01-01'::date and \n           sp.end_date > '2015-01-01'::date and \n           sp.rt_model = (select case when c.rt_model_loaded = 2 then true\nelse false end \n                          from cim_calendar c where c.nodal_load <=\n'2015-01-01'::date \n                          order by c.cim desc limit 1) \nwhere dp.market_day BETWEEN '2015-01-01'::date and \n      '2015-01-01'::date and \n      dp.expiry_date is null \ngroup by dp.market_day, dp.hour_ending, dp.repeated_hour_flag,\ndp.settlement_point;\n\n*Nested Loop Explain Analyze Output:*\nHashAggregate  (cost=58369.29..58369.30 rows=1 width=24) (actual\ntime=496287.249..496287.257 rows=24 loops=1)\n  InitPlan 1 (returns $0)\n    ->  Limit  (cost=8.30..8.30 rows=1 width=9) (actual time=0.145..0.145\nrows=1 loops=1)\n          ->  Sort  (cost=8.30..8.78 rows=193 width=9) (actual\ntime=0.145..0.145 rows=1 loops=1)\n                Sort Key: c.cim\n                Sort Method: top-N heapsort  Memory: 25kB\n                ->  Seq Scan on cim_calendar c  (cost=0.00..7.33 rows=193\nwidth=9) (actual time=0.007..0.075 rows=192 loops=1)\n                      Filter: (nodal_load <= '2015-01-01'::date)\n                      Rows Removed by Filter: 36\n  ->  * Nested Loop  (cost=0.99..58360.98 rows=1 width=24) (actual\ntime=883.718..496287.058 rows=24 loops=1)*\n        Join Filter: ((dp.settlement_point)::text =\n(sp.settlement_point)::text)\n        Rows Removed by Join Filter: 12312\n        ->  Index Scan using dp_hist_gen_actual_idx2 on dp_hist_gen_actual\ndp  (cost=0.56..2.78 rows=1 width=24) (actual time=0.020..20.012 rows=12336\nloops=1)\n              Index Cond: ((market_day >= '2015-01-01'::date) AND\n(market_day <= '2015-01-01'::date) AND (expiry_date IS NULL))\n        ->  Index Scan using api_settlement_points_idx on\napi_settlement_points sp  (cost=0.43..58358.05 rows=12 width=9) (actual\ntime=39.066..40.223 rows=1 loops=12336)\n              Index Cond: ((rt_model = $0) AND (start_date <=\n'2015-01-01'::date) AND (end_date > '2015-01-01'::date))\n              Filter: ((settlement_point_rdfid)::text =\n'#_{09F3A628-3B9D-481A-AC90-72AF8EAB64CA}'::text)\n              Rows Removed by Filter: 5298\n*Total runtime: 496287.325 ms*\n\n*Hash Join Explain Analyze Output:*\nHashAggregate  (cost=58369.21..58369.22 rows=1 width=24) (actual\ntime=50.835..50.843 rows=24 loops=1)\n  InitPlan 1 (returns $0)\n    ->  Limit  (cost=8.30..8.30 rows=1 width=9) (actual time=0.149..0.149\nrows=1 loops=1)\n          ->  Sort  (cost=8.30..8.78 rows=193 width=9) (actual\ntime=0.148..0.148 rows=1 loops=1)\n                Sort Key: c.cim\n                Sort Method: top-N heapsort  Memory: 25kB\n                ->  Seq Scan on cim_calendar c  (cost=0.00..7.33 rows=193\nwidth=9) (actual time=0.009..0.082 rows=192 loops=1)\n                      Filter: (nodal_load <= '2015-01-01'::date)\n                      Rows Removed by Filter: 36\n  ->  *Hash Join  (cost=3.23..58360.90 rows=1 width=24) (actual\ntime=49.644..50.811 rows=24 loops=1)*\n        Hash Cond: ((sp.settlement_point)::text =\n(dp.settlement_point)::text)\n        ->  Index Scan using api_settlement_points_idx on\napi_settlement_points sp  (cost=0.43..58358.05 rows=12 width=9) (actual\ntime=39.662..40.822 rows=1 loops=1)\n              Index Cond: ((rt_model = $0) AND (start_date <=\n'2015-01-01'::date) AND (end_date > '2015-01-01'::date))\n              Filter: ((settlement_point_rdfid)::text =\n'#_{09F3A628-3B9D-481A-AC90-72AF8EAB64CA}'::text)\n              Rows Removed by Filter: 5298\n        ->  Hash  (cost=2.78..2.78 rows=1 width=24) (actual\ntime=9.962..9.962 rows=12336 loops=1)\n              Buckets: 1024  Batches: 1  Memory Usage: 684kB\n              ->  Index Scan using dp_hist_gen_actual_idx2 on\ndp_hist_gen_actual dp  (cost=0.56..2.78 rows=1 width=24) (actual\ntime=0.023..5.962 rows=12336 loops=1)\n                    Index Cond: ((market_day >= '2015-01-01'::date) AND\n(market_day <= '2015-01-01'::date) AND (expiry_date IS NULL))\n*Total runtime: 50.906 ms*\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-get-explain-plan-to-prefer-Hash-Join-tp5841450.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Mar 2015 10:01:04 -0700 (MST)",
"msg_from": "atxcanadian <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to get explain plan to prefer Hash Join"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of atxcanadian\n> Sent: Wednesday, March 11, 2015 1:01 PM\n> To: [email protected]\n> Subject: [PERFORM] How to get explain plan to prefer Hash Join\n> \n> Currently seeing massive increase in performance when optimizer chooses\n> Hash Join over Nested Loops. I achieve this by temporarily setting nested loops\n> off. I'd like to setup some database variables where the optimizer prefers hash\n> joins. Any suggestions?\n\nTry making small adjustments to either random_page_cost or cpu_tuple_cost. They can influence the planner's choice here. I have solved similar issues in the past by adjusting one or the other. Be aware though that those changes can have negative effects in other places, so be sure to test.\n\nBrad.\n",
"msg_date": "Wed, 11 Mar 2015 21:11:42 +0000",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
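{
"editor_note": "The cost-parameter tuning suggested above can be tried per session before touching postgresql.conf. A minimal sketch — the numeric values are illustrative assumptions, not recommendations, and SET here only affects the current session:

```sql
-- Raising random_page_cost makes index probes look more expensive,
-- which discourages the index-driven nested-loop plan.
SET random_page_cost = 2.0;   -- server default is 4.0
SET cpu_tuple_cost = 0.02;    -- server default is 0.01; illustrative value
EXPLAIN ANALYZE SELECT ...;   -- re-run the problem query here and compare plans
RESET random_page_cost;
RESET cpu_tuple_cost;

-- The original poster's blunt workaround, likewise session-scoped:
SET enable_nestloop = off;
```
"
},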
{
"msg_contents": "On Wed, Mar 11, 2015 at 10:01 AM, atxcanadian <[email protected]>\nwrote:\n\n> Currently seeing massive increase in performance when optimizer chooses\n> Hash\n> Join over Nested Loops. I achieve this by temporarily setting nested loops\n> off. I'd like to setup some database variables where the optimizer prefers\n> hash joins. Any suggestions?\n>\n>\n\n -> Index Scan using dp_hist_gen_actual_idx2 on dp_hist_gen_actual\n> dp (cost=0.56..2.78 rows=1 width=24) (actual time=0.020..20.012 rows=12336\n> loops=1)\n>\n\nHere it thinks it will find 1 row, but actually finds 12336. That is not\nconducive to good plans. Has the table been analyzed recently?\n\nIndex Cond: ((market_day >= '2015-01-01'::date) AND (market_day <=\n'2015-01-01'::date) AND (expiry_date IS NULL))\n\nIf you query just this one table with just these criteria, what do you get\nfor the row estimates and actual rows, with and without the IS NULL\ncondition?\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 11 Mar 2015 15:33:29 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
{
"msg_contents": "So I implemented two changes.\n\n- Moved random_page_cost from 1.1 to 2.0 \n- Manually ran analyze on all the tables\n\n*Here is the new explain analyze:*\nQUERY PLAN\nHashAggregate (cost=74122.97..74125.53 rows=256 width=24) (actual\ntime=45.205..45.211 rows=24 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=8.30..8.30 rows=1 width=9) (actual time=0.152..0.152\nrows=1 loops=1)\n -> Sort (cost=8.30..8.78 rows=193 width=9) (actual\ntime=0.150..0.150 rows=1 loops=1)\n Sort Key: c.cim\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on cim_calendar c (cost=0.00..7.33 rows=193\nwidth=9) (actual time=0.008..0.085 rows=192 loops=1)\n Filter: (nodal_load <= '2015-01-01'::date)\n Rows Removed by Filter: 36\n -> Nested Loop (cost=22623.47..74111.47 rows=256 width=24) (actual\ntime=43.798..45.181 rows=24 loops=1)\n -> Bitmap Heap Scan on api_settlement_points sp \n(cost=22622.91..67425.92 rows=12 width=9) (actual time=43.756..43.823 rows=1\nloops=1)\n Recheck Cond: ((rt_model = $0) AND (start_date <=\n'2015-01-01'::date) AND (end_date > '2015-01-01'::date))\n Filter: ((settlement_point_rdfid)::text =\n'#_{09F3A628-3B9D-481A-AC90-72AF8EAB64CA}'::text)\n Rows Removed by Filter: 5298\n -> Bitmap Index Scan on api_settlement_points_idx \n(cost=0.00..22622.90 rows=72134 width=0) (actual time=42.998..42.998\nrows=5299 loops=1)\n Index Cond: ((rt_model = $0) AND (start_date <=\n'2015-01-01'::date) AND (end_date > '2015-01-01'::date))\n -> Index Scan using dp_hist_gen_actual_idx2 on dp_hist_gen_actual\ndp (cost=0.56..556.88 rows=25 width=24) (actual time=0.033..1.333 rows=24\nloops=1)\n Index Cond: ((market_day >= '2015-01-01'::date) AND\n(market_day <= '2015-01-01'::date) AND (expiry_date IS NULL) AND\n((settlement_point)::text = (sp.settlement_point)::text))\nTotal runtime: 45.278 ms\n\nI'm a little perplexed why the autovacuum wasn't keeping up. 
Any\nrecommendations for those settings to push it to do a bit more analyzing of\nthe tables??\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-get-explain-plan-to-prefer-Hash-Join-tp5841450p5841520.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 11 Mar 2015 17:35:52 -0700 (MST)",
"msg_from": "atxcanadian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
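{
"editor_note": "If autoanalyze is falling behind on a frequently modified table, per-table storage parameters can lower its trigger threshold. A sketch, assuming the table from this thread — the threshold values are illustrative:

```sql
-- Trigger autoanalyze after ~2% of rows change (global default is 10%),
-- with a small absolute floor on top.
ALTER TABLE dp_hist_gen_actual
  SET (autovacuum_analyze_scale_factor = 0.02,
       autovacuum_analyze_threshold = 500);
```
"
},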
{
"msg_contents": "2015-03-12 1:35 GMT+01:00 atxcanadian <[email protected]>:\n\n> So I implemented two changes.\n>\n> - Moved random_page_cost from 1.1 to 2.0\n>\n\nrandom_page_cost 1 can enforce nested_loop - it is very cheap with it\n\n\n> - Manually ran analyze on all the tables\n>\n> *Here is the new explain analyze:*\n> QUERY PLAN\n> HashAggregate (cost=74122.97..74125.53 rows=256 width=24) (actual\n> time=45.205..45.211 rows=24 loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=8.30..8.30 rows=1 width=9) (actual time=0.152..0.152\n> rows=1 loops=1)\n> -> Sort (cost=8.30..8.78 rows=193 width=9) (actual\n> time=0.150..0.150 rows=1 loops=1)\n> Sort Key: c.cim\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Seq Scan on cim_calendar c (cost=0.00..7.33 rows=193\n> width=9) (actual time=0.008..0.085 rows=192 loops=1)\n> Filter: (nodal_load <= '2015-01-01'::date)\n> Rows Removed by Filter: 36\n> -> Nested Loop (cost=22623.47..74111.47 rows=256 width=24) (actual\n> time=43.798..45.181 rows=24 loops=1)\n> -> Bitmap Heap Scan on api_settlement_points sp\n> (cost=22622.91..67425.92 rows=12 width=9) (actual time=43.756..43.823\n> rows=1\n> loops=1)\n> Recheck Cond: ((rt_model = $0) AND (start_date <=\n> '2015-01-01'::date) AND (end_date > '2015-01-01'::date))\n> Filter: ((settlement_point_rdfid)::text =\n> '#_{09F3A628-3B9D-481A-AC90-72AF8EAB64CA}'::text)\n> Rows Removed by Filter: 5298\n> -> Bitmap Index Scan on api_settlement_points_idx\n> (cost=0.00..22622.90 rows=72134 width=0) (actual time=42.998..42.998\n> rows=5299 loops=1)\n> Index Cond: ((rt_model = $0) AND (start_date <=\n> '2015-01-01'::date) AND (end_date > '2015-01-01'::date))\n> -> Index Scan using dp_hist_gen_actual_idx2 on dp_hist_gen_actual\n> dp (cost=0.56..556.88 rows=25 width=24) (actual time=0.033..1.333 rows=24\n> loops=1)\n> Index Cond: ((market_day >= '2015-01-01'::date) AND\n> (market_day <= '2015-01-01'::date) AND (expiry_date IS NULL) AND\n> ((settlement_point)::text = 
(sp.settlement_point)::text))\n> Total runtime: 45.278 ms\n>\n> I'm a little perplexed why the autovacuum wasn't keeping up. Any\n> recommendations for those settings to push it to do a bit more analyzing of\n> the tables??\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/How-to-get-explain-plan-to-prefer-Hash-Join-tp5841450p5841520.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 12 Mar 2015 06:42:32 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
{
"msg_contents": "Isn't a random_page_cost of 1 a little aggressive?\n\nWe are currently set up on Amazon SSD with software RAID 5. \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-get-explain-plan-to-prefer-Hash-Join-tp5841450p5841605.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 12 Mar 2015 08:45:38 -0700 (MST)",
"msg_from": "atxcanadian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
{
"msg_contents": "On Wed, Mar 11, 2015 at 5:35 PM, atxcanadian <[email protected]>\nwrote:\n\n>\n> I'm a little perplexed why the autovacuum wasn't keeping up. Any\n> recommendations for those settings to push it to do a bit more analyzing of\n> the tables??\n>\n\nWhat does pg_stat_user_tables show for that table?\n",
"msg_date": "Thu, 12 Mar 2015 08:53:46 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
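{
"editor_note": "Jeff's question can be answered with a query along these lines — a sketch; the table name is taken from the thread:

```sql
-- Row counts and the last (auto)vacuum / (auto)analyze timestamps.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'dp_hist_gen_actual';
```
"
},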
{
"msg_contents": "Here is the output:\n\n<http://postgresql.nabble.com/file/n5841610/pg_stat_user_table.jpg> \n\nThis is after I've manually run analyze.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-get-explain-plan-to-prefer-Hash-Join-tp5841450p5841610.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 12 Mar 2015 08:59:34 -0700 (MST)",
"msg_from": "atxcanadian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
{
"msg_contents": "On Thu, Mar 12, 2015 at 8:59 AM, atxcanadian <[email protected]>\nwrote:\n\n> Here is the output:\n>\n> <http://postgresql.nabble.com/file/n5841610/pg_stat_user_table.jpg>\n>\n> This is after I've manually ran analyze.\n>\n\nThe \"last_*\" columns are only showing times, and not full timestamps. Does\nyour reporting tool drop the date part of a timestamp when it is equal to\ntoday? Or does it just drop the date part altogether regardless of what it\nis?\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 12 Mar 2015 09:25:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
},
{
"msg_contents": "Sorry about that, Excel clipped off the dates.\n\n<http://postgresql.nabble.com/file/n5841633/pg_stat_user_table.jpg> \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-get-explain-plan-to-prefer-Hash-Join-tp5841450p5841633.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 12 Mar 2015 10:58:08 -0700 (MST)",
"msg_from": "atxcanadian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get explain plan to prefer Hash Join"
}
] |
[
{
"msg_contents": "Hi Team,\n\n\n\nI am a novice to this territory. We are trying to migrate few jasper\nreports from Netezza to PostgreSQL.\n\n\n\nI have one report ready with me but queries are taking too much time. To be\nhonest, it is not giving any result most of the time.\n\n\n\nThe same query in Netezza is running in less than 2-3 seconds.\n\n\n\n========================================================================================================\n\n\n\nThis is the query :\n\n\n\n\n\nSELECT\n\n COUNT(DISTINCT TARGET_ID)\n\n FROM\n\n S_V_F_PROMOTION_HISTORY_EMAIL PH\n\n INNER JOIN S_V_D_CAMPAIGN_HIERARCHY CH\n\n ON PH.TOUCHPOINT_EXECUTION_ID =\nCH.TOUCHPOINT_EXECUTION_ID\n\n WHERE\n\n 1=1\n\n AND SEND_DT >= '2014-03-13'\n\n AND SEND_DT <= '2015-03-14'\n\n\n\nStatistics:\n\n\n\nSelect Count(1) from S_V_F_PROMOTION_HISTORY_EMAIL\n\n4559289\n\nTime: 16781.409 ms\n\n\n\nSelect count(1) from S_V_D_CAMPAIGN_HIERARCHY;\n\ncount\n\n-------\n\n45360\n\n(1 row)\n\n\n\nTime: 467869.185 ms\n\n==================================================================\n\nEXPLAIN PLAN FOR QUERY:\n\n\n\n\"Aggregate (cost=356422.36..356422.37 rows=1 width=8)\"\n\n\" Output: count(DISTINCT base.target_id)\"\n\n\" -> Nested Loop (cost=68762.23..356422.36 rows=1 width=8)\"\n\n\" Output: base.target_id\"\n\n\" Join Filter: (base.touchpoint_execution_id =\ntp_exec.touchpoint_execution_id)\"\n\n\" -> Nested Loop (cost=33927.73..38232.16 rows=1 width=894)\"\n\n\" Output: camp.campaign_id, camp.campaign_name,\ncamp.initiative, camp.objective, camp.category_id,\n\"CATEGORY\".category_name, camp_exec.campaign_execution_id,\ncamp_exec.campaign_execution_name, camp_exec.group_id, grup.group_name,\ncamp_exec.star (...)\"\n\n\" Join Filter: (tp_exec.touchpoint_execution_id =\nvalid_executions.touchpoint_execution_id)\"\n\n\" CTE valid_executions\"\n\n\" -> Merge Join (cost=30420.45..31971.94 rows=1 width=8)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id\"\n\n\" 
Merge Cond:\n((s_f_touchpoint_execution_status_history_2.touchpoint_execution_id =\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id) AND\n(s_f_touchpoint_execution_status_history_2.creation_dt =\n(max(s_f_touchpoint_ex (...)\"\n\n\" -> Sort (cost=17196.30..17539.17 rows=137149\nwidth=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_2.creation_dt\"\n\n\" Sort Key:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_2.creation_dt\"\n\n\" -> Seq Scan on\npublic.s_f_touchpoint_execution_status_history\ns_f_touchpoint_execution_status_history_2 (cost=0.00..5493.80 rows=137149\nwidth=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_2.creation_dt\"\n\n\" Filter:\n(s_f_touchpoint_execution_status_history_2.touchpoint_execution_status_type_id\n= ANY ('{3,4}'::integer[]))\"\n\n\" -> Sort (cost=13224.15..13398.43 rows=69715\nwidth=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n(max(s_f_touchpoint_execution_status_history_1_1.creation_dt))\"\n\n\" Sort Key:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n(max(s_f_touchpoint_execution_status_history_1_1.creation_dt))\"\n\n\" -> HashAggregate (cost=6221.56..6918.71\nrows=69715 width=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\nmax(s_f_touchpoint_execution_status_history_1_1.creation_dt)\"\n\n\" Group Key:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id\"\n\n\" -> Seq Scan on\npublic.s_f_touchpoint_execution_status_history\ns_f_touchpoint_execution_status_history_1_1 (cost=0.00..4766.04\nrows=291104 width=16)\"\n\n\" 
Output:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_status_type_id,\ns_f_touchpoint_execution_status_history_1_1.status_message (...)\"\n\n\" -> Nested Loop Left Join (cost=1955.80..6260.19 rows=1\nwidth=894)\"\n\n\" Output: tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\nwave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, camp_exec.campaign_execution_id, camp_exec.campaign_\n(...)\"\n\n\" -> Nested Loop (cost=1955.67..6260.04 rows=1\nwidth=776)\"\n\n\" Output: tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\nwave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, camp_exec.campaign_execution_id, camp_exec.cam (...)\"\n\n\" -> Nested Loop Left Join\n(cost=1955.54..6259.87 rows=1 width=658)\"\n\n\" Output: tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\nwave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, camp_exec.campaign_execution_id, camp_ex (...)\"\n\n\" -> Nested Loop Left Join\n(cost=1955.40..6259.71 rows=1 width=340)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt,\ncamp_exec.campaign_execution_id, c (...)\"\n\n\" -> Nested Loop Left Join\n(cost=1955.27..6259.55 rows=1 width=222)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt,\ncamp_exec.campaign_execution (...)\"\n\n\" -> Nested Loop\n(cost=1954.99..6259.24 rows=1 width=197)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, 
wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, camp_exec.campaign_exe\n(...)\"\n\n\" -> Nested Loop\n(cost=1954.71..6258.92 rows=1 width=173)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, camp_exec.campai (...)\"\n\n\" Join Filter:\n(camp_exec.campaign_id = wave.campaign_id)\"\n\n\" -> Nested Loop\n(cost=1954.42..6254.67 rows=13 width=167)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, wave_exec. (...)\"\n\n\" -> Hash\nJoin (cost=1954.13..6249.67 rows=13 width=108)\"\n\n\"\nOutput: tp_exec.touchpoint_execution_id, tp_exec.start_dt,\ntp_exec.message_type_id, tp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, wave (...)\"\n\n\" Hash\nCond: ((tp_exec.touchpoint_id = tp.touchpoint_id) AND (wave_exec.wave_id =\ntp.wave_id))\"\n\n\" ->\nHash Join (cost=1576.83..4595.51 rows=72956 width=90)\"\n\n\"\n Output: tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\ntp_exec.touchpoint_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_n (...)\"\n\n\"\n Hash Cond:\n(tp_exec.wave_execution_id = wave_exec.wave_execution_id)\"\n\n\"\n-> Seq Scan on public.s_d_touchpoint_execution tp_exec\n(cost=0.00..1559.56 rows=72956 width=42)\"\n\n\"\nOutput: tp_exec.touchpoint_execution_id, tp_exec.wave_execution_id,\ntp_exec.touchpoint_id, tp_exec.channel_type_id, tp_exec.content_id,\ntp_exec.message_type_id, tp_exec.start_d (...)\"\n\n\"\n-> Hash (cost=1001.37..1001.37 rows=46037 width=56)\"\n\n\"\nOutput: wave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, wave_exec.campaign_execution_id, wave_exec.wave_id\"\n\n\"\n-> Seq Scan on 
public.s_d_wave_execution wave_exec (cost=0.00..1001.37\nrows=46037 width=56)\"\n\n\"\nOutput: wave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, wave_exec.campaign_execution_id, wave_exec.wave_id\"\n\n\" ->\nHash (cost=212.72..212.72 rows=10972 width=26)\"\n\n\"\nOutput: tp.touchpoint_id, tp.touchpoint_name, tp.wave_id,\ntp.channel_type_id\"\n\n\"\n-> Seq Scan on public.s_d_touchpoint tp (cost=0.00..212.72 rows=10972\nwidth=26)\"\n\n\"\nOutput: tp.touchpoint_id, tp.touchpoint_name, tp.wave_id,\ntp.channel_type_id\"\n\n\" -> Index\nScan using s_d_campaign_execution_idx on public.s_d_campaign_execution\ncamp_exec (cost=0.29..0.37 rows=1 width=67)\"\n\n\"\nOutput: camp_exec.campaign_execution_id, camp_exec.campaign_id,\ncamp_exec.group_id, camp_exec.campaign_execution_name, camp_exec.start_dt,\ncamp_exec.creation_dt\"\n\n\" Index\nCond: (camp_exec.campaign_execution_id = wave_exec.campaign_execution_id)\"\n\n\" -> Index Scan\nusing s_d_wave_pkey on public.s_d_wave wave (cost=0.29..0.31 rows=1\nwidth=22)\"\n\n\" Output:\nwave.wave_id, wave.campaign_id, wave.wave_name, wave.creation_dt,\nwave.modified_dt\"\n\n\" Index Cond:\n(wave.wave_id = wave_exec.wave_id)\"\n\n\" -> Index Scan using\ns_d_campaign_pkey on public.s_d_campaign camp (cost=0.29..0.32 rows=1\nwidth=40)\"\n\n\" Output:\ncamp.campaign_id, camp.campaign_name, camp.objective, camp.initiative,\ncamp.category_id, camp.creation_dt, camp.modified_dt\"\n\n\" Index Cond:\n(camp.campaign_id = camp_exec.campaign_id)\"\n\n\" -> Index Scan using\ns_d_content_pkey on public.s_d_content content (cost=0.28..0.30 rows=1\nwidth=33)\"\n\n\" Output:\ncontent.content_id, content.content_name, content.creation_dt,\ncontent.channel_type_id, content.modified_dt\"\n\n\" Index Cond:\n(tp_exec.content_id = content.content_id)\"\n\n\" -> Index Scan using\ns_d_message_type_pkey on public.s_d_message_type message_type\n(cost=0.13..0.15 rows=1 width=120)\"\n\n\" Output:\nmessage_type.message_type_id, 
message_type.message_type_name,\nmessage_type.creation_dt, message_type.modified_dt\"\n\n\" Index Cond:\n(tp_exec.message_type_id = message_type.message_type_id)\"\n\n\" -> Index Scan using s_d_group_pkey on\npublic.s_d_group grup (cost=0.13..0.15 rows=1 width=320)\"\n\n\" Output: grup.group_id,\ngrup.group_name, grup.creation_dt, grup.modified_dt\"\n\n\" Index Cond: (camp_exec.group_id =\ngrup.group_id)\"\n\n\" -> Index Scan using d_channel_pk on\npublic.s_d_channel_type channel (cost=0.13..0.15 rows=1 width=120)\"\n\n\" Output: channel.channel_type_id,\nchannel.channel_type_name\"\n\n\" Index Cond: (channel.channel_type_id =\ntp.channel_type_id)\"\n\n\" -> Index Scan using s_d_category_pkey on\npublic.s_d_category \"CATEGORY\" (cost=0.13..0.15 rows=1 width=120)\"\n\n\" Output: \"CATEGORY\".category_id,\n\"CATEGORY\".category_name, \"CATEGORY\".creation_dt, \"CATEGORY\".modified_dt\"\n\n\" Index Cond: (camp.category_id =\n\"CATEGORY\".category_id)\"\n\n\" -> CTE Scan on valid_executions (cost=0.00..0.02 rows=1\nwidth=8)\"\n\n\" Output: valid_executions.touchpoint_execution_id\"\n\n\" -> Nested Loop Left Join (cost=34834.49..318190.14 rows=2\nwidth=148)\"\n\n\" Output: base.promo_hist_id, base.audience_member_id,\nbase.target_id, base.touchpoint_execution_id, base.contact_group_id,\nbase.content_version_execution_id, base.sent_ind, CASE WHEN\n(email.sbounce_ind IS NOT NULL) THEN (email.sbounce_ind)::in (...)\"\n\n\" CTE valid_executions\"\n\n\" -> Nested Loop (cost=33089.13..34834.20 rows=1 width=8)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history.touchpoint_execution_id\"\n\n\" -> Nested Loop (cost=33088.84..34833.88 rows=1\nwidth=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history.touchpoint_execution_id,\ntpe.touchpoint_id\"\n\n\" -> Unique (cost=33088.42..34825.42 rows=1\nwidth=8)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history.touchpoint_execution_id\"\n\n\" -> Merge Join (cost=33088.42..34825.42\nrows=1 width=8)\"\n\n\" 
Output:\ns_f_touchpoint_execution_status_history.touchpoint_execution_id\"\n\n\" Merge Cond:\n((s_f_touchpoint_execution_status_history.touchpoint_execution_id =\ns_f_touchpoint_execution_status_history_1.touchpoint_execution_id) AND\n(s_f_touchpoint_execution_status_history.creation_dt = (max(s_f_t (...)\"\n\n\" -> Sort (cost=19864.28..20268.98\nrows=161883 width=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history.creation_dt\"\n\n\" Sort Key:\ns_f_touchpoint_execution_status_history.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history.creation_dt\"\n\n\" -> Seq Scan on\npublic.s_f_touchpoint_execution_status_history (cost=0.00..5857.68\nrows=161883 width=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history.creation_dt\"\n\n\" Filter:\n(s_f_touchpoint_execution_status_history.touchpoint_execution_status_type_id\n= ANY ('{3,4,6}'::integer[]))\"\n\n\" -> Sort (cost=13224.15..13398.43\nrows=69715 width=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\n(max(s_f_touchpoint_execution_status_history_1.creation_dt))\"\n\n\" Sort Key:\ns_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\n(max(s_f_touchpoint_execution_status_history_1.creation_dt))\"\n\n\" -> HashAggregate\n(cost=6221.56..6918.71 rows=69715 width=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\nmax(s_f_touchpoint_execution_status_history_1.creation_dt)\"\n\n\" Group Key:\ns_f_touchpoint_execution_status_history_1.touchpoint_execution_id\"\n\n\" -> Seq Scan on\npublic.s_f_touchpoint_execution_status_history\ns_f_touchpoint_execution_status_history_1 (cost=0.00..4766.04 rows=291104\nwidth=16)\"\n\n\" 
Output:\ns_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_1.touchpoint_execution_status_type_id,\ns_f_touchpoint_execution_status_history_1.st (...)\"\n\n\" -> Index Scan using\ns_d_touchpoint_execution_pkey on public.s_d_touchpoint_execution tpe\n(cost=0.42..8.44 rows=1 width=16)\"\n\n\" Output: tpe.touchpoint_execution_id,\ntpe.wave_execution_id, tpe.touchpoint_id, tpe.channel_type_id,\ntpe.content_id, tpe.message_type_id, tpe.start_dt, tpe.creation_dt\"\n\n\" Index Cond: (tpe.touchpoint_execution_id\n= s_f_touchpoint_execution_status_history.touchpoint_execution_id)\"\n\n\" -> Index Only Scan using s_d_touchpoint_pkey on\npublic.s_d_touchpoint tp_1 (cost=0.29..0.32 rows=1 width=8)\"\n\n\" Output: tp_1.touchpoint_id,\ntp_1.channel_type_id\"\n\n\" Index Cond: ((tp_1.touchpoint_id =\ntpe.touchpoint_id) AND (tp_1.channel_type_id = 1))\"\n\n\" -> Nested Loop (cost=0.00..283350.22 rows=2 width=74)\"\n\n\" Output: base.promo_hist_id, base.audience_member_id,\nbase.target_id, base.touchpoint_execution_id, base.contact_group_id,\nbase.content_version_execution_id, base.sent_ind, base.send_dt,\nbase.creation_dt, base.modified_dt\"\n\n\" Join Filter: (base.touchpoint_execution_id =\nvalid_executions_1.touchpoint_execution_id)\"\n\n\" -> CTE Scan on valid_executions valid_executions_1\n(cost=0.00..0.02 rows=1 width=8)\"\n\n\" Output:\nvalid_executions_1.touchpoint_execution_id\"\n\n\" -> Seq Scan on public.s_f_promotion_history base\n(cost=0.00..283334.00 rows=1296 width=74)\"\n\n\" Output: base.promo_hist_id, base.target_id,\nbase.audience_member_id, base.touchpoint_execution_id,\nbase.contact_group_id, base.content_version_execution_id, base.sent_ind,\nbase.send_dt, base.creation_dt, base.modified_dt\"\n\n\" Filter: ((base.send_dt >= '2014-03-13\n00:00:00'::timestamp without time zone) AND (base.send_dt <= '2015-03-14\n00:00:00'::timestamp without time zone))\"\n\n\" -> Index Scan using 
s_f_promotion_history_email_pk1 on\npublic.s_f_promotion_history_email email (cost=0.29..2.83 rows=1 width=90)\"\n\n\" Output: email.promo_hist_id, email.target_id,\nemail.audience_member_id, email.touchpoint_execution_id,\nemail.contact_group_id, email.sbounce_ind, email.hbounce_ind,\nemail.opened_ind, email.clicked_ind, email.unsubscribe_ind, email.unsub\n(...)\"\n\n\" Index Cond: (base.promo_hist_id = email.promo_hist_id)\"\n\n\" Filter: (base.audience_member_id =\nemail.audience_member_id)\"\n\n\n\n=================================================================================================\n\nQuestions here are:\n\n\n\nIs the query written correctly for PostgreSQL?\n\nAm I missing anything here?\n\n\n\nTotal Memory : 8 GB\n\nshared_buffers = 2GB\n\nwork_mem = 64MB\n\nmaintenance_work_mem = 700MB\n\neffective_cache_size = 4GB\n\nAny kind of help is appreciated.\n\n\n\nWarm Regards,\n\n\nVivekanand Joshi\n+919654227927\n\n\n\n[image: Zeta Interactive]\n\n185 Madison Ave. New York, NY 10016\n\nwww.zetainteractive.com",
"msg_date": "Fri, 13 Mar 2015 17:14:25 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issues"
},
{
"msg_contents": "Hi Vivekanand,\n\n From the query plan, we can see that a good amount of time is spent in\nthis line\n\n-> Seq Scan on public.s_f_promotion_history base (cost=0.00..283334.00\nrows=1296 width=74)\n\n Output: base.promo_hist_id, base.target_id,\nbase.audience_member_id, base.touchpoint_execution_id,\nbase.contact_group_id, base.content_version_execution_id, base.sent_ind,\nbase.send_dt, base.creation_dt, base.modified_dt\"\nFilter: ((base.send_dt >= '2014-03-13 00:00:00'::timestamp without time\nzone) AND (base.send_dt <= '2015-03-14 00:00:00'::timestamp without time\nzone))\"\n\n\n\nCan you try creating a (partial) index based on the filter fields? (Good\ntutorial @ https://devcenter.heroku.com/articles/postgresql-indexes). Did\nyou try doing a VACUUM ANALYZE? Another approach worth trying is\npartitioning the public.s_f_promotion_history table by date (BTW, what is\nthe size and number of rows in this table?).\n\nOn Fri, Mar 13, 2015 at 12:44 PM, Vivekanand Joshi <\[email protected]> wrote:\n\n> Hi Team,\n>\n>\n>\n> I am a novice to this territory. We are trying to migrate few jasper\n> reports from Netezza to PostgreSQL.\n>\n>\n>\n> I have one report ready with me but queries are taking too much time. 
To\n> be honest, it is not giving any result most of the time.\n>\n>\n>\n> The same query in Netezza is running in less than 2-3 seconds.\n>\n>\n>\n>\n> ========================================================================================================\n>\n>\n>\n> This is the query :\n>\n>\n>\n>\n>\n> SELECT\n>\n> COUNT(DISTINCT TARGET_ID)\n>\n> FROM\n>\n> S_V_F_PROMOTION_HISTORY_EMAIL PH\n>\n> INNER JOIN S_V_D_CAMPAIGN_HIERARCHY CH\n>\n> ON PH.TOUCHPOINT_EXECUTION_ID =\n> CH.TOUCHPOINT_EXECUTION_ID\n>\n> WHERE\n>\n> 1=1\n>\n> AND SEND_DT >= '2014-03-13'\n>\n> AND SEND_DT <= '2015-03-14'\n>\n>\n>\n> Statistics:\n>\n>\n>\n> Select Count(1) from S_V_F_PROMOTION_HISTORY_EMAIL\n>\n> 4559289\n>\n> Time: 16781.409 ms\n>\n>\n>\n> Select count(1) from S_V_D_CAMPAIGN_HIERARCHY;\n>\n> count\n>\n> -------\n>\n> 45360\n>\n> (1 row)\n>\n>\n>\n> Time: 467869.185 ms\n>\n> ==================================================================\n>\n> EXPLAIN PLAN FOR QUERY:\n>\n>\n>\n> \"Aggregate (cost=356422.36..356422.37 rows=1 width=8)\"\n>\n> \" Output: count(DISTINCT base.target_id)\"\n>\n> \" -> Nested Loop (cost=68762.23..356422.36 rows=1 width=8)\"\n>\n> \" Output: base.target_id\"\n>\n> \" Join Filter: (base.touchpoint_execution_id =\n> tp_exec.touchpoint_execution_id)\"\n>\n> \" -> Nested Loop (cost=33927.73..38232.16 rows=1 width=894)\"\n>\n> \" Output: camp.campaign_id, camp.campaign_name,\n> camp.initiative, camp.objective, camp.category_id,\n> \"CATEGORY\".category_name, camp_exec.campaign_execution_id,\n> camp_exec.campaign_execution_name, camp_exec.group_id, grup.group_name,\n> camp_exec.star (...)\"\n>\n> \" Join Filter: (tp_exec.touchpoint_execution_id =\n> valid_executions.touchpoint_execution_id)\"\n>\n> \" CTE valid_executions\"\n>\n> \" -> Merge Join (cost=30420.45..31971.94 rows=1 width=8)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history_2.touchpoint_execution_id\"\n>\n> \" Merge Cond:\n> 
((s_f_touchpoint_execution_status_history_2.touchpoint_execution_id =\n> s_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id) AND\n> (s_f_touchpoint_execution_status_history_2.creation_dt =\n> (max(s_f_touchpoint_ex (...)\"\n>\n> \" -> Sort (cost=17196.30..17539.17 rows=137149\n> width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history_2.creation_dt\"\n>\n> \" Sort Key:\n> s_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history_2.creation_dt\"\n>\n> \" -> Seq Scan on\n> public.s_f_touchpoint_execution_status_history\n> s_f_touchpoint_execution_status_history_2 (cost=0.00..5493.80 rows=137149\n> width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history_2.creation_dt\"\n>\n> \" Filter:\n> (s_f_touchpoint_execution_status_history_2.touchpoint_execution_status_type_id\n> = ANY ('{3,4}'::integer[]))\"\n>\n> \" -> Sort (cost=13224.15..13398.43 rows=69715\n> width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n> (max(s_f_touchpoint_execution_status_history_1_1.creation_dt))\"\n>\n> \" Sort Key:\n> s_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n> (max(s_f_touchpoint_execution_status_history_1_1.creation_dt))\"\n>\n> \" -> HashAggregate (cost=6221.56..6918.71\n> rows=69715 width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n> max(s_f_touchpoint_execution_status_history_1_1.creation_dt)\"\n>\n> \" Group Key:\n> s_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id\"\n>\n> \" -> Seq Scan on\n> public.s_f_touchpoint_execution_status_history\n> s_f_touchpoint_execution_status_history_1_1 (cost=0.00..4766.04\n> rows=291104 width=16)\"\n>\n> \" Output:\n> 
s_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history_1_1.touchpoint_execution_status_type_id,\n> s_f_touchpoint_execution_status_history_1_1.status_message (...)\"\n>\n> \" -> Nested Loop Left Join (cost=1955.80..6260.19 rows=1\n> width=894)\"\n>\n> \" Output: tp_exec.touchpoint_execution_id,\n> tp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\n> wave_exec.wave_execution_id, wave_exec.wave_execution_name,\n> wave_exec.start_dt, camp_exec.campaign_execution_id, camp_exec.campaign_\n> (...)\"\n>\n> \" -> Nested Loop (cost=1955.67..6260.04 rows=1\n> width=776)\"\n>\n> \" Output: tp_exec.touchpoint_execution_id,\n> tp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\n> wave_exec.wave_execution_id, wave_exec.wave_execution_name,\n> wave_exec.start_dt, camp_exec.campaign_execution_id, camp_exec.cam (...)\"\n>\n> \" -> Nested Loop Left Join\n> (cost=1955.54..6259.87 rows=1 width=658)\"\n>\n> \" Output: tp_exec.touchpoint_execution_id,\n> tp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\n> wave_exec.wave_execution_id, wave_exec.wave_execution_name,\n> wave_exec.start_dt, camp_exec.campaign_execution_id, camp_ex (...)\"\n>\n> \" -> Nested Loop Left Join\n> (cost=1955.40..6259.71 rows=1 width=340)\"\n>\n> \" Output:\n> tp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\n> tp_exec.content_id, wave_exec.wave_execution_id,\n> wave_exec.wave_execution_name, wave_exec.start_dt,\n> camp_exec.campaign_execution_id, c (...)\"\n>\n> \" -> Nested Loop Left Join\n> (cost=1955.27..6259.55 rows=1 width=222)\"\n>\n> \" Output:\n> tp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\n> tp_exec.content_id, wave_exec.wave_execution_id,\n> wave_exec.wave_execution_name, wave_exec.start_dt,\n> camp_exec.campaign_execution (...)\"\n>\n> \" -> Nested Loop\n> (cost=1954.99..6259.24 rows=1 width=197)\"\n>\n> \" Output:\n> tp_exec.touchpoint_execution_id, 
tp_exec.start_dt, tp_exec.message_type_id,\n> tp_exec.content_id, wave_exec.wave_execution_id,\n> wave_exec.wave_execution_name, wave_exec.start_dt, camp_exec.campaign_exe\n> (...)\"\n>\n> \" -> Nested Loop\n> (cost=1954.71..6258.92 rows=1 width=173)\"\n>\n> \" Output:\n> tp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\n> tp_exec.content_id, wave_exec.wave_execution_id,\n> wave_exec.wave_execution_name, wave_exec.start_dt, camp_exec.campai (...)\"\n>\n> \" Join Filter:\n> (camp_exec.campaign_id = wave.campaign_id)\"\n>\n> \" -> Nested Loop\n> (cost=1954.42..6254.67 rows=13 width=167)\"\n>\n> \" Output:\n> tp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\n> tp_exec.content_id, wave_exec.wave_execution_id,\n> wave_exec.wave_execution_name, wave_exec.start_dt, wave_exec. (...)\"\n>\n> \" -> Hash\n> Join (cost=1954.13..6249.67 rows=13 width=108)\"\n>\n> \"\n> Output: tp_exec.touchpoint_execution_id, tp_exec.start_dt,\n> tp_exec.message_type_id, tp_exec.content_id, wave_exec.wave_execution_id,\n> wave_exec.wave_execution_name, wave_exec.start_dt, wave (...)\"\n>\n> \" Hash\n> Cond: ((tp_exec.touchpoint_id = tp.touchpoint_id) AND (wave_exec.wave_id =\n> tp.wave_id))\"\n>\n> \" ->\n> Hash Join (cost=1576.83..4595.51 rows=72956 width=90)\"\n>\n> \"\n> Output: tp_exec.touchpoint_execution_id,\n> tp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\n> tp_exec.touchpoint_id, wave_exec.wave_execution_id,\n> wave_exec.wave_execution_n (...)\"\n>\n> \"\n> Hash Cond:\n> (tp_exec.wave_execution_id = wave_exec.wave_execution_id)\"\n>\n> \"\n> -> Seq Scan on public.s_d_touchpoint_execution tp_exec\n> (cost=0.00..1559.56 rows=72956 width=42)\"\n>\n> \"\n> Output: tp_exec.touchpoint_execution_id, tp_exec.wave_execution_id,\n> tp_exec.touchpoint_id, tp_exec.channel_type_id, tp_exec.content_id,\n> tp_exec.message_type_id, tp_exec.start_d (...)\"\n>\n> \"\n> -> Hash (cost=1001.37..1001.37 rows=46037 width=56)\"\n>\n> \"\n> 
Output: wave_exec.wave_execution_id, wave_exec.wave_execution_name,\n> wave_exec.start_dt, wave_exec.campaign_execution_id, wave_exec.wave_id\"\n>\n> \"\n> -> Seq Scan on public.s_d_wave_execution wave_exec (cost=0.00..1001.37\n> rows=46037 width=56)\"\n>\n> \"\n> Output: wave_exec.wave_execution_id, wave_exec.wave_execution_name,\n> wave_exec.start_dt, wave_exec.campaign_execution_id, wave_exec.wave_id\"\n>\n> \" ->\n> Hash (cost=212.72..212.72 rows=10972 width=26)\"\n>\n> \"\n> Output: tp.touchpoint_id, tp.touchpoint_name, tp.wave_id,\n> tp.channel_type_id\"\n>\n> \"\n> -> Seq Scan on public.s_d_touchpoint tp (cost=0.00..212.72 rows=10972\n> width=26)\"\n>\n> \"\n> Output: tp.touchpoint_id, tp.touchpoint_name, tp.wave_id,\n> tp.channel_type_id\"\n>\n> \" -> Index\n> Scan using s_d_campaign_execution_idx on public.s_d_campaign_execution\n> camp_exec (cost=0.29..0.37 rows=1 width=67)\"\n>\n> \"\n> Output: camp_exec.campaign_execution_id, camp_exec.campaign_id,\n> camp_exec.group_id, camp_exec.campaign_execution_name, camp_exec.start_dt,\n> camp_exec.creation_dt\"\n>\n> \" Index\n> Cond: (camp_exec.campaign_execution_id = wave_exec.campaign_execution_id)\"\n>\n> \" -> Index Scan\n> using s_d_wave_pkey on public.s_d_wave wave (cost=0.29..0.31 rows=1\n> width=22)\"\n>\n> \" Output:\n> wave.wave_id, wave.campaign_id, wave.wave_name, wave.creation_dt,\n> wave.modified_dt\"\n>\n> \" Index Cond:\n> (wave.wave_id = wave_exec.wave_id)\"\n>\n> \" -> Index Scan using\n> s_d_campaign_pkey on public.s_d_campaign camp (cost=0.29..0.32 rows=1\n> width=40)\"\n>\n> \" Output:\n> camp.campaign_id, camp.campaign_name, camp.objective, camp.initiative,\n> camp.category_id, camp.creation_dt, camp.modified_dt\"\n>\n> \" Index Cond:\n> (camp.campaign_id = camp_exec.campaign_id)\"\n>\n> \" -> Index Scan using\n> s_d_content_pkey on public.s_d_content content (cost=0.28..0.30 rows=1\n> width=33)\"\n>\n> \" Output:\n> content.content_id, content.content_name, content.creation_dt,\n> 
content.channel_type_id, content.modified_dt\"\n>\n> \" Index Cond:\n> (tp_exec.content_id = content.content_id)\"\n>\n> \" -> Index Scan using\n> s_d_message_type_pkey on public.s_d_message_type message_type\n> (cost=0.13..0.15 rows=1 width=120)\"\n>\n> \" Output:\n> message_type.message_type_id, message_type.message_type_name,\n> message_type.creation_dt, message_type.modified_dt\"\n>\n> \" Index Cond:\n> (tp_exec.message_type_id = message_type.message_type_id)\"\n>\n> \" -> Index Scan using s_d_group_pkey on\n> public.s_d_group grup (cost=0.13..0.15 rows=1 width=320)\"\n>\n> \" Output: grup.group_id,\n> grup.group_name, grup.creation_dt, grup.modified_dt\"\n>\n> \" Index Cond: (camp_exec.group_id =\n> grup.group_id)\"\n>\n> \" -> Index Scan using d_channel_pk on\n> public.s_d_channel_type channel (cost=0.13..0.15 rows=1 width=120)\"\n>\n> \" Output: channel.channel_type_id,\n> channel.channel_type_name\"\n>\n> \" Index Cond: (channel.channel_type_id =\n> tp.channel_type_id)\"\n>\n> \" -> Index Scan using s_d_category_pkey on\n> public.s_d_category \"CATEGORY\" (cost=0.13..0.15 rows=1 width=120)\"\n>\n> \" Output: \"CATEGORY\".category_id,\n> \"CATEGORY\".category_name, \"CATEGORY\".creation_dt, \"CATEGORY\".modified_dt\"\n>\n> \" Index Cond: (camp.category_id =\n> \"CATEGORY\".category_id)\"\n>\n> \" -> CTE Scan on valid_executions (cost=0.00..0.02 rows=1\n> width=8)\"\n>\n> \" Output: valid_executions.touchpoint_execution_id\"\n>\n> \" -> Nested Loop Left Join (cost=34834.49..318190.14 rows=2\n> width=148)\"\n>\n> \" Output: base.promo_hist_id, base.audience_member_id,\n> base.target_id, base.touchpoint_execution_id, base.contact_group_id,\n> base.content_version_execution_id, base.sent_ind, CASE WHEN\n> (email.sbounce_ind IS NOT NULL) THEN (email.sbounce_ind)::in (...)\"\n>\n> \" CTE valid_executions\"\n>\n> \" -> Nested Loop (cost=33089.13..34834.20 rows=1 width=8)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id\"\n>\n> 
\" -> Nested Loop (cost=33088.84..34833.88 rows=1\n> width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id,\n> tpe.touchpoint_id\"\n>\n> \" -> Unique (cost=33088.42..34825.42 rows=1\n> width=8)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id\"\n>\n> \" -> Merge Join\n> (cost=33088.42..34825.42 rows=1 width=8)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id\"\n>\n> \" Merge Cond:\n> ((s_f_touchpoint_execution_status_history.touchpoint_execution_id =\n> s_f_touchpoint_execution_status_history_1.touchpoint_execution_id) AND\n> (s_f_touchpoint_execution_status_history.creation_dt = (max(s_f_t (...)\"\n>\n> \" -> Sort\n> (cost=19864.28..20268.98 rows=161883 width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history.creation_dt\"\n>\n> \" Sort Key:\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history.creation_dt\"\n>\n> \" -> Seq Scan on\n> public.s_f_touchpoint_execution_status_history (cost=0.00..5857.68\n> rows=161883 width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history.creation_dt\"\n>\n> \" Filter:\n> (s_f_touchpoint_execution_status_history.touchpoint_execution_status_type_id\n> = ANY ('{3,4,6}'::integer[]))\"\n>\n> \" -> Sort\n> (cost=13224.15..13398.43 rows=69715 width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\n> (max(s_f_touchpoint_execution_status_history_1.creation_dt))\"\n>\n> \" Sort Key:\n> s_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\n> (max(s_f_touchpoint_execution_status_history_1.creation_dt))\"\n>\n> \" -> HashAggregate\n> (cost=6221.56..6918.71 rows=69715 width=16)\"\n>\n> \" Output:\n> 
s_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\n> max(s_f_touchpoint_execution_status_history_1.creation_dt)\"\n>\n> \" Group Key:\n> s_f_touchpoint_execution_status_history_1.touchpoint_execution_id\"\n>\n> \" -> Seq Scan on\n> public.s_f_touchpoint_execution_status_history\n> s_f_touchpoint_execution_status_history_1 (cost=0.00..4766.04 rows=291104\n> width=16)\"\n>\n> \" Output:\n> s_f_touchpoint_execution_status_history_1.touchpoint_execution_id,\n> s_f_touchpoint_execution_status_history_1.touchpoint_execution_status_type_id,\n> s_f_touchpoint_execution_status_history_1.st (...)\"\n>\n> \" -> Index Scan using\n> s_d_touchpoint_execution_pkey on public.s_d_touchpoint_execution tpe\n> (cost=0.42..8.44 rows=1 width=16)\"\n>\n> \" Output: tpe.touchpoint_execution_id,\n> tpe.wave_execution_id, tpe.touchpoint_id, tpe.channel_type_id,\n> tpe.content_id, tpe.message_type_id, tpe.start_dt, tpe.creation_dt\"\n>\n> \" Index Cond:\n> (tpe.touchpoint_execution_id =\n> s_f_touchpoint_execution_status_history.touchpoint_execution_id)\"\n>\n> \" -> Index Only Scan using s_d_touchpoint_pkey on\n> public.s_d_touchpoint tp_1 (cost=0.29..0.32 rows=1 width=8)\"\n>\n> \" Output: tp_1.touchpoint_id,\n> tp_1.channel_type_id\"\n>\n> \" Index Cond: ((tp_1.touchpoint_id =\n> tpe.touchpoint_id) AND (tp_1.channel_type_id = 1))\"\n>\n> \" -> Nested Loop (cost=0.00..283350.22 rows=2 width=74)\"\n>\n> \" Output: base.promo_hist_id, base.audience_member_id,\n> base.target_id, base.touchpoint_execution_id, base.contact_group_id,\n> base.content_version_execution_id, base.sent_ind, base.send_dt,\n> base.creation_dt, base.modified_dt\"\n>\n> \" Join Filter: (base.touchpoint_execution_id =\n> valid_executions_1.touchpoint_execution_id)\"\n>\n> \" -> CTE Scan on valid_executions valid_executions_1\n> (cost=0.00..0.02 rows=1 width=8)\"\n>\n> \" Output:\n> valid_executions_1.touchpoint_execution_id\"\n>\n> \" -> Seq Scan on public.s_f_promotion_history base\n> 
(cost=0.00..283334.00 rows=1296 width=74)\"\n>\n> \" Output: base.promo_hist_id, base.target_id,\n> base.audience_member_id, base.touchpoint_execution_id,\n> base.contact_group_id, base.content_version_execution_id, base.sent_ind,\n> base.send_dt, base.creation_dt, base.modified_dt\"\n>\n> \" Filter: ((base.send_dt >= '2014-03-13\n> 00:00:00'::timestamp without time zone) AND (base.send_dt <= '2015-03-14\n> 00:00:00'::timestamp without time zone))\"\n>\n> \" -> Index Scan using s_f_promotion_history_email_pk1 on\n> public.s_f_promotion_history_email email (cost=0.29..2.83 rows=1 width=90)\"\n>\n> \" Output: email.promo_hist_id, email.target_id,\n> email.audience_member_id, email.touchpoint_execution_id,\n> email.contact_group_id, email.sbounce_ind, email.hbounce_ind,\n> email.opened_ind, email.clicked_ind, email.unsubscribe_ind, email.unsub\n> (...)\"\n>\n> \" Index Cond: (base.promo_hist_id =\n> email.promo_hist_id)\"\n>\n> \" Filter: (base.audience_member_id =\n> email.audience_member_id)\"\n>\n>\n>\n>\n> =================================================================================================\n>\n> Questions here are :\n>\n>\n>\n> Is the query written correctly as per the PostgreSQL?\n>\n> Am I missing anything here?\n>\n>\n>\n> Total Memory : 8 GB\n>\n> shared_buffers = 2GB\n>\n> work_mem = 64MB\n>\n> maintenance_work_mem = 700MB\n>\n> effective_cache_size = 4GB\n>\n> Any kind of help is appreciated.\n>\n>\n>\n> Warm Regards,\n>\n>\n> Vivekanand Joshi\n> +919654227927\n>\n>\n>\n> [image: Zeta Interactive]\n>\n> 185 Madison Ave. New York, NY 10016\n>\n> www.zetainteractive.com\n>\n>\n>\n\n\n\n-- \nThanks,\nM. Varadharajan\n\n------------------------------------------------\n\n\"Experience is what you get when you didn't get what you wanted\"\n -By Prof. Randy Pausch in \"The Last Lecture\"\n\nMy Journal :- www.thinkasgeek.wordpress.com",
"msg_date": "Fri, 13 Mar 2015 13:58:39 +0100",
"msg_from": "Varadharajan Mukundan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "There are 10 million records in the s_f_promotion_history\ntable.\n\n\n\n*From:* Varadharajan Mukundan [mailto:[email protected]]\n*Sent:* Friday, March 13, 2015 6:29 PM\n*To:* [email protected]\n*Cc:* [email protected]\n*Subject:* Re: [PERFORM] Performance issues\n\n\n\nHi Vivekanand,\n\n\n\n From the query plan, we can see that a good amount of time is spent in\nthis line\n\n\n\n-> Seq Scan on public.s_f_promotion_history base (cost=0.00..283334.00\nrows=1296 width=74)\n\n Output: base.promo_hist_id, base.target_id,\nbase.audience_member_id, base.touchpoint_execution_id,\nbase.contact_group_id, base.content_version_execution_id, base.sent_ind,\nbase.send_dt, base.creation_dt, base.modified_dt\"\nFilter: ((base.send_dt >= '2014-03-13 00:00:00'::timestamp without time\nzone) AND (base.send_dt <= '2015-03-14 00:00:00'::timestamp without time\nzone))\"\n\n\n\n\n\nCan you try creating a (partial) index based on the filter fields? (Good\ntutorial @ https://devcenter.heroku.com/articles/postgresql-indexes). Did\nyou try doing a VACUUM ANALYZE? Another approach worth trying is\npartitioning the public.s_f_promotion_history table by date (BTW, what is\nthe size and number of rows in this table?).\n\n\n\nOn Fri, Mar 13, 2015 at 12:44 PM, Vivekanand Joshi <\[email protected]> wrote:\n\nHi Team,\n\n\n\nI am a novice to this territory. We are trying to migrate few jasper\nreports from Netezza to PostgreSQL.\n\n\n\nI have one report ready with me but queries are taking too much time. 
To be\nhonest, it is not giving any result most of the time.\n\n\n\nThe same query in Netezza is running in less than 2-3 seconds.\n\n\n\n========================================================================================================\n\n\n\nThis is the query :\n\n\n\n\n\nSELECT\n\n COUNT(DISTINCT TARGET_ID)\n\n FROM\n\n S_V_F_PROMOTION_HISTORY_EMAIL PH\n\n INNER JOIN S_V_D_CAMPAIGN_HIERARCHY CH\n\n ON PH.TOUCHPOINT_EXECUTION_ID =\nCH.TOUCHPOINT_EXECUTION_ID\n\n WHERE\n\n 1=1\n\n AND SEND_DT >= '2014-03-13'\n\n AND SEND_DT <= '2015-03-14'\n\n\n\nStatistics:\n\n\n\nSelect Count(1) from S_V_F_PROMOTION_HISTORY_EMAIL\n\n4559289\n\nTime: 16781.409 ms\n\n\n\nSelect count(1) from S_V_D_CAMPAIGN_HIERARCHY;\n\ncount\n\n-------\n\n45360\n\n(1 row)\n\n\n\nTime: 467869.185 ms\n\n==================================================================\n\nEXPLAIN PLAN FOR QUERY:\n\n\n\n\"Aggregate (cost=356422.36..356422.37 rows=1 width=8)\"\n\n\" Output: count(DISTINCT base.target_id)\"\n\n\" -> Nested Loop (cost=68762.23..356422.36 rows=1 width=8)\"\n\n\" Output: base.target_id\"\n\n\" Join Filter: (base.touchpoint_execution_id =\ntp_exec.touchpoint_execution_id)\"\n\n\" -> Nested Loop (cost=33927.73..38232.16 rows=1 width=894)\"\n\n\" Output: camp.campaign_id, camp.campaign_name,\ncamp.initiative, camp.objective, camp.category_id,\n\"CATEGORY\".category_name, camp_exec.campaign_execution_id,\ncamp_exec.campaign_execution_name, camp_exec.group_id, grup.group_name,\ncamp_exec.star (...)\"\n\n\" Join Filter: (tp_exec.touchpoint_execution_id =\nvalid_executions.touchpoint_execution_id)\"\n\n\" CTE valid_executions\"\n\n\" -> Merge Join (cost=30420.45..31971.94 rows=1 width=8)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id\"\n\n\" Merge Cond:\n((s_f_touchpoint_execution_status_history_2.touchpoint_execution_id =\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id) AND\n(s_f_touchpoint_execution_status_history_2.creation_dt 
=\n(max(s_f_touchpoint_ex (...)\"\n\n\" -> Sort (cost=17196.30..17539.17 rows=137149\nwidth=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_2.creation_dt\"\n\n\" Sort Key:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_2.creation_dt\"\n\n\" -> Seq Scan on\npublic.s_f_touchpoint_execution_status_history\ns_f_touchpoint_execution_status_history_2 (cost=0.00..5493.80 rows=137149\nwidth=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_2.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_2.creation_dt\"\n\n\" Filter:\n(s_f_touchpoint_execution_status_history_2.touchpoint_execution_status_type_id\n= ANY ('{3,4}'::integer[]))\"\n\n\" -> Sort (cost=13224.15..13398.43 rows=69715\nwidth=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n(max(s_f_touchpoint_execution_status_history_1_1.creation_dt))\"\n\n\" Sort Key:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\n(max(s_f_touchpoint_execution_status_history_1_1.creation_dt))\"\n\n\" -> HashAggregate (cost=6221.56..6918.71\nrows=69715 width=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\nmax(s_f_touchpoint_execution_status_history_1_1.creation_dt)\"\n\n\" Group Key:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id\"\n\n\" -> Seq Scan on\npublic.s_f_touchpoint_execution_status_history\ns_f_touchpoint_execution_status_history_1_1 (cost=0.00..4766.04\nrows=291104 width=16)\"\n\n\" Output:\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_id,\ns_f_touchpoint_execution_status_history_1_1.touchpoint_execution_status_type_id,\ns_f_touchpoint_execution_status_history_1_1.status_message (...)\"\n\n\" -> Nested Loop Left Join (cost=1955.80..6260.19 rows=1\nwidth=894)\"\n\n\" Output: 
tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\nwave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, camp_exec.campaign_execution_id, camp_exec.campaign_\n(...)\"\n\n\" -> Nested Loop (cost=1955.67..6260.04 rows=1\nwidth=776)\"\n\n\" Output: tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\nwave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, camp_exec.campaign_execution_id, camp_exec.cam (...)\"\n\n\" -> Nested Loop Left Join\n(cost=1955.54..6259.87 rows=1 width=658)\"\n\n\" Output: tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\nwave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, camp_exec.campaign_execution_id, camp_ex (...)\"\n\n\" -> Nested Loop Left Join\n(cost=1955.40..6259.71 rows=1 width=340)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt,\ncamp_exec.campaign_execution_id, c (...)\"\n\n\" -> Nested Loop Left Join\n(cost=1955.27..6259.55 rows=1 width=222)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt,\ncamp_exec.campaign_execution (...)\"\n\n\" -> Nested Loop\n(cost=1954.99..6259.24 rows=1 width=197)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, camp_exec.campaign_exe\n(...)\"\n\n\" -> Nested Loop\n(cost=1954.71..6258.92 rows=1 width=173)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, 
wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, camp_exec.campai (...)\"\n\n\" Join Filter:\n(camp_exec.campaign_id = wave.campaign_id)\"\n\n\" -> Nested Loop\n(cost=1954.42..6254.67 rows=13 width=167)\"\n\n\" Output:\ntp_exec.touchpoint_execution_id, tp_exec.start_dt, tp_exec.message_type_id,\ntp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, wave_exec. (...)\"\n\n\" -> Hash\nJoin (cost=1954.13..6249.67 rows=13 width=108)\"\n\n\"\nOutput: tp_exec.touchpoint_execution_id, tp_exec.start_dt,\ntp_exec.message_type_id, tp_exec.content_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_name, wave_exec.start_dt, wave (...)\"\n\n\" Hash\nCond: ((tp_exec.touchpoint_id = tp.touchpoint_id) AND (wave_exec.wave_id =\ntp.wave_id))\"\n\n\" ->\nHash Join (cost=1576.83..4595.51 rows=72956 width=90)\"\n\n\"\n Output: tp_exec.touchpoint_execution_id,\ntp_exec.start_dt, tp_exec.message_type_id, tp_exec.content_id,\ntp_exec.touchpoint_id, wave_exec.wave_execution_id,\nwave_exec.wave_execution_n (...)\"\n\n\"\n Hash Cond:\n(tp_exec.wave_execution_id = wave_exec.wave_execution_id)\"\n\n\"\n-> Seq Scan on public.s_d_touchpoint_execution tp_exec\n(cost=0.00..1559.56 rows=72956 width=42)\"\n\n\"\nOutput: tp_exec.touchpoint_execution_id, tp_exec.wave_execution_id,\ntp_exec.touchpoint_id, tp_exec.channel_type_id, tp_exec.content_id,\ntp_exec.message_type_id, tp_exec.start_d (...)\"\n\n\"\n-> Hash (cost=1001.37..1001.37 rows=46037 width=56)\"\n\n\"\nOutput: wave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, wave_exec.campaign_execution_id, wave_exec.wave_id\"\n\n\"\n-> Seq Scan on public.s_d_wave_execution wave_exec (cost=0.00..1001.37\nrows=46037 width=56)\"\n\n\"\nOutput: wave_exec.wave_execution_id, wave_exec.wave_execution_name,\nwave_exec.start_dt, wave_exec.campaign_execution_id, wave_exec.wave_id\"\n\n\" ->\nHash (cost=212.72..212.72 rows=10972 
width=26)\n
Output: tp.touchpoint_id, tp.touchpoint_name, tp.wave_id, tp.channel_type_id\n
->  Seq Scan on public.s_d_touchpoint tp  (cost=0.00..212.72 rows=10972 width=26)\n
Output: tp.touchpoint_id, tp.touchpoint_name, tp.wave_id, tp.channel_type_id\n
->  Index Scan using s_d_campaign_execution_idx on public.s_d_campaign_execution camp_exec  (cost=0.29..0.37 rows=1 width=67)\n
Output: camp_exec.campaign_execution_id, camp_exec.campaign_id, camp_exec.group_id, camp_exec.campaign_execution_name, camp_exec.start_dt, camp_exec.creation_dt\n
Index Cond: (camp_exec.campaign_execution_id = wave_exec.campaign_execution_id)\n
->  Index Scan using s_d_wave_pkey on public.s_d_wave wave  (cost=0.29..0.31 rows=1 width=22)\n
Output: wave.wave_id, wave.campaign_id, wave.wave_name, wave.creation_dt, wave.modified_dt\n
Index Cond: (wave.wave_id = wave_exec.wave_id)\n
->  Index Scan using s_d_campaign_pkey on public.s_d_campaign camp  (cost=0.29..0.32 rows=1 width=40)\n
Output: camp.campaign_id, camp.campaign_name, camp.objective, camp.initiative, camp.category_id, camp.creation_dt, camp.modified_dt\n
Index Cond: (camp.campaign_id = camp_exec.campaign_id)\n
->  Index Scan using s_d_content_pkey on public.s_d_content content  (cost=0.28..0.30 rows=1 width=33)\n
Output: content.content_id, content.content_name, content.creation_dt, content.channel_type_id, content.modified_dt\n
Index Cond: (tp_exec.content_id = content.content_id)\n
->  Index Scan using s_d_message_type_pkey on public.s_d_message_type message_type  (cost=0.13..0.15 rows=1 width=120)\n
Output: message_type.message_type_id, message_type.message_type_name, message_type.creation_dt, message_type.modified_dt\n
Index Cond: (tp_exec.message_type_id = message_type.message_type_id)\n
->  Index Scan using s_d_group_pkey on public.s_d_group grup  (cost=0.13..0.15 rows=1 width=320)\n
Output: grup.group_id, grup.group_name, grup.creation_dt, grup.modified_dt\n
Index Cond: (camp_exec.group_id = grup.group_id)\n
->  Index Scan using d_channel_pk on public.s_d_channel_type channel  (cost=0.13..0.15 rows=1 width=120)\n
Output: channel.channel_type_id, channel.channel_type_name\n
Index Cond: (channel.channel_type_id = tp.channel_type_id)\n
->  Index Scan using s_d_category_pkey on public.s_d_category \"CATEGORY\"  (cost=0.13..0.15 rows=1 width=120)\n
Output: \"CATEGORY\".category_id, \"CATEGORY\".category_name, \"CATEGORY\".creation_dt, \"CATEGORY\".modified_dt\n
Index Cond: (camp.category_id = \"CATEGORY\".category_id)\n
->  CTE Scan on valid_executions  (cost=0.00..0.02 rows=1 width=8)\n
Output: valid_executions.touchpoint_execution_id\n
->  Nested Loop Left Join  (cost=34834.49..318190.14 rows=2 width=148)\n
Output: base.promo_hist_id, base.audience_member_id, base.target_id, base.touchpoint_execution_id, base.contact_group_id, base.content_version_execution_id, base.sent_ind, CASE WHEN (email.sbounce_ind IS NOT NULL) THEN (email.sbounce_ind)::in (...)\n
CTE valid_executions\n
->  Nested Loop  (cost=33089.13..34834.20 rows=1 width=8)\n
Output: s_f_touchpoint_execution_status_history.touchpoint_execution_id\n
->  Nested Loop  (cost=33088.84..34833.88 rows=1 width=16)\n
Output: s_f_touchpoint_execution_status_history.touchpoint_execution_id, tpe.touchpoint_id\n
->  Unique  (cost=33088.42..34825.42 rows=1 width=8)\n
Output: s_f_touchpoint_execution_status_history.touchpoint_execution_id\n
->  Merge Join  (cost=33088.42..34825.42 rows=1 width=8)\n
Output: s_f_touchpoint_execution_status_history.touchpoint_execution_id\n
Merge Cond: ((s_f_touchpoint_execution_status_history.touchpoint_execution_id = s_f_touchpoint_execution_status_history_1.touchpoint_execution_id) AND (s_f_touchpoint_execution_status_history.creation_dt = (max(s_f_t (...)\n
->  Sort  (cost=19864.28..20268.98 rows=161883 width=16)\n
Output: s_f_touchpoint_execution_status_history.touchpoint_execution_id, s_f_touchpoint_execution_status_history.creation_dt\n
Sort Key: s_f_touchpoint_execution_status_history.touchpoint_execution_id, s_f_touchpoint_execution_status_history.creation_dt\n
->  Seq Scan on public.s_f_touchpoint_execution_status_history  (cost=0.00..5857.68 rows=161883 width=16)\n
Output: s_f_touchpoint_execution_status_history.touchpoint_execution_id, s_f_touchpoint_execution_status_history.creation_dt\n
Filter: (s_f_touchpoint_execution_status_history.touchpoint_execution_status_type_id = ANY ('{3,4,6}'::integer[]))\n
->  Sort  (cost=13224.15..13398.43 rows=69715 width=16)\n
Output: s_f_touchpoint_execution_status_history_1.touchpoint_execution_id, (max(s_f_touchpoint_execution_status_history_1.creation_dt))\n
Sort Key: s_f_touchpoint_execution_status_history_1.touchpoint_execution_id, (max(s_f_touchpoint_execution_status_history_1.creation_dt))\n
->  HashAggregate  (cost=6221.56..6918.71 rows=69715 width=16)\n
Output: s_f_touchpoint_execution_status_history_1.touchpoint_execution_id, max(s_f_touchpoint_execution_status_history_1.creation_dt)\n
Group Key: s_f_touchpoint_execution_status_history_1.touchpoint_execution_id\n
->  Seq Scan on public.s_f_touchpoint_execution_status_history s_f_touchpoint_execution_status_history_1  (cost=0.00..4766.04 rows=291104 width=16)\n
Output: s_f_touchpoint_execution_status_history_1.touchpoint_execution_id, s_f_touchpoint_execution_status_history_1.touchpoint_execution_status_type_id, s_f_touchpoint_execution_status_history_1.st (...)\n
->  Index Scan using s_d_touchpoint_execution_pkey on public.s_d_touchpoint_execution tpe  (cost=0.42..8.44 rows=1 width=16)\n
Output: tpe.touchpoint_execution_id, tpe.wave_execution_id, tpe.touchpoint_id, tpe.channel_type_id, tpe.content_id, tpe.message_type_id, tpe.start_dt, tpe.creation_dt\n
Index Cond: (tpe.touchpoint_execution_id = s_f_touchpoint_execution_status_history.touchpoint_execution_id)\n
->  Index Only Scan using s_d_touchpoint_pkey on public.s_d_touchpoint tp_1  (cost=0.29..0.32 rows=1 width=8)\n
Output: tp_1.touchpoint_id, tp_1.channel_type_id\n
Index Cond: ((tp_1.touchpoint_id = tpe.touchpoint_id) AND (tp_1.channel_type_id = 1))\n
->  Nested Loop  (cost=0.00..283350.22 rows=2 width=74)\n
Output: base.promo_hist_id, base.audience_member_id, base.target_id, base.touchpoint_execution_id, base.contact_group_id, base.content_version_execution_id, base.sent_ind, base.send_dt, base.creation_dt, base.modified_dt\n
Join Filter: (base.touchpoint_execution_id = valid_executions_1.touchpoint_execution_id)\n
->  CTE Scan on valid_executions valid_executions_1  (cost=0.00..0.02 rows=1 width=8)\n
Output: valid_executions_1.touchpoint_execution_id\n
->  Seq Scan on public.s_f_promotion_history base  (cost=0.00..283334.00 rows=1296 width=74)\n
Output: base.promo_hist_id, base.target_id, base.audience_member_id, base.touchpoint_execution_id, base.contact_group_id, base.content_version_execution_id, base.sent_ind, base.send_dt, base.creation_dt, base.modified_dt\n
Filter: ((base.send_dt >= '2014-03-13 00:00:00'::timestamp without time zone) AND (base.send_dt <= '2015-03-14 00:00:00'::timestamp without time zone))\n
->  Index Scan using s_f_promotion_history_email_pk1 on public.s_f_promotion_history_email email  (cost=0.29..2.83 rows=1 width=90)\n
Output: email.promo_hist_id, email.target_id, email.audience_member_id, email.touchpoint_execution_id, email.contact_group_id, email.sbounce_ind, email.hbounce_ind, email.opened_ind, email.clicked_ind, email.unsubscribe_ind, email.unsub (...)\n
Index Cond: (base.promo_hist_id = email.promo_hist_id)\n
Filter: 
(base.audience_member_id = email.audience_member_id)\n\n=================================================================================================\n\nQuestions here are:\n\nIs the query written correctly as per PostgreSQL?\n\nAm I missing anything here?\n\nTotal Memory : 8 GB\n\nshared_buffers = 2GB\n\nwork_mem = 64MB\n\nmaintenance_work_mem = 700MB\n\neffective_cache_size = 4GB\n\nAny kind of help is appreciated.\n\nWarm Regards,\n\nVivekanand Joshi\n+919654227927\n\n185 Madison Ave. New York, NY 10016\n\nwww.zetainteractive.com\n\n-- \n\nThanks,\nM. Varadharajan\n\n------------------------------------------------\n\n\"Experience is what you get when you didn't get what you wanted\"\n         -By Prof. Randy Pausch in \"The Last Lecture\"\n\nMy Journal :- www.thinkasgeek.wordpress.com",
"msg_date": "Fri, 13 Mar 2015 18:33:22 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
    "msg_contents": "If the s_f_promotion_history table will have explosive growth, then it's\nworth considering partitioning by date and using constraint exclusion to\nspeed up the queries. Else, it makes sense to get started with multiple\npartial indexes (like, have an index for each week or something like that. You\nmay want to start with a coarse-grained timeline for the index and then\nfine-grain it based on the needs)",
"msg_date": "Fri, 13 Mar 2015 14:10:00 +0100",
"msg_from": "Varadharajan Mukundan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
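The partitioning and partial-index advice above can be sketched in SQL. This is a hedged illustration only: the table name `s_f_promotion_history` and its `send_dt` column are taken from the plans posted in this thread, the partition bounds and index names are invented, and it uses the inheritance-based range partitioning of the 9.x releases under discussion:

```sql
-- Sketch of 9.x-style range partitioning on send_dt (bounds are examples).
CREATE TABLE s_f_promotion_history_2015_03 (
    CHECK (send_dt >= DATE '2015-03-01' AND send_dt < DATE '2015-04-01')
) INHERITS (s_f_promotion_history);

-- Each partition gets its own index on the partitioning column.
CREATE INDEX ON s_f_promotion_history_2015_03 (send_dt);

-- Let the planner skip partitions whose CHECK constraint rules them out.
SET constraint_exclusion = partition;

-- The lighter-weight alternative mentioned above: a partial index per
-- time slice on the unpartitioned table (one week shown; coarsen as needed).
CREATE INDEX promo_hist_2015w11_idx
    ON s_f_promotion_history (send_dt)
    WHERE send_dt >= DATE '2015-03-09' AND send_dt < DATE '2015-03-16';
```

A partial index like the last one is only used when the query's WHERE clause provably implies the index predicate, so the report's date filter has to line up with the slice boundaries.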
{
    "msg_contents": "I am really worried about the performance of PostgreSQL as we have almost\n1.5 billion records in the promotion history table. Do you guys really think\nPostgreSQL can handle this much load? We have fact tables which are more\nthan 15 GB in size and we have to make joins with those tables in almost\nevery query.\nOn 13 Mar 2015 18:40, \"Varadharajan Mukundan\" <[email protected]> wrote:\n\n> If the s_f_promotion_history table will have a explosive growth, then its\n> worth considering partitioning by date and using constraint exclusion to\n> speed up the queries. Else, it makes sense to get started with multiple\n> partial index (like, have a index for each week or something like that. You\n> may want to start with a coarse grain timeline for the index and then fine\n> grain it based on the needs)\n>",
"msg_date": "Sat, 14 Mar 2015 01:29:45 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
    "msg_contents": "Hi,\n\nOn 13.3.2015 20:59, Vivekanand Joshi wrote:\n> I am really worried about the performance of PostgreSQL as we have\n> almost 1.5 billion records in promotion history table. Do you guys\n\nIn the previous message you claimed the post table has 10M rows ...\n\n> really think PostgreSQL can handle this much load. We have fact\n> tables which are more than 15 GB in size and we have to make joins\n> with those tables in almost every query.\n\nThat depends on what performance you're looking for. You'll have to\nprovide considerably more information before we can help you. You might\nwant to check this:\n\n  https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nYou have not provided the full query, just a query apparently\nreferencing views, so that the actual query is way more complicated.\nAlso, plain EXPLAIN is not really sufficient, we need EXPLAIN ANALYZE.\n\nregards\n\n-- \nTomas Vondra                http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Mar 2015 21:36:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
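For reference, the EXPLAIN ANALYZE that Tomas asks for here is produced by prefixing the query itself, e.g. against the S_V_D_CAMPAIGN_HIERARCHY view he suggests starting with; adding BUFFERS is optional but usually worth it:

```sql
-- Unlike plain EXPLAIN, this actually runs the query and reports
-- real row counts and per-node timings alongside the estimates.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM s_v_d_campaign_hierarchy;
```

If the full query runs too long, the same prefix can be applied to each sub-view independently, which is the break-it-into-parts approach suggested in this thread.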
{
"msg_contents": "Since I was doing it only for the testing purposes and on a development\nserver which has only 8 GB of RAM, I used only 10m rows. But the original\ntable has 1.5 billion rows. We will obviously be using a server with very\nhigh capacity, but I am not satisfied with the performance at all. This\nmight be only a start, so I might get a better performance later.\n\nYes, the view is complex and almost is created by using 10 tables. Same\ngoes with other views as well but this is what we are using in Netezza as\nwell. And we are getting results of the full report in less than 5 seconds.\nAnd add to that, this is only a very little part of the whole query used in\na report.\n\nI will post the result of whole query with Explain analyze tomorrow.\n\nWe might even consider taking experts advice on how to tune queries and\nserver, but if postgres is going to behave like this, I am not sure we\nwould be able to continue with it.\n\nHaving said that, I would day again that I am completely new to this\nterritory, so I might miss lots and lots of thing.\nOn 14 Mar 2015 02:07, \"Tomas Vondra\" <[email protected]> wrote:\n\n> Hi,\n>\n> On 13.3.2015 20:59, Vivekanand Joshi wrote:\n> > I am really worried about the performance of PostgreSQL as we have\n> > almost 1.5 billion records in promotion history table. Do you guys\n>\n> In the previous message you claimed the post table has 10M rows ...\n>\n> > really think PostgreSQL can handle this much load. We have fact\n> > tables which are more than 15 GB in size and we have to make joins\n> > with those tables in almost every query.\n>\n> That depends on what performance you're looking for. You'll have to\n> provide considerably more information until we can help you. 
You might\n> want to check this:\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> You have not provided the full query, just a query apparently\n> referencing views, so that the actual query is way more complicated.\n> Also, plain EXPLAIN is not really sufficient, we need EXPLAIN ANALYZE.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSince I was doing it only for the testing purposes and on a development server which has only 8 GB of RAM, I used only 10m rows. But the original table has 1.5 billion rows. We will obviously be using a server with very high capacity, but I am not satisfied with the performance at all. This might be only a start, so I might get a better performance later. \nYes, the view is complex and almost is created by using 10 tables. Same goes with other views as well but this is what we are using in Netezza as well. And we are getting results of the full report in less than 5 seconds. And add to that, this is only a very little part of the whole query used in a report.\nI will post the result of whole query with Explain analyze tomorrow.\nWe might even consider taking experts advice on how to tune queries and server, but if postgres is going to behave like this, I am not sure we would be able to continue with it.\nHaving said that, I would day again that I am completely new to this territory, so I might miss lots and lots of thing.\nOn 14 Mar 2015 02:07, \"Tomas Vondra\" <[email protected]> wrote:Hi,\n\nOn 13.3.2015 20:59, Vivekanand Joshi wrote:\n> I am really worried about the performance of PostgreSQL as we have\n> almost 1.5 billion records in promotion history table. 
Do you guys\n\nIn the previous message you claimed the post table has 10M rows ...\n\n> really think PostgreSQL can handle this much load. We have fact\n> tables which are more than 15 GB in size and we have to make joins\n> with those tables in almost every query.\n\nThat depends on what performance you're looking for. You'll have to\nprovide considerably more information until we can help you. You might\nwant to check this:\n\n https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nYou have not provided the full query, just a query apparently\nreferencing views, so that the actual query is way more complicated.\nAlso, plain EXPLAIN is not really sufficient, we need EXPLAIN ANALYZE.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 14 Mar 2015 02:16:17 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 13.3.2015 21:46, Vivekanand Joshi wrote:\n> Since I was doing it only for the testing purposes and on a\n> development server which has only 8 GB of RAM, I used only 10m rows.\n> But the original table has 1.5 billion rows. We will obviously be\n> using a server with very high capacity, but I am not satisfied with\n> the performance at all. This might be only a start, so I might get a\n> better performance later.\n\nOK, understood.\n\n> Yes, the view is complex and almost is created by using 10 tables. Same\n> goes with other views as well but this is what we are using in Netezza\n> as well. And we are getting results of the full report in less than 5\n> seconds. And add to that, this is only a very little part of the whole\n> query used in a report.\n\nWell, in the very first message you asked \"Is the query written\ncorrectly as per the PostgreSQL?\" - how can we decide that when most of\nthe query is hidden in some unknown view?\n\n> I will post the result of whole query with Explain analyze tomorrow.\n\nPlease also collect some information about the system using iostat,\nvmstat and such, so that we know what is the bottleneck.\n\n> We might even consider taking experts advice on how to tune queries\n> and server, but if postgres is going to behave like this, I am not\n> sure we would be able to continue with it.\n\nThat's probably a good idea.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Mar 2015 22:20:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
    "msg_contents": "> We might even consider taking experts advice on how to tune queries and\n> server, but if postgres is going to behave like this, I am not sure we would\n> be able to continue with it.\n>\n> Having said that, I would day again that I am completely new to this\n> territory, so I might miss lots and lots of thing.\n\nMy two cents: Postgres out of the box might not be a good choice for\ndata warehouse style queries, that is because it is optimized to run\nthousands of small queries (OLTP style processing) and not one big\nmonolithic query. I've faced similar problems myself before and here\nare a few tricks I followed to get my elephant to do real-time ad hoc\nanalysis on a table with ~45 columns and a few billion rows in it.\n\n1. Partition your table! use constraint exclusion to the fullest extent\n2. Fire multiple small queries distributed over partitions and\naggregate them at the application layer. This is needed because you\nmight want to exploit all your cores to the fullest extent (Assuming that\nyou've enough memory for effective FS cache). If your dataset goes\nbeyond the capability of a single system, try something like Stado\n(GridSQL)\n3. Storing indexes on a RAM disk / faster disk (using tablespaces) and\nusing them properly makes the system blazing fast. CAUTION: This\nrequires some other infrastructure setup for backup and recovery\n4. If you're accessing a small set of columns in a big table and if\nyou feel compressing the data helps a lot, give this FDW a try -\nhttps://github.com/citusdata/cstore_fdw\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Mar 2015 23:03:05 +0100",
"msg_from": "Varadharajan Mukundan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
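Tip 3 above (indexes on RAM / faster disk via tablespaces) looks roughly like this in practice; the mount point and object names are placeholders:

```sql
-- Hypothetical fast mount; the directory must already exist and be
-- owned by the postgres OS user.
CREATE TABLESPACE fast_ts LOCATION '/mnt/ssd/pg_tablespace';

-- Place a hot index for the big fact table on the fast tablespace.
CREATE INDEX s_f_promotion_history_send_dt_idx
    ON s_f_promotion_history (send_dt)
    TABLESPACE fast_ts;
```

As the caution in the mail says, anything kept on a RAM disk is lost on reboot, so the backup/recovery procedure has to be able to rebuild such indexes.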
{
"msg_contents": "On Fri, Mar 13, 2015 at 4:03 PM, Varadharajan Mukundan\n<[email protected]> wrote:\n>> We might even consider taking experts advice on how to tune queries and\n>> server, but if postgres is going to behave like this, I am not sure we would\n>> be able to continue with it.\n>>\n>> Having said that, I would day again that I am completely new to this\n>> territory, so I might miss lots and lots of thing.\n>\n> My two cents: Postgres out of the box might not be a good choice for\n> data warehouse style queries, that is because it is optimized to run\n> thousands of small queries (OLTP style processing) and not one big\n> monolithic query. I've faced similar problems myself before and here\n> are few tricks i followed to get my elephant do real time adhoc\n> analysis on a table with ~45 columns and few billion rows in it.\n>\n> 1. Partition your table! use constraint exclusion to the fullest extent\n> 2. Fire multiple small queries distributed over partitions and\n> aggregate them at the application layer. This is needed because, you\n> might to exploit all your cores to the fullest extent (Assuming that\n> you've enough memory for effective FS cache). If your dataset goes\n> beyond the capability of a single system, try something like Stado\n> (GridSQL)\n> 3. Storing index on a RAM / faster disk disk (using tablespaces) and\n> using it properly makes the system blazing fast. CAUTION: This\n> requires some other infrastructure setup for backup and recovery\n> 4. If you're accessing a small set of columns in a big table and if\n> you feel compressing the data helps a lot, give this FDW a try -\n> https://github.com/citusdata/cstore_fdw\n\nAgreed here. IF you're gonna run reporting queries against postgresql\nyou have to optimize for fast seq scan stuff. I.e. an IO subsystem\nthat can read a big table in hundreds of megabytes per second.\nGigabytes if you can get it. 
A lot of spinning drives on a fast RAID\ncard or good software raid can do this on the cheap, since a lot of\ntimes you don't need big drives if you have a lot. 24 cheap 1TB drives\nthat each can read at ~100 MB/s can gang up on the data and you can\nread 100GB in a few seconds. But you can't deny physics. If you need\nto read a 2TB table it's going to take time.\n\nIf you're only running 1 or 2 queries at a time, you can crank up the\nwork_mem to something crazy like 1GB even on an 8GB machine. Stopping\nsorts from spilling to disk, or at least giving queries a big\nplayground to work in, can make a huge difference. If you're gonna give\nbig work_mem then definitely limit connections to a handful. If you\nneed a lot of persistent connections then use a pooler.\n\nThe single biggest mistake people make in setting up reporting servers\non postgresql is thinking that the same hardware that worked well for\ntransactional stuff (a handful of SSDs and lots of memory) will also\ndo the job when you're working with TB data sets. The hardware you need\nisn't the same, and using it for a reporting server is gonna result\nin sub-optimal performance.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Mar 2015 16:25:50 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
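Scott's work_mem suggestion does not have to be a server-wide change; a hedged way to try it is per session or per role, so only the handful of reporting connections get the big allowance:

```sql
-- For the current reporting session only:
SET work_mem = '1GB';

-- Or persisted for a dedicated reporting role (role name is hypothetical):
ALTER ROLE reporting_user SET work_mem = '512MB';
```

Note that work_mem is a per-sort/per-hash allowance, not a per-query cap, so a plan with several sort or hash nodes can use a multiple of this value.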
{
    "msg_contents": "Hi Guys,\n\nSo here is the full information attached as well as in the link provided\nbelow:\n\nhttp://pgsql.privatepaste.com/41207bea45\n\nI can provide new information as well.\n\nWould like to see if queries of this type can actually run on a PostgreSQL\nserver?\n\nIf yes, what would be the minimum requirements for hardware? We would like\nto migrate our whole solution to PostgreSQL as we can spend on hardware as\nmuch as needed, but working on a proprietary appliance is becoming very\ndifficult for us.\n\nVivek\n\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]]\nSent: Saturday, March 14, 2015 3:56 AM\nTo: Varadharajan Mukundan\nCc: [email protected]; Tomas Vondra;\[email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn Fri, Mar 13, 2015 at 4:03 PM, Varadharajan Mukundan\n<[email protected]> wrote:\n>> We might even consider taking experts advice on how to tune queries\n>> and server, but if postgres is going to behave like this, I am not\n>> sure we would be able to continue with it.\n>>\n>> Having said that, I would day again that I am completely new to this\n>> territory, so I might miss lots and lots of thing.\n>\n> My two cents: Postgres out of the box might not be a good choice for\n> data warehouse style queries, that is because it is optimized to run\n> thousands of small queries (OLTP style processing) and not one big\n> monolithic query. I've faced similar problems myself before and here\n> are few tricks i followed to get my elephant do real time adhoc\n> analysis on a table with ~45 columns and few billion rows in it.\n>\n> 1. Partition your table! use constraint exclusion to the fullest\n> extent 2. Fire multiple small queries distributed over partitions and\n> aggregate them at the application layer. This is needed because, you\n> might to exploit all your cores to the fullest extent (Assuming that\n> you've enough memory for effective FS cache). 
If your dataset goes\n> beyond the capability of a single system, try something like Stado\n> (GridSQL)\n> 3. Storing index on a RAM / faster disk disk (using tablespaces) and\n> using it properly makes the system blazing fast. CAUTION: This\n> requires some other infrastructure setup for backup and recovery 4. If\n> you're accessing a small set of columns in a big table and if you feel\n> compressing the data helps a lot, give this FDW a try -\n> https://github.com/citusdata/cstore_fdw\n\nAgreed here. IF you're gonna run reporting queries against postgresql you\nhave to optimize for fast seq scan stuff. I.e. an IO subsystem that can read\na big table in hundreds of megabytes per second.\nGigabytes if you can get it. A lot of spinning drives on a fast RAID card or\ngood software raid can do this on the cheapish, since a lot of times you\ndon't need big drives if you have a lot. 24 cheap 1TB drives that each can\nread at ~100 MB/s can gang up on the data and you can read a 100GB in a few\nseconds. But you can't deny physics. If you need to read a 2TB table it's\ngoing to take time.\n\nIf you're only running 1 or 2 queries at a time, you can crank up the\nwork_mem to something crazy like 1GB even on an 8GB machine. Stopping sorts\nfrom spilling to disk, or at least giving queries a big playground to work\nin can make a huge difference. If you're gonna give big work_mem then\ndefinitely limit connections to a handful. If you need a lot of persistent\nconnections then use a pooler.\n\nThe single biggest mistake people make in setting up reporting servers on\npostgresql is thinking that the same hardware that worked well for\ntransactional stuff (a handful of SSDs and lots of memory) might not help\nwhen you're working with TB data sets. 
The hardware you need isn't the same,\nand using that for a reporting server is gonna result in sub-optimal\nperformance.\n\n--\nTo understand recursion, one must first understand recursion.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 14 Mar 2015 04:58:39 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
    "msg_contents": "On 14.3.2015 00:28, Vivekanand Joshi wrote:\n> Hi Guys,\n> \n> So here is the full information attached as well as in the link\n> provided below:\n> \n> http://pgsql.privatepaste.com/41207bea45\n> \n> I can provide new information as well.\n\nThanks.\n\nWe still don't have EXPLAIN ANALYZE - how long was the query running (I\nassume it got killed at some point)? It's really difficult to give you\nany advice because we don't know where the problem is.\n\nIf EXPLAIN ANALYZE really takes too long (say, it does not complete\nafter an hour / over night), you'll have to break the query into parts\nand first tweak those independently.\n\nFor example in the first message you mentioned that select from the\nS_V_D_CAMPAIGN_HIERARCHY view takes ~9 minutes, so start with that. Give\nus EXPLAIN ANALYZE for that query.\n\nFew more comments:\n\n(1) You're using CTEs - be aware that CTEs are not just aliases, but\n    impact planning / optimization, and in some cases may prevent\n    proper optimization. Try replacing them with plain views.\n\n(2) Varadharajan Mukundan already recommended you to create index on\n    s_f_promotion_history.send_dt. Have you tried that? You may also\n    try creating an index on all the columns needed by the query, so\n    that \"Index Only Scan\" is possible.\n\n(3) There are probably additional indexes that might be useful here.\n    What I'd try is adding indexes on all columns that are either a\n    foreign key or used in a WHERE condition. This might be\n    overkill in some cases, but let's see.\n\n(4) I suspect many of the relations referenced in the views are not\n    actually needed in the query, i.e. the join is performed but\n    then it's just discarded because those columns are not used.\n    Try to simplify the views as much as possible - remove all the\n    tables that are not really necessary to run the query. 
If two\n queries need different tables, maybe defining two views is\n a better approach.\n\n(5) The vmstat / iostat data are pretty useless - what you provided are\n averages since the machine was started, but we need a few samples\n collected when the query is running. I.e. start the query, and then\n give us a few samples from these commands:\n\n iostat -x -k 1\n vmstat 1\n\n> Would like to see if queries of these type can actually run in\n> postgres server?\n\nWhy not? We're running DWH applications on tens/hundreds of GBs.\n\n> If yes, what would be the minimum requirements for hardware? We would\n> like to migrate our whole solution on PostgreSQL as we can spend on\n> hardware as much as we can but working on a proprietary appliance is\n> becoming very difficult for us.\n\nThat's difficult to say, because we really don't know where the problem\nis and how much the queries can be optimized.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Mar 2015 01:12:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
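Points (1) and (2) of Tomas's list might look like the following; the wider index's column list is an assumption about what the query actually reads, and the view body is a simplified stand-in for the real valid_executions CTE:

```sql
-- (2) Index the filtered date column; a wider index on the columns the
-- query reads could additionally enable an Index Only Scan.
CREATE INDEX ON s_f_promotion_history (send_dt);
CREATE INDEX ON s_f_promotion_history
    (send_dt, touchpoint_execution_id, promo_hist_id, audience_member_id);

-- (1) A plain view instead of a CTE: in these releases a CTE is an
-- optimization fence, while a view can be inlined and optimized as a
-- whole. The body below is a deliberately simplified sketch.
CREATE VIEW v_valid_executions AS
SELECT DISTINCT touchpoint_execution_id
FROM   s_f_touchpoint_execution_status_history
WHERE  touchpoint_execution_status_type_id IN (3, 4, 6);
```

After adding indexes, running ANALYZE on the affected tables helps the planner notice and use them.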
{
"msg_contents": "On 14/03/15 13:12, Tomas Vondra wrote:\n> On 14.3.2015 00:28, Vivekanand Joshi wrote:\n>> Hi Guys,\n>>\n>> So here is the full information attached as well as in the link\n>> provided below:\n>>\n>> http://pgsql.privatepaste.com/41207bea45\n>>\n>> I can provide new information as well.\n> Thanks.\n>\n> We still don't have EXPLAIN ANALYZE - how long was the query running (I\n> assume it got killed at some point)? It's really difficult to give you\n> any advices because we don't know where the problem is.\n>\n> If EXPLAIN ANALYZE really takes too long (say, it does not complete\n> after an hour / over night), you'll have to break the query into parts\n> and first tweak those independently.\n>\n> For example in the first message you mentioned that select from the\n> S_V_D_CAMPAIGN_HIERARCHY view takes ~9 minutes, so start with that. Give\n> us EXPLAIN ANALYZE for that query.\n>\n> Few more comments:\n>\n> (1) You're using CTEs - be aware that CTEs are not just aliases, but\n> impact planning / optimization, and in some cases may prevent\n> proper optimization. Try replacing them with plain views.\n>\n> (2) Varadharajan Mukundan already recommended you to create index on\n> s_f_promotion_history.send_dt. Have you tried that? You may also\n> try creating an index on all the columns needed by the query, so\n> that \"Index Only Scan\" is possible.\n>\n> (3) There are probably additional indexes that might be useful here.\n> What I'd try is adding indexes on all columns that are either a\n> foreign key or used in a WHERE condition. This might be an\n> overkill in some cases, but let's see.\n>\n> (4) I suspect many of the relations referenced in the views are not\n> actually needed in the query, i.e. the join is performed but\n> then it's just discarded because those columns are not used.\n> Try to simplify the views as much has possible - remove all the\n> tables that are not really necessary to run the query. 
If two\n> queries need different tables, maybe defining two views is\n> a better approach.\n>\n> (5) The vmstat / iostat data are pretty useless - what you provided are\n> averages since the machine was started, but we need a few samples\n> collected when the query is running. I.e. start the query, and then\n> give us a few samples from these commands:\n>\n> iostat -x -k 1\n> vmstat 1\n>\n>> Would like to see if queries of these type can actually run in\n>> postgres server?\n> Why not? We're running DWH applications on tens/hundreds of GBs.\n>\n>> If yes, what would be the minimum requirements for hardware? We would\n>> like to migrate our whole solution on PostgreSQL as we can spend on\n>> hardware as much as we can but working on a proprietary appliance is\n>> becoming very difficult for us.\n> That's difficult to say, because we really don't know where the problem\n> is and how much the queries can be optimized.\n>\n>\nI notice that no one appears to have suggested the default setting in \npostgresql.conf - these need changing as they are initially set up for \nsmall machines, and to let PostgreSQL take anywhere near full advantage \nof a box have large amounts of RAM, you need to change some of the \nconfiguration settings!\n\nFor example 'temp_buffers' (default 8MB) and 'maintenance_work_mem' \n(default 16MB) should be drastically increased, and there are other \nsettings that need changing. The precise values depend on many factors, \nbut the initial values set by default are definitely far too small for \nyour usage.\n\nAm assuming that you are looking at PostgreSQL 9.4.\n\n\n\nCheers,\nGavin\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 10:06:22 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Hi Gavin,\n\nVivekanand in his first mail itself mentioned the below configuration\nof postgresql.conf. It looks good enough to me.\n\nTotal Memory : 8 GB\n\nshared_buffers = 2GB\n\nwork_mem = 64MB\n\nmaintenance_work_mem = 700MB\n\neffective_cache_size = 4GB\n\nOn Sat, Mar 14, 2015 at 10:06 PM, Gavin Flower\n<[email protected]> wrote:\n> On 14/03/15 13:12, Tomas Vondra wrote:\n>>\n>> On 14.3.2015 00:28, Vivekanand Joshi wrote:\n>>>\n>>> Hi Guys,\n>>>\n>>> So here is the full information attached as well as in the link\n>>> provided below:\n>>>\n>>> http://pgsql.privatepaste.com/41207bea45\n>>>\n>>> I can provide new information as well.\n>>\n>> Thanks.\n>>\n>> We still don't have EXPLAIN ANALYZE - how long was the query running (I\n>> assume it got killed at some point)? It's really difficult to give you\n>> any advices because we don't know where the problem is.\n>>\n>> If EXPLAIN ANALYZE really takes too long (say, it does not complete\n>> after an hour / over night), you'll have to break the query into parts\n>> and first tweak those independently.\n>>\n>> For example in the first message you mentioned that select from the\n>> S_V_D_CAMPAIGN_HIERARCHY view takes ~9 minutes, so start with that. Give\n>> us EXPLAIN ANALYZE for that query.\n>>\n>> Few more comments:\n>>\n>> (1) You're using CTEs - be aware that CTEs are not just aliases, but\n>> impact planning / optimization, and in some cases may prevent\n>> proper optimization. Try replacing them with plain views.\n>>\n>> (2) Varadharajan Mukundan already recommended you to create index on\n>> s_f_promotion_history.send_dt. Have you tried that? You may also\n>> try creating an index on all the columns needed by the query, so\n>> that \"Index Only Scan\" is possible.\n>>\n>> (3) There are probably additional indexes that might be useful here.\n>> What I'd try is adding indexes on all columns that are either a\n>> foreign key or used in a WHERE condition. 
This might be an\n>> overkill in some cases, but let's see.\n>>\n>> (4) I suspect many of the relations referenced in the views are not\n>> actually needed in the query, i.e. the join is performed but\n>> then it's just discarded because those columns are not used.\n>> Try to simplify the views as much has possible - remove all the\n>> tables that are not really necessary to run the query. If two\n>> queries need different tables, maybe defining two views is\n>> a better approach.\n>>\n>> (5) The vmstat / iostat data are pretty useless - what you provided are\n>> averages since the machine was started, but we need a few samples\n>> collected when the query is running. I.e. start the query, and then\n>> give us a few samples from these commands:\n>>\n>> iostat -x -k 1\n>> vmstat 1\n>>\n>>> Would like to see if queries of these type can actually run in\n>>> postgres server?\n>>\n>> Why not? We're running DWH applications on tens/hundreds of GBs.\n>>\n>>> If yes, what would be the minimum requirements for hardware? We would\n>>> like to migrate our whole solution on PostgreSQL as we can spend on\n>>> hardware as much as we can but working on a proprietary appliance is\n>>> becoming very difficult for us.\n>>\n>> That's difficult to say, because we really don't know where the problem\n>> is and how much the queries can be optimized.\n>>\n>>\n> I notice that no one appears to have suggested the default setting in\n> postgresql.conf - these need changing as they are initially set up for small\n> machines, and to let PostgreSQL take anywhere near full advantage of a box\n> have large amounts of RAM, you need to change some of the configuration\n> settings!\n>\n> For example 'temp_buffers' (default 8MB) and 'maintenance_work_mem' (default\n> 16MB) should be drastically increased, and there are other settings that\n> need changing. 
The precise values depend on many factors, but the initial\n> values set by default are definitely far too small for your usage.\n>\n> Am assuming that you are looking at PostgreSQL 9.4.\n>\n>\n>\n> Cheers,\n> Gavin\n>\n>\n\n\n\n-- \nThanks,\nM. Varadharajan\n\n------------------------------------------------\n\n\"Experience is what you get when you didn't get what you wanted\"\n -By Prof. Randy Pausch in \"The Last Lecture\"\n\nMy Journal :- www.thinkasgeek.wordpress.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Mar 2015 22:23:46 +0100",
"msg_from": "Varadharajan Mukundan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 15/03/15 10:23, Varadharajan Mukundan wrote:\n> Hi Gavin,\n>\n> Vivekanand is his first mail itself mentioned the below configuration\n> of postgresql.conf. It looks good enough to me.\n>\n> Total Memory : 8 GB\n>\n> shared_buffers = 2GB\n>\n> work_mem = 64MB\n>\n> maintenance_work_mem = 700MB\n>\n> effective_cache_size = 4GB\n\n\nSorry, it didn't register when I read it!\n(Probably reading too fast)\n>\n> On Sat, Mar 14, 2015 at 10:06 PM, Gavin Flower\n> <[email protected]> wrote:\n>> On 14/03/15 13:12, Tomas Vondra wrote:\n>>> On 14.3.2015 00:28, Vivekanand Joshi wrote:\n>>>> Hi Guys,\n>>>>\n>>>> So here is the full information attached as well as in the link\n>>>> provided below:\n>>>>\n>>>> http://pgsql.privatepaste.com/41207bea45\n>>>>\n>>>> I can provide new information as well.\n>>> Thanks.\n>>>\n>>> We still don't have EXPLAIN ANALYZE - how long was the query running (I\n>>> assume it got killed at some point)? It's really difficult to give you\n>>> any advices because we don't know where the problem is.\n>>>\n>>> If EXPLAIN ANALYZE really takes too long (say, it does not complete\n>>> after an hour / over night), you'll have to break the query into parts\n>>> and first tweak those independently.\n>>>\n>>> For example in the first message you mentioned that select from the\n>>> S_V_D_CAMPAIGN_HIERARCHY view takes ~9 minutes, so start with that. Give\n>>> us EXPLAIN ANALYZE for that query.\n>>>\n>>> Few more comments:\n>>>\n>>> (1) You're using CTEs - be aware that CTEs are not just aliases, but\n>>> impact planning / optimization, and in some cases may prevent\n>>> proper optimization. Try replacing them with plain views.\n>>>\n>>> (2) Varadharajan Mukundan already recommended you to create index on\n>>> s_f_promotion_history.send_dt. Have you tried that? 
You may also\n>>> try creating an index on all the columns needed by the query, so\n>>> that \"Index Only Scan\" is possible.\n>>>\n>>> (3) There are probably additional indexes that might be useful here.\n>>> What I'd try is adding indexes on all columns that are either a\n>>> foreign key or used in a WHERE condition. This might be an\n>>> overkill in some cases, but let's see.\n>>>\n>>> (4) I suspect many of the relations referenced in the views are not\n>>> actually needed in the query, i.e. the join is performed but\n>>> then it's just discarded because those columns are not used.\n>>> Try to simplify the views as much has possible - remove all the\n>>> tables that are not really necessary to run the query. If two\n>>> queries need different tables, maybe defining two views is\n>>> a better approach.\n>>>\n>>> (5) The vmstat / iostat data are pretty useless - what you provided are\n>>> averages since the machine was started, but we need a few samples\n>>> collected when the query is running. I.e. start the query, and then\n>>> give us a few samples from these commands:\n>>>\n>>> iostat -x -k 1\n>>> vmstat 1\n>>>\n>>>> Would like to see if queries of these type can actually run in\n>>>> postgres server?\n>>> Why not? We're running DWH applications on tens/hundreds of GBs.\n>>>\n>>>> If yes, what would be the minimum requirements for hardware? 
We would\n>>>> like to migrate our whole solution on PostgreSQL as we can spend on\n>>>> hardware as much as we can but working on a proprietary appliance is\n>>>> becoming very difficult for us.\n>>> That's difficult to say, because we really don't know where the problem\n>>> is and how much the queries can be optimized.\n>>>\n>>>\n>> I notice that no one appears to have suggested the default setting in\n>> postgresql.conf - these need changing as they are initially set up for small\n>> machines, and to let PostgreSQL take anywhere near full advantage of a box\n>> have large amounts of RAM, you need to change some of the configuration\n>> settings!\n>>\n>> For example 'temp_buffers' (default 8MB) and 'maintenance_work_mem' (default\n>> 16MB) should be drastically increased, and there are other settings that\n>> need changing. The precise values depend on many factors, but the initial\n>> values set by default are definitely far too small for your usage.\n>>\n>> Am assuming that you are looking at PostgreSQL 9.4.\n>>\n>>\n>>\n>> Cheers,\n>> Gavin\n>>\n>>\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 10:32:23 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Hi Team,\n\nThis is the EXPLAIN ANALYZE for one of the view : S_V_D_CAMPAIGN_HIERARCHY:\n\n===========================================\n\n\nNested Loop (cost=33666.96..37971.39 rows=1 width=894) (actual\ntime=443.556..966558.767 rows=45360 loops=1)\n Join Filter: (tp_exec.touchpoint_execution_id =\nvalid_executions.touchpoint_execution_id)\n Rows Removed by Join Filter: 3577676116\n CTE valid_executions\n -> Hash Join (cost=13753.53..31711.17 rows=1 width=8) (actual\ntime=232.571..357.749 rows=52997 loops=1)\n Hash Cond:\n((s_f_touchpoint_execution_status_history_1.touchpoint_execution_id =\ns_f_touchpoint_execution_status_history.touchpoint_execution_id) AND ((max(s\n_f_touchpoint_execution_status_history_1.creation_dt)) =\ns_f_touchpoint_execution_status_history.creation_dt))\n -> HashAggregate (cost=6221.56..6905.66 rows=68410 width=16)\n(actual time=139.713..171.340 rows=76454 loops=1)\n -> Seq Scan on s_f_touchpoint_execution_status_history\ns_f_touchpoint_execution_status_history_1 (cost=0.00..4766.04 rows=291104\nwidth=16) (actual ti\nme=0.006..38.582 rows=291104 loops=1)\n -> Hash (cost=5493.80..5493.80 rows=135878 width=16) (actual\ntime=92.737..92.737 rows=136280 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 6389kB\n -> Seq Scan on s_f_touchpoint_execution_status_history\n(cost=0.00..5493.80 rows=135878 width=16) (actual time=0.012..55.078\nrows=136280 loops=1)\n Filter: (touchpoint_execution_status_type_id = ANY\n('{3,4}'::integer[]))\n Rows Removed by Filter: 154824\n -> Nested Loop Left Join (cost=1955.80..6260.19 rows=1 width=894)\n(actual time=31.608..3147.015 rows=67508 loops=1)\n -> Nested Loop (cost=1955.67..6260.04 rows=1 width=776) (actual\ntime=31.602..2912.625 rows=67508 loops=1)\n -> Nested Loop Left Join (cost=1955.54..6259.87 rows=1\nwidth=658) (actual time=31.595..2713.696 rows=72427 loops=1)\n -> Nested Loop Left Join (cost=1955.40..6259.71\nrows=1 width=340) (actual time=31.589..2532.926 rows=72427 loops=1)\n -> 
Nested Loop Left Join (cost=1955.27..6259.55\nrows=1 width=222) (actual time=31.581..2354.662 rows=72427 loops=1)\n -> Nested Loop (cost=1954.99..6259.24\nrows=1 width=197) (actual time=31.572..2090.104 rows=72427 loops=1)\n -> Nested Loop\n(cost=1954.71..6258.92 rows=1 width=173) (actual time=31.562..1802.857\nrows=72427 loops=1)\n Join Filter:\n(camp_exec.campaign_id = wave.campaign_id)\n Rows Removed by Join Filter:\n243\n -> Nested Loop\n(cost=1954.42..6254.67 rows=13 width=167) (actual time=31.551..1468.718\nrows=72670 loops=1)\n -> Hash Join\n(cost=1954.13..6249.67 rows=13 width=108) (actual time=31.525..402.039\nrows=72670 loops=1)\n Hash Cond:\n((tp_exec.touchpoint_id = tp.touchpoint_id) AND (wave_exec.wave_id =\ntp.wave_id))\n -> Hash Join\n(cost=1576.83..4595.51 rows=72956 width=90) (actual time=26.254..256.328\nrows=72956 loops=1)\n Hash Cond:\n(tp_exec.wave_execution_id = wave_exec.wave_execution_id)\n -> Seq Scan\non s_d_touchpoint_execution tp_exec (cost=0.00..1559.56 rows=72956\nwidth=42) (actual time=0.005..76.099 rows=72956 loops=1)\n -> Hash\n(cost=1001.37..1001.37 rows=46037 width=56) (actual time=26.178..26.178\nrows=46037 loops=1)\n Buckets:\n8192 Batches: 1 Memory Usage: 4104kB\n -> Seq\nScan on s_d_wave_execution wave_exec (cost=0.00..1001.37 rows=46037\nwidth=56) (actual time=0.006..10.388 rows=46037 loops=1)\n -> Hash\n(cost=212.72..212.72 rows=10972 width=26) (actual time=5.252..5.252\nrows=10972 loops=1)\n Buckets: 2048\nBatches: 1 Memory Usage: 645kB\n -> Seq Scan\non s_d_touchpoint tp (cost=0.00..212.72 rows=10972 width=26) (actual\ntime=0.012..2.319 rows=10972 loops=1)\n -> Index Scan using\ns_d_campaign_execution_idx on s_d_campaign_execution camp_exec\n(cost=0.29..0.37 rows=1 width=67) (actual time=0.013..0.013 rows=1\nloops=72670)\n Index Cond:\n(campaign_execution_id = wave_exec.campaign_execution_id)\n -> Index Scan using\ns_d_wave_pkey on s_d_wave wave (cost=0.29..0.31 rows=1 width=22) (actual\ntime=0.003..0.003 rows=1 
loops=72670)\n Index Cond: (wave_id =\nwave_exec.wave_id)\n -> Index Scan using\ns_d_campaign_pkey on s_d_campaign camp (cost=0.29..0.32 rows=1 width=40)\n(actual time=0.003..0.003 rows=1 loops=72427)\n Index Cond: (campaign_id =\ncamp_exec.campaign_id)\n -> Index Scan using s_d_content_pkey on\ns_d_content content (cost=0.28..0.30 rows=1 width=33) (actual\ntime=0.002..0.003 rows=1 loops=72427)\n Index Cond: (tp_exec.content_id =\ncontent_id)\n -> Index Scan using s_d_message_type_pkey on\ns_d_message_type message_type (cost=0.13..0.15 rows=1 width=120) (actual\ntime=0.001..0.002 rows=1 loops=72427)\n Index Cond: (tp_exec.message_type_id =\nmessage_type_id)\n -> Index Scan using s_d_group_pkey on s_d_group grup\n(cost=0.13..0.15 rows=1 width=320) (actual time=0.001..0.002 rows=1\nloops=72427)\n Index Cond: (camp_exec.group_id = group_id)\n -> Index Scan using d_channel_pk on s_d_channel_type channel\n(cost=0.13..0.15 rows=1 width=120) (actual time=0.001..0.002 rows=1\nloops=72427)\n Index Cond: (channel_type_id = tp.channel_type_id)\n -> Index Scan using s_d_category_pkey on s_d_category \"CATEGORY\"\n(cost=0.13..0.15 rows=1 width=120) (actual time=0.001..0.002 rows=1\nloops=67508)\n Index Cond: (camp.category_id = category_id)\n -> CTE Scan on valid_executions (cost=0.00..0.02 rows=1 width=8)\n(actual time=0.004..6.803 rows=52997 loops=67508)\n Total runtime: 966566.574 ms\n\n========================================================\n\nCan you please see it and let me know where the issue is?\n\n\n-----Original Message-----\nFrom: Gavin Flower [mailto:[email protected]]\nSent: Sunday, March 15, 2015 3:02 AM\nTo: Varadharajan Mukundan\nCc: Tomas Vondra; [email protected]; Scott Marlowe;\[email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 15/03/15 10:23, Varadharajan Mukundan wrote:\n> Hi Gavin,\n>\n> Vivekanand is his first mail itself mentioned the below configuration\n> of postgresql.conf. 
It looks good enough to me.\n>\n> Total Memory : 8 GB\n>\n> shared_buffers = 2GB\n>\n> work_mem = 64MB\n>\n> maintenance_work_mem = 700MB\n>\n> effective_cache_size = 4GB\n\n\nSorry, it didn't register when I read it!\n(Probably reading too fast)\n>\n> On Sat, Mar 14, 2015 at 10:06 PM, Gavin Flower\n> <[email protected]> wrote:\n>> On 14/03/15 13:12, Tomas Vondra wrote:\n>>> On 14.3.2015 00:28, Vivekanand Joshi wrote:\n>>>> Hi Guys,\n>>>>\n>>>> So here is the full information attached as well as in the link\n>>>> provided below:\n>>>>\n>>>> http://pgsql.privatepaste.com/41207bea45\n>>>>\n>>>> I can provide new information as well.\n>>> Thanks.\n>>>\n>>> We still don't have EXPLAIN ANALYZE - how long was the query running\n>>> (I assume it got killed at some point)? It's really difficult to\n>>> give you any advices because we don't know where the problem is.\n>>>\n>>> If EXPLAIN ANALYZE really takes too long (say, it does not complete\n>>> after an hour / over night), you'll have to break the query into\n>>> parts and first tweak those independently.\n>>>\n>>> For example in the first message you mentioned that select from the\n>>> S_V_D_CAMPAIGN_HIERARCHY view takes ~9 minutes, so start with that.\n>>> Give us EXPLAIN ANALYZE for that query.\n>>>\n>>> Few more comments:\n>>>\n>>> (1) You're using CTEs - be aware that CTEs are not just aliases, but\n>>> impact planning / optimization, and in some cases may prevent\n>>> proper optimization. Try replacing them with plain views.\n>>>\n>>> (2) Varadharajan Mukundan already recommended you to create index on\n>>> s_f_promotion_history.send_dt. Have you tried that? You may also\n>>> try creating an index on all the columns needed by the query, so\n>>> that \"Index Only Scan\" is possible.\n>>>\n>>> (3) There are probably additional indexes that might be useful here.\n>>> What I'd try is adding indexes on all columns that are either a\n>>> foreign key or used in a WHERE condition. 
This might be an\n>>> overkill in some cases, but let's see.\n>>>\n>>> (4) I suspect many of the relations referenced in the views are not\n>>> actually needed in the query, i.e. the join is performed but\n>>> then it's just discarded because those columns are not used.\n>>> Try to simplify the views as much has possible - remove all the\n>>> tables that are not really necessary to run the query. If two\n>>> queries need different tables, maybe defining two views is\n>>> a better approach.\n>>>\n>>> (5) The vmstat / iostat data are pretty useless - what you provided are\n>>> averages since the machine was started, but we need a few samples\n>>> collected when the query is running. I.e. start the query, and\n>>> then\n>>> give us a few samples from these commands:\n>>>\n>>> iostat -x -k 1\n>>> vmstat 1\n>>>\n>>>> Would like to see if queries of these type can actually run in\n>>>> postgres server?\n>>> Why not? We're running DWH applications on tens/hundreds of GBs.\n>>>\n>>>> If yes, what would be the minimum requirements for hardware? We\n>>>> would like to migrate our whole solution on PostgreSQL as we can\n>>>> spend on hardware as much as we can but working on a proprietary\n>>>> appliance is becoming very difficult for us.\n>>> That's difficult to say, because we really don't know where the\n>>> problem is and how much the queries can be optimized.\n>>>\n>>>\n>> I notice that no one appears to have suggested the default setting in\n>> postgresql.conf - these need changing as they are initially set up\n>> for small machines, and to let PostgreSQL take anywhere near full\n>> advantage of a box have large amounts of RAM, you need to change some\n>> of the configuration settings!\n>>\n>> For example 'temp_buffers' (default 8MB) and 'maintenance_work_mem'\n>> (default\n>> 16MB) should be drastically increased, and there are other settings\n>> that need changing. 
The precise values depend on many factors, but\n>> the initial values set by default are definitely far too small for your\n>> usage.\n>>\n>> Am assuming that you are looking at PostgreSQL 9.4.\n>>\n>>\n>>\n>> Cheers,\n>> Gavin\n>>\n>>\n>\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 17:54:05 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "\n>Hi Team,\n>\n>This is the EXPLAIN ANALYZE for one of the view : S_V_D_CAMPAIGN_HIERARCHY:\n\n\n\t>Rows Removed by Join Filter: 3577676116\n\n\tThat's quite a lot.\n\tYou're possibly missing a clause in a join, resulting in a cross join.\n\tIt is also helpful to put your result here:\n\thttp://explain.depesz.com/\n\tregards,\n\n\tMarc Mamin\n\n\n>\n>===========================================\n>\n>\n>Nested Loop (cost=33666.96..37971.39 rows=1 width=894) (actual\n>time=443.556..966558.767 rows=45360 loops=1)\n> Join Filter: (tp_exec.touchpoint_execution_id =\n>valid_executions.touchpoint_execution_id)\n> Rows Removed by Join Filter: 3577676116\n> CTE valid_executions\n> -> Hash Join (cost=13753.53..31711.17 rows=1 width=8) (actual\n>time=232.571..357.749 rows=52997 loops=1)\n> Hash Cond:\n>((s_f_touchpoint_execution_status_history_1.touchpoint_execution_id =\n>s_f_touchpoint_execution_status_history.touchpoint_execution_id) AND ((max(s\n>_f_touchpoint_execution_status_history_1.creation_dt)) =\n>s_f_touchpoint_execution_status_history.creation_dt))\n> -> HashAggregate (cost=6221.56..6905.66 rows=68410 width=16)\n>(actual time=139.713..171.340 rows=76454 loops=1)\n> -> Seq Scan on s_f_touchpoint_execution_status_history\n>s_f_touchpoint_execution_status_history_1 (cost=0.00..4766.04 rows=291104\n>width=16) (actual ti\n>me=0.006..38.582 rows=291104 loops=1)\n> -> Hash (cost=5493.80..5493.80 rows=135878 width=16) (actual\n>time=92.737..92.737 rows=136280 loops=1)\n> Buckets: 16384 Batches: 1 Memory Usage: 6389kB\n> -> Seq Scan on s_f_touchpoint_execution_status_history\n>(cost=0.00..5493.80 rows=135878 width=16) (actual time=0.012..55.078\n>rows=136280 loops=1)\n> Filter: (touchpoint_execution_status_type_id = ANY\n>('{3,4}'::integer[]))\n> Rows Removed by Filter: 154824\n> -> Nested Loop Left Join (cost=1955.80..6260.19 rows=1 width=894)\n>(actual time=31.608..3147.015 rows=67508 loops=1)\n> -> Nested Loop (cost=1955.67..6260.04 rows=1 width=776) 
(actual\n>time=31.602..2912.625 rows=67508 loops=1)\n> -> Nested Loop Left Join (cost=1955.54..6259.87 rows=1\n>width=658) (actual time=31.595..2713.696 rows=72427 loops=1)\n> -> Nested Loop Left Join (cost=1955.40..6259.71\n>rows=1 width=340) (actual time=31.589..2532.926 rows=72427 loops=1)\n> -> Nested Loop Left Join (cost=1955.27..6259.55\n>rows=1 width=222) (actual time=31.581..2354.662 rows=72427 loops=1)\n> -> Nested Loop (cost=1954.99..6259.24\n>rows=1 width=197) (actual time=31.572..2090.104 rows=72427 loops=1)\n> -> Nested Loop\n>(cost=1954.71..6258.92 rows=1 width=173) (actual time=31.562..1802.857\n>rows=72427 loops=1)\n> Join Filter:\n>(camp_exec.campaign_id = wave.campaign_id)\n> Rows Removed by Join Filter:\n>243\n> -> Nested Loop\n>(cost=1954.42..6254.67 rows=13 width=167) (actual time=31.551..1468.718\n>rows=72670 loops=1)\n> -> Hash Join\n>(cost=1954.13..6249.67 rows=13 width=108) (actual time=31.525..402.039\n>rows=72670 loops=1)\n> Hash Cond:\n>((tp_exec.touchpoint_id = tp.touchpoint_id) AND (wave_exec.wave_id =\n>tp.wave_id))\n> -> Hash Join\n>(cost=1576.83..4595.51 rows=72956 width=90) (actual time=26.254..256.328\n>rows=72956 loops=1)\n> Hash Cond:\n>(tp_exec.wave_execution_id = wave_exec.wave_execution_id)\n> -> Seq Scan\n>on s_d_touchpoint_execution tp_exec (cost=0.00..1559.56 rows=72956\n>width=42) (actual time=0.005..76.099 rows=72956 loops=1)\n> -> Hash\n>(cost=1001.37..1001.37 rows=46037 width=56) (actual time=26.178..26.178\n>rows=46037 loops=1)\n> Buckets:\n>8192 Batches: 1 Memory Usage: 4104kB\n> -> Seq\n>Scan on s_d_wave_execution wave_exec (cost=0.00..1001.37 rows=46037\n>width=56) (actual time=0.006..10.388 rows=46037 loops=1)\n> -> Hash\n>(cost=212.72..212.72 rows=10972 width=26) (actual time=5.252..5.252\n>rows=10972 loops=1)\n> Buckets: 2048\n>Batches: 1 Memory Usage: 645kB\n> -> Seq Scan\n>on s_d_touchpoint tp (cost=0.00..212.72 rows=10972 width=26) (actual\n>time=0.012..2.319 rows=10972 loops=1)\n> -> Index Scan 
using\n>s_d_campaign_execution_idx on s_d_campaign_execution camp_exec\n>(cost=0.29..0.37 rows=1 width=67) (actual time=0.013..0.013 rows=1\n>loops=72670)\n> Index Cond:\n>(campaign_execution_id = wave_exec.campaign_execution_id)\n> -> Index Scan using\n>s_d_wave_pkey on s_d_wave wave (cost=0.29..0.31 rows=1 width=22) (actual\n>time=0.003..0.003 rows=1 loops=72670)\n> Index Cond: (wave_id =\n>wave_exec.wave_id)\n> -> Index Scan using\n>s_d_campaign_pkey on s_d_campaign camp (cost=0.29..0.32 rows=1 width=40)\n>(actual time=0.003..0.003 rows=1 loops=72427)\n> Index Cond: (campaign_id =\n>camp_exec.campaign_id)\n> -> Index Scan using s_d_content_pkey on\n>s_d_content content (cost=0.28..0.30 rows=1 width=33) (actual\n>time=0.002..0.003 rows=1 loops=72427)\n> Index Cond: (tp_exec.content_id =\n>content_id)\n> -> Index Scan using s_d_message_type_pkey on\n>s_d_message_type message_type (cost=0.13..0.15 rows=1 width=120) (actual\n>time=0.001..0.002 rows=1 loops=72427)\n> Index Cond: (tp_exec.message_type_id =\n>message_type_id)\n> -> Index Scan using s_d_group_pkey on s_d_group grup\n>(cost=0.13..0.15 rows=1 width=320) (actual time=0.001..0.002 rows=1\n>loops=72427)\n> Index Cond: (camp_exec.group_id = group_id)\n> -> Index Scan using d_channel_pk on s_d_channel_type channel\n>(cost=0.13..0.15 rows=1 width=120) (actual time=0.001..0.002 rows=1\n>loops=72427)\n> Index Cond: (channel_type_id = tp.channel_type_id)\n> -> Index Scan using s_d_category_pkey on s_d_category \"CATEGORY\"\n>(cost=0.13..0.15 rows=1 width=120) (actual time=0.001..0.002 rows=1\n>loops=67508)\n> Index Cond: (camp.category_id = category_id)\n> -> CTE Scan on valid_executions (cost=0.00..0.02 rows=1 width=8)\n>(actual time=0.004..6.803 rows=52997 loops=67508)\n> Total runtime: 966566.574 ms\n>\n>========================================================\n>\n>Can you please see it an let me know where is the issue?\n>\n>\n>-----Original Message-----\n>From: Gavin Flower [mailto:[email protected]]\n>Sent: 
Sunday, March 15, 2015 3:02 AM\n>To: Varadharajan Mukundan\n>Cc: Tomas Vondra; [email protected]; Scott Marlowe;\n>[email protected]\n>Subject: Re: [PERFORM] Performance issues\n>\n>On 15/03/15 10:23, Varadharajan Mukundan wrote:\n>> Hi Gavin,\n>>\n>> Vivekanand is his first mail itself mentioned the below configuration\n>> of postgresql.conf. It looks good enough to me.\n>>\n>> Total Memory : 8 GB\n>>\n>> shared_buffers = 2GB\n>>\n>> work_mem = 64MB\n>>\n>> maintenance_work_mem = 700MB\n>>\n>> effective_cache_size = 4GB\n>\n>\n>Sorry, it didn't register when I read it!\n>(Probably reading too fast)\n>>\n>> On Sat, Mar 14, 2015 at 10:06 PM, Gavin Flower\n>> <[email protected]> wrote:\n>>> On 14/03/15 13:12, Tomas Vondra wrote:\n>>>> On 14.3.2015 00:28, Vivekanand Joshi wrote:\n>>>>> Hi Guys,\n>>>>>\n>>>>> So here is the full information attached as well as in the link\n>>>>> provided below:\n>>>>>\n>>>>> http://pgsql.privatepaste.com/41207bea45\n>>>>>\n>>>>> I can provide new information as well.\n>>>> Thanks.\n>>>>\n>>>> We still don't have EXPLAIN ANALYZE - how long was the query running\n>>>> (I assume it got killed at some point)? It's really difficult to\n>>>> give you any advices because we don't know where the problem is.\n>>>>\n>>>> If EXPLAIN ANALYZE really takes too long (say, it does not complete\n>>>> after an hour / over night), you'll have to break the query into\n>>>> parts and first tweak those independently.\n>>>>\n>>>> For example in the first message you mentioned that select from the\n>>>> S_V_D_CAMPAIGN_HIERARCHY view takes ~9 minutes, so start with that.\n>>>> Give us EXPLAIN ANALYZE for that query.\n>>>>\n>>>> Few more comments:\n>>>>\n>>>> (1) You're using CTEs - be aware that CTEs are not just aliases, but\n>>>> impact planning / optimization, and in some cases may prevent\n>>>> proper optimization. 
Try replacing them with plain views.\n>>>>\n>>>> (2) Varadharajan Mukundan already recommended you to create index on\n>>>> s_f_promotion_history.send_dt. Have you tried that? You may also\n>>>> try creating an index on all the columns needed by the query, so\n>>>> that \"Index Only Scan\" is possible.\n>>>>\n>>>> (3) There are probably additional indexes that might be useful here.\n>>>> What I'd try is adding indexes on all columns that are either a\n>>>> foreign key or used in a WHERE condition. This might be an\n>>>> overkill in some cases, but let's see.\n>>>>\n>>>> (4) I suspect many of the relations referenced in the views are not\n>>>> actually needed in the query, i.e. the join is performed but\n>>>> then it's just discarded because those columns are not used.\n>>>> Try to simplify the views as much has possible - remove all the\n>>>> tables that are not really necessary to run the query. If two\n>>>> queries need different tables, maybe defining two views is\n>>>> a better approach.\n>>>>\n>>>> (5) The vmstat / iostat data are pretty useless - what you provided are\n>>>> averages since the machine was started, but we need a few samples\n>>>> collected when the query is running. I.e. start the query, and\n>>>> then\n>>>> give us a few samples from these commands:\n>>>>\n>>>> iostat -x -k 1\n>>>> vmstat 1\n>>>>\n>>>>> Would like to see if queries of these type can actually run in\n>>>>> postgres server?\n>>>> Why not? We're running DWH applications on tens/hundreds of GBs.\n>>>>\n>>>>> If yes, what would be the minimum requirements for hardware? 
We\n>>>>> would like to migrate our whole solution on PostgreSQL as we can\n>>>>> spend on hardware as much as we can but working on a proprietary\n>>>>> appliance is becoming very difficult for us.\n>>>> That's difficult to say, because we really don't know where the\n>>>> problem is and how much the queries can be optimized.\n>>>>\n>>>>\n>>> I notice that no one appears to have suggested the default setting in\n>>> postgresql.conf - these need changing as they are initially set up\n>>> for small machines, and to let PostgreSQL take anywhere near full\n>>> advantage of a box have large amounts of RAM, you need to change some\n>>> of the configuration settings!\n>>>\n>>> For example 'temp_buffers' (default 8MB) and 'maintenance_work_mem'\n>>> (default\n>>> 16MB) should be drastically increased, and there are other settings\n>>> that need changing. The precise values depend on many factors, but\n>>> the initial values set by default are definitely far too small for your\n>>> usage.\n>>>\n>>> Am assuming that you are looking at PostgreSQL 9.4.\n>>>\n>>>\n>>>\n>>> Cheers,\n>>> Gavin\n>>>\n>>>\n>>\n>>\n>>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 17:49:59 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 16.3.2015 18:49, Marc Mamin wrote:\n> \n>> Hi Team,\n>>\n>> This is the EXPLAIN ANALYZE for one of the view : S_V_D_CAMPAIGN_HIERARCHY:\n\nFWIW, this is a somewhat more readable version of the plan:\n\n  http://explain.depesz.com/s/nbB\n\nIn the future, please do a few things:\n\n(1) Attach the plan as a text file, because the mail clients tend to\n    screw things up (wrapping long lines). Unless the plan is trivial,\n    of course - but pgsql-performance usually deals with complex stuff.\n\n(2) Putting the plan on explain.depesz.com helps too, because it's\n    considerably more readable (but always do 1, because resources\n    placed somewhere else tend to disappear, and the posts then make\n    very little sense, which is bad when searching in the archives)\n\n(3) Same for stuff pasted somewhere else - always attach it to the\n    message. For example I'd like to give you more accurate advice, but\n    I can't as http://pgsql.privatepaste.com/41207bea45 is unavailable.\n\n> \n> \n> \t>Rows Removed by Join Filter: 3577676116\n> \n> \tThat's quite a lot.\n> \tYou're possibly missing a clause in a join, resulting in a cross join.\n> \tIt is also helpful to put your result here:\n> \thttp://explain.depesz.com/\n> \tregards,\n\nIMHO this is merely a consequence of using the CTE, which produces 52997\nrows and is scanned 67508x as the inner relation of a nested loop. That\ngives you 3577721476 tuples in total, and only 45360 are kept (hence\n3577676116 are removed).\n\nThis is a prime example of why CTEs are not just aliases for subqueries,\nbut may actually cause serious trouble.\n\nThere are other issues (e.g. 
the row count estimate of the CTE is\nseriously off, most likely because of the HashAggregate in the outer\nbranch), but that's a secondary issue IMHO.\n\nVivekanand, try this (in the order of intrusiveness):\n\n(1) Get rid of the CTE, and just replace it with subselect in the FROM\n part of the query, so instead of this:\n\n WITH valid_executions AS (...)\n SELECT ... FROM ... JOIN valid_executions ON (...)\n\n you'll have something like this:\n\n SELECT ... FROM ... JOIN (...) AS valid_executions ON (...)\n\n This way the subselect will optimized properly.\n\n\n(2) Replace the CTE with a materialized view, or a temporary table.\n This has both advantages and disadvantages - the main advantage is\n that you can create indexes, collect statistics. Disadvantage is\n you have to refresh the MV, fill temporary table etc.\n\nI expect (1) to improve the performance significantly, and (2) might\nimprove it even further by fixing the misestimates.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 19:24:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
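Tomas's suggestion (1) above can be sketched as a concrete before/after pair. This is an illustrative sketch only: `valid_executions` is the name from the thread, but the `executions`/`campaigns` tables and their columns are made up for the example.

```sql
-- Original shape: in PostgreSQL of this era (and up to v11) the CTE is
-- materialized and acts as an optimization fence, so predicates and join
-- conditions are not pushed into it.
WITH valid_executions AS (
    SELECT touchpoint_execution_id        -- hypothetical table/columns
    FROM executions
    WHERE status = 'VALID'
)
SELECT c.*
FROM campaigns c
JOIN valid_executions v
  ON v.touchpoint_execution_id = c.touchpoint_execution_id;

-- Rewritten shape: as a plain subquery in FROM, the planner can inline it,
-- push conditions down, and pick a better join order.
SELECT c.*
FROM campaigns c
JOIN (
    SELECT touchpoint_execution_id
    FROM executions
    WHERE status = 'VALID'
) AS valid_executions
  ON valid_executions.touchpoint_execution_id = c.touchpoint_execution_id;
```

The two statements return the same rows; only the planner's freedom to rearrange the subquery differs.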
{
"msg_contents": "Hey guys, thanks a lot.\n\nThis is really helping. I am learning a lot. BTW, I changed CTE into\nsubquery and it improved the performance by miles. I am getting the result\nin less than 3 seconds, though I am using a 24 GB ram server. It is still a\ngreat turnaround time as compared to previous execution time.\n\nNow I will look into the bigger query. I read explain analyze and that\nhelped a lot. I will be coming up with more questions tomorrow as bigger\nquery still has got some problems.\nOn 16 Mar 2015 23:55, \"Tomas Vondra\" <[email protected]> wrote:\n\n> On 16.3.2015 18:49, Marc Mamin wrote:\n> >\n> >> Hi Team,\n> >>\n> >> This is the EXPLAIN ANALYZE for one of the view :\n> S_V_D_CAMPAIGN_HIERARCHY:\n>\n> FWIW, this is a somewhat more readable version of the plan:\n>\n> http://explain.depesz.com/s/nbB\n>\n> In the future, please do two things:\n>\n> (1) Attach the plan as a text file, because the mail clients tend to\n> screw things up (wrapping long lines). Unless the plan is trivial,\n> of course - but pgsql-performance usually deals with complex stuff.\n>\n> (2) Put the plan on explain.depesz.com helps too, because it's\n> considerably more readable (but always do 1, because resorces\n> placed somewhere else tends to disappear, and the posts then make\n> very little sense, which is bad when searching in the archives)\n>\n> (3) Same for stuff pasted somewhere else - always attach it to the\n> message. For example I'd like to give you more accurate advice, but\n> I can't as http://pgsql.privatepaste.com/41207bea45 is unavailable.\n>\n> >\n> >\n> > >Rows Removed by Join Filter: 3577676116\n> >\n> > That's quite a lot.\n> > You're possibly missing a clause in a join, resulting in a cross\n> join.\n> > It is also helpful to put your result here:\n> > http://explain.depesz.com/\n> > regards,\n>\n> IMHO this is merely a consequence of using the CTE, which produces 52997\n> rows and is scanned 67508x as the inner relation of a nested loop. 
That\n> gives you 3577721476 tuples in total, and only 45360 are kept (hence\n> 3577676116 are removed).\n>\n> This is a prime example of why CTEs are not just aliases for subqueries,\n> but may actually cause serious trouble.\n>\n> There are other issues (e.g. the row count estimate of the CTE is\n> seriously off, most likely because of the HashAggregate in the outer\n> branch), but that's a secondary issue IMHO.\n>\n> Vivekanand, try this (in the order of intrusiveness):\n>\n> (1) Get rid of the CTE, and just replace it with subselect in the FROM\n> part of the query, so instead of this:\n>\n> WITH valid_executions AS (...)\n> SELECT ... FROM ... JOIN valid_executions ON (...)\n>\n> you'll have something like this:\n>\n> SELECT ... FROM ... JOIN (...) AS valid_executions ON (...)\n>\n> This way the subselect will optimized properly.\n>\n>\n> (2) Replace the CTE with a materialized view, or a temporary table.\n> This has both advantages and disadvantages - the main advantage is\n> that you can create indexes, collect statistics. Disadvantage is\n> you have to refresh the MV, fill temporary table etc.\n>\n> I expect (1) to improve the performance significantly, and (2) might\n> improve it even further by fixing the misestimates.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHey guys, thanks a lot. \nThis is really helping. I am learning a lot. BTW, I changed CTE into subquery and it improved the performance by miles. I am getting the result in less than 3 seconds, though I am using a 24 GB ram server. It is still a great turnaround time as compared to previous execution time.\nNow I will look into the bigger query. I read explain analyze and that helped a lot. 
I will be coming up with more questions tomorrow as bigger query still has got some problems.\nOn 16 Mar 2015 23:55, \"Tomas Vondra\" <[email protected]> wrote:On 16.3.2015 18:49, Marc Mamin wrote:\n>\n>> Hi Team,\n>>\n>> This is the EXPLAIN ANALYZE for one of the view : S_V_D_CAMPAIGN_HIERARCHY:\n\nFWIW, this is a somewhat more readable version of the plan:\n\n http://explain.depesz.com/s/nbB\n\nIn the future, please do two things:\n\n(1) Attach the plan as a text file, because the mail clients tend to\n screw things up (wrapping long lines). Unless the plan is trivial,\n of course - but pgsql-performance usually deals with complex stuff.\n\n(2) Put the plan on explain.depesz.com helps too, because it's\n considerably more readable (but always do 1, because resorces\n placed somewhere else tends to disappear, and the posts then make\n very little sense, which is bad when searching in the archives)\n\n(3) Same for stuff pasted somewhere else - always attach it to the\n message. For example I'd like to give you more accurate advice, but\n I can't as http://pgsql.privatepaste.com/41207bea45 is unavailable.\n\n>\n>\n> >Rows Removed by Join Filter: 3577676116\n>\n> That's quite a lot.\n> You're possibly missing a clause in a join, resulting in a cross join.\n> It is also helpful to put your result here:\n> http://explain.depesz.com/\n> regards,\n\nIMHO this is merely a consequence of using the CTE, which produces 52997\nrows and is scanned 67508x as the inner relation of a nested loop. That\ngives you 3577721476 tuples in total, and only 45360 are kept (hence\n3577676116 are removed).\n\nThis is a prime example of why CTEs are not just aliases for subqueries,\nbut may actually cause serious trouble.\n\nThere are other issues (e.g. 
the row count estimate of the CTE is\nseriously off, most likely because of the HashAggregate in the outer\nbranch), but that's a secondary issue IMHO.\n\nVivekanand, try this (in the order of intrusiveness):\n\n(1) Get rid of the CTE, and just replace it with subselect in the FROM\n part of the query, so instead of this:\n\n WITH valid_executions AS (...)\n SELECT ... FROM ... JOIN valid_executions ON (...)\n\n you'll have something like this:\n\n SELECT ... FROM ... JOIN (...) AS valid_executions ON (...)\n\n This way the subselect will optimized properly.\n\n\n(2) Replace the CTE with a materialized view, or a temporary table.\n This has both advantages and disadvantages - the main advantage is\n that you can create indexes, collect statistics. Disadvantage is\n you have to refresh the MV, fill temporary table etc.\n\nI expect (1) to improve the performance significantly, and (2) might\nimprove it even further by fixing the misestimates.\n\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 17 Mar 2015 00:02:19 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 3/13/15 7:12 PM, Tomas Vondra wrote:\n> (4) I suspect many of the relations referenced in the views are not\n> actually needed in the query, i.e. the join is performed but\n> then it's just discarded because those columns are not used.\n> Try to simplify the views as much has possible - remove all the\n> tables that are not really necessary to run the query. If two\n> queries need different tables, maybe defining two views is\n> a better approach.\n\nA better alternative with multi-purpose views is to use an outer join \ninstead of an inner join. With an outer join if you ultimately don't \nrefer to any of the columns in a particular table Postgres will remove \nthe table from the query completely.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:43:46 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 16.3.2015 20:43, Jim Nasby wrote:\n> On 3/13/15 7:12 PM, Tomas Vondra wrote:\n>> (4) I suspect many of the relations referenced in the views are not\n>> actually needed in the query, i.e. the join is performed but\n>> then it's just discarded because those columns are not used.\n>> Try to simplify the views as much has possible - remove all the\n>> tables that are not really necessary to run the query. If two\n>> queries need different tables, maybe defining two views is\n>> a better approach.\n> \n> A better alternative with multi-purpose views is to use an outer\n> join instead of an inner join. With an outer join if you ultimately\n> don't refer to any of the columns in a particular table Postgres will\n> remove the table from the query completely.\n\nReally? Because a quick test suggests otherwise:\n\ndb=# create table test_a (id int);\nCREATE TABLE\ndb=# create table test_b (id int);\nCREATE TABLE\ndb=# explain select test_a.* from test_a left join test_b using (id);\n QUERY PLAN\n----------------------------------------------------------------------\n Merge Left Join (cost=359.57..860.00 rows=32512 width=4)\n Merge Cond: (test_a.id = test_b.id)\n -> Sort (cost=179.78..186.16 rows=2550 width=4)\n Sort Key: test_a.id\n -> Seq Scan on test_a (cost=0.00..35.50 rows=2550 width=4)\n -> Sort (cost=179.78..186.16 rows=2550 width=4)\n Sort Key: test_b.id\n -> Seq Scan on test_b (cost=0.00..35.50 rows=2550 width=4)\n(8 rows)\n\nAlso, how would that work with duplicate rows in the referenced table?\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 21:59:16 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 3/16/15 3:59 PM, Tomas Vondra wrote:\n> On 16.3.2015 20:43, Jim Nasby wrote:\n>> On 3/13/15 7:12 PM, Tomas Vondra wrote:\n>>> (4) I suspect many of the relations referenced in the views are not\n>>> actually needed in the query, i.e. the join is performed but\n>>> then it's just discarded because those columns are not used.\n>>> Try to simplify the views as much has possible - remove all the\n>>> tables that are not really necessary to run the query. If two\n>>> queries need different tables, maybe defining two views is\n>>> a better approach.\n>>\n>> A better alternative with multi-purpose views is to use an outer\n>> join instead of an inner join. With an outer join if you ultimately\n>> don't refer to any of the columns in a particular table Postgres will\n>> remove the table from the query completely.\n>\n> Really? Because a quick test suggests otherwise:\n>\n> db=# create table test_a (id int);\n> CREATE TABLE\n> db=# create table test_b (id int);\n> CREATE TABLE\n> db=# explain select test_a.* from test_a left join test_b using (id);\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Merge Left Join (cost=359.57..860.00 rows=32512 width=4)\n> Merge Cond: (test_a.id = test_b.id)\n> -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> Sort Key: test_a.id\n> -> Seq Scan on test_a (cost=0.00..35.50 rows=2550 width=4)\n> -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> Sort Key: test_b.id\n> -> Seq Scan on test_b (cost=0.00..35.50 rows=2550 width=4)\n> (8 rows)\n>\n> Also, how would that work with duplicate rows in the referenced table?\n\nRight, I neglected to mention that the omitted table must also be unique \non the join key:\n\[email protected]=# create table a(a_id serial primary key);\nCREATE TABLE\[email protected]=# create table b(a_id int);\nCREATE TABLE\[email protected]=# explain analyze select a.* from a left join b \nusing(a_id);\n QUERY PLAN 
\n\n-----------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=67.38..137.94 rows=2550 width=4) (actual \ntime=0.035..0.035 rows=0 loops=1)\n Hash Cond: (b.a_id = a.a_id)\n -> Seq Scan on b (cost=0.00..35.50 rows=2550 width=4) (never executed)\n -> Hash (cost=35.50..35.50 rows=2550 width=4) (actual \ntime=0.002..0.002 rows=0 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 32kB\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) \n(actual time=0.001..0.001 rows=0 loops=1)\n Planning time: 0.380 ms\n Execution time: 0.086 ms\n(8 rows)\n\[email protected]=# alter table b add primary key(a_id);\nALTER TABLE\[email protected]=# explain analyze select a.* from a left join b \nusing(a_id);\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------\n Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) (actual \ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.247 ms\n Execution time: 0.029 ms\n(3 rows)\n\[email protected]=# alter table a drop constraint a_pkey;\nALTER TABLE\[email protected]=# explain analyze select a.* from a left join b \nusing(a_id);\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------\n Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) (actual \ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.098 ms\n Execution time: 0.011 ms\n(3 rows)\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 19:06:19 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Hi Guys,\n\nNext level of query is following:\n\nIf this works, I guess 90% of the problem will be solved.\n\nSELECT\n COUNT(DISTINCT TARGET_ID)\n FROM\n S_V_F_PROMOTION_HISTORY_EMAIL PH\n INNER JOIN S_V_D_CAMPAIGN_HIERARCHY CH\n ON PH.TOUCHPOINT_EXECUTION_ID =\nCH.TOUCHPOINT_EXECUTION_ID\n WHERE\n 1=1\n AND SEND_DT >= '2014-03-13'\n AND SEND_DT <= '2015-03-14'\n\n\nIn this query, I am joining two views which were made earlier with CTEs. I\nhave replaced the CTE's with subqueries. The view were giving me output in\naround 5-10 minutes and now I am getting the same result in around 3-4\nseconds.\n\nBut when I executed the query written above, I am again stuck. I am\nattaching the query plan as well the link.\n\nhttp://explain.depesz.com/s/REeu\n\nI can see most of the time is spending inside a nested loop and total\ncosts comes out be cost=338203.81..338203.82.\n\nHow to take care of this? I need to run this query in a report so I cannot\ncreate a table like select * from views and then join the table. If I do\nthat I am getting the answer of whole big query in some 6-7 seconds. But\nthat is not feasible. A report (Jasper can have only one single (big/small\nquery).\n\nLet me know if you need any other information.\n\nThanks a ton!\nVivek\n\n\n-----Original Message-----\nFrom: Jim Nasby [mailto:[email protected]]\nSent: Tuesday, March 17, 2015 5:36 AM\nTo: Tomas Vondra; [email protected]; Scott Marlowe; Varadharajan\nMukundan\nCc: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 3/16/15 3:59 PM, Tomas Vondra wrote:\n> On 16.3.2015 20:43, Jim Nasby wrote:\n>> On 3/13/15 7:12 PM, Tomas Vondra wrote:\n>>> (4) I suspect many of the relations referenced in the views are not\n>>> actually needed in the query, i.e. the join is performed but\n>>> then it's just discarded because those columns are not used.\n>>> Try to simplify the views as much has possible - remove all the\n>>> tables that are not really necessary to run the query. 
If two\n>>> queries need different tables, maybe defining two views is\n>>> a better approach.\n>>\n>> A better alternative with multi-purpose views is to use an outer join\n>> instead of an inner join. With an outer join if you ultimately don't\n>> refer to any of the columns in a particular table Postgres will\n>> remove the table from the query completely.\n>\n> Really? Because a quick test suggests otherwise:\n>\n> db=# create table test_a (id int);\n> CREATE TABLE\n> db=# create table test_b (id int);\n> CREATE TABLE\n> db=# explain select test_a.* from test_a left join test_b using (id);\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Merge Left Join (cost=359.57..860.00 rows=32512 width=4)\n> Merge Cond: (test_a.id = test_b.id)\n> -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> Sort Key: test_a.id\n> -> Seq Scan on test_a (cost=0.00..35.50 rows=2550 width=4)\n> -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> Sort Key: test_b.id\n> -> Seq Scan on test_b (cost=0.00..35.50 rows=2550 width=4)\n> (8 rows)\n>\n> Also, how would that work with duplicate rows in the referenced table?\n\nRight, I neglected to mention that the omitted table must also be unique\non the join key:\n\[email protected]=# create table a(a_id serial primary key); CREATE\nTABLE [email protected]=# create table b(a_id int); CREATE TABLE\[email protected]=# explain analyze select a.* from a left join b\nusing(a_id);\n QUERY PLAN\n\n--------------------------------------------------------------------------\n---------------------------------\n Hash Right Join (cost=67.38..137.94 rows=2550 width=4) (actual\ntime=0.035..0.035 rows=0 loops=1)\n Hash Cond: (b.a_id = a.a_id)\n -> Seq Scan on b (cost=0.00..35.50 rows=2550 width=4) (never\nexecuted)\n -> Hash (cost=35.50..35.50 rows=2550 width=4) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 32kB\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) 
(actual\ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.380 ms\n Execution time: 0.086 ms\n(8 rows)\n\[email protected]=# alter table b add primary key(a_id); ALTER TABLE\[email protected]=# explain analyze select a.* from a left join b\nusing(a_id);\n QUERY PLAN\n\n--------------------------------------------------------------------------\n---------------------\n Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.247 ms\n Execution time: 0.029 ms\n(3 rows)\n\[email protected]=# alter table a drop constraint a_pkey; ALTER\nTABLE [email protected]=# explain analyze select a.* from a left\njoin b using(a_id);\n QUERY PLAN\n\n--------------------------------------------------------------------------\n---------------------\n Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.098 ms\n Execution time: 0.011 ms\n(3 rows)\n--\nJim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it\nin Treble! http://BlueTreble.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 17 Mar 2015 13:11:27 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "EXPLAIN ANALYZE didn't give result even after three hours.\n\n\n-----Original Message-----\nFrom: Vivekanand Joshi [mailto:[email protected]]\nSent: Tuesday, March 17, 2015 1:11 PM\nTo: 'Jim Nasby'; 'Tomas Vondra'; 'Scott Marlowe'; 'Varadharajan Mukundan'\nCc: '[email protected]'\nSubject: RE: [PERFORM] Performance issues\n\nHi Guys,\n\nNext level of query is following:\n\nIf this works, I guess 90% of the problem will be solved.\n\nSELECT\n COUNT(DISTINCT TARGET_ID)\n FROM\n S_V_F_PROMOTION_HISTORY_EMAIL PH\n INNER JOIN S_V_D_CAMPAIGN_HIERARCHY CH\n ON PH.TOUCHPOINT_EXECUTION_ID =\nCH.TOUCHPOINT_EXECUTION_ID\n WHERE\n 1=1\n AND SEND_DT >= '2014-03-13'\n AND SEND_DT <= '2015-03-14'\n\n\nIn this query, I am joining two views which were made earlier with CTEs. I\nhave replaced the CTE's with subqueries. The view were giving me output in\naround 5-10 minutes and now I am getting the same result in around 3-4\nseconds.\n\nBut when I executed the query written above, I am again stuck. I am\nattaching the query plan as well the link.\n\nhttp://explain.depesz.com/s/REeu\n\nI can see most of the time is spending inside a nested loop and total\ncosts comes out be cost=338203.81..338203.82.\n\nHow to take care of this? I need to run this query in a report so I cannot\ncreate a table like select * from views and then join the table. If I do\nthat I am getting the answer of whole big query in some 6-7 seconds. But\nthat is not feasible. 
A report (Jasper can have only one single (big/small\nquery).\n\nLet me know if you need any other information.\n\nThanks a ton!\nVivek\n\n\n-----Original Message-----\nFrom: Jim Nasby [mailto:[email protected]]\nSent: Tuesday, March 17, 2015 5:36 AM\nTo: Tomas Vondra; [email protected]; Scott Marlowe; Varadharajan\nMukundan\nCc: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 3/16/15 3:59 PM, Tomas Vondra wrote:\n> On 16.3.2015 20:43, Jim Nasby wrote:\n>> On 3/13/15 7:12 PM, Tomas Vondra wrote:\n>>> (4) I suspect many of the relations referenced in the views are not\n>>> actually needed in the query, i.e. the join is performed but\n>>> then it's just discarded because those columns are not used.\n>>> Try to simplify the views as much has possible - remove all the\n>>> tables that are not really necessary to run the query. If two\n>>> queries need different tables, maybe defining two views is\n>>> a better approach.\n>>\n>> A better alternative with multi-purpose views is to use an outer join\n>> instead of an inner join. With an outer join if you ultimately don't\n>> refer to any of the columns in a particular table Postgres will\n>> remove the table from the query completely.\n>\n> Really? 
Because a quick test suggests otherwise:\n>\n> db=# create table test_a (id int);\n> CREATE TABLE\n> db=# create table test_b (id int);\n> CREATE TABLE\n> db=# explain select test_a.* from test_a left join test_b using (id);\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Merge Left Join (cost=359.57..860.00 rows=32512 width=4)\n> Merge Cond: (test_a.id = test_b.id)\n> -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> Sort Key: test_a.id\n> -> Seq Scan on test_a (cost=0.00..35.50 rows=2550 width=4)\n> -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> Sort Key: test_b.id\n> -> Seq Scan on test_b (cost=0.00..35.50 rows=2550 width=4)\n> (8 rows)\n>\n> Also, how would that work with duplicate rows in the referenced table?\n\nRight, I neglected to mention that the omitted table must also be unique\non the join key:\n\[email protected]=# create table a(a_id serial primary key); CREATE\nTABLE [email protected]=# create table b(a_id int); CREATE TABLE\[email protected]=# explain analyze select a.* from a left join b\nusing(a_id);\n QUERY PLAN\n\n--------------------------------------------------------------------------\n---------------------------------\n Hash Right Join (cost=67.38..137.94 rows=2550 width=4) (actual\ntime=0.035..0.035 rows=0 loops=1)\n Hash Cond: (b.a_id = a.a_id)\n -> Seq Scan on b (cost=0.00..35.50 rows=2550 width=4) (never\nexecuted)\n -> Hash (cost=35.50..35.50 rows=2550 width=4) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 32kB\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.380 ms\n Execution time: 0.086 ms\n(8 rows)\n\[email protected]=# alter table b add primary key(a_id); ALTER TABLE\[email protected]=# explain analyze select a.* from a left join b\nusing(a_id);\n QUERY PLAN\n\n--------------------------------------------------------------------------\n---------------------\n Seq Scan on a 
(cost=0.00..35.50 rows=2550 width=4) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.247 ms\n Execution time: 0.029 ms\n(3 rows)\n\[email protected]=# alter table a drop constraint a_pkey; ALTER\nTABLE [email protected]=# explain analyze select a.* from a left\njoin b using(a_id);\n QUERY PLAN\n\n--------------------------------------------------------------------------\n---------------------\n Seq Scan on a (cost=0.00..35.50 rows=2550 width=4) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Planning time: 0.098 ms\n Execution time: 0.011 ms\n(3 rows)\n--\nJim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it\nin Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 16:37:41 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Hi,\n\nOn 17.3.2015 08:41, Vivekanand Joshi wrote:\n> Hi Guys,\n> \n> Next level of query is following:\n> \n> If this works, I guess 90% of the problem will be solved.\n> \n> SELECT\n> COUNT(DISTINCT TARGET_ID)\n> FROM\n> S_V_F_PROMOTION_HISTORY_EMAIL PH\n> INNER JOIN S_V_D_CAMPAIGN_HIERARCHY CH\n> ON PH.TOUCHPOINT_EXECUTION_ID =\n> CH.TOUCHPOINT_EXECUTION_ID\n> WHERE\n> 1=1\n> AND SEND_DT >= '2014-03-13'\n> AND SEND_DT <= '2015-03-14'\n> \n> \n> In this query, I am joining two views which were made earlier with CTEs. I\n> have replaced the CTE's with subqueries. The view were giving me output in\n> around 5-10 minutes and now I am getting the same result in around 3-4\n> seconds.\n> \n> But when I executed the query written above, I am again stuck. I am\n> attaching the query plan as well the link.\n> \n> http://explain.depesz.com/s/REeu\n> \n> I can see most of the time is spending inside a nested loop and total\n> costs comes out be cost=338203.81..338203.82.\n\nMost of that cost comes from this:\n\nSeq Scan on s_f_promotion_history base (cost=0.00..283,333.66 rows=1\nwidth=32)\n Filter: ((send_dt >= '2014-03-13 00:00:00'::timestamp without time\nzone) AND (send_dt <= '2015-03-14 00:00:00'::timestamp without time\n\n\nThat's a bit weird, I guess. If you analyze this part of the query\nseparately, i.e.\n\nEXPLAIN ANALYZE SELECT * FROM s_f_promotion_history\n WHERE (send_dt >= '2014-03-13 00:00:00')\n AND (send_dt <= '2015-03-14 00:00:00')\n\nwhat do you get?\n\nI suspect it's used in EXISTS, i.e. something like this:\n\n... WHERE EXISTS (SELECT * FROM s_f_promotion_history\n WHERE ... send_dt conditions ...\n AND touchpoint_execution_id =\n s_f_touchpoint_execution_status_history_1.touchpoint_execution_id)\n\nand this is transformed into a nested loop join. 
If there's a\nmisestimate, this may be quite expensive - try to create index on\n\n s_f_promotion_history (touchpoint_execution_id, send_date)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 12:42:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
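Tomas's suggested index can be sketched as follows, together with a quick way to check that the nested-loop probe picks it up. The index name and the literal id are placeholders; the table and column names (`s_f_promotion_history`, `touchpoint_execution_id`, `send_dt`) are the ones used in the thread's queries.

```sql
-- Index matching the EXISTS probe: equality column first, then the
-- range-filtered send_dt column.
CREATE INDEX idx_promo_hist_tpe_send   -- hypothetical index name
    ON s_f_promotion_history (touchpoint_execution_id, send_dt);

-- Verify the probe shape the planner will run per outer row can now use
-- an index scan instead of a sequential scan:
EXPLAIN
SELECT 1
FROM s_f_promotion_history ph
WHERE ph.touchpoint_execution_id = 12345          -- placeholder id
  AND ph.send_dt >= '2014-03-13'
  AND ph.send_dt <= '2015-03-14';
```

Putting the equality column first lets a single index range scan satisfy both conditions.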
{
"msg_contents": "On 17.3.2015 12:07, Vivekanand Joshi wrote:\n> EXPLAIN ANALYZE didn't give result even after three hours.\n\nIn that case the only thing you can do is 'slice' the query into smaller\nparts (representing subtrees of the plan), and analyze those first. Look\nfor misestimates (significant differences between estimated and actual\nrow counts, and very expensive parts).\n\nWe can't do that, because we don't have your data or queries, and\nwithout the explain analyze it's difficult to give advices.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 12:45:17 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
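The 'slicing' Tomas describes can be sketched against the views named in the thread; each piece gets its own EXPLAIN ANALYZE so the subtree with the worst misestimate can be found. (A sketch: the view and column names come from the thread's queries, and the filters mirror the report's date range.)

```sql
-- 1) Each view on its own, with the report's filter applied:
EXPLAIN ANALYZE
SELECT *
FROM s_v_f_promotion_history_email
WHERE send_dt >= '2014-03-13' AND send_dt <= '2015-03-14';

EXPLAIN ANALYZE
SELECT *
FROM s_v_d_campaign_hierarchy;

-- 2) Then the bare join of the two, without the aggregate, to isolate
--    the join strategy from the COUNT(DISTINCT ...):
EXPLAIN ANALYZE
SELECT ph.target_id
FROM s_v_f_promotion_history_email ph
JOIN s_v_d_campaign_hierarchy ch
  ON ph.touchpoint_execution_id = ch.touchpoint_execution_id
WHERE ph.send_dt >= '2014-03-13' AND ph.send_dt <= '2015-03-14';
```

Comparing estimated vs. actual row counts at each step shows where the planner goes wrong before the full query is attempted.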
{
"msg_contents": "Hi Tomas,\n\nThis is what I am getting,\n\n\nEXPLAIN ANALYZE SELECT * FROM s_f_promotion_history WHERE (send_dt >=\n'2014-03-13 00:00:00');\n\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on s_f_promotion_history (cost=0.00..283333.66 rows=1 width=74)\n(actual time=711.023..1136.393 rows=1338 loops=1)\n Filter: ((send_dt >= '2014-03-13 00:00:00'::timestamp without time zone)\nAND (send_dt <= '2015-03-14 00:00:00'::timestamp without time zone))\n Rows Removed by Filter: 9998662\n Total runtime: 1170.682 ms\n\n\nCREATE INDEX idx_pr_history ON\nS_F_PROMOTION_HISTORY(touchpoint_execution_id, send_dt);\n\n After Creating Index:\n\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_pr_history on s_f_promotion_history\n(cost=0.43..254028.45 rows=1 width=74) (actual time=375.796..604.587\nrows=1338 loops=1)\n Index Cond: ((send_dt >= '2014-03-13 00:00:00'::timestamp without time\nzone) AND (send_dt <= '2015-03-14 00:00:00'::timestamp without time zone))\n Total runtime: 604.733 ms\n\n\nThe query I gave you is the smallest query, it is using two views and both\nthe views I have changed by using subqueries instead of CTEs. 
When I join\nthese two views, it is not getting completed at all.\n\nExplain analyze plan for view s_v_f_promotion_history_email:\nhttp://explain.depesz.com/s/ure\nExplain analyze plan for view s_v_d_campaign_hierarchy :\nhttp://explain.depesz.com/s/WxI\n\n\nRegards,\nVivek\n\n\n\n\n\n\n-----Original Message-----\nFrom: Tomas Vondra [mailto:[email protected]]\nSent: Tuesday, March 17, 2015 5:15 PM\nTo: [email protected]; Jim Nasby; Scott Marlowe; Varadharajan\nMukundan\nCc: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 17.3.2015 12:07, Vivekanand Joshi wrote:\n> EXPLAIN ANALYZE didn't give result even after three hours.\n\nIn that case the only thing you can do is 'slice' the query into smaller\nparts (representing subtrees of the plan), and analyze those first. Look\nfor misestimates (significant differences between estimated and actual\nrow counts, and very expensive parts).\n\nWe can't do that, because we don't have your data or queries, and\nwithout the explain analyze it's difficult to give advices.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 17:35:46 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
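[Editor's note, in the thread's own terms: the index above only helps because the planner can walk the whole (touchpoint_execution_id, send_dt) index, since send_dt is not the leading column. For queries that filter only on send_dt, a smaller index with send_dt leading allows a narrow range scan. A sketch using the table from this thread (untested against the real schema):

```sql
-- Hypothetical alternative index for pure date-range filters on send_dt.
-- With send_dt leading, the planner can range-scan just the matching
-- slice of the index instead of scanning the entire composite index.
CREATE INDEX idx_pr_history_send_dt ON s_f_promotion_history (send_dt);

EXPLAIN ANALYZE
SELECT *
FROM s_f_promotion_history
WHERE send_dt >= '2014-03-13 00:00:00'
  AND send_dt <= '2015-03-14 00:00:00';
```

The original composite index remains useful for queries that filter on touchpoint_execution_id first.]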
{
"msg_contents": "Attaching explain analyze file as well.\n\nVivek\n\n\n-----Original Message-----\nFrom: Vivekanand Joshi [mailto:[email protected]]\nSent: Tuesday, March 17, 2015 5:36 PM\nTo: 'Tomas Vondra'; 'Jim Nasby'; 'Scott Marlowe'; 'Varadharajan Mukundan'\nCc: '[email protected]'\nSubject: RE: [PERFORM] Performance issues\n\nHi Tomas,\n\nThis is what I am getting,\n\n\nEXPLAIN ANALYZE SELECT * FROM s_f_promotion_history WHERE (send_dt >=\n'2014-03-13 00:00:00');\n\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on s_f_promotion_history (cost=0.00..283333.66 rows=1 width=74)\n(actual time=711.023..1136.393 rows=1338 loops=1)\n Filter: ((send_dt >= '2014-03-13 00:00:00'::timestamp without time zone)\nAND (send_dt <= '2015-03-14 00:00:00'::timestamp without time zone))\n Rows Removed by Filter: 9998662\n Total runtime: 1170.682 ms\n\n\nCREATE INDEX idx_pr_history ON\nS_F_PROMOTION_HISTORY(touchpoint_execution_id, send_dt);\n\n After Creating Index:\n\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_pr_history on s_f_promotion_history\n(cost=0.43..254028.45 rows=1 width=74) (actual time=375.796..604.587\nrows=1338 loops=1)\n Index Cond: ((send_dt >= '2014-03-13 00:00:00'::timestamp without time\nzone) AND (send_dt <= '2015-03-14 00:00:00'::timestamp without time zone))\nTotal runtime: 604.733 ms\n\n\nThe query I gave you is the smallest query, it is using two views and both\nthe views I have changed by using subqueries instead of CTEs. 
When I join\nthese two views, it is not getting completed at all.\n\nExplain analyze plan for view s_v_f_promotion_history_email:\nhttp://explain.depesz.com/s/ure Explain analyze plan for view\ns_v_d_campaign_hierarchy : http://explain.depesz.com/s/WxI\n\n\nRegards,\nVivek\n\n\n\n\n\n\n-----Original Message-----\nFrom: Tomas Vondra [mailto:[email protected]]\nSent: Tuesday, March 17, 2015 5:15 PM\nTo: [email protected]; Jim Nasby; Scott Marlowe; Varadharajan\nMukundan\nCc: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 17.3.2015 12:07, Vivekanand Joshi wrote:\n> EXPLAIN ANALYZE didn't give result even after three hours.\n\nIn that case the only thing you can do is 'slice' the query into smaller\nparts (representing subtrees of the plan), and analyze those first. Look for\nmisestimates (significant differences between estimated and actual row\ncounts, and very expensive parts).\n\nWe can't do that, because we don't have your data or queries, and without\nthe explain analyze it's difficult to give advices.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 17 Mar 2015 17:40:01 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Just as I feared, the attached explain analyze results show significant\nmisestimates, like this for example:\n\nNested Loop (cost=32782.19..34504.16 rows=1 width=16)\n (actual time=337.484..884.438 rows=46454 loops=1)\n\nNested Loop (cost=18484.94..20366.29 rows=1 width=776)\n (actual time=2445.487..3741.049 rows=45360 loops=1)\n\nHash Left Join (cost=34679.90..37396.37 rows=11644 width=148)\n (actual time=609.472..9070.675 rows=4559289 loops=1)\n\nThere's plenty of nested loop joins - the optimizer believes there will\nbe only a few rows in the outer relation, but gets order of magnitude\nmore tuples. And nested loops are terrible in that case.\n\nIn case of the first view, it seems to be caused by this:\n\nMerge Cond:\n((s_f_touchpoint_execution_status_history.touchpoint_execution_id =\ns_f_touchpoint_execution_status_history_1.touchpoint_ex\necution_id) AND (s_f_touchpoint_execution_status_history.creation_dt =\n(max(s_f_touchpoint_execution_status_history_1.creation_dt))))\n\nespecially the ':id = max(:id)' condition is probably giving the\noptimizer a hard time. This is a conceptually difficult poblem (i.e.\nfixing this at the optimizer level is unlikely to happen any time soon,\nbecause it effectively means you have to predict the statistical\nproperties of the aggregation).\n\nYou may try increasing the statistical target, which makes the stats a\nbit more detailed (the default on 9.4 is 100):\n\n SET default_statistics_target = 10000;\n ANALYZE;\n\nBut I don't really believe this might really fix the problem.\n\nBut maybe it's possible to rewrite the query somehow?\n\nLet's experiment a bit - remove the aggregation, i.e. join directly to\ns_f_touchpoint_execution_status_history. 
It'll return wrong results, but\nthe estimates should be better, so let's see what happens.\n\nYou may also try disabling nested loops - the other join algorithms\nusually perform better with large row counts.\n\n SET enable_nestloop = false;\n\nThis is not a production-suitable solution, but for experimenting that's OK.\n\nISTM what the aggregation (or the whole mergejoin) does is selecting the\nlast s_f_touchpoint_execution_status_history record for each\ntouchpoint_execution_id.\n\nThere are better ways to determine that, IMHO. For example:\n\n (1) adding a 'is_last' flag to s_f_touchpoint_execution_status_history\n\n This however requires maintaining that flag somehow, but the join\n would not be needed at all.\n\n The \"last IDs\" might be maintained in a separate table - the join\n would be still necessary, but it might be less intrusive and\n cheper to maintain.\n\n (2) using window functions, e.g. like this:\n\n SELECT * FROM (\n SELECT *,\n ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n ORDER BY FROM max_creation_dt) AS rn\n FROM s_f_touchpoint_execution_status_history\n ) foo WHERE rn = 1\n\n But estimating this is also rather difficult ...\n\n (3) Using temporary table / MV - this really depends on your\n requirements, load schedule, how you run the queries etc. 
It would\n however fix the estimation errors (probably).\n\nThe 2nd view seems to suffer because of the same issue (underestimates\nleading to choice of nested loops), but caused by something else:\n\n-> Hash Join (cost=1954.13..6249.67 rows=13 width=108)\n (actual time=31.777..210.346 rows=72670 loops=1)\n Hash Cond: ((tp_exec.touchpoint_id = tp.touchpoint_id)\n AND (wave_exec.wave_id = tp.wave_id))\n\nEstimating cardinality of joins with multi-column conditions is\ndifficult, no idea how to fix that at the moment.\n\n\n\n\n\n\n\n\n\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 14:55:02 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
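[Editor's note: the window-function snippet in option (2) above has a stray FROM in the ORDER BY clause and, as pointed out later in this thread, needs DESC to pick the latest row rather than the oldest. A corrected sketch, with column names taken from the thread (creation_dt is assumed to be the status timestamp; untested against the real schema):

```sql
-- Corrected form of option (2): latest status-history row per
-- touchpoint_execution_id, ranking rows newest-first within each group.
SELECT *
FROM (
    SELECT h.*,
           ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
                              ORDER BY creation_dt DESC) AS rn
    FROM s_f_touchpoint_execution_status_history h
) ranked
WHERE rn = 1;
```
]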
{
"msg_contents": "Tomas Vondra schrieb am 17.03.2015 um 14:55:\n> (2) using window functions, e.g. like this:\n> \n> SELECT * FROM (\n> SELECT *,\n> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n> ORDER BY FROM max_creation_dt) AS rn\n> FROM s_f_touchpoint_execution_status_history\n> ) foo WHERE rn = 1\n> \n> But estimating this is also rather difficult ...\n\n\n From my experience rewriting something like the above using DISTINCT ON is usually faster. \n\ne.g.:\n\nselect distinct on (touchpoint_execution_id) *\nfrom s_f_touchpoint_execution_status_history\norder by touchpoint_execution_id, max_creation_dt;\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 15:19:10 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 17.3.2015 15:19, Thomas Kellerer wrote:\n> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>> (2) using window functions, e.g. like this:\n>>\n>> SELECT * FROM (\n>> SELECT *,\n>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>> ORDER BY FROM max_creation_dt) AS rn\n>> FROM s_f_touchpoint_execution_status_history\n>> ) foo WHERE rn = 1\n>>\n>> But estimating this is also rather difficult ...\n> \n> \n> From my experience rewriting something like the above using DISTINCT \n> ON is usually faster.\n\nHow do you get the last record (with respect to a timestamp column)\nusing a DISTINCT ON?\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 15:43:00 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "The confusion for me here is that :\n\n\nI am getting results from the view in around 3 seconds\n(S_V_D_CAMPAIGN_HIERARCHY) and 25 seconds (S_V_F_PROMOTION_HISTORY_EMAIL)\n\nBut when I am using these two views in the query as the joining tables, it\ndoesn't give any result. As per my understanding, the planner is making new\nplan and that is costly instead of using output from the view, which is\nactually understandable.\n\nIs there a way, we can do anything about it?\n\nI hope I am making some sense here.\n\nRegards,\nVivek\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomas Vondra\nSent: Tuesday, March 17, 2015 8:13 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 17.3.2015 15:19, Thomas Kellerer wrote:\n> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>> (2) using window functions, e.g. like this:\n>>\n>> SELECT * FROM (\n>> SELECT *,\n>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>> ORDER BY FROM max_creation_dt) AS rn\n>> FROM s_f_touchpoint_execution_status_history\n>> ) foo WHERE rn = 1\n>>\n>> But estimating this is also rather difficult ...\n>\n>\n> From my experience rewriting something like the above using DISTINCT\n> ON is usually faster.\n\nHow do you get the last record (with respect to a timestamp column) using a\nDISTINCT ON?\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 20:40:03 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
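[Editor's note: one workaround for the situation described above, where each view finishes quickly on its own but the join of the two never completes, is to materialize each view's output into a temp table and ANALYZE it, so the planner joins on real row counts instead of the misestimates. A hedged sketch using the view names from this thread:

```sql
-- Capture each view's result, give the planner accurate statistics,
-- then join the materialized results.
CREATE TEMP TABLE tmp_promo AS
    SELECT * FROM s_v_f_promotion_history_email;
CREATE TEMP TABLE tmp_hier AS
    SELECT * FROM s_v_d_campaign_hierarchy;

ANALYZE tmp_promo;
ANALYZE tmp_hier;

SELECT *
FROM tmp_promo a
JOIN tmp_hier b
  ON a.touchpoint_execution_id = b.touchpoint_execution_id;
```

This trades some extra I/O for a join plan based on actual cardinalities, which is often what rescues a query like this one.]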
{
"msg_contents": "This is the explain for a simple query:\n\nexplain Select * from S_V_F_PROMOTION_HISTORY_EMAIL a inner join\nS_V_D_CAMPAIGN_HIERARCHY b on a.touchpoint_execution_id =\nb.touchpoint_execution_id;\n\n\nhttp://explain.depesz.com/s/gse\n\nI am wondering the total cost here is less even then the result is not\ncoming out.\n\nRegards,\nVivek\n\n-----Original Message-----\nFrom: Vivekanand Joshi [mailto:[email protected]]\nSent: Tuesday, March 17, 2015 8:40 PM\nTo: 'Tomas Vondra'; '[email protected]'\nSubject: RE: [PERFORM] Performance issues\n\nThe confusion for me here is that :\n\n\nI am getting results from the view in around 3 seconds\n(S_V_D_CAMPAIGN_HIERARCHY) and 25 seconds (S_V_F_PROMOTION_HISTORY_EMAIL)\n\nBut when I am using these two views in the query as the joining tables, it\ndoesn't give any result. As per my understanding, the planner is making new\nplan and that is costly instead of using output from the view, which is\nactually understandable.\n\nIs there a way, we can do anything about it?\n\nI hope I am making some sense here.\n\nRegards,\nVivek\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomas Vondra\nSent: Tuesday, March 17, 2015 8:13 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 17.3.2015 15:19, Thomas Kellerer wrote:\n> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>> (2) using window functions, e.g. 
like this:\n>>\n>> SELECT * FROM (\n>> SELECT *,\n>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>> ORDER BY FROM max_creation_dt) AS rn\n>> FROM s_f_touchpoint_execution_status_history\n>> ) foo WHERE rn = 1\n>>\n>> But estimating this is also rather difficult ...\n>\n>\n> From my experience rewriting something like the above using DISTINCT\n> ON is usually faster.\n\nHow do you get the last record (with respect to a timestamp column) using a\nDISTINCT ON?\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 17 Mar 2015 20:46:03 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Tomas Vondra schrieb am 17.03.2015 um 15:43:\n> On 17.3.2015 15:19, Thomas Kellerer wrote:\n>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>>> (2) using window functions, e.g. like this:\n>>>\n>>> SELECT * FROM (\n>>> SELECT *,\n>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>>> ORDER BY FROM max_creation_dt) AS rn\n>>> FROM s_f_touchpoint_execution_status_history\n>>> ) foo WHERE rn = 1\n>>>\n>>> But estimating this is also rather difficult ...\n>>\n>>\n>> From my experience rewriting something like the above using DISTINCT \n>> ON is usually faster.\n> \n> How do you get the last record (with respect to a timestamp column)\n> using a DISTINCT ON?\n\nYou need to use \"order by ... desc\". See here: http://sqlfiddle.com/#!15/d4846/2\n\nBtw: your row_number() usage wouldn't return the \"latest\" row either. \nIt would return the \"oldest\" row.\n\n\n\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 16:24:14 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
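[Editor's note: spelling out the "order by ... desc" answer above for the table in this thread: DISTINCT ON keeps the first row of each touchpoint_execution_id group, so ordering the timestamp descending yields the latest row per execution. Sketch only; creation_dt is assumed to be the status timestamp:

```sql
-- Latest status-history row per touchpoint_execution_id via DISTINCT ON.
-- The ORDER BY must start with the DISTINCT ON expression; the DESC on
-- the timestamp makes the kept row the newest one.
SELECT DISTINCT ON (touchpoint_execution_id) *
FROM s_f_touchpoint_execution_status_history
ORDER BY touchpoint_execution_id, creation_dt DESC;
```
]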
{
"msg_contents": "On 17.3.2015 16:10, Vivekanand Joshi wrote:\n> The confusion for me here is that :\n> \n> \n> I am getting results from the view in around 3 seconds\n> (S_V_D_CAMPAIGN_HIERARCHY) and 25 seconds (S_V_F_PROMOTION_HISTORY_EMAIL)\n> \n> But when I am using these two views in the query as the joining \n> tables, it doesn't give any result. As per my understanding, the \n> planner is making new plan and that is costly instead of using\n> output from the view, which is actually understandable.\n\nIn general, yes. The problem is that the plan is constructed based on\nthe estimates, and those are very inaccurate in this case.\n\nThe planner may do various changes, but let's assume that does not\nhappen and the plans are executed and and the results are joined.\n\nFor example what might happen is this:\n\n for each row in 's_v_d_campaign_hierarchy' (1 row expected):\n execute s_v_f_promotion_history_email & join (11644 rows exp.)\n\nBut then it gets 45k rows from s_v_d_campaign_hierarchy, and ~400x more\nrows from s_v_f_promotion_history_email (I'm neglecting the join\ncondition here, but that's not really significant). Kaboooom!\n\nIn reality, the plan is reorganized (e.g. different join order), but the\nmisestimates are still lurking there.\n\n> Is there a way, we can do anything about it?\n\nRephrasing the query so that the planner can estimate it more accurately.\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 16:28:03 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 17.3.2015 16:24, Thomas Kellerer wrote:\n> Tomas Vondra schrieb am 17.03.2015 um 15:43:\n>> On 17.3.2015 15:19, Thomas Kellerer wrote:\n>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>>>> (2) using window functions, e.g. like this:\n>>>>\n>>>> SELECT * FROM (\n>>>> SELECT *,\n>>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>>>> ORDER BY FROM max_creation_dt) AS rn\n>>>> FROM s_f_touchpoint_execution_status_history\n>>>> ) foo WHERE rn = 1\n>>>>\n>>>> But estimating this is also rather difficult ...\n>>>\n>>>\n>>> From my experience rewriting something like the above using DISTINCT \n>>> ON is usually faster.\n>>\n>> How do you get the last record (with respect to a timestamp column)\n>> using a DISTINCT ON?\n> \n> You need to use \"order by ... desc\". See here: http://sqlfiddle.com/#!15/d4846/2\n\nNice, thanks!\n\n> \n> Btw: your row_number() usage wouldn't return the \"latest\" row either. \n> It would return the \"oldest\" row.\n\nOh, right. I forgot the DESC in the window.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Mar 2015 16:30:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "So, here is the first taste of success and which gives me the confidence\nthat if properly worked out with a good hardware and proper tuning,\nPostgreSQL could be a good replacement.\n\nOut of the 9 reports which needs to be migrated in PostgreSQL, 3 are now\nrunning.\n\nReport 4 was giving an issue and I will see it tomorrow.\n\nJust to inform you guys that, the thing that helped most is setting\nenable_nestloops to false worked. Plans are now not miscalculated.\n\nBut this is not a production-suitable setting. So what do you think how to\nget a work around this?\n\n\nRegards,\nVivek\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomas Vondra\nSent: Tuesday, March 17, 2015 9:00 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 17.3.2015 16:24, Thomas Kellerer wrote:\n> Tomas Vondra schrieb am 17.03.2015 um 15:43:\n>> On 17.3.2015 15:19, Thomas Kellerer wrote:\n>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>>>> (2) using window functions, e.g. like this:\n>>>>\n>>>> SELECT * FROM (\n>>>> SELECT *,\n>>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>>>> ORDER BY FROM max_creation_dt) AS rn\n>>>> FROM s_f_touchpoint_execution_status_history\n>>>> ) foo WHERE rn = 1\n>>>>\n>>>> But estimating this is also rather difficult ...\n>>>\n>>>\n>>> From my experience rewriting something like the above using DISTINCT\n>>> ON is usually faster.\n>>\n>> How do you get the last record (with respect to a timestamp column)\n>> using a DISTINCT ON?\n>\n> You need to use \"order by ... desc\". See here:\n> http://sqlfiddle.com/#!15/d4846/2\n\nNice, thanks!\n\n>\n> Btw: your row_number() usage wouldn't return the \"latest\" row either.\n> It would return the \"oldest\" row.\n\nOh, right. 
I forgot the DESC in the window.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 23:01:15 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "2015-03-18 14:31 GMT-03:00 Vivekanand Joshi <[email protected]>:\n\n> So, here is the first taste of success and which gives me the confidence\n> that if properly worked out with a good hardware and proper tuning,\n> PostgreSQL could be a good replacement.\n>\n> Out of the 9 reports which needs to be migrated in PostgreSQL, 3 are now\n> running.\n>\n> Report 4 was giving an issue and I will see it tomorrow.\n>\n> Just to inform you guys that, the thing that helped most is setting\n> enable_nestloops to false worked. Plans are now not miscalculated.\n>\n>\n>\n>\n> Regards,\n> Vivek\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Tomas Vondra\n> Sent: Tuesday, March 17, 2015 9:00 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Performance issues\n>\n> On 17.3.2015 16:24, Thomas Kellerer wrote:\n> > Tomas Vondra schrieb am 17.03.2015 um 15:43:\n> >> On 17.3.2015 15:19, Thomas Kellerer wrote:\n> >>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n> >>>> (2) using window functions, e.g. like this:\n> >>>>\n> >>>> SELECT * FROM (\n> >>>> SELECT *,\n> >>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n> >>>> ORDER BY FROM max_creation_dt) AS rn\n> >>>> FROM s_f_touchpoint_execution_status_history\n> >>>> ) foo WHERE rn = 1\n> >>>>\n> >>>> But estimating this is also rather difficult ...\n> >>>\n> >>>\n> >>> From my experience rewriting something like the above using DISTINCT\n> >>> ON is usually faster.\n> >>\n> >> How do you get the last record (with respect to a timestamp column)\n> >> using a DISTINCT ON?\n> >\n> > You need to use \"order by ... desc\". See here:\n> > http://sqlfiddle.com/#!15/d4846/2\n>\n> Nice, thanks!\n>\n> >\n> > Btw: your row_number() usage wouldn't return the \"latest\" row either.\n> > It would return the \"oldest\" row.\n>\n> Oh, right. 
I forgot the DESC in the window.\n>\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n\n\"But this is not a production-suitable setting. So what do you think how to\nget a work around this?\"\n\nWhat about creating a read-only replica and apply this setting there?\n\n2015-03-18 14:31 GMT-03:00 Vivekanand Joshi <[email protected]>:So, here is the first taste of success and which gives me the confidence\nthat if properly worked out with a good hardware and proper tuning,\nPostgreSQL could be a good replacement.\n\nOut of the 9 reports which needs to be migrated in PostgreSQL, 3 are now\nrunning.\n\nReport 4 was giving an issue and I will see it tomorrow.\n\nJust to inform you guys that, the thing that helped most is setting\nenable_nestloops to false worked. Plans are now not miscalculated.\n\n\n\nRegards,\nVivek\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomas Vondra\nSent: Tuesday, March 17, 2015 9:00 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nOn 17.3.2015 16:24, Thomas Kellerer wrote:\n> Tomas Vondra schrieb am 17.03.2015 um 15:43:\n>> On 17.3.2015 15:19, Thomas Kellerer wrote:\n>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>>>> (2) using window functions, e.g. 
like this:\n>>>>\n>>>> SELECT * FROM (\n>>>> SELECT *,\n>>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>>>> ORDER BY FROM max_creation_dt) AS rn\n>>>> FROM s_f_touchpoint_execution_status_history\n>>>> ) foo WHERE rn = 1\n>>>>\n>>>> But estimating this is also rather difficult ...\n>>>\n>>>\n>>> From my experience rewriting something like the above using DISTINCT\n>>> ON is usually faster.\n>>\n>> How do you get the last record (with respect to a timestamp column)\n>> using a DISTINCT ON?\n>\n> You need to use \"order by ... desc\". See here:\n> http://sqlfiddle.com/#!15/d4846/2\n\nNice, thanks!\n\n>\n> Btw: your row_number() usage wouldn't return the \"latest\" row either.\n> It would return the \"oldest\" row.\n\nOh, right. I forgot the DESC in the window.\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\"But this is not a production-suitable setting. So what do you think how toget a work around this?\"What about creating a read-only replica and apply this setting there?",
"msg_date": "Wed, 18 Mar 2015 14:51:06 -0300",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Vivekanand Joshi <[email protected]> writes:\n\n> So, here is the first taste of success and which gives me the confidence\n> that if properly worked out with a good hardware and proper tuning,\n> PostgreSQL could be a good replacement.\n>\n> Out of the 9 reports which needs to be migrated in PostgreSQL, 3 are now\n> running.\n>\n> Report 4 was giving an issue and I will see it tomorrow.\n>\n> Just to inform you guys that, the thing that helped most is setting\n> enable_nestloops to false worked. Plans are now not miscalculated.\n>\n> But this is not a production-suitable setting. So what do you think how to\n> get a work around this?\n\nConsider just disabling that setting for 1 or a few odd queries you have\nfor which they are known to plan badly.\n\nbegin;\nset local enable_nestloops to false;\nselect ...;\ncommit/abort;\n\nI'd say never make that sort of setting DB or cluster-wide.\n\n\n>\n>\n> Regards,\n> Vivek\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Tomas Vondra\n> Sent: Tuesday, March 17, 2015 9:00 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Performance issues\n>\n> On 17.3.2015 16:24, Thomas Kellerer wrote:\n>> Tomas Vondra schrieb am 17.03.2015 um 15:43:\n>>> On 17.3.2015 15:19, Thomas Kellerer wrote:\n>>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>>>>> (2) using window functions, e.g. like this:\n>>>>>\n>>>>> SELECT * FROM (\n>>>>> SELECT *,\n>>>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>>>>> ORDER BY FROM max_creation_dt) AS rn\n>>>>> FROM s_f_touchpoint_execution_status_history\n>>>>> ) foo WHERE rn = 1\n>>>>>\n>>>>> But estimating this is also rather difficult ...\n>>>>\n>>>>\n>>>> From my experience rewriting something like the above using DISTINCT\n>>>> ON is usually faster.\n>>>\n>>> How do you get the last record (with respect to a timestamp column)\n>>> using a DISTINCT ON?\n>>\n>> You need to use \"order by ... desc\". 
See here:\n>> http://sqlfiddle.com/#!15/d4846/2\n>\n> Nice, thanks!\n>\n>>\n>> Btw: your row_number() usage wouldn't return the \"latest\" row either.\n>> It would return the \"oldest\" row.\n>\n> Oh, right. I forgot the DESC in the window.\n>\n>\n> -- \n> Tomas Vondra http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 13:36:12 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "The issue here is that the queries are running inside a Jasper Reports. So\nwe cannot set this only for a one single query.\n\nWe are accessing our reports from a web-browser, which in turn runs the\nreport from Application Server (Jasper). This server connects to\nPostgreSQL server.\n\nInside a JRXML(Jasper report file) file we cannot set this parameter.\n\nI am attaching a JRXML file for a feel. You can open this file in\nnotepad. I don't think we can set server level property in this file. So\nhow about a workaround?\n\nVivek\n\n\n\n-----Original Message-----\nFrom: Jerry Sievers [mailto:[email protected]]\nSent: Thursday, March 19, 2015 12:06 AM\nTo: [email protected]\nCc: Tomas Vondra; [email protected]\nSubject: Re: [PERFORM] Performance issues\n\nVivekanand Joshi <[email protected]> writes:\n\n> So, here is the first taste of success and which gives me the\n> confidence that if properly worked out with a good hardware and proper\n> tuning, PostgreSQL could be a good replacement.\n>\n> Out of the 9 reports which needs to be migrated in PostgreSQL, 3 are\n> now running.\n>\n> Report 4 was giving an issue and I will see it tomorrow.\n>\n> Just to inform you guys that, the thing that helped most is setting\n> enable_nestloops to false worked. Plans are now not miscalculated.\n>\n> But this is not a production-suitable setting. 
So what do you think\n> how to get a work around this?\n\nConsider just disabling that setting for 1 or a few odd queries you have\nfor which they are known to plan badly.\n\nbegin;\nset local enable_nestloops to false;\nselect ...;\ncommit/abort;\n\nI'd say never make that sort of setting DB or cluster-wide.\n\n\n>\n>\n> Regards,\n> Vivek\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Tomas\n> Vondra\n> Sent: Tuesday, March 17, 2015 9:00 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Performance issues\n>\n> On 17.3.2015 16:24, Thomas Kellerer wrote:\n>> Tomas Vondra schrieb am 17.03.2015 um 15:43:\n>>> On 17.3.2015 15:19, Thomas Kellerer wrote:\n>>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n>>>>> (2) using window functions, e.g. like this:\n>>>>>\n>>>>> SELECT * FROM (\n>>>>> SELECT *,\n>>>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n>>>>> ORDER BY FROM max_creation_dt) AS rn\n>>>>> FROM s_f_touchpoint_execution_status_history\n>>>>> ) foo WHERE rn = 1\n>>>>>\n>>>>> But estimating this is also rather difficult ...\n>>>>\n>>>>\n>>>> From my experience rewriting something like the above using\n>>>> DISTINCT ON is usually faster.\n>>>\n>>> How do you get the last record (with respect to a timestamp column)\n>>> using a DISTINCT ON?\n>>\n>> You need to use \"order by ... desc\". See here:\n>> http://sqlfiddle.com/#!15/d4846/2\n>\n> Nice, thanks!\n>\n>>\n>> Btw: your row_number() usage wouldn't return the \"latest\" row either.\n>> It would return the \"oldest\" row.\n>\n> Oh, right. 
I forgot the DESC in the window.\n>\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n--\nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 19 Mar 2015 00:18:16 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "You can set it for the db user or use stored proc.\n\nBest regards, Vitalii Tymchyshyn\n\nСр, 18 бер. 2015 14:48 Vivekanand Joshi <[email protected]> пише:\n\n> The issue here is that the queries are running inside a Jasper Reports. So\n> we cannot set this only for a one single query.\n>\n> We are accessing our reports from a web-browser, which in turn runs the\n> report from Application Server (Jasper). This server connects to\n> PostgreSQL server.\n>\n> Inside a JRXML(Jasper report file) file we cannot set this parameter.\n>\n> I am attaching a JRXML file for a feel. You can open this file in\n> notepad. I don't think we can set server level property in this file. So\n> how about a workaround?\n>\n> Vivek\n>\n>\n>\n> -----Original Message-----\n> From: Jerry Sievers [mailto:[email protected]]\n> Sent: Thursday, March 19, 2015 12:06 AM\n> To: [email protected]\n> Cc: Tomas Vondra; [email protected]\n> Subject: Re: [PERFORM] Performance issues\n>\n> Vivekanand Joshi <[email protected]> writes:\n>\n> > So, here is the first taste of success and which gives me the\n> > confidence that if properly worked out with a good hardware and proper\n> > tuning, PostgreSQL could be a good replacement.\n> >\n> > Out of the 9 reports which needs to be migrated in PostgreSQL, 3 are\n> > now running.\n> >\n> > Report 4 was giving an issue and I will see it tomorrow.\n> >\n> > Just to inform you guys that, the thing that helped most is setting\n> > enable_nestloops to false worked. Plans are now not miscalculated.\n> >\n> > But this is not a production-suitable setting. 
So what do you think\n> > how to get a work around this?\n>\n> Consider just disabling that setting for 1 or a few odd queries you have\n> for which they are known to plan badly.\n>\n> begin;\n> set local enable_nestloops to false;\n> select ...;\n> commit/abort;\n>\n> I'd say never make that sort of setting DB or cluster-wide.\n>\n>\n> >\n> >\n> > Regards,\n> > Vivek\n> >\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of Tomas\n> > Vondra\n> > Sent: Tuesday, March 17, 2015 9:00 PM\n> > To: [email protected]\n> > Subject: Re: [PERFORM] Performance issues\n> >\n> > On 17.3.2015 16:24, Thomas Kellerer wrote:\n> >> Tomas Vondra schrieb am 17.03.2015 um 15:43:\n> >>> On 17.3.2015 15:19, Thomas Kellerer wrote:\n> >>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:\n> >>>>> (2) using window functions, e.g. like this:\n> >>>>>\n> >>>>> SELECT * FROM (\n> >>>>> SELECT *,\n> >>>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id\n> >>>>> ORDER BY FROM max_creation_dt) AS rn\n> >>>>> FROM s_f_touchpoint_execution_status_history\n> >>>>> ) foo WHERE rn = 1\n> >>>>>\n> >>>>> But estimating this is also rather difficult ...\n> >>>>\n> >>>>\n> >>>> From my experience rewriting something like the above using\n> >>>> DISTINCT ON is usually faster.\n> >>>\n> >>> How do you get the last record (with respect to a timestamp column)\n> >>> using a DISTINCT ON?\n> >>\n> >> You need to use \"order by ... desc\". See here:\n> >> http://sqlfiddle.com/#!15/d4846/2\n> >\n> > Nice, thanks!\n> >\n> >>\n> >> Btw: your row_number() usage wouldn't return the \"latest\" row either.\n> >> It would return the \"oldest\" row.\n> >\n> > Oh, right. 
I forgot the DESC in the window.\n> >\n> >\n> > --\n> > Tomas Vondra http://www.2ndQuadrant.com/\n> > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list\n> > ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n> Jerry Sievers\n> Postgres DBA/Development Consulting\n> e: [email protected]\n> p: 312.241.7800\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Wed, 18 Mar 2015 19:07:31 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Hi,\n\nOn 18.3.2015 18:31, Vivekanand Joshi wrote:\n> So, here is the first taste of success and which gives me the\n> confidence that if properly worked out with a good hardware and\n> proper tuning, PostgreSQL could be a good replacement.\n> \n> Out of the 9 reports which needs to be migrated in PostgreSQL, 3 are\n> now running.\n> \n> Report 4 was giving an issue and I will see it tomorrow.\n> \n> Just to inform you guys that, the thing that helped most is setting \n> enable_nestloops to false worked. Plans are now not miscalculated.\n\nThe estimates are still miscalculated, but you're forcing the database\nnot to use the nested loop. The problem is the nested loop may be\nappropriate in some cases (maybe only in a few places of the plan) so\nthis is really corse-grained solution.\n\n> But this is not a production-suitable setting. So what do you think\n> how to get a work around this?\n\n(a) Try to identify why the queries are poorly estimated, and rephrase\n them somehow. This is the best solution, but takes time, expertise\n and may not be feasible in some cases.\n\n(b) Tweak the database structure, possibly introducing intermediate\n tables, materialized views (or tables maintained by triggers - this\n might work for the 'latest record' subquery), etc.\n\n(c) Try to tweak the cost parameters, to make the nested loops more\n expensive (and thus less likely to be selected), but in a more\n gradual way than enable_nestloops=false.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 20:23:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "The other approaches of fixing the estimates, cost params, etc are the\nright way of fixing it. *However* if you needed a quick fix for just this\nreport and can't find a way of setting it in Jaspersoft for just the report\n(I don't think it will let you run multiple sql statements by default,\nmaybe not at all) there are still a couple more options. You can define a\nnew datasource in jasper, point this report to that datasource, and have\nthat new datasource configured to not use the nested loops. You could do\nthat either by making the new datasource use a different user than\neverything else, and disable nested loops for that user in postgres, or you\ncould probably have the datasource initialization process disable nested\nloops.",
"msg_date": "Sat, 21 Mar 2015 03:39:48 -0400",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "Any documentation regarding how to configure postgresql.conf file as per\nindividual user?\nOn 21 Mar 2015 13:10, \"Josh Krupka\" <[email protected]> wrote:\n\n> The other approaches of fixing the estimates, cost params, etc are the\n> right way of fixing it. *However* if you needed a quick fix for just this\n> report and can't find a way of setting it in Jaspersoft for just the report\n> (I don't think it will let you run multiple sql statements by default,\n> maybe not at all) there are still a couple more options. You can define a\n> new datasource in jasper, point this report to that datasource, and have\n> that new datasource configured to not use the nested loops. You could do\n> that either by making the new datasource use a different user than\n> everything else, and disable nested loops for that user in postgres, or you\n> could probably have the datasource initialization process disable nested\n> loops.\n>",
"msg_date": "Mon, 23 Mar 2015 03:20:24 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues"
},
{
"msg_contents": "On 22.3.2015 22:50, Vivekanand Joshi wrote:\n> Any documentation regarding how to configure postgresql.conf file as per\n> individual user?\n\nThat can't be done in postgresql.conf, but by ALTER ROLE commands.\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 22 Mar 2015 23:01:01 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues"
}
] |
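The per-query and per-role scoping suggested in the thread above can be sketched as follows; `jasper_reports` is a hypothetical role name for a dedicated Jasper datasource user:

```sql
-- Per-query: SET LOCAL reverts automatically at COMMIT/ABORT,
-- so only the one badly-planned statement is affected.
BEGIN;
SET LOCAL enable_nestloops = off;
SELECT ...;   -- the report query
COMMIT;

-- Per-role: point the Jasper datasource at a dedicated user and
-- disable nested loops for that user only (the ALTER ROLE approach).
CREATE ROLE jasper_reports LOGIN;
ALTER ROLE jasper_reports SET enable_nestloops = off;
```

Role-level settings are applied at connection time, so the rest of the cluster keeps the default planner behaviour.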
[
{
"msg_contents": "The reason I ask is that it seems to support deduplication/compression. I was wondering if this would have any performance implications for PG operations.\n\n\nThanks, M.\n\n\nMel Llaguno • Staff Engineer – Team Lead\nOffice: +1.403.264.9717 x310\nwww.coverity.com <http://www.coverity.com/> • Twitter: @coverity\nCoverity by Synopsys",
"msg_date": "Fri, 13 Mar 2015 19:18:30 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Anyone have experience using PG on a NetApp All-Flash FAS8000?"
},
{
"msg_contents": "Hi Mel,\n\nI don't have any experience with NetApp storage systems, but if\ncompression / deduplication is the only point for which you're\nconsidering NetApp, then do consider a FS like ZFS or btrfs which can do\ndeduplication as well as compression on normal disks. Here are a few\nreports of running Postgres on ZFS\n(http://www.citusdata.com/blog/64-zfs-compression) and btrfs\n(http://no0p.github.io/postgresql/2014/09/06/benchmarking-postgresql-btrfs-zlib.html)\n\nOn Fri, Mar 13, 2015 at 8:18 PM, Mel Llaguno <[email protected]> wrote:\n> The reason I ask is that it seems to support deduplication/compression. I\n> was wondering if this would have any performance implications of PG\n> operations.\n>\n>\n> Thanks, M.\n>\n>\n> Mel Llaguno • Staff Engineer – Team Lead\n> Office: +1.403.264.9717 x310\n> www.coverity.com <http://www.coverity.com/> • Twitter: @coverity\n> Coverity by Synopsys\n\n\n\n-- \nThanks,\nM. Varadharajan\n\n------------------------------------------------\n\n\"Experience is what you get when you didn't get what you wanted\"\n -By Prof. Randy Pausch in \"The Last Lecture\"\n\nMy Journal :- www.thinkasgeek.wordpress.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Mar 2015 08:02:20 +0100",
"msg_from": "Varadharajan Mukundan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone have experience using PG on a NetApp All-Flash FAS8000?"
}
] |
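For comparison, the ZFS route mentioned above can be sketched like this; `tank` is a placeholder pool name and the settings are common starting points for a Postgres data directory, not measured recommendations:

```shell
# Dataset for the Postgres data directory with lz4 compression enabled.
zfs create tank/pgdata
zfs set compression=lz4 tank/pgdata
zfs set recordsize=8k tank/pgdata    # match PostgreSQL's 8 kB page size
zfs get compressratio tank/pgdata    # check the achieved ratio later
```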
[
{
"msg_contents": "I'm having difficulty understanding what I perceive as an inconsistency in\nhow the postgres parser chooses to use indices. We have a query based on NOT\nIN against an indexed column that the parser executes sequentially, but\nwhen we perform the same query as IN, it uses the index.\n\nI've created a simplistic example that I believe demonstrates the issue,\nnotice this first query is sequential\n\nCREATE TABLE node\n(\n id SERIAL PRIMARY KEY,\n vid INTEGER\n);\nCREATE INDEX x ON node(vid);\n\nINSERT INTO node(vid) VALUES (1),(2);\n\nEXPLAIN ANALYZE\nSELECT *\nFROM node\nWHERE NOT vid IN (1);\n\nSeq Scan on node (cost=0.00..36.75 rows=2129 width=8) (actual\ntime=0.009..0.010 rows=1 loops=1)\n Filter: (vid <> 1)\n Rows Removed by Filter: 1\nTotal runtime: 0.025 ms\n\nBut if we invert the query to IN, you'll notice that it now decided to use\nthe index\n\nEXPLAIN ANALYZE\nSELECT *\nFROM node\nWHERE vid IN (2);\n\nBitmap Heap Scan on node (cost=4.34..15.01 rows=11 width=8) (actual\ntime=0.017..0.017 rows=1 loops=1)\n Recheck Cond: (vid = 1)\n -> Bitmap Index Scan on x (cost=0.00..4.33 rows=11 width=0) (actual\ntime=0.012..0.012 rows=1 loops=1)\n Index Cond: (vid = 1)\nTotal runtime: 0.039 ms\n\nCan anyone shed any light on this? Specifically, is there a way to re-write\nour NOT IN to work with the index (when obviously the result set is not as\nsimplistic as just 1 or 2).\n\nWe are using Postgres 9.2 on CentOS 6.6",
"msg_date": "Fri, 13 Mar 2015 17:27:29 -0400",
"msg_from": "\"Jim Carroll\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres inconsistent use of Index vs. Seq Scan"
},
{
"msg_contents": "\"Jim Carroll\" <[email protected]> writes:\n> I'm having difficulty understanding what I perceive as an inconsistency in\n> how the postgres parser chooses to use indices. We have a query based on NOT\n> IN against an indexed column that the parser executes sequentially, but\n> when we perform the same query as IN, it uses the index.\n\nWhat you've got here is a query that asks for all rows with vid <> 1.\nNot-equals is not an indexable operator according to Postgres, and there\nwould not be much point in making it one, since it generally implies\nhaving to scan the majority of the table.\n\nIf, indeed, 99% of your table has vid = 1, then there would be some point\nin trying to use an index to find the other 1%; but you'll have to\nformulate the query differently (perhaps \"vid > 1\" would do?) or else\nuse a properly-designed partial index.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Mar 2015 18:04:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres inconsistent use of Index vs. Seq Scan"
}
] |
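Tom Lane's two suggestions from the reply above might look like this on the example table; whether the planner actually picks the index still depends on the data distribution:

```sql
-- Rewrite into an indexable range predicate (valid if vid < 1 cannot occur):
SELECT * FROM node WHERE vid > 1;

-- Or a partial index covering only the rare rows; the planner can use it
-- when the query's WHERE clause implies the index predicate:
CREATE INDEX node_vid_not_1 ON node (vid) WHERE vid <> 1;
SELECT * FROM node WHERE vid <> 1;
```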
[
{
"msg_contents": "Hi!\n\nWe at MusicBrainz have been having trouble with our Postgres install for the past few days. I’ve collected all the relevant information here:\n\n http://blog.musicbrainz.org/2015/03/15/postgres-troubles/\n\nIf anyone could provide tips, suggestions or other relevant advice for what to poke at next, we would love it.\n\nThanks!\n\n--\n\n--ruaok \n\nRobert Kaye -- [email protected] -- http://musicbrainz.org",
"msg_date": "Sun, 15 Mar 2015 11:54:30 +0100",
"msg_from": "Robert Kaye <[email protected]>",
"msg_from_op": true,
"msg_subject": "MusicBrainz postgres performance issues"
},
{
"msg_contents": "It sounds like you've hit the postgres basics, what about some of the linux\ncheck list items?\n\nwhat does free -m show on your db server?\n\nIf the load problem really is being caused by swapping when things really\nshouldn't be swapping, it could be a matter of adjusting your swappiness -\nwhat does cat /proc/sys/vm/swappiness show on your server?\n\nThere are other linux memory management things that can cause postgres and\nthe server running it to throw fits like THP and zone reclaim. I don't\nhave enough info about your system to say they are the cause either, but\ncheck out the many postings here and other places on the detrimental effect\nthat those settings *can* have. That would at least give you another angle\nto investigate.",
"msg_date": "Sun, 15 Mar 2015 07:13:53 -0400",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "Robert Kaye <[email protected]> wrote:\n\n> Hi!\n> \n> We at MusicBrainz have been having trouble with our Postgres install for the\n> past few days. I’ve collected all the relevant information here:\n> \n> http://blog.musicbrainz.org/2015/03/15/postgres-troubles/\n> \n> If anyone could provide tips, suggestions or other relevant advice for what to\n> poke at next, we would love it.\n\n\njust a wild guess: raid-controller BBU faulty\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 12:41:23 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "> On Mar 15, 2015, at 12:13 PM, Josh Krupka <[email protected]> wrote:\n> \n> It sounds like you've hit the postgres basics, what about some of the linux check list items?\n> \n> what does free -m show on your db server?\n\n total used free shared buffers cached\nMem: 48295 31673 16622 0 5 12670\n-/+ buffers/cache: 18997 29298\nSwap: 22852 2382 20470\n\n> \n> If the load problem really is being caused by swapping when things really shouldn't be swapping, it could be a matter of adjusting your swappiness - what does cat /proc/sys/vm/swappiness show on your server?\n\n0\n\nWe adjusted that too, but no effect.\n\n(I’ve updated the blog post with these two comments)\n\n> \n> There are other linux memory management things that can cause postgres and the server running it to throw fits like THP and zone reclaim. I don't have enough info about your system to say they are the cause either, but check out the many postings here and other places on the detrimental effect that those settings *can* have. That would at least give you another angle to investigate.\n\nIf there are specific things you’d like to know, I’d be happy to be a\nhuman proxy. :)\n\nThanks!\n\n--\n\n--ruaok \n\nRobert Kaye -- [email protected] -- http://musicbrainz.org",
"msg_date": "Sun, 15 Mar 2015 13:07:25 +0100",
"msg_from": "Robert Kaye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "\n> On Mar 15, 2015, at 12:41 PM, Andreas Kretschmer <[email protected]> wrote:\n> \n> just a wild guess: raid-controller BBU faulty\n\nWe don’t have a BBU in this server, but at least we have redundant power supplies.\n\nIn any case, how would a fault batter possibly cause this?\n\n--\n\n--ruaok \n\nRobert Kaye -- [email protected] -- http://musicbrainz.org\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 13:08:13 +0100",
"msg_from": "Robert Kaye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "pls check this if it helps:\nhttp://ubuntuforums.org/showthread.php?t=2258734\n\nOn 2015/3/15 18:54, Robert Kaye wrote:\n> Hi!\n>\n> We at MusicBrainz have been having trouble with our Postgres install for the past few days. I’ve collected all the relevant information here:\n>\n> http://blog.musicbrainz.org/2015/03/15/postgres-troubles/\n>\n> If anyone could provide tips, suggestions or other relevant advice for what to poke at next, we would love it.\n>\n> Thanks!\n>\n> --\n>\n> --ruaok\n>\n> Robert Kaye -- [email protected] -- http://musicbrainz.org",
"msg_date": "Sun, 15 Mar 2015 20:29:27 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 2015-03-15 13:08:13 +0100, Robert Kaye wrote:\n> > On Mar 15, 2015, at 12:41 PM, Andreas Kretschmer <[email protected]> wrote:\n> > \n> > just a wild guess: raid-controller BBU faulty\n> \n> We don’t have a BBU in this server, but at least we have redundant power supplies.\n> \n> In any case, how would a fault batter possibly cause this?\n\nMany controllers disable write-back caching when the battery is dead.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 13:32:19 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On Sun, Mar 15, 2015 at 8:07 AM, Robert Kaye <[email protected]> wrote:\n>\n> what does free -m show on your db server?\n>\n>\n> total used free shared buffers cached\n> Mem: 48295 31673 16622 0 5 12670\n> -/+ buffers/cache: 18997 29298\n> Swap: 22852 2382 20470\n>\n>\nHmm that's definitely odd that it's swapping since it has plenty of free\nmemory at the moment. Is it still under heavy load right now? Has the\noutput of free consistently looked like that during your trouble times?\n\n\n>\n> If the load problem really is being caused by swapping when things really\n> shouldn't be swapping, it could be a matter of adjusting your swappiness -\n> what does cat /proc/sys/vm/swappiness show on your server?\n>\n>\n> 0\n>\n> We adjusted that too, but no effect.\n>\n> (I’ve updated the blog post with these two comments)\n>\n> That had been updated a while ago or just now?\n\n\n>\n> There are other linux memory management things that can cause postgres and\n> the server running it to throw fits like THP and zone reclaim. I don't\n> have enough info about your system to say they are the cause either, but\n> check out the many postings here and other places on the detrimental effect\n> that those settings *can* have. That would at least give you another angle\n> to investigate.\n>\n>\n> If there are specific things you’d like to know, I’ve be happy to be a\n> human proxy. :)\n>\n>\nIf zone reclaim is enabled (I think linux usually decides whether or not to\nenable it at boot time depending on the numa architecture) it sometimes\navoids using memory on remote numa nodes if it thinks that memory access is\ntoo expensive. This can lead to way too much disk access (not sure if it\nwould actually make linux swap or not...) and lots of ram sitting around\ndoing nothing instead of being used for fs cache like it should be. Check\nto see if zone reclaim is enabled with this command: cat\n/proc/sys/vm/zone_reclaim_mode. 
If your server is a numa one, you can\ninstall the numactl utility and look at the numa layout with this: numactl\n--hardware\n\nI'm not sure how THP would cause lots of swapping, but it's worth checking\nin general: cat /sys/kernel/mm/transparent_hugepage/enabled. If it's\nspending too much time trying to compact memory pages it can cause stalls\nin your processes. To get the thp metrics do egrep 'trans|thp' /proc/vmstat",
"msg_date": "Sun, 15 Mar 2015 08:45:59 -0400",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
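The zone-reclaim, swappiness, and THP checks suggested in the message above can be gathered in one pass. A minimal sketch, assuming the standard Linux `/proc` and `/sys` locations; where a file is absent (older kernel, non-Linux box) it prints "n/a" instead of failing:

```shell
#!/bin/sh
# Print one labelled line per memory-management knob discussed in the thread.
show() {
    label=$1
    file=$2
    if [ -r "$file" ]; then
        printf '%s: %s\n' "$label" "$(cat "$file")"
    else
        printf '%s: n/a\n' "$label"
    fi
}

show zone_reclaim_mode     /proc/sys/vm/zone_reclaim_mode
show swappiness            /proc/sys/vm/swappiness
show transparent_hugepage  /sys/kernel/mm/transparent_hugepage/enabled

# THP activity counters, if the kernel exposes them:
[ -r /proc/vmstat ] && grep -E 'trans|thp' /proc/vmstat || true
```

On a NUMA machine, pairing this with `numactl --hardware` (as suggested above) shows whether memory is unevenly spread across nodes.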
{
"msg_contents": "Hi!\n\nWhat shows your pg_stat_bgwriter for one day? \n\n\n> On Mar 15, 2015, at 11:54, Robert Kaye <[email protected]> wrote:\n> \n> Hi!\n> \n> We at MusicBrainz have been having trouble with our Postgres install for the past few days. I’ve collected all the relevant information here:\n> \n> http://blog.musicbrainz.org/2015/03/15/postgres-troubles/\n> \n> If anyone could provide tips, suggestions or other relevant advice for what to poke at next, we would love it.\n> \n> Thanks!\n> \n> --\n> \n> --ruaok \n> \n> Robert Kaye -- [email protected] -- http://musicbrainz.org\n> ",
"msg_date": "Sun, 15 Mar 2015 14:27:34 +0100",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 15.3.2015 13:07, Robert Kaye wrote:\n>\n>> If the load problem really is being caused by swapping when things\n>> really shouldn't be swapping, it could be a matter of adjusting your\n>> swappiness - what does cat /proc/sys/vm/swappiness show on your server?\n> \n> 0 \n> \n> We adjusted that too, but no effect.\n> \n> (I’ve updated the blog post with these two comments)\n\nIMHO setting swappiness to 0 is way too aggressive. Just set it to\nsomething like 10 - 20, that works better in my experience.\n\n\n>> There are other linux memory management things that can cause\n>> postgres and the server running it to throw fits like THP and zone\n>> reclaim. I don't have enough info about your system to say they are\n>> the cause either, but check out the many postings here and other\n>> places on the detrimental effect that those settings *can* have.\n>> That would at least give you another angle to investigate.\n> \n> If there are specific things you’d like to know, I’ve be happy to be a\n> human proxy. :)\n\nI'd start with vm.* configuration, so the output from this:\n\n# sysctl -a | grep '^vm.*'\n\nand possibly /proc/meminfo. I'm especially interested in the overcommit\nsettings, because per the free output you provided there's ~16GB of free\nRAM.\n\nBTW what amounts of data are we talking about? 
How large is the database\nand how large is the active set?\n\n\nI also noticed you use kernel 3.2 - that's not the best kernel version\nfor PostgreSQL - see [1] or [2] for example.\n\n[1]\nhttps://medium.com/postgresql-talk/benchmarking-postgresql-with-different-linux-kernel-versions-on-ubuntu-lts-e61d57b70dd4\n\n[2]\nhttp://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 14:28:59 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "\n\n\n> On Mar 15, 2015, at 13:45, Josh Krupka <[email protected]> wrote:\n> Hmm that's definitely odd that it's swapping since it has plenty of free memory at the moment. Is it still under heavy load right now? Has the output of free consistently looked like that during your trouble times?\n\nAnd it seems better to disable swappiness \n\n\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 14:30:19 +0100",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 2015-03-15 13:07:25 +0100, Robert Kaye wrote:\n> \n> > On Mar 15, 2015, at 12:13 PM, Josh Krupka <[email protected]> wrote:\n> > \n> > It sounds like you've hit the postgres basics, what about some of the linux check list items?\n> > \n> > what does free -m show on your db server?\n> \n> total used free shared buffers cached\n> Mem: 48295 31673 16622 0 5 12670\n> -/+ buffers/cache: 18997 29298\n> Swap: 22852 2382 20470\n\nCould you post /proc/meminfo instead? That gives a fair bit more\ninformation.\n\nAlso:\n* What hardware is this running on?\n* Why do you need 500 connections (that are nearly all used) when you\n have a pgbouncer in front of the database? That's not going to be\n efficient.\n* Do you have any data tracking the state connections are in?\n I.e. whether they're idle or not? The connections graph on you linked\n doesn't give that information?\n* You're apparently not graphing CPU usage. How busy are the CPUs? How\n much time is spent in the kernel (i.e. system)?\n* Consider installing perf (linux-utils-$something) and doing a\n systemwide profile.\n\n3.2 isn't the greatest kernel around, efficiency wise. At some point you\nmight want to upgrade to something newer. I've seen remarkable\ndifferences around this.\n\nYou really should upgrade postgres to a newer major version one of these\ndays. Especially 9.2. can give you a remarkable improvement in\nperformance with many connections in a read mostly workload.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 14:50:22 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On Sun, Mar 15, 2015 at 7:50 AM, Andres Freund <[email protected]> wrote:\n> On 2015-03-15 13:07:25 +0100, Robert Kaye wrote:\n>>\n>> > On Mar 15, 2015, at 12:13 PM, Josh Krupka <[email protected]> wrote:\n>> >\n>> > It sounds like you've hit the postgres basics, what about some of the linux check list items?\n>> >\n>> > what does free -m show on your db server?\n>>\n>> total used free shared buffers cached\n>> Mem: 48295 31673 16622 0 5 12670\n>> -/+ buffers/cache: 18997 29298\n>> Swap: 22852 2382 20470\n>\n> Could you post /proc/meminfo instead? That gives a fair bit more\n> information.\n>\n> Also:\n> * What hardware is this running on?\n> * Why do you need 500 connections (that are nearly all used) when you\n> have a pgbouncer in front of the database? That's not going to be\n> efficient.\n> * Do you have any data tracking the state connections are in?\n> I.e. whether they're idle or not? The connections graph on you linked\n> doesn't give that information?\n> * You're apparently not graphing CPU usage. How busy are the CPUs? How\n> much time is spent in the kernel (i.e. system)?\n\nhtop is a great tool for watching the CPU cores live. Red == kernel btw.\n\n> * Consider installing perf (linux-utils-$something) and doing a\n> systemwide profile.\n>\n> 3.2 isn't the greatest kernel around, efficiency wise. At some point you\n> might want to upgrade to something newer. I've seen remarkable\n> differences around this.\n\nThat is an understatement. Here's a nice article on why it's borked:\n\nhttp://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n\nHad a 32 core machine with big RAID BBU and 512GB memory that was\ndying using 3.2 kernel. went to 3.11 and it went from a load of 20 to\n40 to a load of 5.\n\n> You really should upgrade postgres to a newer major version one of these\n> days. Especially 9.2. can give you a remarkable improvement in\n> performance with many connections in a read mostly workload.\n\nAgreed. 
ubuntu 12.04 with kernel 3.11/3.13 with pg 9.2 has been a\ngreat improvement over debian squeeze and pg 8.4 that we were running\nat work until recently.\n\nAs for the OP. if you've got swap activity causing issues when there's\nplenty of free space just TURN IT OFF.\n\nswapoff -a\n\nI do this on all my big memory servers that don't really need swap,\nesp when I was using the 3.2 kernel which seems broken as regards swap\non bigger memory machines.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 10:43:47 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "\nOn 03/15/2015 05:08 AM, Robert Kaye wrote:\n>\n>\n>> On Mar 15, 2015, at 12:41 PM, Andreas Kretschmer <[email protected]> wrote:\n>>\n>> just a wild guess: raid-controller BBU faulty\n>\n> We don’t have a BBU in this server, but at least we have redundant power supplies.\n>\n> In any case, how would a faulty battery possibly cause this?\n\nThe controller would turn off the cache.\n\nJD\n\n>\n> --\n>\n> --ruaok\n>\n> Robert Kaye -- [email protected] -- http://musicbrainz.org\n>\n>\n>\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, @cmdpromptinc\n\nNow I get it: your service is designed for a customer\nbase that grew up with Facebook, watches Japanese seizure\nrobot anime, and has the attention span of a gnat.\nI'm not that user., \"Tyler Riddle\"\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 09:47:20 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "\nOn 03/15/2015 09:43 AM, Scott Marlowe wrote:\n\n>> * Consider installing perf (linux-utils-$something) and doing a\n>> systemwide profile.\n>>\n>> 3.2 isn't the greatest kernel around, efficiency wise. At some point you\n>> might want to upgrade to something newer. I've seen remarkable\n>> differences around this.\n\nNot at some point, now. 3.2 - 3.8 are undeniably broken for PostgreSQL.\n\n>\n> That is an understatement. Here's a nice article on why it's borked:\n>\n> http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n>\n> Had a 32 core machine with big RAID BBU and 512GB memory that was\n> dying using 3.2 kernel. went to 3.11 and it went from a load of 20 to\n> 40 to a load of 5.\n\nYep, I can confirm this behavior.\n\n>\n>> You really should upgrade postgres to a newer major version one of these\n>> days. Especially 9.2. can give you a remarkable improvement in\n>> performance with many connections in a read mostly workload.\n\nSeconded.\n\nJD\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, @cmdpromptinc\n\nNow I get it: your service is designed for a customer\nbase that grew up with Facebook, watches Japanese seizure\nrobot anime, and has the attention span of a gnat.\nI'm not that user., \"Tyler Riddle\"\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 09:49:01 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On Sun, Mar 15, 2015 at 10:43 AM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Mar 15, 2015 at 7:50 AM, Andres Freund <[email protected]> wrote:\n>> On 2015-03-15 13:07:25 +0100, Robert Kaye wrote:\n>>>\n>>> > On Mar 15, 2015, at 12:13 PM, Josh Krupka <[email protected]> wrote:\n>>> >\n>>> > It sounds like you've hit the postgres basics, what about some of the linux check list items?\n>>> >\n>>> > what does free -m show on your db server?\n>>>\n>>> total used free shared buffers cached\n>>> Mem: 48295 31673 16622 0 5 12670\n>>> -/+ buffers/cache: 18997 29298\n>>> Swap: 22852 2382 20470\n>>\n>> Could you post /proc/meminfo instead? That gives a fair bit more\n>> information.\n>>\n>> Also:\n>> * What hardware is this running on?\n>> * Why do you need 500 connections (that are nearly all used) when you\n>> have a pgbouncer in front of the database? That's not going to be\n>> efficient.\n>> * Do you have any data tracking the state connections are in?\n>> I.e. whether they're idle or not? The connections graph on you linked\n>> doesn't give that information?\n>> * You're apparently not graphing CPU usage. How busy are the CPUs? How\n>> much time is spent in the kernel (i.e. system)?\n>\n> htop is a great tool for watching the CPU cores live. Red == kernel btw.\n>\n>> * Consider installing perf (linux-utils-$something) and doing a\n>> systemwide profile.\n>>\n>> 3.2 isn't the greatest kernel around, efficiency wise. At some point you\n>> might want to upgrade to something newer. I've seen remarkable\n>> differences around this.\n>\n> That is an understatement. Here's a nice article on why it's borked:\n>\n> http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n>\n> Had a 32 core machine with big RAID BBU and 512GB memory that was\n> dying using 3.2 kernel. went to 3.11 and it went from a load of 20 to\n> 40 to a load of 5.\n>\n>> You really should upgrade postgres to a newer major version one of these\n>> days. 
Especially 9.2. can give you a remarkable improvement in\n>> performance with many connections in a read mostly workload.\n>\n> Agreed. ubuntu 12.04 with kernel 3.11/3.13 with pg 9.2 has been a\n> great improvement over debian squeeze and pg 8.4 that we were running\n> at work until recently.\n>\n> As for the OP. if you've got swap activity causing issues when there's\n> plenty of free space just TURN IT OFF.\n>\n> swapoff -a\n>\n> I do this on all my big memory servers that don't really need swap,\n> esp when I was using the 3.2 kernel which seems broken as regards swap\n> on bigger memory machines.\n\nOK I've now read your blog post. A few pointers I'd make.\n\nshared_mem of 12G is almost always too large. I'd drop it down to ~1G or so.\n\n64MB work mem AND max_connections = 500 is a recipe for disaster. No\ndb can actively process 500 queries at once without going kaboom, and\nhaving 64MB work_mem means it will go kaboom long before it reaches\n500 active connections. Lower that and let pgbouncer handle the extra\nconnections for you.\n\nGet some monitoring installed if you don't already have it so you can\ntrack memory usage, cpu usage, disk usage etc. Zabbix or Nagios work\nwell. Without some kind of system monitoring you're missing half the\ninformation you need to troubleshoot with.\n\nInstall iotop, sysstat, and htop. Configure sysstat to collect data so\nyou can use sar to see what the machine's been doing in the past few\ndays etc. Set it to 1 minute intervals in the /etc/cron.d/sysstat\nfile.\n\nDo whatever you have to to get kernel 3.11 or greater on that machine\n(or a new one). You don't have to upgrade pg just yet but the upgrade\nof the kernel is essential.\n\nGood luck. 
Let us know what you find and if you can get that machine\nback on its feet.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 11:09:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
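The "recipe for disaster" in the message above is simple arithmetic; a sketch using the values from the thread (64MB work_mem, 500 connections, a 48GB box), purely illustrative:

```shell
#!/bin/sh
# Worst-case sort memory is roughly active_connections x work_mem,
# and a single query can run several work_mem-sized sorts/hashes.
work_mem_mb=64
max_connections=500
sorts_per_query=2                      # a query may easily use 2-4x work_mem

worst_case_mb=$((work_mem_mb * max_connections * sorts_per_query))
echo "worst case sort memory: ${worst_case_mb} MB"
# 64 * 500 * 2 = 64000 MB, i.e. well past the 48 GB of RAM on the server.
```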
{
"msg_contents": "On Sun, Mar 15, 2015 at 11:09 AM, Scott Marlowe <[email protected]> wrote:\n\nClarification:\n\n> 64MB work mem AND max_connections = 500 is a recipe for disaster. No\n> db can actively process 500 queries at once without going kaboom, and\n> having 64MB work_mem means it will go kaboom long before it reaches\n> 500 active connections. Lower that and let pgbouncer handle the extra\n> connections for you.\n\nLower max_connections. work_mem 64MB is fine as long as\nmax_connections is something reasonable (reasonable is generally #CPU\ncores * 2 or so).\n\nwork_mem is per sort. A single query could easily use 2 or 4x work_mem\nall by itself. You can see how having hundreds of active connections\neach using 64MB or more at the same time can kill your server.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 11:11:55 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
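The sizing rule in the clarification above ("#CPU cores * 2 or so") can be sketched directly; `nproc` is assumed available, and the fallback of 8 cores is purely illustrative:

```shell
#!/bin/sh
# Keep max_connections near 2x the core count; let pgbouncer queue the rest.
cores=$(nproc 2>/dev/null || echo 8)
echo "suggested max_connections: $((cores * 2))"
```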
{
"msg_contents": "On 2015-03-15 11:09:34 -0600, Scott Marlowe wrote:\n> shared_mem of 12G is almost always too large. I'd drop it down to ~1G or so.\n\nI think that's a outdated wisdom, i.e. not generally true. I've now seen\na significant number of systems where a larger shared_buffers can help\nquite massively. The primary case where it can, in my experience, go\nbad are write mostly database where every buffer acquiration has to\nwrite out dirty data while holding locks. Especially during relation\nextension that's bad. A new enough kernel, a sane filesystem\n(i.e. not ext3) and sane checkpoint configuration takes care of most of\nthe other disadvantages.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 18:20:48 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On Sun, Mar 15, 2015 at 8:20 PM, Andres Freund <[email protected]> wrote:\n> On 2015-03-15 11:09:34 -0600, Scott Marlowe wrote:\n>> shared_mem of 12G is almost always too large. I'd drop it down to ~1G or so.\n>\n> I think that's a outdated wisdom, i.e. not generally true.\n\nQuite agreed. With note, that proper configured controller with BBU is needed.\n\n\n> A new enough kernel, a sane filesystem\n> (i.e. not ext3) and sane checkpoint configuration takes care of most of\n> the other disadvantages.\n\nMost likely. And better to be sure that filesystem mounted without barrier.\n\nAnd I agree with Scott - 64MB work mem AND max_connections = 500 is a\nrecipe for disaster. The problem could be in session mode of\npgbouncer. If you can work with transaction mode - do it.\n\n\nBest regards,\nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 20:42:51 +0300",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
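The transaction-mode suggestion above would look roughly like this in pgbouncer.ini; the database name, host, and pool size here are hypothetical, not the MusicBrainz configuration:

```ini
; Sketch: 500 client connections multiplexed onto a small server-side pool.
[databases]
musicbrainz = host=127.0.0.1 port=5432 dbname=musicbrainz

[pgbouncer]
pool_mode = transaction
max_client_conn = 500
default_pool_size = 32
```

Transaction pooling only works if the application avoids session state (prepared statements, session-level advisory locks, SET without LOCAL), which is why the message hedges with "if you can work with transaction mode".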
{
"msg_contents": "On 2015-03-15 20:42:51 +0300, Ilya Kosmodemiansky wrote:\n> On Sun, Mar 15, 2015 at 8:20 PM, Andres Freund <[email protected]> wrote:\n> > On 2015-03-15 11:09:34 -0600, Scott Marlowe wrote:\n> >> shared_mem of 12G is almost always too large. I'd drop it down to ~1G or so.\n> >\n> > I think that's a outdated wisdom, i.e. not generally true.\n> \n> Quite agreed. With note, that proper configured controller with BBU is needed.\n\nThat imo doesn't really have anything to do with it. The primary benefit\nof a BBU with writeback caching is accelerating (near-)synchronous\nwrites. Like the WAL. But, besides influencing the default for\nwal_buffers, a larger shared_buffers doesn't change the amount of\nsynchronous writes.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 18:46:47 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On Sun, Mar 15, 2015 at 8:46 PM, Andres Freund <[email protected]> wrote:\n> That imo doesn't really have anything to do with it. The primary benefit\n> of a BBU with writeback caching is accelerating (near-)synchronous\n> writes. Like the WAL.\n\nMy point was, that having no proper raid controller (today bbu surely\nneeded for the controller to be a proper one) + heavy writes of any\nkind, it is absolutely impossible to live with large shared_buffers\nand without io problems.\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n> --\n> Andres Freund http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 20:54:51 +0300",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On Sun, Mar 15, 2015 at 11:46 AM, Andres Freund <[email protected]> wrote:\n> On 2015-03-15 20:42:51 +0300, Ilya Kosmodemiansky wrote:\n>> On Sun, Mar 15, 2015 at 8:20 PM, Andres Freund <[email protected]> wrote:\n>> > On 2015-03-15 11:09:34 -0600, Scott Marlowe wrote:\n>> >> shared_mem of 12G is almost always too large. I'd drop it down to ~1G or so.\n>> >\n>> > I think that's a outdated wisdom, i.e. not generally true.\n>>\n>> Quite agreed. With note, that proper configured controller with BBU is needed.\n>\n> That imo doesn't really have anything to do with it. The primary benefit\n> of a BBU with writeback caching is accelerating (near-)synchronous\n> writes. Like the WAL. But, besides influencing the default for\n> wal_buffers, a larger shared_buffers doesn't change the amount of\n> synchronous writes.\n\nHere's the problem with a large shared_buffers on a machine that's\ngetting pushed into swap. It starts to swap BUFFERs. Once buffers\nstart getting swapped you're not just losing performance, that huge\nshared_buffers is now working against you because what you THINK are\nbuffers in RAM to make things faster are in fact blocks on a hard\ndrive being swapped in and out during reads. It's the exact opposite\nof fast. :)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 12:25:07 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 2015-03-15 12:25:07 -0600, Scott Marlowe wrote:\n> Here's the problem with a large shared_buffers on a machine that's\n> getting pushed into swap. It starts to swap BUFFERs. Once buffers\n> start getting swapped you're not just losing performance, that huge\n> shared_buffers is now working against you because what you THINK are\n> buffers in RAM to make things faster are in fact blocks on a hard\n> drive being swapped in and out during reads. It's the exact opposite\n> of fast. :)\n\nIMNSHO that's tackling things from the wrong end. If 12GB of shared\nbuffers drive your 48GB dedicated OLTP postgres server into swapping out\nactively used pages, the problem isn't the 12GB of shared buffers, but\nthat you require so much memory for other things. That needs to be\nfixed.\n\nBut! We haven't even established that swapping is an actual problem\nhere. The ~2GB of swapped out memory could just as well be the java raid\ncontroller management monstrosity or something similar. Those pages\nwon't ever be used and thus can better be used to buffer IO.\n\nYou can check what's actually swapped out using:\ngrep ^VmSwap /proc/[0-9]*/status|grep -v '0 kB'\n\nFor swapping to be actually harmful you need to have pages that are\nregularly swapped in. vmstat will tell.\n\nIn a concurrent OLTP workload (~450 established connections do suggest\nthat) with a fair amount of data keeping the hot data set in\nshared_buffers can significantly reduce problems. Constantly searching\nfor victim buffers isn't a nice thing, and that will happen if your most\nfrequently used data doesn't fit into s_b. 
On the other hand, if your\ndata set is so large that even the hottest part doesn't fit into memory\n(perhaps because there's no hottest part as there's no locality at all),\na smaller shared buffers can make things more efficient, because the\nsearch for replacement buffers is cheaper with a smaller shared buffers\nsetting.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 15 Mar 2015 23:47:56 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
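The per-process swap check quoted above can be expanded into a small loop that also totals the swapped pages, to see whether the ~2GB in swap is dead weight or actively used memory. Linux-specific `/proc` layout assumed:

```shell
#!/bin/sh
# List processes with nonzero VmSwap and sum them.
total_kb=0
for f in /proc/[0-9]*/status; do
    kb=$(awk '/^VmSwap:/ {print $2}' "$f" 2>/dev/null)
    if [ -n "$kb" ] && [ "$kb" -gt 0 ]; then
        printf '%-28s %8s kB\n' "$f" "$kb"
        total_kb=$((total_kb + kb))
    fi
done
echo "total swapped: ${total_kb} kB"
```

As the message notes, a large total here is only harmful if `vmstat` also shows pages being swapped back *in* regularly.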
{
"msg_contents": "How many CPUs in play here on the PG Cluster Server,\n\ncat /proc/cpuinfo | grep processor | wc -l\n\n\nI see you got pg_stat_statements enabled, what are the SQL you \nexperience during this heavy load time? And does explain on them show a \nlot of sorting activity that requires more work_mem.\n\nPlease enable log_checkpoints, so we can see if your checkpoint_segments \nis adequate.\n\n> Andres Freund <mailto:[email protected]>\n> Sunday, March 15, 2015 6:47 PM\n>\n> IMNSHO that's tackling things from the wrong end. If 12GB of shared\n> buffers drive your 48GB dedicated OLTP postgres server into swapping out\n> actively used pages, the problem isn't the 12GB of shared buffers, but\n> that you require so much memory for other things. That needs to be\n> fixed.\n>\n> But! We haven't even established that swapping is an actual problem\n> here. The ~2GB of swapped out memory could just as well be the java raid\n> controller management monstrosity or something similar. Those pages\n> won't ever be used and thus can better be used to buffer IO.\n>\n> You can check what's actually swapped out using:\n> grep ^VmSwap /proc/[0-9]*/status|grep -v '0 kB'\n>\n> For swapping to be actually harmful you need to have pages that are\n> regularly swapped in. vmstat will tell.\n>\n> In a concurrent OLTP workload (~450 established connections do suggest\n> that) with a fair amount of data keeping the hot data set in\n> shared_buffers can significantly reduce problems. Constantly searching\n> for victim buffers isn't a nice thing, and that will happen if your most\n> frequently used data doesn't fit into s_b. 
On the other hand, if your\n> data set is so large that even the hottest part doesn't fit into memory\n> (perhaps because there's no hottest part as there's no locality at all),\n> a smaller shared buffers can make things more efficient, because the\n> search for replacement buffers is cheaper with a smaller shared buffers\n> setting.\n>\n> Greetings,\n>\n> Andres Freund\n>\n> Scott Marlowe <mailto:[email protected]>\n> Sunday, March 15, 2015 2:25 PM\n>\n> Here's the problem with a large shared_buffers on a machine that's\n> getting pushed into swap. It starts to swap BUFFERs. Once buffers\n> start getting swapped you're not just losing performance, that huge\n> shared_buffers is now working against you because what you THINK are\n> buffers in RAM to make things faster are in fact blocks on a hard\n> drive being swapped in and out during reads. It's the exact opposite\n> of fast. :)\n>\n>\n> Andres Freund <mailto:[email protected]>\n> Sunday, March 15, 2015 1:46 PM\n>\n> That imo doesn't really have anything to do with it. The primary benefit\n> of a BBU with writeback caching is accelerating (near-)synchronous\n> writes. Like the WAL. But, besides influencing the default for\n> wal_buffers, a larger shared_buffers doesn't change the amount of\n> synchronous writes.\n>\n> Greetings,\n>\n> Andres Freund\n>\n> Ilya Kosmodemiansky <mailto:[email protected]>\n> Sunday, March 15, 2015 1:42 PM\n> On Sun, Mar 15, 2015 at 8:20 PM, Andres Freund<[email protected]> wrote:\n>> On 2015-03-15 11:09:34 -0600, Scott Marlowe wrote:\n>>> shared_mem of 12G is almost always too large. I'd drop it down to ~1G or so.\n>> I think that's a outdated wisdom, i.e. not generally true.\n>\n> Quite agreed. With note, that proper configured controller with BBU is needed.\n>\n>\n>> A new enough kernel, a sane filesystem\n>> (i.e. not ext3) and sane checkpoint configuration takes care of most of\n>> the other disadvantages.\n>\n> Most likely. 
And better to be sure that filesystem mounted without barrier.\n>\n> And I agree with Scott - 64MB work mem AND max_connections = 500 is a\n> recipe for disaster. The problem could be in session mode of\n> pgbouncer. If you can work with transaction mode - do it.\n>\n>\n> Best regards,\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n>\n> Andres Freund <mailto:[email protected]>\n> Sunday, March 15, 2015 1:20 PM\n>\n> I think that's a outdated wisdom, i.e. not generally true. I've now seen\n> a significant number of systems where a larger shared_buffers can help\n> quite massively. The primary case where it can, in my experience, go\n> bad are write mostly database where every buffer acquiration has to\n> write out dirty data while holding locks. Especially during relation\n> extension that's bad. A new enough kernel, a sane filesystem\n> (i.e. not ext3) and sane checkpoint configuration takes care of most of\n> the other disadvantages.\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Sun, 15 Mar 2015 19:13:52 -0400",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 15.3.2015 18:54, Ilya Kosmodemiansky wrote:\n> On Sun, Mar 15, 2015 at 8:46 PM, Andres Freund <[email protected]> wrote:\n>> That imo doesn't really have anything to do with it. The primary\n>> benefit of a BBU with writeback caching is accelerating\n>> (near-)synchronous writes. Like the WAL.\n> \n> My point was, that having no proper raid controller (today bbu\n> surely needed for the controller to be a proper one) + heavy writes\n> of any kind, it is absolutely impossible to live with large\n> shared_buffers and without io problems.\n\nThat is not really true, IMHO.\n\nThe benefit of the write cache is that it can absorb a certain amount of\nwrites, equal to the size of the cache (nowadays usually 512MB or 1GB),\nwithout forcing them to disks.\n\nIt however still has to flush the dirty data to the drives later, but\nthat side usually has much lower throughput - e.g. while you can easily\nwrite several GB/s to the controller, the drives usually handle only\n~1MB/s of random writes each (I assume rotational drives here).\n\nBut if you do a lot of random writes (which is likely the case for\nwrite-heavy databases), you'll fill the write cache pretty soon and will\nbe bounded by the drives anyway.\n\nThe controller really can't help with sequential writes much, because\nthe drives already handle that quite well. And SSDs are a completely\ndifferent story of course.\n\nThat does not mean the write cache is useless - it can absorb short\nbursts of random writes, fix the RAID5 write hole, the controller\nmay offload the parity computation, etc. Whenever someone asks me whether\nthey should buy a RAID controller with write cache for their database\nserver, my answer is \"absolutely yes\" in 95.23% cases ...\n\n... 
but really it's not something that magically changes the limits for\nwrite-heavy databases - the main limit are still the drives.\n\nregards\nTomas\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 00:29:11 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 2015-03-15 20:54:51 +0300, Ilya Kosmodemiansky wrote:\n> On Sun, Mar 15, 2015 at 8:46 PM, Andres Freund <[email protected]> wrote:\n> > That imo doesn't really have anything to do with it. The primary benefit\n> > of a BBU with writeback caching is accelerating (near-)synchronous\n> > writes. Like the WAL.\n> \n> My point was, that having no proper raid controller (today bbu surely\n> needed for the controller to be a proper one) + heavy writes of any\n> kind, it is absolutely impossible to live with large shared_buffers\n> and without io problems.\n\nAnd my point is that you're mostly wrong. What a raid controller's\nwriteback cache usefully accelerates is synchronous writes, i.e. writes\nthat the application waits for. Usually raid controllers don't have much\nchance to reorder the queued writes (i.e. turning neighboring writes\ninto one larger sequential write). What they do excel at is making\nsynchronous writes to disk return faster because the data is only\nwritten to the controller's memory, not to actual storage. They're\nalso good at avoiding actual writes to disk when the *same* page is\nwritten to multiple times in a short amount of time.\n\nIn postgres, writes for data that goes through shared_buffers are usually\nasynchronous. We write them to the OS's page cache when a page is needed\nfor other contents, is undirtied by the bgwriter, or written out during\na checkpoint; but do *not* wait for the write to hit the disk. The\ncontroller's write back cache doesn't hugely help here, because it\ndoesn't make *that* much of a difference whether the dirty data stays in\nthe kernel's page cache or in the controller.\n\nIn contrast to that, writes to the WAL are often more or less\nsynchronous. We actually wait (via fdatasync()/fsync() syscalls) for\nwrites to hit disk in a bunch of scenarios, most commonly when\ncommitting a transaction. 
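As a toy sketch of what that waiting costs in flush calls (all record sizes and counts below are invented for illustration, not measured): with a flush per commit, every narrow commit record pays its own fdatasync(), while deferred flushing writes each 8kb WAL page out only once.

```python
# Toy model (invented numbers): count WAL flush calls for a stream of
# small commit records, comparing a flush per commit against one
# deferred flush per touched 8kb WAL page.
WAL_PAGE = 8192

def flush_calls(record_sizes, flush_per_commit):
    pos, seen_pages, calls = 0, set(), 0
    for size in record_sizes:
        pos += size
        page = pos // WAL_PAGE          # WAL page the commit record lands on
        if flush_per_commit:
            calls += 1                  # fdatasync() at every commit
        elif page not in seen_pages:
            seen_pages.add(page)        # page is flushed once, later
            calls += 1
    return calls

records = [120] * 200                   # 200 narrow OLTP commit records
print(flush_calls(records, True), flush_calls(records, False))  # prints: 200 3
```

In this invented stream roughly 67 synchronous writes land on each 8kb page, the same order as the 20-100 repeated flushes per page that narrow-row OLTP produces.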
Unless synchronous_commit = off, every COMMIT\nin a transaction that wrote data implies a fdatasync() of a file in\npg_xlog (well, we do optimize that in some conditions, but let's leave\nthat out for now).\n\nAdditionally, if there are many smaller/interleaved transactions, we\nwill write()/fdatasync() out the same 8kb WAL page repeatedly. Every time\na transaction commits (and some other things) the page that the commit\nrecord is on will be flushed. As the WAL records for insertions,\nupdates, deletes and commits are frequently much smaller than 8kb, that will\noften happen 20-100 times for the same page in OLTP scenarios with narrow\nrows. That's why synchronous_commit = off can be such a huge win for\nOLTP write workloads without a writeback cache - synchronous writes are\nturned into asynchronous writes, and repetitive writes to the same page\nare avoided. It also explains why synchronous_commit = off has much less\nof an effect for bulk write workloads: as there are no synchronous disk\nwrites due to WAL flushes at commit time (there are only very few\ncommits), synchronous commit doesn't have much of an effect.\n\n\nThat said, there are a couple of reasons why you're not completely wrong:\n\nHistorically, when using ext3 with data=ordered and some other\nfilesystems, synchronous writes to one file forced *all* other\npreviously dirtied data to also be flushed. That means that if you have\npg_xlog and the normal relation files on the same filesystem, the\nsynchronous writes to the WAL will not only have to write out the new\nWAL (often not that much data), but also all the other dirty data. The\nOS will often be unable to do efficient write combining in that case,\nbecause a) there's not yet that much data there, b) postgres doesn't\norder writes during checkpoints. That means that WAL writes will\nsuddenly have to write out much more data => COMMITs are slow. 
That's\nwhere the suggestion to keep pg_xlog on a separate partition largely comes\nfrom.\n\nWrites going through shared_buffers are sometimes indirectly turned into\nsynchronous writes (from the OS's perspective at least, which means\nthey'll be done at a higher priority). That happens when the checkpointer\nfsync()s all the files at the end of a checkpoint. When things are going\nwell and checkpoints are executed infrequently and completely guided by\ntime (i.e. triggered solely by checkpoint_timeout, and not\ncheckpoint_segments) that's usually not too bad. You'll see a relatively\nsmall latency spike for transactions.\nUnfortunately the ext* filesystems have an implementation problem here,\nwhich can make this problem much worse: the way writes are prioritized\nduring an fsync() can stall out concurrent synchronous reads/writes\npretty badly. That's much less of a problem with e.g. xfs. Which is why\nI'd atm not suggest ext4 for write intensive applications.\n\nThe other situation where this can lead to big problems is if your\ncheckpoints aren't scheduled by time (you can recognize that by enabling\nlog_checkpoints and checking a) that time is the trigger, b) that they're\nactually happening in an interval consistent with checkpoint_timeout). If\nthe relation files are not written out in a smoothed out fashion\n(configured by checkpoint_completion_target), a *lot* of dirty buffers\ncan exist in the OS's page cache. Especially because the default 'dirty'\nsettings in linux on servers with a lot of IO are often completely\ninsane; especially with older kernels (pretty much everything before\n3.11 is badly affected). The important thing to do here is to configure\ncheckpoint_timeout, checkpoint_segments and checkpoint_completion_target\nin a consistent way. 
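One way to sanity-check that consistency, as a back-of-the-envelope sketch (the WAL rate is an assumed input you would have to measure yourself; 16MB is the default segment size):

```python
# Rough consistency check (assumed inputs, not from this thread): for a
# measured steady WAL generation rate, estimate whether checkpoints will
# be triggered by time or by xlog volume.
SEGMENT_MB = 16  # default WAL segment size

def checkpoint_trigger(wal_rate_mb_s, checkpoint_timeout_s, checkpoint_segments):
    wal_per_timeout_mb = wal_rate_mb_s * checkpoint_timeout_s
    if wal_per_timeout_mb / SEGMENT_MB > checkpoint_segments:
        # xlog-triggered: raise checkpoint_segments or shorten the timeout
        return 'checkpoint_segments'
    # time-triggered: what you want for smooth, predictable checkpoints
    return 'checkpoint_timeout'

# 2 MB/s of WAL over a 5 minute timeout is 600 MB, i.e. 37.5 segments,
# which exceeds a checkpoint_segments of 32:
print(checkpoint_trigger(2.0, 300, 32))  # prints: checkpoint_segments
```

If this sketch says checkpoint_segments, log_checkpoints will show xlog-triggered checkpoints arriving faster than checkpoint_timeout, which is exactly the situation described above.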
In my opinion the default checkpoint_timeout is\n*way* too low, and just leads to a large increase in overall writes (with\nmore frequent checkpoints, repeated writes to the same page aren't\ncoalesced) *and* an increase in WAL volume (many more full_page_writes).\n\n\nLots more could be written about this topic; but I think I've blathered\non enough for the moment ;)\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Mon, 16 Mar 2015 00:30:49 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 15.3.2015 23:47, Andres Freund wrote:\n> On 2015-03-15 12:25:07 -0600, Scott Marlowe wrote:\n>> Here's the problem with a large shared_buffers on a machine that's\n>> getting pushed into swap. It starts to swap BUFFERs. Once buffers\n>> start getting swapped you're not just losing performance, that huge\n>> shared_buffers is now working against you because what you THINK are\n>> buffers in RAM to make things faster are in fact blocks on a hard\n>> drive being swapped in and out during reads. It's the exact opposite\n>> of fast. :)\n> \n> IMNSHO that's tackling things from the wrong end. If 12GB of shared \n> buffers drive your 48GB dedicated OLTP postgres server into swapping\n> out actively used pages, the problem isn't the 12GB of shared\n> buffers, but that you require so much memory for other things. That\n> needs to be fixed.\n\nI second this opinion.\n\nAs was already pointed out, the 500 connections is rather insane\n(assuming the machine does not have hundreds of cores).\n\nIf there are memory pressure issues, it's likely because many queries\nare performing memory-expensive operations at the same time (might even\nbe a bad estimate causing hashagg to use much more than work_mem).\n\n\n> But! We haven't even established that swapping is an actual problem\n> here. The ~2GB of swapped out memory could just as well be the java raid\n> controller management monstrosity or something similar. Those pages\n> won't ever be used and thus can better be used to buffer IO.\n> \n> You can check what's actually swapped out using:\n> grep ^VmSwap /proc/[0-9]*/status|grep -v '0 kB'\n> \n> For swapping to be actually harmful you need to have pages that are \n> regularly swapped in. vmstat will tell.\n\nI've already asked for vmstat logs, so let's wait.\n\n> In a concurrent OLTP workload (~450 established connections do\n> suggest that) with a fair amount of data keeping the hot data set in \n> shared_buffers can significantly reduce problems. 
Constantly\n> searching for victim buffers isn't a nice thing, and that will happen\n> if your most frequently used data doesn't fit into s_b. On the other\n> hand, if your data set is so large that even the hottest part doesn't\n> fit into memory (perhaps because there's no hottest part as there's\n> no locality at all), a smaller shared buffers can make things more\n> efficient, because the search for replacement buffers is cheaper with\n> a smaller shared buffers setting.\n\nI've met many systems with max_connections values this high, and it was\nmostly idle connections because of separate connection pools on each\napplication server. So mostly idle (90% of the time), but at peak time\nall the application servers want to do stuff at the same time. And it\nall goes KABOOOM! just like here.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Mon, 16 Mar 2015 00:41:54 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "Why is 500 connections \"insane\". We got 32 CPU with 96GB and 3000 max \nconnections, and we are doing fine, even when hitting our max concurrent \nconnection peaks around 4500. At a previous site, we were using 2000 \nmax connections on 24 CPU and 64GB RAM, with about 1500 max concurrent \nconnections. So I wouldn't be too hasty in saying more than 500 is \nasking for trouble. Just as long as you got your kernel resources set \nhigh enough to sustain it (SHMMAX, SHMALL, SEMMNI, and ulimits), and RAM \nfor work_mem.\n",
"msg_date": "Sun, 15 Mar 2015 19:55:23 -0400",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 16.3.2015 00:55, [email protected] wrote:\n> Why is 500 connections \"insane\". We got 32 CPU with 96GB and 3000\n> max connections, and we are doing fine, even when hitting our max\n> concurrent connection peaks around 4500. At a previous site, we were\n> using 2000 max connections on 24 CPU and 64GB RAM, with about 1500\n> max concurrent connections. So I wouldn't be too hasty in saying more\n> than 500 is asking for trouble. Just as long as you got your kernel\n> resources set high enough to sustain it (SHMMAX, SHMALL, SEMMNI, and\n> ulimits), and RAM for work_mem.\n\nIf all the connections are active at the same time (i.e. running\nqueries), they have to share the 32 cores somehow. Or I/O, if that's the\nbottleneck.\n\nIn other words, you're not improving the throughput of the system,\nyou're merely increasing latencies. And it may easily happen that the\nlatency increase is not linear, but grows faster - because of locking,\ncontext switches and other process-related management.\n\nImagine you have a query taking 1 second of CPU time. If you have 64\nsuch queries running concurrently on 32 cores, each gets only 1/2 a CPU\nand so takes >=2 seconds. With 500 queries, it's >=15 seconds per query, etc.\n\nIf those queries are acquiring the same locks (e.g. updating the same\nrows, or so), you can imagine what happens ...\n\nAlso, if part of the query requires a certain amount of memory for part\nof the plan, it now holds that memory for much longer too. That only\nincreases the chance of OOM issues.\n\nIt may work fine when most of the connections are idle, but it makes\nstorms like this possible.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Mon, 16 Mar 2015 01:07:36 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "(please quote properly)\n\nOn 2015-03-15 19:55:23 -0400, [email protected] wrote:\n> Why is 500 connections \"insane\". We got 32 CPU with 96GB and 3000 max\n> connections, and we are doing fine, even when hitting our max concurrent\n> connection peaks around 4500. At a previous site, we were using 2000 max\n> connections on 24 CPU and 64GB RAM, with about 1500 max concurrent\n> connections. So I wouldn't be too hasty in saying more than 500 is asking\n> for trouble. Just as long as you got your kernel resources set high enough\n> to sustain it (SHMMAX, SHMALL, SEMMNI, and ulimits), and RAM for work_mem.\n\nIt may work acceptably in some scenarios, but it can lead to significant\nproblems. Several things in postgres scale linearly\n(from the algorithmic point of view; often CPU characteristics like\ncache sizes make it worse) with max_connections, most notably acquiring a\nsnapshot. It usually works ok enough if you don't have a high number of\nqueries per second, but if you do, you can run into horrible contention\nproblems. Absurdly enough, that matters *more* on bigger machines with\nseveral sockets. It's especially bad on 4+ socket systems.\n\nThe other aspect is that such a high number of full connections usually\njust isn't helpful for throughput. Not even the most massive NUMA\nsystems (~256 hardware threads is the realistic max atm IIRC) can\nprocess 4.5k queries at the same time. It'll often be much more\nefficient if all connections above a certain number aren't allocated a\nfull postgres backend, with all its overhead, but instead use a much more\nlightweight pooled connection.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Mon, 16 Mar 2015 01:12:34 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "I agree with your counter argument about how high max_connections \"can\" \ncause problems, but max_connections may not be part of the problem here. \nThere's a bunch of \"depends stuff\" in there based on workload details, # \ncpus, RAM, etc.\n\nI'm still waiting to find out how many CPUs are on this DB server. Did I \nmiss it somewhere in this email thread?\n",
"msg_date": "Sun, 15 Mar 2015 20:17:44 -0400",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
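The `grep ^VmSwap /proc/[0-9]*/status` check quoted in the message above can be scripted; here is a rough Python equivalent (an illustrative sketch, not from the thread — it assumes a Linux `/proc` filesystem and silently skips processes it cannot read):

```python
import glob
import re

def swapped_processes(min_kb=0):
    """Return (pid, name, swap_kb) tuples for processes whose VmSwap exceeds
    min_kb, largest first. Mirrors:
        grep ^VmSwap /proc/[0-9]*/status | grep -v '0 kB'
    """
    results = []
    for path in glob.glob("/proc/[0-9]*/status"):
        try:
            with open(path) as f:
                text = f.read()
        except OSError:
            continue  # process exited, or no permission to read it
        name = re.search(r"^Name:\s+(\S+)", text, re.M)
        swap = re.search(r"^VmSwap:\s+(\d+) kB", text, re.M)
        if name and swap and int(swap.group(1)) > min_kb:
            pid = int(path.split("/")[2])
            results.append((pid, name.group(1), int(swap.group(1))))
    return sorted(results, key=lambda r: -r[2])

if __name__ == "__main__":
    for pid, name, kb in swapped_processes():
        print(f"{pid:>7} {name:<20} {kb} kB swapped")
```

As Andres notes, a nonzero VmSwap is only a problem if those pages are regularly swapped back in; this just tells you *who* owns the swapped memory (e.g. whether it really is the "java raid controller management monstrosity").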
{
"msg_contents": "On 16/03/15 13:07, Tomas Vondra wrote:\n> On 16.3.2015 00:55, [email protected] wrote:\n>> Why is 500 connections \"insane\". We got 32 CPU with 96GB and 3000\n>> max connections, and we are doing fine, even when hitting our max\n>> concurrent connection peaks around 4500. At a previous site, we were\n>> using 2000 max connections on 24 CPU and 64GB RAM, with about 1500\n>> max concurrent connections. So I wouldn't be too hasty in saying more\n>> than 500 is asking for trouble. Just as long as you got your kernel\n>> resources set high enough to sustain it (SHMMAX, SHMALL, SEMMNI, and\n>> ulimits), and RAM for work_mem.\n[...]\n> Also, if part of the query required a certain amount of memory for part\n> of the plan, it now holds that memory for much longer too. That only\n> increases the chance of OOM issues.\n>\n[...]\n\nAlso you could get a situation where a small number of queries & their \ndata, relevant indexes, and working memory etc can all just fit into \nRAM, but the extra queries suddenly reduce the RAM so that even these \nqueries spill to disk, plus the time required to process the extra \nqueries. So a nicely behaved system could suddenly get a lot worse. \nEven before you consider additional lock contention and other nasty things!\n\nIt all depends...\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 13:22:52 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
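The per-query memory point above compounds with connection count: `work_mem` is a per-sort/hash-node limit in each backend, not a server-wide cap, so a worst-case bound simply multiplies out. A back-of-the-envelope sketch (the numbers plugged in are hypothetical, not taken from this thread):

```python
def worst_case_work_mem_gb(max_connections, work_mem_mb, nodes_per_query=1):
    """Rough upper bound (in GB) on memory that query workspaces alone could
    claim: every sort/hash node in every concurrently running backend may use
    up to work_mem, so the bound scales with all three factors."""
    return max_connections * work_mem_mb * nodes_per_query / 1024.0

# Hypothetical workload: 500 backends, work_mem = 64MB, 2 sort/hash nodes each
print(worst_case_work_mem_gb(500, 64, 2))  # 62.5 -- GB, easily past most RAM
```

The bound is pessimistic (most backends are idle most of the time), but it shows why a system that fits comfortably in RAM at normal load can suddenly spill to disk, or swap, at a concurrency peak.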
{
"msg_contents": "On 3/15/2015 6:54 AM, Robert Kaye wrote:\n> Hi!\n>\n> We at MusicBrainz have been having trouble with our Postgres install \n> for the past few days. I’ve collected all the relevant information here:\n>\n> http://blog.musicbrainz.org/2015/03/15/postgres-troubles/\n>\n> If anyone could provide tips, suggestions or other relevant advice for \n> what to poke at next, we would love it.\nRobert,\n\nWow - You've engaged the wizards indeed.\n\nI haven't heard or seen anything that would answer my *second* question \nif faced with this (my first would have been \"what changed\")....\n\nWhat is the database actually trying to do when it spikes? e.g. what \nqueries are running ?\nIs there any pattern in the specific activity (exactly the same query, \nor same query different data, or even just same tables, and/or same \nusers, same apps) when it spikes?\n\nI know from experience that well behaved queries can stop being well \nbehaved if underlying data changes\n\nand for the experts... what would a corrupt index do to memory usage?\n\nRoxanne",
"msg_date": "Sun, 15 Mar 2015 22:23:56 -0400",
"msg_from": "Roxanne Reid-Bennett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On March 16, 2015 at 3:24:34 AM, Roxanne Reid-Bennett ([email protected]) wrote:\nRobert,\n\nWow - You've engaged the wizards indeed.\n\nI haven't heard or seen anything that would answer my *second* question if faced with this (my first would have been \"what changed\")....\n\nYes, indeed — I feel honored to have so many people chime into this issue.\n\nThe problem was that nothing abnormal was happening — just the normal queries were running that hadn’t given us any problems for months. We undid everything that had been recently changed in an effort to address “what changed”. Nothing helped, which is what had us so perplexed.\n\nHowever, I am glad to report that our problems are fixed and that our server is back to humming along nicely. \n\nWhat we changed:\n\n1. As it was pointed out here, max_connections of 500 was in fact insanely high, especially in light of using PGbouncer. Before we used PGbouncer we needed a lot more connections and when we started using PGbouncer, we never reduced this number.\n\n2. Our server_lifetime was set far too high (1 hour). Josh Berkus suggested lowering that to 5 minutes.\n\n3. We reduced the number of PGbouncer active connections to the DB.\n\nWhat we learned:\n\n1. We had too many backends\n\n2. The backends were being kept around for too long by PGbouncer.\n\n3. This caused too many idle backends to kick around. Once we exhausted physical ram, we started swapping.\n\n4. Linux 3.2 apparently has some less than desirable swap behaviours. Once we started swapping, everything went nuts. \n\nGoing forward we’re going to upgrade our kernel the next time we have down time for our site and the rest should be sorted now.\n\nI wanted to thank everyone who contributed their thoughts to this thread — THANK YOU.\n\nAnd as I said to Josh earlier: \"Postgres rocks our world. I’m immensely pleased that once again the problems were our own stupidity and not PG’s fault. In over 10 years of us using PG, it has never been PG’s fault. Not once.”\n\nAnd thus we’re one tiny bit smarter today. Thank you everyone!\n\n\n\nP.S. If anyone would still like to get some more information about this problem for their own edification, please let me know. Given that we’ve fixed the issue, I don’t want to spam this list by responding to all the questions that were posed.\n\n\n--\n\n--ruaok \n\nRobert Kaye -- [email protected] -- http://musicbrainz.org",
"msg_date": "Mon, 16 Mar 2015 13:59:52 +0100",
"msg_from": "Robert Kaye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
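For readers hunting for the knobs involved: the changes Robert lists map onto PgBouncer settings roughly like this (an illustrative fragment, not his actual file — `server_lifetime = 300` reflects the 5-minute value mentioned, while the pool size and mode shown are hypothetical):

```ini
[pgbouncer]
; close server connections after 5 minutes instead of letting
; idle backends linger for an hour (default is 3600)
server_lifetime = 300

; cap the number of active server connections per database;
; this value is illustrative, not from the thread
default_pool_size = 20

; return server connections to the pool aggressively
pool_mode = transaction
```

With `server_lifetime` lowered and the pool capped, idle Postgres backends are recycled quickly instead of accumulating until physical RAM is exhausted.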
{
"msg_contents": "Robert Kaye schrieb am 16.03.2015 um 13:59:\n> However, I am glad to report that our problems are fixed and that our\n> server is back to humming along nicely.\n> \n> And as I said to Josh earlier: \"Postgres rocks our world. I’m\n> immensely pleased that once again the problems were our own stupidity\n> and not PG’s fault. In over 10 years of us using PG, it has never\n> been PG’s fault. Not once.”\n> \n> And thus we’re one tiny bit smarter today. Thank you everyone!\n> \n\nI think it would be nice if you can amend your blog posting to include the solution that you found. \n\nOtherwise this will simply stick around as yet another unsolved performance problem\n\nThomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:22:48 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "Robert Kaye <[email protected]> wrote:\n\n> However, I am glad to report that our problems are fixed and that our server is\n> back to humming along nicely. \n> \n> What we changed:\n> \n> 1. As it was pointed out here, max_connections of 500 was in fact insanely\n> high, especially in light of using PGbouncer. Before we used PGbouncer we\n> needed a lot more connections and when we started using PGbouncer, we never\n> reduced this number.\n> \n> 2. Our server_lifetime was set far too high (1 hour). Josh Berkus suggested\n> lowering that to 5 minutes.\n> \n> 3. We reduced the number of PGbouncer active connections to the DB.\n> \n\n\nMany thanks for the feedback!\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:38:16 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "\n> On Mar 16, 2015, at 2:22 PM, Thomas Kellerer <[email protected]> wrote:\n> \n> I think it would be nice if you can amend your blog posting to include the solution that you found. \n> \n> Otherwise this will simply stick around as yet another unsolved performance problem\n\n\nGood thinking:\n\n http://blog.musicbrainz.org/2015/03/16/postgres-troubles-resolved/\n\nI’ve also updated the original post with the link to the above. Case closed. :)\n\n--\n\n--ruaok \n\nRobert Kaye -- [email protected] -- http://musicbrainz.org\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 15:32:51 +0100",
"msg_from": "Robert Kaye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 03/16/2015 05:59 AM, Robert Kaye wrote:\n> 4. Linux 3.2 apparently has some less than desirable swap behaviours.\n> Once we started swapping, everything went nuts. \n\nRelevant to this:\n\nhttp://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n\nAnybody who is on Linux Kernels 3.0 to 3.8 really needs to upgrade soon.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 10:47:47 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "Robert many thanks for feedback!!\n\nCould you post your new pgbouncer config file??\n\nHow many postgresql process do you have now at OS with this new conf??\n\nHow many clients from app server hit your pgbouncer??\n\n\nRegards,\n\n2015-03-16 11:32 GMT-03:00 Robert Kaye <[email protected]>:\n\n>\n> > On Mar 16, 2015, at 2:22 PM, Thomas Kellerer <[email protected]> wrote:\n> >\n> > I think it would be nice if you can amend your blog posting to include\n> the solution that you found.\n> >\n> > Otherwise this will simply stick around as yet another unsolved\n> performance problem\n>\n>\n> Good thinking:\n>\n> http://blog.musicbrainz.org/2015/03/16/postgres-troubles-resolved/\n>\n> I’ve also updated the original post with the like to the above. Case\n> closed. :)\n>\n> --\n>\n> --ruaok\n>\n> Robert Kaye -- [email protected] --\n> http://musicbrainz.org\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Mon, 16 Mar 2015 16:24:50 -0300",
"msg_from": "Joao Junior <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
{
"msg_contents": "On 3/15/15 7:17 PM, [email protected] wrote:\nPlease avoid top-posting.\n\n> I agree with your counter argument about how high max_connections \"can\"\n> cause problems, but max_connections may not part of the problem here.\n> There's a bunch of \"depends stuff\" in there based on workload details, #\n> cpus, RAM, etc.\n\nSure, but the big, huge danger with a very large max_connections is that \nyou now have a large grenade with the pin pulled out. If *anything* \nhappens to disturb the server and push the active connection count past \nthe number of actual cores the box is going to fall over and not recover.\n\nIn contrast, if max_connections is <= the number of cores this is far \nless likely to happen. Each connection will get a CPU to run on, and as \nlong as they're not all clamoring for the same locks the server will be \nmaking forward progress. Clients may have to wait in the pool for a free \nconnection for some time, but once they get one their work will get done.\n\n> I'm still waiting to find out how many CPUs on this DB server. Did i\n> miss it somewhere in the email thread below?\n\nhttp://blog.musicbrainz.org/2015/03/15/postgres-troubles/ might show it \nsomewhere...\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 15:01:04 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
},
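Jim's "active connections ≤ cores" argument lines up with the widely cited pool-sizing starting point of roughly cores × 2 plus effective spindle count (a community rule of thumb, not something stated in this thread). As arithmetic:

```python
def pool_size_hint(cores, effective_spindles):
    """Common starting-point heuristic for sizing a connection pool:
    roughly (cores * 2) + effective_spindles. A benchmark starting
    point, not a rule -- tune from here for the actual workload."""
    return cores * 2 + effective_spindles

# Hypothetical box matching numbers mentioned in the thread:
# 32 cores, 12 drives
print(pool_size_hint(32, 12))  # 76 -- an order of magnitude below 500
```

Either way the conclusion is the same: queue clients in the pooler rather than letting hundreds of backends contend for a few dozen cores.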
{
"msg_contents": "On Mon, Mar 16, 2015 at 6:59 AM, Robert Kaye <[email protected]> wrote:\n>\n> 4. Linux 3.2 apparently has some less than desirable swap behaviours. Once\n> we started swapping, everything went nuts.\n\nOn older machines I used to just turn off swap altogether. Esp if I\nwasn't running out of memory but swap was engaging anyway. swappiness\n= 0 didn't help, nothing did, I just kept seeing kswapd working its\nbutt off doing nothing but hitting the swap partition.\n\nSo glad to be off those old kernels.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:29:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MusicBrainz postgres performance issues"
}
] |
[
{
"msg_contents": "I wasn't sure whether to post this in general, admin or performance but \nsince it is basically a performance question I went with performance.\n\nI'm about to launch a new website that is written using the Django web \nframework and has PostgreSQL as the database server. Unfortunately I \ncan't afford to get dedicated hardware at the launch of the website as I \nwon't be making money off it for a couple of months (maybe longer).\n\nSo I was wondering if anyone had any recommendations for decent VPS \nproviders that have good hardware specs for running a PostgreSQL server? \nI'll be using PostgreSQL 9.4.\n\nThe database is likely to be quite small (under 1GB) for quite some time \nso should I go for double the size of the database in RAM so I can fit \nit all in memory if required? The database will be mainly read only with \nonly a small number of writes (although as new features are added the \nnumber of database write operations will increase).\n\nI guess SSDs are essential these days but am I right about the amount of \nRAM? Is there anything else I should be looking out for? I'll just be \nrunning PostgreSQL on the VPS, the web server and app server will be run \non different VPSs.\n\nIn the past I've used Linode, Digital Ocean, Vultr and RamNode. I've \nbecome disheartened by Digital Ocean so don't want to use them for this \nproject.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 05:08:42 +0000",
"msg_from": "Some Developer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best VPS provider for running performant PostgreSQL database server"
},
{
"msg_contents": "On 3/16/15 12:08 AM, Some Developer wrote:\n> I wasn't sure whether to post this in general, admin or performance but\n> since it is basically a performance question I went with performance.\n>\n> I'm about to launch a new a website that is written using the Django web\n> framework and has PostgreSQL as the database server. Unfortunately I\n> can't afford to get dedicated hardware at the launch of the website as I\n> won't be making money off it for a couple of months (maybe longer).\n>\n> So I was wondering if anyone had any recommendations for decent VPS\n> providers that have good hardware specs for running a PostgreSQL server?\n> I'll be using PostgreSQL 9.4.\n>\n> The database is likely to be quite small (under 1GB) for quite sometime\n> so should I go for double the size of the database in RAM so I can fit\n> it all in memory if required? The database will be mainly read only with\n> only a small number of writes (although as new features are added the\n> number of database write operations will increase).\n\nThat's probably your best bet. If you go that route then IO performance \nbasically shouldn't matter. That means that instead of spending money \nfor a VPS you could just use a cheap EC2 instance.\n\n> I guess SSDs are essential these days but am I right about the amount of\n> RAM? Is there anything else I should be looking out for? I'll just be\n> running PostgreSQL on the VPS, the web server and app server will be run\n> on different VPSs.\n\nSSD is in no way essential. It's all a question of what your needs are, \nand from how you're describing it right now your needs are extremely modest.\n\nOne thing you absolutely should do however is have at least 1 hot \nstandby. That's an absolute must with services like EC2 where a node can \njust vanish, and it's still a good idea with a VPS.\n\n> In the past I've used Linode, Digital Ocean, Vultr and RamNode. 
I've\n> become disheartened by Digital Ocean so don't want to use them for this\n> project.\n\nYou should take a look at \nhttps://github.com/manageacloud/cloud-benchmark-postgres and \nhttps://www.youtube.com/watch?v=JtORBqQdKHY\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:33:39 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best VPS provider for running performant PostgreSQL\n database server"
}
] |
[
{
"msg_contents": "Hi,\nWe have a so far (to us) unexplainable issue on our production systems after we roughly doubled the amount of data we import daily. We should be ok on pure theoretical hardware performance, but we are seeing some weird IO counters when the actual throughput of the writes is very low. The use case is as follows: - typical DW - relatively constant periodic data loads - i.e. heavy write - we receive large CSV files ~ 5-10Gb every 15 minutes spread out across 5-7 minutes - Custom ETL scripts process and filter files within < 30 seconds down to about 5Gb CSV ready to load - 2 loader queues load the files, picking off a file one-by-one - tables are partitioned daily, indexed on a primary key + timestamp - system is HP blade; 128Gb RAM, 2x 8-core, 12x 10k RPM RAID1+0 (database) on first controller, 2x 15k RAID1 (xlog) on a different controller - DB size is ~2.5Tb; rotating load of 30 days keeps the database stable - filesystem: zfs with lz4 compression - raw throughput of the database disk is > 700Mbytes/sec sequential and >150Mbytes random for read and roughly half for write in various benchmarks - CPU load is minimal when copy loads are taking place (i.e. after ETL has finished)\nThe issue is that the system is constantly checkpointing regardless of various kernel and postgres settings. Having read through most of the history of this list and most of the recommendations on various blogs, we have been unable to find an answer why the checkpoints are being written so slowly. Even when we disable all import processes or if index is dropped, the checkpoint is still taking > 1hour. Stats are pointing to checkpoint sizes of roughly 7Gb which should take < 1min even with full random reads; so even when imports are fully disabled, what is not making sense is why would the checkpointing be taking well over an hour?\nOne other thing that's noticed, but not measured, i.e. 
mostly anecdotal is that for a period of <1hr when postgres is restarted, the system performs mostly fine and checkpoints are completing in <5min; so it may be that after a while some (OS/postgres) buffers are filling up and causing this issue?\nFull iostat/iotop, configuration, checkpoint stats, etc. are pasted below for completeness. Highlights are:checkpoint_segments=512shared_buffers=16GBcheckpoint_timeout=15mincheckpoint_completion_target=0.1\nRegards,Steve\n---Checkpoint stats:\ndb=# select * from pg_stat_bgwriter;\n checkpoints_timed 6 checkpoints_req 3 checkpoint_write_time 26346184 checkpoint_sync_time 142 buffers_checkpoint 4227065 buffers_clean 4139841 maxwritten_clean 8261 buffers_backend 9128583 buffers_backend_fsync 0 buffers_alloc 9311478 stats_reset 2015-03-17 11:14:21.5649\n---postgres log file - checkpoint log entries:\n2015-03-17 11:25:25 LOG: checkpoint complete: wrote 855754 buffers (40.8%); 0 transaction log file(s) added, 0 removed, 500 recycled; write=2988.185 s, sync=0.044 s, total=2988.331 s; sync files=110, longest=0.003 s, average=0.000 s2015-03-17 11:25:25 LOG: checkpoint starting: xlog time2015-03-17 11:59:54 LOG: parameter \"checkpoint_completion_target\" changed to \"0.9\"2015-03-17 13:30:20 LOG: checkpoint complete: wrote 1012112 buffers (48.3%); 0 transaction log file(s) added, 0 removed, 512 recycled; write=7494.228 s, sync=0.021 s, total=7494.371 s; sync files=119, longest=0.001 s, average=0.000 s2015-03-17 13:30:20 LOG: checkpoint starting: xlog time2015-03-17 14:21:53 LOG: parameter \"checkpoint_completion_target\" changed to \"0.1\"2015-03-17 16:00:58 LOG: checkpoint complete: wrote 1411979 buffers (67.3%); 0 transaction log file(s) added, 696 removed, 900 recycled; write=9036.898 s, sync=0.020 s, total=9038.538 s; sync files=109, longest=0.000 s, average=0.000 s2015-03-17 16:00:58 LOG: checkpoint starting: time2015-03-17 16:28:40 LOG: checkpoint complete: wrote 345183 buffers (16.5%); 0 transaction log file(s) added, 2001 
removed, 0 recycled; write=1660.333 s, sync=0.018 s, total=1661.816 s; sync files=93, longest=0.002 s, average=0.000 s2015-03-17 17:28:40 LOG: checkpoint starting: time2015-03-17 18:54:47 LOG: checkpoint complete: wrote 602037 buffers (28.7%); 0 transaction log file(s) added, 0 removed, 500 recycled; write=5166.540 s, sync=0.039 s, total=5166.657 s; sync files=122, longest=0.003 s, average=0.000 s2015-03-17 18:54:47 LOG: checkpoint starting: xlog time\n---iostat -x snapshot:\navg-cpu: %user %nice %system %iowait %steal %idle 0.50 0.00 2.35 15.09 0.00 82.05\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %utilsda 0.00 0.00 0.00 5.00 0.00 2056.00 822.40 0.00 0.00 0.00 0.00 0.00 0.00sdb 0.00 0.00 1055.00 549.00 41166.50 22840.00 79.81 5.28 3.28 4.94 0.10 0.62 100.00\n---vmstat 60 output\n# vmstat 60procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 5 3 877508 1251152 74476 98853728 0 0 87 1891 0 0 1 5 92 2 6 5 877508 915044 74940 99237840 0 0 46588 41857 6993 41784 8 4 76 12 2 4 877508 1676008 75292 98577936 0 0 46847 34540 4778 17175 9 3 75 13\n---sysctl settings for dirty pages\nvm.dirty_background_bytes = 0vm.dirty_background_ratio = 5vm.dirty_bytes = 0vm.dirty_expire_centisecs = 3000vm.dirty_ratio = 10vm.dirty_writeback_centisecs = 500\n---# free -m total used free shared buffers cachedMem: 128905 126654 2250 0 70 95035-/+ buffers/cache: 31549 97355Swap: 15255 856 14399\n\n---postgres settings: \n# cat postgresql.conf |grep checkcheckpoint_segments = 512 # in logfile segments, min 1, 16MB eachcheckpoint_timeout = 15min # range 30s-1hcheckpoint_completion_target = 0.1 # checkpoint target duration, 0.0 - 1.0checkpoint_warning = 10min # 0 disableslog_checkpoints = on\n# cat postgresql.conf |egrep -e 'wal|arch|hot|lru|shared'shared_buffers = 16384MBbgwriter_lru_maxpages = 500#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers 
scanned/roundwal_level = 'archive'archive_mode = onarchive_command = 'cd .' # we can also use exit 0, anything thatmax_wal_senders = 0wal_keep_segments = 500 hot_standby = off\n\n---iotop snapshot:Total DISK READ: 41.63 M/s | Total DISK WRITE: 31.43 M/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND32101 be/4 postgres 10.25 M/s 1085.86 K/s 0.00 % 96.80 % postgres: checkpointer process56661 be/4 postgres 6.84 M/s 591.61 K/s 0.00 % 90.91 % postgres: dbauser db [local] COPY56751 be/4 postgres 6.97 M/s 838.73 K/s 0.00 % 88.00 % postgres: dbauser db [local] COPY56744 be/4 postgres 6.13 M/s 958.55 K/s 0.00 % 85.48 % postgres: dbauser db [local] COPY56621 be/4 postgres 6.77 M/s 1288.05 K/s 0.00 % 83.96 % postgres: dbauser db [local] COPY32102 be/4 postgres 8.05 M/s 1340.47 K/s 0.00 % 82.47 % postgres: writer process 1005 be/0 root 0.00 B/s 0.00 B/s 0.00 % 5.81 % [txg_sync]32103 be/4 postgres 0.00 B/s 10.41 M/s 0.00 % 0.52 % postgres: wal writer process\n\n---",
"msg_date": "Wed, 18 Mar 2015 11:21:08 +0000",
"msg_from": "Steven Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow checkpoints"
},
{
"msg_contents": "Apologies about the formatting; resending again as plain-text.\n\nRegards,\nSteve\n\n\nFrom: [email protected]\nTo: [email protected]\nSubject: [PERFORM] Very slow checkpoints\nDate: Wed, 18 Mar 2015 11:21:08 +0000\n\nHi,\n\nWe have a so far (to us) unexplainable issue on our production systems after we roughly doubled the amount of data we import daily. We should be ok on pure theoretical hardware performance, but we are seeing some weird IO counters when the actual throughput of the writes is very low. The use case is as follows:\n\n - typical DW - relatively constant periodic data loads - i.e.
heavy write\n - we receive large CSV files ~ 5-10Gb every 15 minutes spread out across 5-7 minutes\n - Custom ETL scripts process and filter files within < 30 seconds down to about 5Gb CSV ready to load\n - 2 loader queues load the files, picking off a file one-by-one\n - tables are partitioned daily, indexed on a primary key + timestamp \n - system is HP blade; 128Gb RAM, 2x 8-core, 12x 10k RPM RAID1+0 (database) on first controller, 2x 15k RAID1 (xlog) on a different controller\n - DB size is ~2.5Tb; rotating load of 30 days keeps the database stable\n - filesystem: zfs with lz4 compression\n - raw throughput of the database disk is> 700Mbytes/sec sequential and>150Mbytes random for read and roughly half for write in various benchmarks\n - CPU load is minimal when copy loads are taking place (i.e. after ETL has finished)\n\nThe issue is that the system is constantly checkpointing regardless of various kernel and postgres settings. Having read through most of the history of this list and most of the recommendations on various blogs, we have been unable to find an answer why the checkpoints are being written so slowly. Even when we disable all import processes or if index is dropped, the checkpoint is still taking> 1hour. Stats are pointing to checkpoint sizes of roughly 7Gb which should take < 1min even with full random reads; so even when imports are fully disabled, what is not making sense is why would the checkpointing be taking well over an hour?\n\nOne other thing that's noticed, but not measured, i.e. mostly anecdotal is that for a period of <1hr when postgres is restarted, the system performs mostly fine and checkpoints are completing in <5min; so it may be that after a while some (OS/postgres) buffers are filling up and causing this issue?\n\nFull iostat/iotop, configuration, checkpoint stats, etc. are pasted below for completeness. 
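For reference, a rough sketch (assuming the stock 16 MB WAL segment size) of how much WAL these settings allow before an xlog-triggered checkpoint:

```python
# checkpoint_segments=512 with 16 MB segments: a checkpoint is forced
# after roughly segments * segment_size of WAL has been written.
wal_segment_mb = 16        # default WAL segment size in 9.x builds
checkpoint_segments = 512

max_wal_gb = checkpoint_segments * wal_segment_mb / 1024
print(f"xlog-triggered checkpoint after ~{max_wal_gb:.0f} GB of WAL")
```

The repeated "checkpoint starting: xlog time" entries and the ~500 "recycled" counts in the log suggest this limit is hit well before the 15 min timeout.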
Highlights are:\ncheckpoint_segments=512\nshared_buffers=16GB\ncheckpoint_timeout=15min\ncheckpoint_completion_target=0.1\n\nRegards,\nSteve\n\n---\nCheckpoint stats:\n\ndb=# select * from pg_stat_bgwriter;\n\n checkpoints_timed 6\n checkpoints_req 3\n checkpoint_write_time 26346184\n checkpoint_sync_time 142\n buffers_checkpoint 4227065\n buffers_clean 4139841\n maxwritten_clean 8261\n buffers_backend 9128583\n buffers_backend_fsync 0\n buffers_alloc 9311478\n stats_reset 2015-03-17 11:14:21.5649\n\n---\npostgres log file - checkpoint log entries:\n\n2015-03-17 11:25:25 LOG: checkpoint complete: wrote 855754 buffers (40.8%); 0 transaction log file(s) added, 0 removed, 500 recycled; write=2988.185 s, sync=0.044 s, total=2988.331 s; sync files=110, longest=0.003 s, average=0.000 s\n2015-03-17 11:25:25 LOG: checkpoint starting: xlog time\n2015-03-17 11:59:54 LOG: parameter \"checkpoint_completion_target\" changed to \"0.9\"\n2015-03-17 13:30:20 LOG: checkpoint complete: wrote 1012112 buffers (48.3%); 0 transaction log file(s) added, 0 removed, 512 recycled; write=7494.228 s, sync=0.021 s, total=7494.371 s; sync files=119, longest=0.001 s, average=0.000 s\n2015-03-17 13:30:20 LOG: checkpoint starting: xlog time\n2015-03-17 14:21:53 LOG: parameter \"checkpoint_completion_target\" changed to \"0.1\"\n2015-03-17 16:00:58 LOG: checkpoint complete: wrote 1411979 buffers (67.3%); 0 transaction log file(s) added, 696 removed, 900 recycled; write=9036.898 s, sync=0.020 s, total=9038.538 s; sync files=109, longest=0.000 s, average=0.000 s\n2015-03-17 16:00:58 LOG: checkpoint starting: time\n2015-03-17 16:28:40 LOG: checkpoint complete: wrote 345183 buffers (16.5%); 0 transaction log file(s) added, 2001 removed, 0 recycled; write=1660.333 s, sync=0.018 s, total=1661.816 s; sync files=93, longest=0.002 s, average=0.000 s\n2015-03-17 17:28:40 LOG: checkpoint starting: time\n2015-03-17 18:54:47 LOG: checkpoint complete: wrote 602037 buffers (28.7%); 0 transaction log file(s) 
added, 0 removed, 500 recycled; write=5166.540 s, sync=0.039 s, total=5166.657 s; sync files=122, longest=0.003 s, average=0.000 s\n2015-03-17 18:54:47 LOG: checkpoint starting: xlog time\n\n---\niostat -x snapshot:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.50 0.00 2.35 15.09 0.00 82.05\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nsda 0.00 0.00 0.00 5.00 0.00 2056.00 822.40 0.00 0.00 0.00 0.00 0.00 0.00\nsdb 0.00 0.00 1055.00 549.00 41166.50 22840.00 79.81 5.28 3.28 4.94 0.10 0.62 100.00\n\n---\nvmstat 60 output\n\n# vmstat 60\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 5 3 877508 1251152 74476 98853728 0 0 87 1891 0 0 1 5 92 2\n 6 5 877508 915044 74940 99237840 0 0 46588 41857 6993 41784 8 4 76 12\n 2 4 877508 1676008 75292 98577936 0 0 46847 34540 4778 17175 9 3 75 13\n\n---\nsysctl settings for dirty pages\n\nvm.dirty_background_bytes = 0\nvm.dirty_background_ratio = 5\nvm.dirty_bytes = 0\nvm.dirty_expire_centisecs = 3000\nvm.dirty_ratio = 10\nvm.dirty_writeback_centisecs = 500\n\n---\n# free -m\n total used free shared buffers cached\nMem: 128905 126654 2250 0 70 95035\n-/+ buffers/cache: 31549 97355\nSwap: 15255 856 14399\n\n\n---\npostgres settings: \n\n# cat postgresql.conf |grep check\ncheckpoint_segments = 512 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 15min # range 30s-1h\ncheckpoint_completion_target = 0.1 # checkpoint target duration, 0.0 - 1.0\ncheckpoint_warning = 10min # 0 disables\nlog_checkpoints = on\n\n# cat postgresql.conf |egrep -e 'wal|arch|hot|lru|shared'\nshared_buffers = 16384MB\nbgwriter_lru_maxpages = 500\n#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/round\nwal_level = 'archive'\narchive_mode = on\narchive_command = 'cd .' 
# we can also use exit 0, anything that\nmax_wal_senders = 0\nwal_keep_segments = 500 \nhot_standby = off\n\n\n---\niotop snapshot:\nTotal DISK READ: 41.63 M/s | Total DISK WRITE: 31.43 M/s\n TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND\n32101 be/4 postgres 10.25 M/s 1085.86 K/s 0.00 % 96.80 % postgres: checkpointer process\n56661 be/4 postgres 6.84 M/s 591.61 K/s 0.00 % 90.91 % postgres: dbauser db [local] COPY\n56751 be/4 postgres 6.97 M/s 838.73 K/s 0.00 % 88.00 % postgres: dbauser db [local] COPY\n56744 be/4 postgres 6.13 M/s 958.55 K/s 0.00 % 85.48 % postgres: dbauser db [local] COPY\n56621 be/4 postgres 6.77 M/s 1288.05 K/s 0.00 % 83.96 % postgres: dbauser db [local] COPY\n32102 be/4 postgres 8.05 M/s 1340.47 K/s 0.00 % 82.47 % postgres: writer process\n 1005 be/0 root 0.00 B/s 0.00 B/s 0.00 % 5.81 % [txg_sync]\n32103 be/4 postgres 0.00 B/s 10.41 M/s 0.00 % 0.52 % postgres: wal writer process\n\n\n---",
"msg_date": "Wed, 18 Mar 2015 11:25:27 +0000",
"msg_from": "Steven Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "Hi Steven,\n\nOn Wed, Mar 18, 2015 at 12:21 PM, Steven Jones\n<[email protected]> wrote:\n> - system is HP blade; 128Gb RAM, 2x 8-core, 12x 10k RPM RAID1+0 (database)\n\nDo you have a BBU on your controller? And how is your controller configured, I\nmean cache mode, io mode, disk write cache mode. You have 15K SAS\n(which form factor?) under WAL and 10K SAS under database, am I\ncorrect?\n\n> Full iostat/iotop, configuration, checkpoint stats, etc. are pasted below\n> for completeness. Highlights are:\n> checkpoint_segments=512\n> shared_buffers=16GB\n> checkpoint_timeout=15min\n> checkpoint_completion_target=0.1\n\nIt looks like your checkpoint settings are a bit strange, aside from\neverything else. If you choose a high value for checkpoint_segments, your\naim is to avoid checkpoints by timeout (or vice versa). If you have\ncheckpoint_segments=512, your checkpoint_timeout should be about\n60min. And in any case - checkpoint_completion_target=0.9 or 0.7 in order\nto spread disk load between checkpoints.\n\n\n\n> ---\n> sysctl settings for dirty pages\n>\n> vm.dirty_background_bytes = 0\n> vm.dirty_background_ratio = 5\n> vm.dirty_bytes = 0\n> vm.dirty_expire_centisecs = 3000\n> vm.dirty_ratio = 10\n> vm.dirty_writeback_centisecs = 500\n\nValues for these settings really depend on the RAID (and BBU size).\n\nAnd about the further problem description: do you have any graphical\nrepresentation of your % disc utilization?\n\nBest regards,\nIlya\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 12:42:43 +0100",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "Hi Ilya,\n\nThank you for the response.\n\nYes BBU is on the controller; 1024Mb. It is a HP P410i controller, with write caching turned on the controller; off on disk level.\n\n2x 15k SAS SFF for WAL and 12x 10k SAS SFF for DB\n\nWe have tried longer settings for checkpoint_timeout, but not 1hr; so we will try that as well.\n\nWe don't at this stage have any graphs, but we will set it up over the next 24hrs at least.\n\nRegards,\nSteve\n\n----------------------------------------\n> From: [email protected]\n> Date: Wed, 18 Mar 2015 12:42:43 +0100\n> Subject: Re: [PERFORM] Very slow checkpoints\n> To: [email protected]\n> CC: [email protected]\n>\n> Hi Steven,\n>\n> On Wed, Mar 18, 2015 at 12:21 PM, Steven Jones\n> <[email protected]> wrote:\n>> - system is HP blade; 128Gb RAM, 2x 8-core, 12x 10k RPM RAID1+0 (database)\n>\n> Have you BBU on your controller? And how your controller configured, I\n> mean cache mode, io mode, disk write cache mode. You have 15K SAS\n> (which form factor?) under WAL and 10K SAS under database, am I\n> correct?\n>\n>> Full iostat/iotop, configuration, checkpoint stats, etc. are pasted below\n>> for completeness. Highlights are:\n>> checkpoint_segments=512\n>> shared_buffers=16GB\n>> checkpoint_timeout=15min\n>> checkpoint_completion_target=0.1\n>\n> It looks like your checkpoint settings are a bit strange besides of\n> everything else. If you chose high value for checkpoint_segments, your\n> aim is to avoid checkpoints by timeout (or vice verse). If you have\n> checkpoint_segments=512, your checkpoint_timeout should be about\n> 60min. 
And anyway - checkpoint_completion_target=0.9 or 0.7 in order\n> to spread disk load between checkpoints.\n>\n>\n>\n>> \n>> sysctl settings for dirty pages\n>>\n>> vm.dirty_background_bytes = 0\n>> vm.dirty_background_ratio = 5\n>> vm.dirty_bytes = 0\n>> vm.dirty_expire_centisecs = 3000\n>> vm.dirty_ratio = 10\n>> vm.dirty_writeback_centisecs = 500\n>\n> Values for this settings are really dependent of RAID (and BBU size).\n>\n> And about further problem description: have you any graphical\n> representation of your % disc utilization?\n>\n> Best regards,\n> Ilya\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 11:58:45 +0000",
"msg_from": "Steven Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "On Wed, Mar 18, 2015 at 12:58 PM, Steven Jones\n<[email protected]> wrote:\n> Yes BBU is on the controller; 1024Mb. It is a HP P410i controller, with write caching turned on the controller; off on disk level.\n\n\nvm.dirty_background_bytes=67108864 and vm.dirty_bytes=536870912 look\nreasonable for a 512MB BBU; you can scale them for 1024MB or\nrecalculate them for dirty_background_ratio\n\nBy the way, which kernel do you use?\n\n> We don't at this stage have any graphs, but we will set it up over the next 24hrs at least.\n\nDo not forget to include iostat statistics on them, at least latency,\n%iowait and %util; such parameters are very helpful.\n\nAnd I am always suspicious of zfs under heavy writes. It is\nreliable and quite comfortable in terms of configuration, but for\nspeed, ext4 or xfs with barriers disabled looks more reasonable\n\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n>> [email protected]\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 13:13:36 +0100",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "On Wed, Mar 18, 2015 at 12:21 PM, Steven Jones\n<[email protected]> wrote:\n> - typical DW - relatively constant periodic data loads - i.e. heavy write\n> - we receive large CSV files ~ 5-10Gb every 15 minutes spread out across\n> 5-7 minutes\n> - DB size is ~2.5Tb; rotating load of 30 days keeps the database stable\n\nAnd an important addition: how your autovacuum is configured?\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 13:17:14 +0100",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "Thanks.\n\nKernel is: 3.5.0-36-generic #57~precise1-Ubuntu\n\nAlso - xlog is on ext4; db partition is zfs (on top of hardware RAID1+0).\n\nRegards,\nSteve\n\n----------------------------------------\n> From: [email protected]\n> Date: Wed, 18 Mar 2015 13:13:36 +0100\n> Subject: Re: [PERFORM] Very slow checkpoints\n> To: [email protected]\n> CC: [email protected]\n>\n> On Wed, Mar 18, 2015 at 12:58 PM, Steven Jones\n> <[email protected]> wrote:\n>> Yes BBU is on the controller; 1024Mb. It is a HP P410i controller, with write caching turned on the controller; off on disk level.\n>\n>\n> vm.dirty_background_bytes=67108864 and vm.dirty_bytes=536870912 looks\n> resonable for 512MB BBU, you can calculate them for 1024 or\n> recalculate them for dirty_background_ratio\n>\n> By the way, which kernel do you use?\n>\n>> We don't at this stage have any graphs, but we will set it up over the next 24hrs at least.\n>\n> Do not forget to have iostat statistics on them, at least latency,\n> %iowait and %util, such parameters are very helpful.\n>\n> And I am always suspicious about zfs under heavy writes. It is\n> reliable and quite comfortable in terms of configuration, but for\n> speed ext4 or xfs with disabled barrier looks more reasonable\n>\n>>>\n>>> --\n>>> Ilya Kosmodemiansky,\n>>>\n>>> PostgreSQL-Consulting.com\n>>> tel. +14084142500\n>>> cell. +4915144336040\n>>> [email protected]\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. 
+4915144336040\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 12:19:09 +0000",
"msg_from": "Steven Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "Autovacuum - default:\n\n#autovacuum = on # Enable autovacuum subprocess? 'on'\n#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and\n#autovacuum_max_workers = 3 # max number of autovacuum subprocesses\n#autovacuum_naptime = 1min # time between autovacuum runs\nautovacuum_vacuum_threshold = 500 # min number of row updates before\nautovacuum_analyze_threshold = 500 # min number of row updates before\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze\n#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum\n#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for\n # autovacuum, in milliseconds;\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovacuum, -1 means use\n\nRegards,\nSteve\n\n----------------------------------------\n> From: [email protected]\n> Date: Wed, 18 Mar 2015 13:17:14 +0100\n> Subject: Re: [PERFORM] Very slow checkpoints\n> To: [email protected]\n> CC: [email protected]\n>\n> On Wed, Mar 18, 2015 at 12:21 PM, Steven Jones\n> <[email protected]> wrote:\n>> - typical DW - relatively constant periodic data loads - i.e. heavy write\n>> - we receive large CSV files ~ 5-10Gb every 15 minutes spread out across\n>> 5-7 minutes\n>> - DB size is ~2.5Tb; rotating load of 30 days keeps the database stable\n>\n> And an important addition: how your autovacuum is configured?\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 12:21:12 +0000",
"msg_from": "Steven Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "On Wed, Mar 18, 2015 at 1:21 PM, Steven Jones\n<[email protected]> wrote:\n> #autovacuum = on # Enable autovacuum subprocess? 'on'\n> #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and\n> #autovacuum_max_workers = 3 # max number of autovacuum subprocesses\n> #autovacuum_naptime = 1min # time between autovacuum runs\n> autovacuum_vacuum_threshold = 500 # min number of row updates before\n> autovacuum_analyze_threshold = 500 # min number of row updates before\n> #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum\n> #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze\n\nIf I were you, I'd use the _scale_factor settings instead of the\nthresholds, because that makes your autovacuum aggressive enough (you\nneed it on such a workload) without firing too frequently (vacuuming\nhas its price). autovacuum_vacuum_scale_factor = 0.01 and\nautovacuum_analyze_scale_factor = 0.05 will be OK\n\nAnd if you see all your autovacuum workers active all the time (more\nthan 80% of the time, for example) it is a reason to increase\nautovacuum_max_workers\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 13:28:27 +0100",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 18, 2015 at 12:21 PM, Steven Jones\n<[email protected]> wrote:\n> Hi,\n\n> iostat -x snapshot:\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 0.50 0.00 2.35 15.09 0.00 82.05\n>\n> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz\n> avgqu-sz await r_await w_await svctm %util\n> sda 0.00 0.00 0.00 5.00 0.00 2056.00 822.40\n> 0.00 0.00 0.00 0.00 0.00 0.00\n> sdb 0.00 0.00 1055.00 549.00 41166.50 22840.00 79.81\n> 5.28 3.28 4.94 0.10 0.62 100.00\nYour sdb is saturated...\n\n\n> ---\n> iotop snapshot:\n> Total DISK READ: 41.63 M/s | Total DISK WRITE: 31.43 M/s\n> TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND\n> 32101 be/4 postgres 10.25 M/s 1085.86 K/s 0.00 % 96.80 % postgres:\n> checkpointer process\n> 56661 be/4 postgres 6.84 M/s 591.61 K/s 0.00 % 90.91 % postgres:\n> dbauser db [local] COPY\n\n> 32102 be/4 postgres 8.05 M/s 1340.47 K/s 0.00 % 82.47 % postgres: writer\n> process\n> 1005 be/0 root 0.00 B/s 0.00 B/s 0.00 % 5.81 % [txg_sync]\n> 32103 be/4 postgres 0.00 B/s 10.41 M/s 0.00 % 0.52 % postgres: wal\n> writer process\n\nWhy are checkpointer process and writer process reading at 18 MB/s ?\nI have no experience with zfs but could it be related to COW and\nrecordsize? I have no idea if these reads are counted in iotop output\nthough.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 16:27:36 +0100",
"msg_from": "didier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "Hi,\n\n>>\n>> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz\n>> avgqu-sz await r_await w_await svctm %util\n>> sda 0.00 0.00 0.00 5.00 0.00 2056.00 822.40\n>> 0.00 0.00 0.00 0.00 0.00 0.00\n>> sdb 0.00 0.00 1055.00 549.00 41166.50 22840.00 79.81\n>> 5.28 3.28 4.94 0.10 0.62 100.00\n> Your sdb is saturated...\n\nYes that's what iostat seems to indicate, but it's weird because at the same time it is reporting 100% io utilization I can hit the disk write (seq) at> 250Mbyte/sec:\n\n# sync;time bash -c \"(dd if=/dev/sda1 of=bf bs=8k count=500000; sync)\"\n500000+0 records in\n500000+0 records out\n4096000000 bytes (4.1 GB) copied, 14.8575 s, 276 MB/s\n\nreal\t0m14.896s\nuser\t0m0.068s\nsys\t0m10.157s\n\n\n> Why are checkpointer process and writer process reading at 18 MB/s ?\n> I have no experience with zfs but could it be related to COW and\n> recordsize? I have no idea if these reads are counted in iotop output\n> though.\n\nIn general, some random disk write benchmarks and varying block sizes don't have a huge effect. But for some reason the checkpointing process is just simply writing checkpoints too slowly. In the meantime the COPY is piling up logs while the previous checkpoint is still being written, so the next one starts straight away and no setting is able to split them up.\n\n\nRegards,\nSteve \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Mar 2015 05:28:40 +0000",
"msg_from": "Steven Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow checkpoints"
},
{
"msg_contents": "Would it not be the case to slightly decrease shared_buffers, lower your\ncheckpoint_timeout to e.g. 5 minutes or decrease checkpoint_segments, and\nset checkpoint_completion_target to 0.5 so as not to mess up the next\ncheckpoints?\n\n\nWhat the logs tell me is that one checkpoint starts immediately after the other.\n\nYou have a large shared_buffers and a checkpoint_completion_target\nvalue relative to the maximum time between checkpoints, so I imagine the\nwrites are spread over a very long time interval.\nAs your shared_buffers is relatively large, more data has to be handled by\neach checkpoint. The checkpoints are being triggered by time (15 minutes),\naccumulating a good amount of data.\n\nAll this happens every time you load your CSVs, and when the write phase\nof one checkpoint reaches the end, another starts immediately.\n\nThe high disk activity from the checkpoint and writer\nprocesses seems reasonable; they are doing their jobs amid a high load on\nshared_buffers.\n\n\nRegards",
"msg_date": "Thu, 19 Mar 2015 16:40:46 -0300",
"msg_from": "Joao Junior <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow checkpoints"
}
] |
[
{
"msg_contents": "We have views that run 20x slower, or fail to complete at all, in 9.4.1, whereas they finished in seconds in 8.4.7 on the same platform.\n\nAfter upgrading from 8.4 to 9.4, I ran ANALYZE on the entire DB. Performance improved for some but not all of the views.\n\nHere is the explain-analyze output from one of the slow views in 9.4: \nhttp://explain.depesz.com/s/36n\n\nUnfortunately I have no way of producing an 8.4 plan.\n\nWhile acknowledging that nested loops and sequential table scans account for 85% of the execution time, which suggests that a better query may be needed, why would the same query run in seconds on 8.x but take minutes on 9.x? \n\nDid the planner change significantly between the two releases? Is there any way to compel the 9.4 planner to produce an 8.4 plan? \n\nThanks in advance for your assistance.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 17:00:42 +0000",
"msg_from": "\"Carson, Leonard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "views much slower in 9.3 than 8.4"
},
{
"msg_contents": "\"Carson, Leonard\" <[email protected]> wrote:\n\n> While acknowledging that nested loops and sequential table scans\n> account for 85% of the execution time which suggests that a\n> better query may be needed, why would the same query run in\n> seconds on 8.x but take minutes on 9.x?\n\nFirst, please show the output of this from both servers:\n\nSELECT version();\nSELECT name, current_setting(name), source\nFROM pg_settings\nWHERE source NOT IN ('default', 'override');\n\nThen, for your newer server, please follow the steps outlined here:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nMy first guess would be that at some point your costing parameters\nwere tuned on the old system, but have not yet been tuned on the\nnew one. Rather than blindly using the old settings for the new\nserver, it would be good to see the information requested on the\nabove-cited page to determine good settings for the new server.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 18 Mar 2015 17:16:20 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views much slower in 9.3 than 8.4"
}
] |
[
{
"msg_contents": "Hi Team,\n\n\n\nI don't know which section this question comes under, so I am posting\nthis question to both the Admin and Performance mailing lists. Apologies in\nadvance.\n\n\n\nObjective:\n\n\n\nWe are planning to use PostgreSQL instead of Netezza for our data warehouse\nas well as database solutions. Right now, we have all our clients in one\nNetezza box. What we are thinking of is migrating our clients to dedicated\nPostgreSQL for each of them. We will start with one of the clients. If it\nworks successfully, we will migrate all the clients one by one. The\nobjective is to get better performance than our existing solution. We are\nhopeful of that mainly for two reasons. Firstly, we will have a\ndedicated server for each of the clients with good hardware instead of\nhaving one server with all the clients on it. Secondly, we can spend on\nhardware much more easily than spending on a proprietary appliance.\n\n\n\nI am hoping this community can help us understand what would be the good\ninfrastructure/hardware that can help us in achieving our goal.\n\n\n\nHere are a few of the statistics which might act as a starting point.\n\n\n\nAvailability: High (24*7).\n\nUser Data : 700 GB which will increase to 1.5 TB in the next 2-3 years.\n\nNumber of User Databases : 2 (One is the main database, the other is used only\nfor working tables, where tables get deleted every 48 hours)\n\nNumber of tables : 200 (in the main database), (2000-3000 in the working\ndatabase)\n\nSize of top 5 biggest tables : 20-40 GB\n\nNo of users concurrently accessing the system : 5-6 with write access. 10\nwith read access.\n\nNo of User Queries running on the system in a day : ~80K\n\nRead-only Queries (Select): ~60K\n\nWrite queries: ~20K\n\nData Import Queries: ~1K\n\nTypical Business Day : 18-20 hours.\n\n\n\nI can pass on a few complex queries to let you guys know what we are doing.\n\n\n\nHere are a few questions:\n\n\n\n1.) I don't need a load balancing solution. It must be a high-availability\nserver, and I can work with asynchronous replication. The most important\nthing here is that recovery should be as fast as possible.\n\nWhat approach would you recommend?\n\n\n\n2.) Recommendations on indexes, WAL, table spaces. I am not asking about\nwhich key I need to make indexes on, but a high-level approach to how to\nmaintain them. This might come across as a weird question to many, but please\nexcuse me for being a novice.\n\n\n\n*Most Important Question:*\n\n\n\n3.) What would be the ideal hardware configuration for this requirement? I\nknow there is not a one-stop answer for this, but let's take it as a\nstarting point. We can come to a proper conclusion after a discussion.\n\n\n\nWhat are the best on-line resources/books which can tell us about the\nhardware requirements?\n\n\n\nWarm Regards,\n\n\nVivekanand Joshi\n+919654227927\n\n\n185 Madison Ave. New York, NY 10016\n\nwww.zetainteractive.com",
"msg_date": "Thu, 19 Mar 2015 00:37:49 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware Configuration and other Stuff"
},
{
"msg_contents": "\nOn 03/18/2015 12:07 PM, Vivekanand Joshi wrote:\n\n>\n> Here are few questions:\n>\n> 1.) I don't need a load balancing solution. It must be high availability\n> server and I can work with asynchronous replication. The most important\n> thing here would be recovery should be as fast as possible.\n>\n> What approach would you recommend?\n\nLinuxHA + Corosync/Pacemaker etc...\n\n>\n> 2.) Recommendations on indexes, WAL, table spaces. I am not asking about\n> on which key I need to make indexes, but an high level approach about\n> how to keep them? This might come out as a weird question to many but\n> please excuse me for being a novice.\n\nThis is too broad of a question without understanding the hardware it \nwill be on.\n\n\n>\n> *Most Important Question:*\n>\n> 3.) What would be the ideal hardware configuration for this requirement?\n> I know there is not a one-stop answer for this, but let's take it is a\n> starting point. We can come to a proper conclusion after a discussion.\n>\n> What are the best on-line resources/books which can tell us about the\n> hardware requirements?\n\nAnd see above. You need a consultant. I am sure you will get some decent \nresponses but this isn't just about PostgreSQL, this is about \narchitecture of a rather complex solution and a migration.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, @cmdpromptinc\n\nNow I get it: your service is designed for a customer\nbase that grew up with Facebook, watches Japanese seizure\nrobot anime, and has the attention span of a gnat.\nI'm not that user., \"Tyler Riddle\"\n",
"msg_date": "Wed, 18 Mar 2015 12:37:40 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Hardware Configuration and other Stuff"
},
{
"msg_contents": "Hi,\n\nOn 18.3.2015 20:07, Vivekanand Joshi wrote:\n> Hi Team,\n> \n> I don't know under which section does this question comes, so I am\n> posting this question to both Admin and performance mailing list.\n> Apologies in advance.\n\nLet's keep this in pgsql-performance.\n\n> \n> Objective:\n> \n> We are planning to use PostgreSQL instead of Netezza for our data \n> warehouse as well as database solutions. Right now, we have all our \n> clients in one Netezza box. What we are thinking of migrating our \n> clients to dedicated PostgreSQL for each of them. We will start with\n> one of the client. If it works successfully, we will be migrating all\n> the clients one by one. The objective is to get a better performance\n> than our existing solution. We are hopeful of that mainly because of\n> two reasons. Firstly, we will have a dedicated server for each of the\n> client with good hardware instead of having one server with all the\n> clients on that. Secondly, we can spend on hardware much easily than\n> spending on a proprietary appliance.\n>\n\nOK.\n\n> I am hoping this community can help us to know that what would be the\n> good infrastructure/hardware that can help us in achieving our goal.\n> \n> Here are few of the statistics which might act as a starting point.\n> \n> Availability: High (24*7).\n> \n> User Data : 700 GB which will increase to 1.5 TB in next 2-3 years.\n\nHow do you measure the amount of data? 
Is that the amount of data before\nloading, the size of the database, or what?\n\nAlso, is this a single client (thus placed on a single box), or multiple\nclients?\n\n> Number of User Databases : 2 (One is the main database, other is\n> used only for working tables where tables gets deleted in every 48\n> hours)\n\nYou mentioned 700GB of data - is that just the main database, or both\ndatabases?\n\n> \n> Number of tables : 200 (in the main database), (2000-3000 in working \n> database)\n> \n> Size of top 5 biggest tables : 20-40 GB\n> \n> No of users concurrently accessing the system : 5-6 with write\n> access. 10 with read access.\n> \n> No of User Queries running on the system in a day : ~80K\n> \n> Read-only Queries (Select): ~60K\n> \n> Write queries: ~20K\n> \n> Data Import Queries: ~1K\n> \n> Typical Business Day : 18-20 hours.\n\nSo is this a typical \"batch\" environment where you do the loads at night,\nbut no loads during the day? That might be possible with clients on\ndedicated boxes and would allow various optimizations.\n\n> \n> I can pass on few complex queries to let you guys know what are we\n> doing.\n> \n> Here are few questions:\n> \n> 1.) I don't need a load balancing solution. It must be high availability\n> server and I can work with asynchronous replication. The most important\n> thing here would be recovery should be as fast as possible.\n> \n> What approach would you recommend?\n\nStreaming replication. I would probably start with sync replication.\n\n> 2.) Recommendations on indexes, WAL, table spaces. I am not asking\n> about on which key I need to make indexes, but an high level approach\n> about how to keep them? This might come out as a weird question to\n> many but please excuse me for being a novice.\n\nNot sure what exactly you are looking for - there are a lot of things, and\nmany of them depend on what hardware you plan to use.\n\nThe simplest indexing strategy is to design indexes along with the schema,\nand evaluate them on queries (collect slow queries -> create suitable\nindexes -> repeat).\n\n> \n> 3.) What would be the ideal hardware configuration for this\n> requirement? I know there is not a one-stop answer for this, but\n> let's take it is a starting point. We can come to a proper conclusion\n> after a discussion.\n\nThis is a very complex question, to be honest. I assume you're looking for\nregular servers; in that case a good server with not that many CPUs\n(say, 32 cores seems to be enough for your workload), plenty of RAM and a\ngood disk system to handle the load would be a good start.\n\n> What are the best on-line resources/books which can tell us about\n> the hardware requirements?\n\nI'd say these two books would be helpful:\n\n(1)\nhttps://www.packtpub.com/big-data-and-business-intelligence/postgresql-9-high-availability-cookbook\n\n - explains capacity planning etc.\n\n(2)\nhttps://www.packtpub.com/big-data-and-business-intelligence/postgresql-90-high-performance\n\n - a good book about PostgreSQL performance tuning\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Wed, 18 Mar 2015 20:39:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware Configuration and other Stuff"
},
{
"msg_contents": "Hi\n\n2015-03-18 20:07 GMT+01:00 Vivekanand Joshi <[email protected]>:\n\n> Hi Team,\n>\n>\n>\n> I don't know under which section does this question comes, so I am posting\n> this question to both Admin and performance mailing list. Apologies in\n> advance.\n>\n>\n>\n> Objective:\n>\n>\n>\n> We are planning to use PostgreSQL instead of Netezza for our data\n> warehouse as well as database solutions. Right now, we have all our clients\n> in one Netezza box. What we are thinking of migrating our clients to\n> dedicated PostgreSQL for each of them. We will start with one of the\n> client. If it works successfully, we will be migrating all the clients one\n> by one. The objective is to get a better performance than our existing\n> solution. We are hopeful of that mainly because of two reasons. Firstly, we\n> will have a dedicated server for each of the client with good hardware\n> instead of having one server with all the clients on that. Secondly, we can\n> spend on hardware much easily than spending on a proprietary appliance.\n>\n>\n>\n\nIt depends heavily on the use case. Netezza is an extremely optimized OLAP\ncolumn-store database. PostgreSQL is an optimized OLTP row-store database.\nYou cannot ever get the same performance on OLAP queries on Postgres. I\ndon't think dedicated hardware can help. If you use Netezza well, then it is\n10-100x faster than Postgres.\n\nYou can try to use Postgres with cstore_fdw, or maybe better MonetDB\n\nRegards\n\nPavel\n\n\n> I am hoping this community can help us to know that what would be the good\n> infrastructure/hardware that can help us in achieving our goal.\n>\n>\n>\n> Here are few of the statistics which might act as a starting point.\n>\n>\n>\n> Availability: High (24*7).\n>\n> User Data : 700 GB which will increase to 1.5 TB in next 2-3 years.\n>\n> Number of User Databases : 2 (One is the main database, other is used only\n> for working tables where tables gets deleted in every 48 hours)\n>\n> Number of tables : 200 (in the main database), (2000-3000 in working\n> database)\n>\n> Size of top 5 biggest tables : 20-40 GB\n>\n> No of users concurrently accessing the system : 5-6 with write access. 10\n> with read access.\n>\n> No of User Queries running on the system in a day : ~80K\n>\n> Read-only Queries (Select): ~60K\n>\n> Write queries: ~20K\n>\n> Data Import Queries: ~1K\n>\n> Typical Business Day : 18-20 hours.\n>\n>\n>\n> I can pass on few complex queries to let you guys know what are we doing.\n>\n>\n>\n> Here are few questions:\n>\n>\n>\n> 1.) I don't need a load balancing solution. It must be high availability\n> server and I can work with asynchronous replication. The most important\n> thing here would be recovery should be as fast as possible.\n>\n> What approach would you recommend?\n>\n>\n>\n> 2.) Recommendations on indexes, WAL, table spaces. I am not asking about\n> on which key I need to make indexes, but an high level approach about how\n> to keep them? This might come out as a weird question to many but please\n> excuse me for being a novice.\n>\n>\n>\n> *Most Important Question:*\n>\n>\n>\n> 3.) What would be the ideal hardware configuration for this requirement? I\n> know there is not a one-stop answer for this, but let's take it is a\n> starting point. We can come to a proper conclusion after a discussion.\n>\n>\n>\n> What are the best on-line resources/books which can tell us about the\n> hardware requirements?\n>\n>\n>\n> Warm Regards,\n>\n>\n> Vivekanand Joshi\n> +919654227927\n>\n>\n> 185 Madison Ave. New York, NY 10016\n>\n> www.zetainteractive.com\n>\n>",
"msg_date": "Wed, 18 Mar 2015 20:40:32 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware Configuration and other Stuff"
}
] |
[
{
"msg_contents": "There is only one server at this point. The 8.4 machine was upgraded to 9.3 about a year ago and we have no 8.4 backups so it's difficult if not impossible to recreate the 8.4 environment AFAIK. One of our developers pointed out the discrepancy in execution times. I decomposed a slow view and found out that it consists of a view calling a view calling a view (3 deep). This is the analyze explain plan of the innermost view:\n\nhttp://explain.depesz.com/s/IMg\n\nAnd as you requested:\n\nSELECT version();\n\nPostgresql 9.3.4\n\nSELECT name, current_setting(name), source\nFROM pg_settings\nWHERE source NOT IN ('default', 'override');\n\nautovacuum,on,configuration file\ncheckpoint_completion_target,0.9,configuration file\ncheckpoint_segments,16,configuration file\nclient_encoding,UTF8,session\nclient_min_messages,notice,configuration file\nDateStyle,\"ISO, MDY\",configuration file\ndeadlock_timeout,5s,configuration file\ndefault_text_search_config,pg_catalog.english,configuration file\neffective_cache_size,8GB,configuration file\neffective_io_concurrency,6,configuration file\nlc_messages,en_US.UTF-8,configuration file\nlc_monetary,en_US.UTF-8,configuration file\nlc_numeric,en_US.UTF-8,configuration file\nlc_time,en_US.UTF-8,configuration file\nlisten_addresses,*,configuration file\nlog_checkpoints,on,configuration file\nlog_connections,on,configuration file\nlog_destination,stderr,configuration file\nlog_directory,/dbms/postgresql/logs/dtfdev,configuration file\nlog_disconnections,on,configuration file\nlog_duration,off,configuration file\nlog_error_verbosity,verbose,configuration file\nlog_filename,postgresql-%a.log,configuration file\nlog_hostname,on,configuration file\nlog_line_prefix,\"%t [%p]: [%l-1] db=%d,user=%u \",configuration file\nlog_lock_waits,on,configuration file\nlog_min_duration_statement,0,configuration file\nlog_min_error_statement,error,configuration file\nlog_min_messages,warning,configuration 
file\nlog_rotation_age,1d,configuration file\nlog_rotation_size,500MB,configuration file\nlog_statement,none,configuration file\nlog_temp_files,0,configuration file\nlog_timezone,US/Pacific,configuration file\nlog_truncate_on_rotation,on,configuration file\nlogging_collector,on,configuration file\nmaintenance_work_mem,256MB,configuration file\nmax_connections,200,configuration file\nmax_stack_depth,8MB,configuration file\nport,2222,configuration file\nrandom_page_cost,2,configuration file\nsearch_path,\"acct, \"\"$user\"\", public\",session\nshared_buffers,4GB,configuration file\nssl,on,configuration file\ntemp_buffers,16MB,configuration file\nTimeZone,US/Pacific,configuration file\nwal_level,minimal,configuration file\nwal_sync_method,fdatasync,configuration file\nwork_mem,5MB,configuration file\n\nserver has 24GB of RAM\n\nfrom postgresql.conf:\nshared_buffers = 4GB\neffective_cache_size = 8GB\nwork_mem = 5MB (note: I increased work_mem to 500MB and repeated the experiment, no difference in exec. time)\n\n\nOn Mar 18, 2015, at 10:16 AM, Kevin Grittner <[email protected]<mailto:[email protected]>> wrote:\n\n\"Carson, Leonard\" <[email protected]<mailto:[email protected]>> wrote:\n\nWhile acknowledging that nested loops and sequential table scans\naccount for 85% of the execution time which suggests that a\nbetter query may be needed, why would the same query run in\nseconds on 8.x but take minutes on 9.x?\n\nFirst, please show the output of this from both servers:\n\nSELECT version();\nSELECT name, current_setting(name), source\nFROM pg_settings\nWHERE source NOT IN ('default', 'override');\n\nThen, for your newer server, please follow the steps outlined here:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nMy first guess would be that at some point your costing parameters\nwere tuned on the old system, but have not yet been tuned on the\nnew one. 
Rather than blindly using the old settings for the new\nserver, it would be good to see the information requested on the\nabove-cited page to determine good settings for the new server.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 18 Mar 2015 22:18:11 +0000",
"msg_from": "\"Carson, Leonard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: views much slower in 9.3 than 8.4"
},
{
"msg_contents": "\"Carson, Leonard\" <[email protected]> writes:\n> There is only one server at this point. The 8.4 machine was upgraded to 9.3 about a year ago and we have no 8.4 backups so it's difficult if not impossible to recreate the 8.4 environment AFAIK. One of our developers pointed out the discrepancy in execution times. I decomposed a slow view and found out that it consists of a view calling a view calling a view (3 deep). This is the analyze explain plan of the innermost view:\n\n> http://explain.depesz.com/s/IMg\n\nYou're probably going to need to show us the actual view definitions.\n\nI'm suspicious that the underlying cause might have to do with recent\nversions being warier about optimizing sub-selects containing volatile\nfunctions than 8.4 was. However, that theory doesn't seem to explain\nthe horribly bad join size estimates you're showing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Mar 2015 18:41:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: views much slower in 9.3 than 8.4"
},
{
"msg_contents": "Here are the 3 views and some timing notes:\nhttp://pgsql.privatepaste.com/decae31693#\nthanks, lcarson\n\nOn Mar 18, 2015, at 3:41 PM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\n\n\"Carson, Leonard\" <[email protected]<mailto:[email protected]>> writes:\nThere is only one server at this point. The 8.4 machine was upgraded to 9.3 about a year ago and we have no 8.4 backups so it's difficult if not impossible to recreate the 8.4 environment AFAIK. One of our developers pointed out the discrepancy in execution times. I decomposed a slow view and found out that it consists of a view calling a view calling a view (3 deep). This is the analyze explain plan of the innermost view:\n\nhttp://explain.depesz.com/s/IMg\n\nYou're probably going to need to show us the actual view definitions.\n\nI'm suspicious that the underlying cause might have to do with recent\nversions being warier about optimizing sub-selects containing volatile\nfunctions than 8.4 was. However, that theory doesn't seem to explain\nthe horribly bad join size estimates you're showing.\n\nregards, tom lane",
"msg_date": "Thu, 19 Mar 2015 17:24:47 +0000",
"msg_from": "\"Carson, Leonard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: views much slower in 9.3 than 8.4"
},
{
"msg_contents": "\"Carson, Leonard\" <[email protected]> writes:\n> Here are the 3 views and some timing notes:\n> http://pgsql.privatepaste.com/decae31693#\n\nThat doesn't really leave us any wiser than before, unfortunately.\n\nIt's clear that the root of the problem is the drastic underestimation of\nthe size of the rq/a join, but it's not clear why that's happening, nor\nwhy 8.4 would not have fallen into the same trap.\n\nWould it be possible to provide the data in the join columns involved in\nthat part of the query? To wit\nrequests.account_id\nrequests.start_date\nallocations.account_id\nallocations.initial_start_date\nallocations.resource_id\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Mar 2015 19:04:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views much slower in 9.3 than 8.4"
},
{
"msg_contents": "I wrote:\n> \"Carson, Leonard\" <[email protected]> writes:\n>> Here are the 3 views and some timing notes:\n>> http://pgsql.privatepaste.com/decae31693#\n\n> That doesn't really leave us any wiser than before, unfortunately.\n\n> It's clear that the root of the problem is the drastic underestimation of\n> the size of the rq/a join, but it's not clear why that's happening, nor\n> why 8.4 would not have fallen into the same trap.\n\nLeonard was kind enough to provide the problematic data off-list, and\nhere's what I find after some poking around: 8.4 is not, in fact, any\nsmarter than the more recent versions, it just happens to get lucky\non this particular query. The core of the problem is this aspect of\nthe projv view:\n\nSELECT ...\n FROM\n allocations a,\n ... other relations ...\n WHERE\n a.initial_start_date = (SELECT max(allocations.initial_start_date)\n FROM allocations\n WHERE allocations.account_id = a.account_id AND\n allocations.resource_id = a.resource_id)\n AND ... a bunch of other conditions ...\n\n(There's a similar consider-only-the-latest-row construction for\naccounts_history, which doubles the problem, but let's just look at this\none for now.) Now there are two things that are bad about this: first,\nthe construct requires executing the MAX-subselect for every row of \"a\",\nwhich is expensive and the planner knows it. Second, it's very very hard\nfor the planner to guess right about the selectivity of this condition on\na.initial_start_date. It ends up using DEFAULT_EQ_SEL which is 0.005,\nbut given Leonard's data set the actual selectivity is just about 1.0,\nie, there are no records that aren't the latest for their account_id cum\nresource_id and thus no rows are eliminated by the condition anyway.\n\nSo we have an expensive scan on \"a\" that is going to produce many more\nrows than the planner thinks. 
By the time we get done joining to\naccounts_history, which has a similar problem, the planner is estimating\nonly one row out of the join (vs. nearly 24000 in reality), and it's\nsetting the total cost estimate at 148163 cost units. This just totally\nbollixes planning of the joins to the remaining half-dozen tables.\nThe single-row estimate is nearly fatal in itself, because it encourages\nnestloop joining which is pretty inappropriate here. But the other\nproblem is that the planner considers less-than-1% differences in cost\nestimates to be \"in the noise\", which means that it's not going to\nconsider cost differences of less than 1480 units in the remaining join\nsteps to be significant. This is how come we end up with the apparently\nbrain-dead decisions to use seqscans on some of the other tables such as\n\"pi\" and \"ac\": comparing the seqscan to a potential inner indexscan, the\ntotal cost of the join is \"the same\" according to the 1% rule, and then\nthe first tiebreaker is startup cost, and the indexscan has a higher\nstartup cost.\n\nNow, 8.4 also had the 1% rule, but it had slightly different tiebreaking\nprocedures, which caused it to end up picking the inner indexscans over\nthe seqscans; and in this particular data set inner indexscans do far\nbetter than repeated seqscans when the rq/a/ah join turns out to produce\n24000 times more tuples than predicted. But I can't persuade myself that\nthe tiebreaking changes amount to a bug. 
(I did experiment with varying\nthe tiebreaking rules a bit, but I think that would just be moving the\npain around.)\n\nLong-term planner fixes for this type of problem might include improving\nthe statistics enough that we could get better rowcount estimates.\n(Cross-column stats would help, since a contributing factor is that some\nof the joins are on two join columns that are pretty heavily correlated.)\nAnother thing we've discussed is making risk estimates, whereby we could\nrealize that the nestloop-plus-seqscan plans are going to be a lot worse\nif our rowcount estimates are off at all. But both of those things are\nresearch projects.\n\nWhat seems like a potential near-term fix for Leonard is to recast his\nviews to do the latest-row selection more intelligently. I experimented\nwith redoing the projv view like this to eliminate the\nsubselects-in-WHERE:\n\nSELECT ...\n FROM\n acct.requests rq,\n acct.fields_of_science fos,\n acct.accounts ac,\n acct.allocations a,\n (select account_id, resource_id, max(initial_start_date) AS initial_start_date\n FROM acct.allocations GROUP BY 1,2) a_latest,\n acct.transaction_types tt,\n acct.resources ar,\n acct.accounts_history ah,\n (select account_id, resource_id, max(activity_time) AS activity_time\n FROM acct.accounts_history GROUP BY 1,2) ah_latest,\n acct.allocation_states sx,\n acct.principal_investigators pi,\n acct.people p\n WHERE\n a.account_id = ac.account_id AND\n a.account_id = a_latest.account_id AND\n a.resource_id = a_latest.resource_id AND\n a.initial_start_date = a_latest.initial_start_date AND\n rq.account_id = a.account_id AND\n rq.start_date = a.initial_start_date AND\n ar.resource_id = a.resource_id AND\n a.allocation_type_id = tt.transaction_type_id AND\n ah.account_id = a.account_id AND\n ah.resource_id = a.resource_id AND\n ah.account_id = ah_latest.account_id AND\n ah.resource_id = ah_latest.resource_id AND\n ah.activity_time = ah_latest.activity_time AND\n sx.state_id = ah.state_id AND\n 
rq.primary_fos_id = fos.field_of_science_id AND\n pi.request_id = rq.request_id AND\n p.person_id = pi.person_id\n;\n\nThat produces significantly better plans. It doesn't look like the\nrowcount estimates are better :-( ... but the total estimated cost\nis now down in the range of 3000 or so cost units, which means that\nthe 1% rule doesn't keep us from adopting the inner indexscans. And\nthis is fundamentally a better way to do latest-row selection, anyhow.\n\n(I guess another potential research project is to do this sort of\naggregated-subselect transformation automatically. But don't hold\nyour breath for that to happen, either.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 28 Mar 2015 13:14:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views much slower in 9.3 than 8.4"
},
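Tom's aggregated-subselect rewrite of the latest-row selection can be sketched on its own. This is an illustrative sketch, not the poster's actual schema: only the `allocations` columns named in the thread are assumed, and the `DISTINCT ON` variant is a PostgreSQL-specific alternative added for comparison.

```sql
-- Latest-row selection via a join against a grouped subselect, as in
-- the rewritten projv view above. The aggregate is computed once per
-- (account_id, resource_id) group instead of once per outer row.
SELECT a.*
FROM allocations a
JOIN (SELECT account_id, resource_id,
             max(initial_start_date) AS initial_start_date
      FROM allocations
      GROUP BY 1, 2) a_latest
  USING (account_id, resource_id, initial_start_date);

-- PostgreSQL's DISTINCT ON expresses a similar latest-row-per-group
-- selection more compactly. Note the difference on ties: DISTINCT ON
-- keeps exactly one row per group, while the join form (like the
-- original view) keeps every row tied at the max.
SELECT DISTINCT ON (account_id, resource_id) *
FROM allocations
ORDER BY account_id, resource_id, initial_start_date DESC;
```

Either form avoids re-executing a correlated `max()` subselect for every row of `a`, which is the per-row cost Tom identifies.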
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n\n> But the other problem is that the planner considers less-than-1%\n> differences in cost estimates to be \"in the noise\", which means\n> that it's not going to consider cost differences of less than\n> 1480 units in the remaining join steps to be significant. This\n> is how come we end up with the apparently brain-dead decisions to\n> use seqscans on some of the other tables such as \"pi\" and \"ac\":\n> comparing the seqscan to a potential inner indexscan, the total\n> cost of the join is \"the same\" according to the 1% rule,\n\nThe 1% rule itself might be something to add to the R&D list. I\nhave seen it cause big problems in production, although the users\nin that case had made a mistake which significantly contributed to\nit being an issue. They had used the enable_seqscan = off setting\nfor one query which they had been unable to wrestle into good\nperformance in other ways, but accidentally neglected to turn it\nback on after that query. Now, seqscans were rarely a good idea\nwith their permanent tables, but they had a couple queries which\nused very small temporary tables with no indexes. It chose the\nseqscan in spite of the setting; but, when run with seqscans off,\nthat gave all candidate plans such a high cost that they all looked\n\"equal\" and the tie-breaker logic picked a horrible one. (The\nfaster plans did have lower cost, but not by enough to exceed the\n1% threshold.) 
Now, had they not made a questionable choice in\ndisabling seqscan in production, compounded by an error in not\nturning it back on again, they would not have had their main web\napplication slow to unusable levels at times -- but it seems to me\nthat it might be reasonable to have some absolute cost maximum\ndifference test that needs to be met in addition to the percentage\ndifference, as kind of a \"safety\" on this foot-gun.\n\nI'm not sold on this as being a good idea, and had not been\nplanning on raising it without further research; but since it plays\ninto this other scenario it seems worth mentioning as material for\npotential R&D.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Mar 2015 14:46:04 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views much slower in 9.3 than 8.4"
},
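The leftover `enable_seqscan = off` Kevin describes is easy to guard against by scoping the override to a single transaction. A minimal sketch (the query itself is a placeholder, not from the thread):

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- reverts automatically at COMMIT or ROLLBACK
EXPLAIN (ANALYZE) SELECT ...;    -- the one query that needs the override
COMMIT;
-- After the transaction ends, enable_seqscan is back to its previous
-- value, even if the query errored out, so the override cannot leak
-- into the rest of the session.
```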
{
"msg_contents": "Kevin Grittner <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> But the other problem is that the planner considers less-than-1%\n>> differences in cost estimates to be \"in the noise\", which means\n>> that it's not going to consider cost differences of less than\n>> 1480 units in the remaining join steps to be significant. This\n>> is how come we end up with the apparently brain-dead decisions to\n>> use seqscans on some of the other tables such as \"pi\" and \"ac\":\n>> comparing the seqscan to a potential inner indexscan, the total\n>> cost of the join is \"the same\" according to the 1% rule,\n\n> The 1% rule itself might be something to add to the R&D list.\n\nPerhaps. But it does make for a significant difference in planner speed,\nand I would argue that any case where it really hurts is by definition\na cost estimation failure somewhere else.\n\n> [ disable_cost skews the behavior pretty badly ]\n\nTrue. Your example suggests that it might be nice to have something other\nthan a cost-delta way of discriminating against seqscans. In principle\nthis consideration could be added to add_path(), although I'm pretty\nhesitant to make that function even more complex/slower.\n\nPerhaps another way would be to generate seqscan paths last (I think\nthey're first at the moment), and only generate them if we didn't find\nany other path for the rel.\n\nNestloops for join rels have the same issue and would need to be handled\nsimilarly, whatever solution we pick.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Mar 2015 11:52:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views much slower in 9.3 than 8.4"
},
{
"msg_contents": "On 3/30/15 10:52 AM, Tom Lane wrote:\n> Kevin Grittner <[email protected]> writes:\n>> Tom Lane <[email protected]> wrote:\n>>> But the other problem is that the planner considers less-than-1%\n>>> differences in cost estimates to be \"in the noise\", which means\n>>> that it's not going to consider cost differences of less than\n>>> 1480 units in the remaining join steps to be significant. This\n>>> is how come we end up with the apparently brain-dead decisions to\n>>> use seqscans on some of the other tables such as \"pi\" and \"ac\":\n>>> comparing the seqscan to a potential inner indexscan, the total\n>>> cost of the join is \"the same\" according to the 1% rule,\n>> The 1% rule itself might be something to add to the R&D list.\n>\n> Perhaps. But it does make for a significant difference in planner speed,\n> and I would argue that any case where it really hurts is by definition\n> a cost estimation failure somewhere else.\n\nWhat I wish we had was some way to represent \"confidence\" in the\naccuracy of a specific plan node, with the goal of avoiding plans that\ncost out slightly cheaper but if we guessed wrong on something will blow\nup spectacularly. Nested loops are an example; if you miscalculate\neither of the sides by very much you can end up with a real mess unless\nthe rowcounts were already pretty trivial to begin with.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Apr 2015 18:30:43 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views much slower in 9.3 than 8.4"
}
] |
[
{
"msg_contents": "I am having problems with a join where the planner picks a merge join and an\nindex scan on one of the tables. Manually disabling merge joins and running\nthe query both ways shows the merge join takes over 10 seconds while a hash\njoin takes less than 100ms. The planner total cost estimate favors the merge\njoin, but the cost estimate for the index scan part is greater than the\ntotal cost estimate by a factor of 300x. My understanding of how this can\noccur is that it expects it won't actually have to scan all the rows,\nbecause using the histogram distribution stats it can know that all the\nrelevant rows of the join column will be at the beginning of the scan. But\nin practice it appears to actually be index scanning all the rows, showing\nmassive amounts of page hits. What is also confusing is that the planner\nestimate of the number of rows that match the second join condition is\naccurate and very low, so I would expect it to index scan on that column's\nindex instead. Pasted at the bottom is the explain plan for the query and\nsome other variations I think might be relevant. The table/index names are\nobfuscated. I ran ANALYZE on all the tables in the query first. All the\npages are cached in the explain plans but we wouldn't expect that to be true\nin the production system. There are btree indexes on all the columns in both\nthe join conditions and the filters.\n\nSearching, I found this thread\nhttp://postgresql.nabble.com/merge-join-killing-performance-td2076433.html\nwhich sounds kind of similar, but there are no Nulls in this table.\n\nThanks for your help.\n\n\n\nPostgres version info: PostgreSQL 9.1.13 on x86_64-unknown-linux-gnu,\ncompiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\n\n-----------------------\nOriginal Query\n\nThe estimated cost for Index Scan is 898k but the total cost estimate is\n2.6k. 
The planner has a good estimate of the number of rows, 1335, for the\nindex scan, but by the number of page hits (8M) it appears it actually\nscanned the entire table which has about 8M rows.\n-----------------------\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM vehicles v LEFT JOIN usagestats ON\nv.id = tid AND type = 'vehicle';\n \nQUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=593.28..2634.10 rows=4155 width=619) (actual\ntime=9.150..11464.949 rows=4155 loops=1)\n Merge Cond: (usagestats.tid = s.id)\n Buffers: shared hit=8063988\n -> Index Scan using usagestats_tid_idx on usagestats \n(cost=0.00..898911.91 rows=1335 width=37) (actual time=0.027..11448.789\nrows=2979 loops=1)\n Filter: ((type)::text = 'vehicle'::text)\n Buffers: shared hit=8063686\n -> Sort (cost=593.28..603.67 rows=4155 width=582) (actual\ntime=9.108..10.429 rows=4155 loops=1)\n Sort Key: s.id\n Sort Method: quicksort Memory: 1657kB\n Buffers: shared hit=302\n -> Seq Scan on vehicles v (cost=0.00..343.55 rows=4155 width=582)\n(actual time=0.014..2.917 rows=4155 loops=1)\n Buffers: shared hit=302\n Total runtime: 11466.122 ms\n(13 rows)\n\n------------------------\nChange the type='vehicle' condition to an always true condition\n\nIf we change the filter from \"type = 'vehicle'\" (True for a small fraction\nof the rows) to \"freq > -1\" (True for all rows) then the plan is the same,\nbut the actual time and page hits are much less and the query returns is\nfast.\n------------------------\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM vehicle v LEFT JOIN usagestats ON\n(v.id = tid AND freq > -1);\n\n \nQUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Right Join (cost=593.28..2434.79 
rows=7733 width=619) (actual\ntime=5.635..59.852 rows=17096 loops=1)\n\n Merge Cond: (usagestats.tid = v.id)\n\n Buffers: shared hit=17653\n\n -> Index Scan using usagestats_tid_idx on usagestats \n(cost=0.00..898914.00 rows=8006976 width=37) (actual time=0.010..34.075\nrows=17225 loops=1)\n\n Filter: (freq > (-1))\n\n Buffers: shared hit=17351\n\n -> Sort (cost=593.28..603.67 rows=4155 width=582) (actual\ntime=5.617..9.351 rows=17094 loops=1)\n\n Sort Key: v.id\n\n Sort Method: quicksort Memory: 1657kB\n\n Buffers: shared hit=302\n\n -> Seq Scan on vehicle v (cost=0.00..343.55 rows=4155 width=582)\n(actual time=0.009..1.803 rows=4157 loops=1)\n\n Buffers: shared hit=302\n\n Total runtime: 62.868 ms\n\n(13 rows)\n\n\n----------------------\nOriginal Query with merge joins disabled\n\nIf we manually disable merge joins and run the original query we get a hash\njoin with what seems like\na more reasonable index scan on the more selective type column. The total\ncost estimate is higher than the merge join plan, but lower than the cost\nestimate for the index scan in the merge join query.\n---------------------\nBEGIN;\nSET LOCAL enable_mergejoin = off;\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM vehicle v LEFT JOIN usagestats ON\nv.id = tid AND type = 'vehicle';\n \nQUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=395.49..5158.10 rows=4155 width=619) (actual\ntime=8.038..20.886 rows=4155 loops=1)\n Hash Cond: (usagestats.tid = v.id)\n Buffers: shared hit=3250\n -> Index Scan using usagestats_type_idx on usagestats \n(cost=0.00..4752.59 rows=1335 width=37) (actual time=0.100..6.770 rows=2979\nloops=1)\n Index Cond: ((type)::text = 'vehicle'::text)\n Buffers: shared hit=2948\n -> Hash (cost=343.55..343.55 rows=4155 width=582) (actual\ntime=7.908..7.908 rows=4155 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 
1088kB\n Buffers: shared hit=302\n -> Seq Scan on vehicle v (cost=0.00..343.55 rows=4155 width=582)\n(actual time=0.021..3.068 rows=4155 loops=1)\n Buffers: shared hit=302\n Total runtime: 21.936 ms\n(12 rows)\n\n-----------------------\nMiscellaneous stats\n-----------------------\n\nSELECT COUNT(*) FROM vehicle;\n count \n-------\n 4155\n(1 row)\n\nSELECT COUNT(*) FROM usagestats;\n count \n---------\n 8007015\n(1 row)\n\nThe usagestats table has 501 histogram buckets for the tid column. The max\nid in the vehicle table is 4155 and the first two buckets of the histogram\ncover all the values between 1 and 4500.\n\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Merge-Join-chooses-very-slow-index-scan-tp5842523.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Mar 2015 23:23:34 -0700 (MST)",
"msg_from": "Jake Magner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Merge Join chooses very slow index scan"
},
{
"msg_contents": "Hi\n\nwhat is your random_page_cost and seq_page_cost?\n\nRegards\n\nPavel Stehule\n\n2015-03-19 7:23 GMT+01:00 Jake Magner <[email protected]>:\n\n> I am having problems with a join where the planner picks a merge join and\n> an\n> index scan on one of the tables. Manually disabling merge joins and running\n> the query both ways shows the merge join takes over 10 seconds while a hash\n> join takes less than 100ms. The planner total cost estimate favors the\n> merge\n> join, but the cost estimate for the index scan part is greater than the\n> total cost estimate by a factor of 300x. My understanding of how this can\n> occur is that it expects it won't actually have to scan all the rows,\n> because using the histogram distribution stats it can know that all the\n> relevant rows of the join column will be at the beginning of the scan. But\n> in practice it appears to actually be index scanning all the rows, showing\n> massive amounts of page hits. What is also confusing is that the planner\n> estimate of the number of rows that match the second join condition is\n> accurate and very low, so I would expect it to index scan on that column's\n> index instead. Pasted at the bottom is the explain plan for the query and\n> some other variations I think might be relevant. The table/index names are\n> obfuscated. I ran ANALYZE on all the tables in the query first. All the\n> pages are cached in the explain plans but we wouldn't expect that to be\n> true\n> in the production system. 
There are btree indexes on all the columns in\n> both\n> the join conditions and the filters.\n>\n> Searching, I found this thread\n> http://postgresql.nabble.com/merge-join-killing-performance-td2076433.html\n> which sounds kind of similar, but there are no Nulls in this table.\n>\n> Thanks for your help.\n>\n>\n>\n> Postgres version info: PostgreSQL 9.1.13 on x86_64-unknown-linux-gnu,\n> compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>\n>\n> -----------------------\n> Original Query\n>\n> The estimated cost for Index Scan is 898k but the total cost estimate is\n> 2.6k. The planner has a good estimate of the number of rows, 1335, for the\n> index scan, but by the number of page hits (8M) it appears it actually\n> scanned the entire table which has about 8M rows.\n> -----------------------\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM vehicles v LEFT JOIN usagestats ON\n> v.id = tid AND type = 'vehicle';\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Right Join (cost=593.28..2634.10 rows=4155 width=619) (actual\n> time=9.150..11464.949 rows=4155 loops=1)\n> Merge Cond: (usagestats.tid = s.id)\n> Buffers: shared hit=8063988\n> -> Index Scan using usagestats_tid_idx on usagestats\n> (cost=0.00..898911.91 rows=1335 width=37) (actual time=0.027..11448.789\n> rows=2979 loops=1)\n> Filter: ((type)::text = 'vehicle'::text)\n> Buffers: shared hit=8063686\n> -> Sort (cost=593.28..603.67 rows=4155 width=582) (actual\n> time=9.108..10.429 rows=4155 loops=1)\n> Sort Key: s.id\n> Sort Method: quicksort Memory: 1657kB\n> Buffers: shared hit=302\n> -> Seq Scan on vehicles v (cost=0.00..343.55 rows=4155\n> width=582)\n> (actual time=0.014..2.917 rows=4155 loops=1)\n> Buffers: shared hit=302\n> Total runtime: 11466.122 ms\n> (13 rows)\n>\n> ------------------------\n> Change the type='vehicle' condition to an 
always true condition\n>\n> If we change the filter from \"type = 'vehicle'\" (True for a small fraction\n> of the rows) to \"freq > -1\" (True for all rows) then the plan is the same,\n> but the actual time and page hits are much less and the query returns is\n> fast.\n> ------------------------\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM vehicle v LEFT JOIN usagestats ON\n> (v.id = tid AND freq > -1);\n>\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Merge Right Join (cost=593.28..2434.79 rows=7733 width=619) (actual\n> time=5.635..59.852 rows=17096 loops=1)\n>\n> Merge Cond: (usagestats.tid = v.id)\n>\n> Buffers: shared hit=17653\n>\n> -> Index Scan using usagestats_tid_idx on usagestats\n> (cost=0.00..898914.00 rows=8006976 width=37) (actual time=0.010..34.075\n> rows=17225 loops=1)\n>\n> Filter: (freq > (-1))\n>\n> Buffers: shared hit=17351\n>\n> -> Sort (cost=593.28..603.67 rows=4155 width=582) (actual\n> time=5.617..9.351 rows=17094 loops=1)\n>\n> Sort Key: v.id\n>\n> Sort Method: quicksort Memory: 1657kB\n>\n> Buffers: shared hit=302\n>\n> -> Seq Scan on vehicle v (cost=0.00..343.55 rows=4155 width=582)\n> (actual time=0.009..1.803 rows=4157 loops=1)\n>\n> Buffers: shared hit=302\n>\n> Total runtime: 62.868 ms\n>\n> (13 rows)\n>\n>\n> ----------------------\n> Original Query with merge joins disabled\n>\n> If we manually disable merge joins and run the original query we get a hash\n> join with what seems like\n> a more reasonable index scan on the more selective type column. 
The total\n> cost estimate is higher than the merge join plan, but lower than the cost\n> estimate for the index scan in the merge join query.\n> ---------------------\n> BEGIN;\n> SET LOCAL enable_mergejoin = off;\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM vehicle v LEFT JOIN usagestats ON\n> v.id = tid AND type = 'vehicle';\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Right Join (cost=395.49..5158.10 rows=4155 width=619) (actual\n> time=8.038..20.886 rows=4155 loops=1)\n> Hash Cond: (usagestats.tid = v.id)\n> Buffers: shared hit=3250\n> -> Index Scan using usagestats_type_idx on usagestats\n> (cost=0.00..4752.59 rows=1335 width=37) (actual time=0.100..6.770 rows=2979\n> loops=1)\n> Index Cond: ((type)::text = 'vehicle'::text)\n> Buffers: shared hit=2948\n> -> Hash (cost=343.55..343.55 rows=4155 width=582) (actual\n> time=7.908..7.908 rows=4155 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 1088kB\n> Buffers: shared hit=302\n> -> Seq Scan on vehicle v (cost=0.00..343.55 rows=4155 width=582)\n> (actual time=0.021..3.068 rows=4155 loops=1)\n> Buffers: shared hit=302\n> Total runtime: 21.936 ms\n> (12 rows)\n>\n> -----------------------\n> Miscellaneous stats\n> -----------------------\n>\n> SELECT COUNT(*) FROM vehicle;\n> count\n> -------\n> 4155\n> (1 row)\n>\n> SELECT COUNT(*) FROM usagestats;\n> count\n> ---------\n> 8007015\n> (1 row)\n>\n> The usagestats table has 501 histogram buckets for the tid column. 
The max\n> id in the vehicle table is 4155 and the first two buckets of the histogram\n> cover all the values between 1 and 4500.\n>\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Merge-Join-chooses-very-slow-index-scan-tp5842523.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 19 Mar 2015 07:42:11 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge Join chooses very slow index scan"
},
{
"msg_contents": "random_page_cost = 4\nseq_page_cost = 1\n\nRegardless of the the choice to use the index scan and random access to the\nrows, how come in the second query with the freq > -1 condition, it accesses\nfar fewer pages with the same index scan even though no rows are filtered\nout?\n\nThanks\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Merge-Join-chooses-very-slow-index-scan-tp5842523p5842527.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Mar 2015 00:04:10 -0700 (MST)",
"msg_from": "Jake Magner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join chooses very slow index scan"
},
{
"msg_contents": "Jake Magner <[email protected]> writes:\n> I am having problems with a join where the planner picks a merge join and an\n> index scan on one of the tables. Manually disabling merge joins and running\n> the query both ways shows the merge join takes over 10 seconds while a hash\n> join takes less than 100ms. The planner total cost estimate favors the merge\n> join, but the cost estimate for the index scan part is greater than the\n> total cost estimate by a factor of 300x. My understanding of how this can\n> occur is that it expects it won't actually have to scan all the rows,\n> because using the histogram distribution stats it can know that all the\n> relevant rows of the join column will be at the beginning of the scan. But\n> in practice it appears to actually be index scanning all the rows, showing\n> massive amounts of page hits.\n> ...\n> If we change the filter from \"type = 'vehicle'\" (True for a small fraction\n> of the rows) to \"freq > -1\" (True for all rows) then the plan is the same,\n> but the actual time and page hits are much less and the query returns is\n> fast.\n\nI think what must be happening is that the planner notes the maximum\npossible value of v.id and supposes that the mergejoin will stop far short\nof completion because v.id spans just a small part of the range of\nusagestats.tid. Which it does, when you have only the nonselective filter\ncondition on usagestats. However, the executor cannot stop until it's\nfetched a usagestats row that has a tid value larger than the last v.id\nvalue; otherwise it can't be sure it's emitted all the required join rows.\nI'm guessing that the \"type = 'vehicle'\" condition eliminates all such\nrows, or at least enough of them that a very large part of the usagestats\ntable has to be scanned to find the first can't-possibly-match row.\n\nI'm not sure there's anything much we can do to improve this situation\nin Postgres. 
It seems like a sufficiently bizarre corner case that it\nwouldn't be appropriate to spend planner cycles checking for it, and\nI'm not sure how we'd check for it even if we were willing to spend those\ncycles. You might consider altering the query, or inserting some kind of\ndummy sentinel row in the data, or changing the schema (is it really\nsensible to keep vehicle usagestats in the same table as other\nusagestats?). A brute-force fix would be \"enable_mergejoin = off\", but\nthat would prevent selecting this plan type even when it actually is\na significant win.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Mar 2015 10:28:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge Join chooses very slow index scan"
},
{
"msg_contents": "I wrote:\n> [ assorted possible workarounds ]\n\nActually, an easy fix might be to create a 2-column index on\nusagestats(type, tid). I think the planner should be able to\nuse that to produce sorted output for the mergejoin, and you'd\nget the best of both worlds, because the indexscan will stop\nimmediately when it's exhausted the rows with type = 'vehicle'.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Mar 2015 10:58:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merge Join chooses very slow index scan"
},
{
"msg_contents": "Thanks Tom, that sounds like what is happening. Some additional\ncomments/questions inline.\n\n\nTom Lane-2 wrote\n> I think what must be happening is that the planner notes the maximum\n> possible value of v.id and supposes that the mergejoin will stop far short\n> of completion because v.id spans just a small part of the range of\n> usagestats.tid. Which it does, when you have only the nonselective filter\n> condition on usagestats. However, the executor cannot stop until it's\n> fetched a usagestats row that has a tid value larger than the last v.id\n> value; otherwise it can't be sure it's emitted all the required join rows.\n\nOk, that makes a lot of sense. It is scanning the tid index though, so once\nit gets past the last value in v.id isn't it guaranteed that there can be no\nmore required join rows? Even if it sees tid = 5000 and type = 'aircraft'\nthen it can know there are no more tids less than 5000. It must be that it\nwaits to do this check until it gets a row that matches the filter, maybe\nthis is an optimization in most cases? Seems like the cost of the check\nwould be small enough compared to the cost of looking up the next row to do\nit every time.\n\n\nTom Lane-2 wrote\n> I'm guessing that the \"type = 'vehicle'\" condition eliminates all such\n> rows, or at least enough of them that a very large part of the usagestats\n> table has to be scanned to find the first can't-possibly-match row.\n\nYes, you are exactly right.\n\n\nTom Lane-2 wrote\n> I'm not sure there's anything much we can do to improve this situation\n> in Postgres. It seems like a sufficiently bizarre corner case that it\n> wouldn't be appropriate to spend planner cycles checking for it, and\n> I'm not sure how we'd check for it even if we were willing to spend those\n> cycles. 
You might consider altering the query, or inserting some kind of\n> dummy sentinel row in the data, or changing the schema (is it really\n> sensible to keep vehicle usagestats in the same table as other\n> usagestats?). A brute-force fix would be \"enable_mergejoin = off\", but\n> that would prevent selecting this plan type even when it actually is\n> a significant win.\n\nI agree it may make sense to change the schema, although there are some good\nreasons to have it this way (I obfuscated the table names). If we\npartitioned the table on \"type\" then would the planner be able to stop after\nfinishing the 'vehicle' type partition?\n\n\nTom Lane-2 wrote\n> Actually, an easy fix might be to create a 2-column index on \n> usagestats(type, tid). I think the planner should be able to \n> use that to produce sorted output for the mergejoin, and you'd \n> get the best of both worlds, because the indexscan will stop \n> immediately when it's exhausted the rows with type = 'vehicle'. \n\nActually there is an index on (type, tid) and it doesn't help. I just tried\nadding an index on (tid, type) and it partially fixed the issue, judging by\nthe page hits, it looks like it is still scanning all the rows of the\ncompound index, but no longer needs to go to the actual table. This takes\n600ms instead of the 11,500ms of the original query, but still much more\nthan the 60ms when you change the type='vehicle' condition to freq > -1. So\nit isn't a perfect solution. 
We could also switch to enum values for the\ntype field which may reduce the (tid, type) index size enough to make the\nperformance adequate, but it would be best if we can just get it to quit the\nscan early, so the performance doesn't degrade if the table grows\nsignificantly.\n\nBest,\nJake\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Merge-Join-chooses-very-slow-index-scan-tp5842523p5842603.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Mar 2015 09:40:49 -0700 (MST)",
"msg_from": "Jake Magner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join chooses very slow index scan"
},
{
"msg_contents": "I think I understand now after reading the notes here on the merge join\nalgorithm: \n\nhttps://github.com/postgres/postgres/blob/4ea51cdfe85ceef8afabceb03c446574daa0ac23/src/backend/executor/nodeMergejoin.c\n\nThe index scanning node doesn't know the max id of the vehicle table and so\ncan't know when to stop looking for the next tuple that matches the join.\nDoesn't seem like any easy way to improve that case. I imagine it is a\nfairly common pattern though. Any time metadata about entities modeled in\ndifferent tables, is stored in the same table, and the distribution of the\nnumber of each entity type is skewed, this situation will arise.\n\nBest,\nJake\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Merge-Join-chooses-very-slow-index-scan-tp5842523p5842633.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Mar 2015 12:15:33 -0700 (MST)",
"msg_from": "Jake Magner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Merge Join chooses very slow index scan"
}
] |
[
{
"msg_contents": "Hi,\nWonder if anyone can help.\n\nHave a lookup table where the primary key is a native uuid type\n(filled with uuid's of type 4), around 50m rows in size.\n\nHave a separate table, table A, similar size (around 50m rows).\nPrimary key in table A is the standard integer, nextval, etc type\nprimary key. Table A also has a uuid column. The uuid column in table\nA (native Postgres uuid type) has a \"UNIQUE CONSTRAINT, btree (uuid)\"\nconstraint on the uuid column.\n\nCurrently regularly running following set of queries:\n1. Pull around 10,000 rows from lookup table.\n2. Use uuid's from (1), to query table A.\n\nQuery (2) above, is running slowly. Typically around 40-50 seconds to\npull 8000-10,000 rows. - which is pretty slow. The table has various\nother columns: 4 text fields, couple of JSON fields, so each row in\ntable A is fairly \"fat\" (if that's the correct expression).\n\nI've experimented with various forms of WHERE clause:\n- (a) ANY ('{1dc384ea-ac3d-4e95-a33e-42f3d821c104,\n- (b) ANY + VALUES: WHERE uuid =\nANY(VALUES('38de2ff6-ceed-43f3-a6fa-7a731ffa8c20':uuid),\n('b956fa3a-87d0-42da-9a75-c498c7ca4650'));\n- (c) Mulitple OR clauses\n\nAnd I've experimented with both btree and hash indices on uuid on\ntable A. So various combinations: just btree, btree+hash, just hash.\nBy far the fastest (which in itself as I've outlined above is not very\nfast) is btree and the ANY form I've listed as (a) above.\n\nIf I use btree + (a) above, EXPLAIN ANALYZE contains (below is for\n4000 rows on a much smaller database, one of only 1million rows as\nopposed to 65 million):\n\n\"\nIndex Scan using table_a_uuid_key on table_a (cost=5.42..32801.60\nrows=4000 width=585) (actual time=0.035..23.023 rows=4000 loops=1)\nIndex Cond: (uuid = ANY\n('{13aad9d6-bb45-4d98-a58b-b50147b6340d,40613404-ebf4-4343-8857-9 ...\netc ....\n\"\n\nVarious comments I've read:\n- Perhaps actually try a JOIN, e.g. 
LEFT OUTER JOIN between lookup\ntable and table A.\n- Perhaps increase work_mem (currently at 100mb)\n- Perhaps, there's not alot that can be done. By using uuid type 4,\ni.e. a fully random identifier, we're not looking at great performance\ndue to the fact that the id's are so ... random and not sequential.\n\nWe don't care about ordering, hence the experimentation with hash index.\n\nIncidentally, when experimenting with just hash index and ANY, would\nget following in EXPLAIN ANALYZE:\n\n\"\nBitmap Heap Scan on table_a_ (cost=160.36..320.52 rows=40 width=585)\n(actual time=0.285..0.419 rows=40 loops=1)\n Recheck Cond: (uuid = ANY\n('{a4a47eab-6393-4613-b098-b287ea59f2a4,3f0c6111-4b1b-4dae-bd36-e3c8d2b4341b,3748ea41-cf83-4024-a66c-be6b88352b7\n -> Bitmap Index Scan on table_a__uuid_hash_index\n(cost=0.00..160.35 rows=40 width=0) (actual time=0.273..0.273 rows=40\nloops=1)\n Index Cond: (uuid = ANY\n('{a4a47eab-6393-4613-b098-b287ea59f2a4,3f0c6111-4b1b-4dae-bd36-e3c8d2b4341b,3748ea41-cf83-4024-a66c-be6b88352b75,b1894bd6-ff\n\"\n\nAnyway. Any suggestions, thoughts very welcome.\n\nThanks,\nR\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 20 Mar 2015 19:01:20 +0000",
"msg_from": "Roland Dunn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query RE: Optimising UUID Lookups"
},
{
"msg_contents": "On Sat, Mar 21, 2015 at 6:01 AM, Roland Dunn <[email protected]> wrote:\n\n> Hi,\n> Wonder if anyone can help.\n>\n> Have a lookup table where the primary key is a native uuid type\n> (filled with uuid's of type 4), around 50m rows in size.\n>\n> Have a separate table, table A, similar size (around 50m rows).\n> Primary key in table A is the standard integer, nextval, etc type\n> primary key. Table A also has a uuid column. The uuid column in table\n> A (native Postgres uuid type) has a \"UNIQUE CONSTRAINT, btree (uuid)\"\n> constraint on the uuid column.\n>\n> Currently regularly running following set of queries:\n> 1. Pull around 10,000 rows from lookup table.\n> 2. Use uuid's from (1), to query table A.\n>\n> Query (2) above, is running slowly. Typically around 40-50 seconds to\n> pull 8000-10,000 rows. - which is pretty slow. The table has various\n> other columns: 4 text fields, couple of JSON fields, so each row in\n> table A is fairly \"fat\" (if that's the correct expression).\n>\n\nHi Roland,\n\nIt's very likely that the query is IO-bound.\nUsual single SATA drive can perform around 100 IOPS/s.\nAs a result to fetch randomly spread 10000 rows HDD must spent ~100second\nwhich is pretty close to actual timings.\n\nI suggest enable track_io_timing in postgresql.conf, and after use explain\n(analyze, costs, buffers, timing) instead of simple explain analyze. 
It\nwill help you see time spend on the IO operations.\n\nIf your load are actually IO-bound I could suggest 3 possible ways make\nthings better:\n1)use good server grade ssd drive instead of hdd.\n2)increase memory on the server so database could comfortable fit into the\nRAM.\n3)use raid10 raid with good raid controller and 6-12 SAS drives.\n\nThe database could not retrieve rows faster than underlying file system\ncould fetch data from hdd.\n\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>\n",
"msg_date": "Sat, 21 Mar 2015 20:10:59 +1100",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query RE: Optimising UUID Lookups"
},
{
"msg_contents": "Hi Maxim,\nThanks for the reply, v interesting.\n\nDo you speculate that the 10,000 rows would be randomly spread because\nof the uuid-type that we chose, namely the uuid-4 type? i.e. the\ncompletely random one? If we'd chosen the uuid-1 type (mac\naddress+timestamp), rows would have been more regularly placed and so\nfaster to pull back? Just curious as to why you said the randomly\nspaced. Also bear in mind that we did experiment with both btree and\nhash index on the uuid column.\n\nRE: increasing the memory. Currently at 64GB, with following conf settings:\n\nmax_connections = 100\nshared_buffers = 10GB\neffective_cache_size = 45GB\nwork_mem = 100MB\nmaintenance_work_mem = 1GB\ncheckpoint_segments = 128\ncheckpoint_completion_target = 0.9\nwal_buffers = 16MB\n\nIs it worth (do you think) experimenting with work_mem, and if so to\nwhat degree?\nIf we did add more RAM, would it be the effective_cache_size setting\nthat we would alter? Is there a way to force PG to load a particular\ntable into RAM? If so, is it actually a good idea?\n\nThanks again,\nR\n\n\n\n\nOn 21 March 2015 at 09:10, Maxim Boguk <[email protected]> wrote:\n>\n> On Sat, Mar 21, 2015 at 6:01 AM, Roland Dunn <[email protected]> wrote:\n>>\n>> Hi,\n>> Wonder if anyone can help.\n>>\n>> Have a lookup table where the primary key is a native uuid type\n>> (filled with uuid's of type 4), around 50m rows in size.\n>>\n>> Have a separate table, table A, similar size (around 50m rows).\n>> Primary key in table A is the standard integer, nextval, etc type\n>> primary key. Table A also has a uuid column. The uuid column in table\n>> A (native Postgres uuid type) has a \"UNIQUE CONSTRAINT, btree (uuid)\"\n>> constraint on the uuid column.\n>>\n>> Currently regularly running following set of queries:\n>> 1. Pull around 10,000 rows from lookup table.\n>> 2. Use uuid's from (1), to query table A.\n>>\n>> Query (2) above, is running slowly. 
Typically around 40-50 seconds to\n>> pull 8000-10,000 rows. - which is pretty slow. The table has various\n>> other columns: 4 text fields, couple of JSON fields, so each row in\n>> table A is fairly \"fat\" (if that's the correct expression).\n>\n>\n> Hi Roland,\n>\n> It's very likely that the query is IO-bound.\n> Usual single SATA drive can perform around 100 IOPS/s.\n> As a result to fetch randomly spread 10000 rows HDD must spent ~100second\n> which is pretty close to actual timings.\n>\n> I suggest enable track_io_timing in postgresql.conf, and after use explain\n> (analyze, costs, buffers, timing) instead of simple explain analyze. It will\n> help you see time spend on the IO operations.\n>\n> If your load are actually IO-bound I could suggest 3 possible ways make\n> things better:\n> 1)use good server grade ssd drive instead of hdd.\n> 2)increase memory on the server so database could comfortable fit into the\n> RAM.\n> 3)use raid10 raid with good raid controller and 6-12 SAS drives.\n>\n> The database could not retrieve rows faster than underlying file system\n> could fetch data from hdd.\n>\n>\n>\n> --\n> Maxim Boguk\n> Senior Postgresql DBA\n> http://www.postgresql-consulting.ru/\n>\n\n\n\n-- \n\nKind regards,\nRoland\n\nRoland Dunn\n--------------------------\n\nm: +44 (0)7967 646 789\ne: [email protected]\nw: http://www.cloudshapes.co.uk/\nhttps://twitter.com/roland_dunn\nhttp://uk.linkedin.com/in/rolanddunn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 21 Mar 2015 10:34:23 +0000",
"msg_from": "Roland Dunn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query RE: Optimising UUID Lookups"
},
{
"msg_contents": "Hi Roland,\n\n\n\n> Do you speculate that the 10,000 rows would be randomly spread because\n> of the uuid-type that we chose, namely the uuid-4 type? i.e. the\n> completely random one? If we'd chosen the uuid-1 type (mac\n> address+timestamp), rows would have been more regularly placed and so\n> faster to pull back? Just curious as to why you said the randomly\n> spaced. Also bear in mind that we did experiment with both btree and\n> hash index on the uuid column.\n>\n\nNo, I mean that the data corresponding to 10000 UUIDS very likely random\ndistributed over the table, and as a result over HDD.\nSo getting each single row mean 1 seek on HDD which usually took 5-10ms.\nYou will see the same issue with integer type (or any other type) as well.\n\nBtw, good test for this theory is execute the same query few time in short\nperiod of time and see if the second and later runs become faster.\n\n\n\n>\n> RE: increasing the memory. Currently at 64GB, with following conf settings:\n>\n> max_connections = 100\n> shared_buffers = 10GB\n> effective_cache_size = 45GB\n> work_mem = 100MB\n> maintenance_work_mem = 1GB\n> checkpoint_segments = 128\n> checkpoint_completion_target = 0.9\n> wal_buffers = 16MB\n>\n> Is it worth (do you think) experimenting with work_mem, and if so to\n> what degree?\n>\n\nwork_mem doesn't help there.\n\n\n\n> If we did add more RAM, would it be the effective_cache_size setting\n> that we would alter?\n\n\nYep.\n\n\n\n> Is there a way to force PG to load a particular\n> table into RAM? 
If so, is it actually a good idea?\n>\n\nThere are no such way except setting the shared buffers equal or bigger\nthan the database size (if you have enough RAM of course).\nIf the table being accessed quite actively and there are not a lot of\nmemory pressure on the database - than table will be in RAM anyway after\nsome time.\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try technology.\nPeople will then wish they'd listened at the first stage.\"\n",
"msg_date": "Tue, 24 Mar 2015 16:28:07 +1100",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query RE: Optimising UUID Lookups"
},
{
"msg_contents": "On 21 March 2015 at 23:34, Roland Dunn <[email protected]> wrote:\n\n>\n> If we did add more RAM, would it be the effective_cache_size setting\n> that we would alter? Is there a way to force PG to load a particular\n> table into RAM? If so, is it actually a good idea?\n>\n\nHave you had a look at EXPLAIN (ANALYZE, BUFFERS) for the query?\n\nPay special attention to \"Buffers: shared read=NNN\" and \"Buffers: shared\nhit=NNN\", if you're not reading any buffers between runs then the pages are\nin the PostgreSQL shared buffers. By the looks of your config you have 10GB\nof these. On the other hand if you're getting buffer reads, then they're\neither coming from disk, or from the OS cache. PostgreSQL won't really know\nthe difference.\n\nIf you're not getting any buffer reads and it's still slow, then the\nproblem is not I/O\n\nJust for fun... What happens if you stick the 50 UUIDs in some table,\nanalyze it, then perform a join between the 2 tables, using IN() or\nEXISTS()... Is that any faster?\n\nAlso how well does it perform with: set enable_bitmapscan = off; ?\n\nRegards\n\nDavid Rowley\n",
"msg_date": "Tue, 24 Mar 2015 20:49:46 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query RE: Optimising UUID Lookups"
},
{
"msg_contents": "Thanks for replies. More detail and data below:\n\nTable: \"lookup\"\n\nuuid: type uuid. not null. plain storage.\ndatetime_stamp: type bigint. not null. plain storage.\nharvest_date_stamp: type bigint. not null. plain storage.\nstate: type smallint. not null. plain storage.\n\nIndexes:\n \"lookup_pkey\" PRIMARY KEY, btree (uuid)\n \"lookup_32ff3898\" btree (datetime_stamp)\n \"lookup_6c8369bc\" btree (harvest_date_stamp)\n \"lookup_9ed39e2e\" btree (state)\nHas OIDs: no\n\n\nTable: \"article_data\"\n\nint: type integer. not null default\nnextval('article_data_id_seq'::regclass). plain storage.\ntitle: text.\ntext: text.\ninsertion_date: date\nharvest_date: timestamp with time zone.\nuuid: uuid.\n\nIndexes:\n \"article_data_pkey\" PRIMARY KEY, btree (id)\n \"article_data_uuid_key\" UNIQUE CONSTRAINT, btree (uuid)\nHas OIDs: no\n\n\nBoth lookup and article_data have around 65m rows. Two queries:\n\n\n(1) SELECT uuid FROM lookup WHERE state = 200 LIMIT 4000;\n\nOUTPUT FROM EXPLAIN (ANALYZE, BUFFERS):\n------------------------------------------------\n Limit (cost=0.00..4661.02 rows=4000 width=16) (actual\ntime=0.009..1.036 rows=4000 loops=1)\n Buffers: shared hit=42\n -> Seq Scan on lookup (cost=0.00..1482857.00 rows=1272559\nwidth=16) (actual time=0.008..0.777 rows=4000 loops=1)\n Filter: (state = 200)\n Rows Removed by Filter: 410\n Buffers: shared hit=42\n Total runtime: 1.196 ms\n(7 rows)\n\nQuestion: Why does this do a sequence scan and not an index scan when\nthere is a btree on state?\n\n\n\n\n(2) SELECT article_data.id, article_data.uuid, article_data.title,\narticle_data.text FROM article_data WHERE uuid = ANY\n('{f0d5e665-4f21-4337-a54b-cf0b4757db65,..... 
3999 more uuid's\n....}'::uuid[]);\n\n\nOUTPUT FROM EXPLAIN (ANALYZE, BUFFERS):\n------------------------------------------------\n Index Scan using article_data_uuid_key on article_data\n(cost=5.56..34277.00 rows=4000 width=581) (actual\ntime=0.063..66029.031 rows=400\n0 loops=1)\n Index Cond: (uuid = ANY\n('{f0d5e665-4f21-4337-a54b-cf0b4757db65,5618754f-544b-4700-9d24-c364fd0ba4e9,958e37e3-6e6e-4b2a-b854-48e88ac1fdb7,ba56b483-59b2-4ae5-ae44-910401f3221b,aa4\naca60-a320-4ed3-b7b4-829e6ca63592,05f1c0b9-1f9b-4e1c-8f41-07545d694e6b,7aa4dee9-be17-49df-b0ca-d6e63b0dc023,e9037826-86c4-4bbc-a9d5-6977ff7458af,db5852bf-a447-4a1d-9673-ead2f7045589\n,6704d89\n\n0b2-9ea9-390c8ed3cb2e,91cedfca-6b55-43e6-ae33-f2adf758ec78,e1b41c2f-31bb-4d29-9757-e7467ebb66c7,a9d3e6a9-5324-44e7-9cab-489bfb5ca081,ce9c2e64-b40e-48d7-b346-b9c76d79f192,26c3fcc5-cccb-4bc9-a5f5-806ead6fc859,2da9a3bc-0acb-41fd-b565-2a8a8662b85c,2097d61b-8d9b-4795-bd0d-c6db5a8e0501,d8841e46-0c1e-499b-804f-cb3fec3593b0,3ea98067-79ee-4497-b986-20cc09da6294,63046459-225f-4672-9db4-25b4491566e6,d45b2540-5835-43db-8e48-aa7b6613f8d4,df8720bf-9a2a-4550-9183-fd5e36e40485,c1c2cf05-c1d4-4f4c-8d8c-8b515d4ef24a,7233cc38-96ca-4e79-89ea-14c51e0e7ef4,76c6901d-496f-4c73-9d45-c934e46401f8,51673157-e2c6-4b89-bbcd-9aeda1750301,3de3f10f-da3d-4a96-90cd-fa3c9a02df01,9dbec983-23b8-4847-9c0e-030a8aee7ccc,7108ec74-91dc-47c6-a762-d860f0d56caa,eda38d3c-1231-47b8-ad19-28549fb4ec4c,401673a7-e5ca-4a47-9dea-5870dc69dbc8,649244dd-9a5b-48a7-88cf-ca2c7915de27,e9c8f789-3602-4e91-850e-eabc67269ecb,a55be381-bb34-4f2c-aede-8bab37cb479c,d101b8f1-389c-4613-b310-cd7d114dea8d,abce5c60-fa16-4d88-b844-ee3287aab777,e64e8b97-632d-45b8-9f4e-d83ef1717e77,f3a62745-6bcb-400b-b770-ac3c2fc91b81}'::uuid[]))\n Buffers: shared hit=16060 read=4084 dirtied=292\n Total runtime: 66041.443 ms\n(4 rows)\n\nQuestion: Why is this so slow, even though it's reading from disk?\n\n\n\n\n\n\n\n\nOn 24 March 2015 at 07:49, David Rowley <[email protected]> wrote:\n> On 21 March 2015 at 
23:34, Roland Dunn <[email protected]> wrote:\n>>\n>>\n>> If we did add more RAM, would it be the effective_cache_size setting\n>> that we would alter? Is there a way to force PG to load a particular\n>> table into RAM? If so, is it actually a good idea?\n>\n>\n> Have you had a look at EXPLAIN (ANALYZE, BUFFERS) for the query?\n>\n> Pay special attention to \"Buffers: shared read=NNN\" and \"Buffers: shared\n> hit=NNN\", if you're not reading any buffers between runs then the pages are\n> in the PostgreSQL shared buffers. By the looks of your config you have 10GB\n> of these. On the other hand if you're getting buffer reads, then they're\n> either coming from disk, or from the OS cache. PostgreSQL won't really know\n> the difference.\n>\n> If you're not getting any buffer reads and it's still slow, then the problem\n> is not I/O\n>\n> Just for fun... What happens if you stick the 50 UUIDs in some table,\n> analyze it, then perform a join between the 2 tables, using IN() or\n> EXISTS()... Is that any faster?\n>\n> Also how well does it perform with: set enable_bitmapscan = off; ?\n>\n> Regards\n>\n> David Rowley\n>\n>\n>\n\n\n\n-- \n\nKind regards,\nRoland\n\nRoland Dunn\n--------------------------\n\nm: +44 (0)7967 646 789\ne: [email protected]\nw: http://www.cloudshapes.co.uk/\nhttps://twitter.com/roland_dunn\nhttp://uk.linkedin.com/in/rolanddunn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Mar 2015 10:45:26 +0000",
"msg_from": "Roland Dunn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query RE: Optimising UUID Lookups"
},
{
"msg_contents": "> (1) SELECT uuid FROM lookup WHERE state = 200 LIMIT 4000;\n>\n> OUTPUT FROM EXPLAIN (ANALYZE, BUFFERS):\n> ------------------------------------------------\n> Limit (cost=0.00..4661.02 rows=4000 width=16) (actual\n> time=0.009..1.036 rows=4000 loops=1)\n> Buffers: shared hit=42\n> -> Seq Scan on lookup (cost=0.00..1482857.00 rows=1272559\n> width=16) (actual time=0.008..0.777 rows=4000 loops=1)\n> Filter: (state = 200)\n> Rows Removed by Filter: 410\n> Buffers: shared hit=42\n> Total runtime: 1.196 ms\n> (7 rows)\n>\n> Question: Why does this do a sequence scan and not an index scan when\n> there is a btree on state?\n>\n\nvery likely that state=200 is very common value in the table\nso seq scan of few pages (42 to be exact) is faster than performing index\nscan.\n\n\n\n> (2) SELECT article_data.id, article_data.uuid, article_data.title,\n> article_data.text FROM article_data WHERE uuid = ANY\n> ('{f0d5e665-4f21-4337-a54b-cf0b4757db65,..... 3999 more uuid's\n> ....}'::uuid[]);\n>\n>\n> OUTPUT FROM EXPLAIN (ANALYZE, BUFFERS):\n> ------------------------------------------------\n> Index Scan using article_data_uuid_key on article_data\n> (cost=5.56..34277.00 rows=4000 width=581) (actual time=0.063..66029.031\n> rows=4000 loops=1)\n> Index Cond: (uuid = ANY\n> \n> (\n> '...'\n> ::uuid[]))\n>\n Buffers: shared hit=16060\n> \n> read=4084 dirtied=292\n> Total runtime: 66041.443 ms Question:\n\n>>\n Why is this so slow, even though it's reading from disk?\n\n\nAs I already suggested enable track_io_timing in the database and use\nexplain (analyze, costs, buffer, timing)\nto see how much exactly time had been spent during IO operations.\n\nThe time requred for single random IO operation for common HDD's are around\n10ms, so reading read=4084 pages could easily took 60seconds especially if\nsome other IO activity exist on the server.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ 
<http://www.postgresql-consulting.com/>\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try technology.\nPeople will then wish they'd listened at the first stage.\"",
"msg_date": "Tue, 24 Mar 2015 22:43:10 +1100",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query RE: Optimising UUID Lookups"
}
] |
[
{
"msg_contents": "Hi,\n\nSituation:\n\nWe have a table with 3,500,000+ rows, which contain items that need to\nbe printed or have been printed previously.\n\nMost of these records have a status of 'PRINTED', we have a partial\nindex on this table WHERE status <> 'PRINTED'.\nDuring normal operation there will be < 10 records matching 'NOT_YET_PRINTED'.\nWhen using the index scan this is done in < 5ms, but when the\nsequential scan is involved the query runs > 500ms.\n\n\nWe query this table often in the form:\n\nSELECT *\n FROM print_list\n JOIN [...]\n JOIN [...]\n WHERE stats = 'NOT_YET_PRINTED'\n LIMIT 8;\n\nThis query is currently switching between a sequential scan on the\nprint_list table and an index scan on the previously mentioned index.\n\nWhen doing an explain analyze on the queries we see that it sometimes\nexpects to return > 5000 records when in reality it is only < 5\nrecords that are returned, for example:\n\n -> Index Scan using print_list_status_idx on print_list\n(cost=0.27..1138.53 rows=6073 width=56) (actual time=0.727..0.727\nrows=0 loops=1)\n\nSometimes, this results in the planner choosing a sequential scan for\nthis query.\n\nWhen analyzing pg_stats we have sometimes have the following: (Note:\n'NOT_YET_PRINTED' has not been found during this analyze, these are\nreal values)\n\n attname | status\n inherited | f\n null_frac | 0\n avg_width | 4\n n_distinct | 3\n most_common_vals | {PRINTED}\n most_common_freqs | {0.996567}\n histogram_bounds | {PREPARED,ERROR}\n correlation | 0.980644\n\nA question about this specific entry, which some of you may be able to\nshed some light on:\n\nmost_common_vals contains only 1 entry, why is this? 
I would expect to\nsee 3 entries, as it has n_distinct=3\n\nWhen looking at\nhttp://www.postgresql.org/docs/current/static/row-estimation-examples.html\nwe can see that an estimate > 5000 is what is to be expected for these\nstatistics:\n\n# select ( (1 - 0.996567)/2 * 3500000 )::int;\n int4\n------\n 6008\n(1 row)\n\nBut why does it not record the frequency of 'PREPARED' and 'ERROR' in\nmost_common_*?\n\nOur current strategies in mitigating this problem is decreasing the\nautovacuum_*_scale_factor for this specific table, therefore\ntriggering more analyses and vacuums.\n\nThis is helping somewhat, as if the problem occurs it often solved\nautomatically if autoanalyze analyzes this table, it is analyzed many\ntimes an hour currently.\n\nWe can also increase the 'Stats target' for this table, which will\ncause the statistics to contain information about 'NOT_YET_PRINTED'\nmore often, but even then, it may not find any of these records, as\nthey sometimes do not exist.\n\nCould you help us to find a strategy to troubleshoot this issue further?\n\nSome specific questions:\n- We can see it is doing a sequential scan of the full table (3.5mio\nrecords) even when it only expects 8000 records to be returned, we\nwould expect this not to happen so soon.\n- Why is most_common_* not filled when there are only 3 distinct values?\n\nFeike Steenbergen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Mar 2015 13:04:20 +0100",
"msg_from": "Feike Steenbergen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index only scan sometimes switches to sequential scan for\n small amount of rows"
},
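Feike's back-of-the-envelope estimate above — `select ( (1 - 0.996567)/2 * 3500000 )::int` — follows the planner's rule for an equality condition on a value missing from the MCV list: the frequency mass not covered by the MCV entries is divided evenly among the remaining distinct values. A minimal sketch of that arithmetic (names are illustrative, not PostgreSQL internals):

```python
# Planner-style row estimate for "col = const" when const is absent from the
# MCV list: the frequency mass not covered by the MCV entries is divided
# evenly among the remaining distinct values.
def estimate_rows(reltuples, n_distinct, mcv_freqs):
    remaining_freq = 1.0 - sum(mcv_freqs)            # mass not in the MCV list
    remaining_distinct = n_distinct - len(mcv_freqs) # values it is spread over
    return round(reltuples * remaining_freq / remaining_distinct)

# Stats from the pg_stats snapshot above: only PRINTED made the MCV list.
print(estimate_rows(3_500_000, 3, [0.996567]))  # -> 6008, matching the post
```

With only one of three distinct values in the MCV list, every unlisted value — including the near-empty 'NOT_YET_PRINTED' — inherits the same ~6000-row estimate.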
{
"msg_contents": "On 25.3.2015 13:04, Feike Steenbergen wrote:\n...\n> When analyzing pg_stats we have sometimes have the following: (Note:\n> 'NOT_YET_PRINTED' has not been found during this analyze, these are\n> real values)\n> \n> attname | status\n> inherited | f\n> null_frac | 0\n> avg_width | 4\n> n_distinct | 3\n> most_common_vals | {PRINTED}\n> most_common_freqs | {0.996567}\n> histogram_bounds | {PREPARED,ERROR}\n> correlation | 0.980644\n> \n> A question about this specific entry, which some of you may be able to\n> shed some light on:\n> \n> most_common_vals contains only 1 entry, why is this? I would expect to\n> see 3 entries, as it has n_distinct=3\n\nTo be included in the MCV list, the value has to actually appear in the\nrandom sample at least twice, IIRC. If the values are very rare (e.g. if\nyou only have such 10 rows out of 3.5M), that may not happen.\n\nYou may try increasing the statistics target for this column, which\nshould make the sample larger and stats more detailed (max is 10000,\nwhich should use sample ~3M rows, i.e. almost the whole table).\n\n> When looking at\n> http://www.postgresql.org/docs/current/static/row-estimation-examples.html\n> we can see that an estimate > 5000 is what is to be expected for these\n> statistics:\n> \n> # select ( (1 - 0.996567)/2 * 3500000 )::int;\n> int4\n> ------\n> 6008\n> (1 row)\n> \n> But why does it not record the frequency of 'PREPARED' and 'ERROR'\n> in most_common_*?\n\nCan you post results for this query?\n\nSELECT stats, COUNT(*) FROM print_list group by 1\n\nI'd like to know how frequent the other values are.\n\n> \n> Our current strategies in mitigating this problem is decreasing the \n> autovacuum_*_scale_factor for this specific table, therefore \n> triggering more analyses and vacuums.\n\nI'm not sure this is a good solution. 
The problem is elsewhere, IMHO.\n\n> This is helping somewhat, as if the problem occurs it often solved \n> automatically if autoanalyze analyzes this table, it is analyzed\n> many times an hour currently.\n> \n> We can also increase the 'Stats target' for this table, which will\n> cause the statistics to contain information about 'NOT_YET_PRINTED'\n> more often, but even then, it may not find any of these records, as\n> they sometimes do not exist.\n\nThis is a better solution, IMHO.\n\n> \n> Could you help us to find a strategy to troubleshoot this issue\n> further?\n\nYou might also make the index scans cheaper, so that the switch to\nsequential scan happens later (when more rows are estimated). Try\ndecreasing random_page_cost from 4 (default) to 1.5 or something like that.\n\nIt may hurt other queries, though, depending on the dataset size etc.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Mar 2015 13:45:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
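Tomas's point — that a value must appear in the random sample at least twice to make the MCV list, which rarely happens for very rare values — can be checked with a quick back-of-the-envelope calculation. This is a sketch using a Poisson approximation of independent draws, not how ANALYZE actually samples:

```python
import math

# Probability that a value with `matching_rows` rows in a `table_rows`-row
# table shows up at least twice in a random sample of `sample_rows` rows,
# approximating the sample as independent draws (Poisson approximation).
def p_at_least_two(table_rows, matching_rows, sample_rows):
    lam = sample_rows * matching_rows / table_rows  # expected occurrences
    return 1.0 - math.exp(-lam) * (1.0 + lam)       # 1 - P(0) - P(1)

# ~10 NOT_YET_PRINTED rows in a 3.5M-row table, default 30,000-row sample:
print(p_at_least_two(3_500_000, 10, 30_000))  # well under 1%
```

So with only a handful of matching rows, the value almost never clears the "seen twice" bar, and the MCV list ends up with PRINTED alone.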
{
"msg_contents": "Hi, thanks for having a look and thinking with us\n\nOn 25 March 2015 at 13:45, Tomas Vondra <[email protected]> wrote:\n> Can you post results for this query?\n>\n> SELECT stats, COUNT(*) FROM print_list group by 1\n\n status | count\n----------------+---------\n ERROR | 159\n PREPARED | 10162\n PRINTED | 3551367\n TO_BE_PREPARED | 2\n(4 rows)\n\n>> We can also increase the 'Stats target' for this table, which will\n>> cause the statistics to contain information about 'NOT_YET_PRINTED'\n>> more often, but even then, it may not find any of these records, as\n>> they sometimes do not exist.\n>\n> This is a better solution, IMHO.\n\nWe'll have a go at this, also if what you say about values having to\nappear at least twice, the other values may make it into\nmost_common_*, which would make it clearer to us.\n\nWe're a bit hesitant to decrease random_page_cost (currently 3 in this\ncluster) as a lot more is happening on this database.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Mar 2015 15:22:33 +0100",
"msg_from": "Feike Steenbergen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "I'm posting this as I am trying to understand what has happened.\nTLDR: The problem seems to be fixed now.\n\nBy bumping the statistics_target we see that most_common_vals is\nhaving its contents filled more often, causing way better estimates:\n\n attname | status\n inherited | f\n null_frac | 0\n avg_width | 4\n n_distinct | 3\n most_common_vals | {PRINTED,PREPARED,ERROR}\n most_common_freqs | {0.996863,0.00307333,6.33333e-05}\n histogram_bounds | (null)\n correlation | 0.98207\n most_common_elems | (null)\n most_common_elem_freqs | (null)\n elem_count_histogram | (null)\n\nBasically 100% of the records are accounted for in these statistics,\nthe planner now consistently estimates the number of rows to be very\nsmall for other values.\n\nBefore bumping the target we didn't have information for 0.34% of the\nrows, which in this case means roughly 11K rows.\n\nWhat is the reasoning behind having at least 2 hits before including\nit in the most_common_* columns?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Mar 2015 17:07:13 +0100",
"msg_from": "Feike Steenbergen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "Feike Steenbergen <[email protected]> writes:\n> On 25 March 2015 at 13:45, Tomas Vondra <[email protected]> wrote:\n>>> We can also increase the 'Stats target' for this table, which will\n>>> cause the statistics to contain information about 'NOT_YET_PRINTED'\n>>> more often, but even then, it may not find any of these records, as\n>>> they sometimes do not exist.\n\n>> This is a better solution, IMHO.\n\n> We'll have a go at this, also if what you say about values having to\n> appear at least twice, the other values may make it into\n> most_common_*, which would make it clearer to us.\n\nIn principle increasing the stats target should fix this, whether or not\n'NOT_YET_PRINTED' appears in the MCV list after any particular analyze;\nbecause what will happen is that the frequency for 'PRINTED' will more\nnearly approach 1, and so the estimated selectivity for other values\nwill drop even if they're not in the list.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Mar 2015 13:26:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan for small\n amount of rows"
},
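Tom's argument can be checked numerically against the two pg_stats snapshots in this thread: once PREPARED and ERROR make it into the MCV list, the frequency mass left over for unlisted values — and hence their row estimate — collapses. A sketch using the thread's numbers (a simplification of the planner's actual selectivity logic):

```python
# Planner-style estimate for a value NOT in the MCV list, before and after
# bumping the statistics target (frequencies taken from the two pg_stats
# snapshots posted in this thread).
reltuples = 3_500_000

# Before: MCV held only PRINTED (freq 0.996567); the leftover mass is split
# over the 2 remaining distinct values.
before = (1 - 0.996567) / 2 * reltuples

# After: PRINTED, PREPARED and ERROR are all listed, so almost no frequency
# mass remains for anything else.
after = (1 - (0.996863 + 0.00307333 + 6.33333e-5)) * reltuples

print(round(before))  # ~6008 rows: enough to tempt the planner into a seq scan
print(round(after))   # ~1 row: an index scan is clearly cheapest
```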
{
"msg_contents": "On Wed, Mar 25, 2015 at 9:07 AM, Feike Steenbergen <\[email protected]> wrote:\n\n> I'm posting this as I am trying to understand what has happened.\n> TLDR: The problem seems to be fixed now.\n>\n> By bumping the statistics_target we see that most_common_vals is\n> having its contents filled more often, causing way better estimates:\n>\n> attname | status\n> inherited | f\n> null_frac | 0\n> avg_width | 4\n> n_distinct | 3\n> most_common_vals | {PRINTED,PREPARED,ERROR}\n> most_common_freqs | {0.996863,0.00307333,6.33333e-05}\n> histogram_bounds | (null)\n> correlation | 0.98207\n> most_common_elems | (null)\n> most_common_elem_freqs | (null)\n> elem_count_histogram | (null)\n>\n> Basically 100% of the records are accounted for in these statistics,\n> the planner now consistently estimates the number of rows to be very\n> small for other values.\n>\n> Before bumping the target we didn't have information for 0.34% of the\n> rows, which in this case means roughly 11K rows.\n>\n> What is the reasoning behind having at least 2 hits before including\n> it in the most_common_* columns?\n>\n\nIf you sample a small portion of the table, then anything only present once\nis going to be have a huge uncertainty on its estimate.\n\nConsider the consequences of including things sampled once. 100% of the\nrows that got sampled will be in the sample at least once. That means\nmost_common_freqs will always sum to 100%. Which means we are declaring\nthat anything not observed in the sample has a frequency of\n0.000000000000%, which is clearly beyond what we have any reasonable\nevidence to support.\n\nAlso, I doubt that that is the problem in the first place. If you collect\na sample of 30,000 (which the default target size of 100 does), and the\nfrequency of the second most common is really 0.00307333 at the time you\nsampled it, you would expect to find it 92 times in the sample. 
The chances\nagainst actually finding 1 instead of around 92 due to sampling error are\nastronomical.\n\nThe problem seems to be rapidly changing stats, not too small of a target\nsize (unless your original target size was way below the current default\nvalue, forgive me if you already reported that, I didn't see it anywhere).\n\nIf you analyze the table at a point when it is 100% PRINTED, there is no\nway of knowing based on that analysis alone what the distribution of\n!='PRINTED' would be, should such values ever arise.\n\nMaybe it would work better if you built the partial index where status =\n'NOT_YET_PRINTED', instead of !='PRINTED'.\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 25 Mar 2015 11:07:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
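The "astronomical" odds Jeff describes can be made concrete with the same kind of approximation (a sketch that treats the sample as independent draws, not ANALYZE's actual sampling):

```python
import math

# Expected occurrences in a 30,000-row sample for a value whose true
# frequency is 0.00307333 (the PREPARED frequency from the later snapshot):
lam = 30_000 * 0.00307333
# Poisson probability of sampling it exactly once instead:
p_exactly_one = lam * math.exp(-lam)

print(lam)            # ~92, as computed in the message above
print(p_exactly_one)  # vanishingly small: not explainable by sampling error
```

Which supports Jeff's conclusion that the stats themselves changed between analyses, rather than the sample having badly misrepresented a stable distribution.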
{
"msg_contents": "On 25 March 2015 at 19:07, Jeff Janes <[email protected]> wrote:\n\n> Also, I doubt that that is the problem in the first place. If you collect a\n> sample of 30,000 (which the default target size of 100 does), and the\n> frequency of the second most common is really 0.00307333 at the time you\n> sampled it, you would expect to find it 92 times in the sample. The chances\n> against actually finding 1 instead of around 92 due to sampling error are\n> astronomical.\n\nIt can be that the distribution of values is very volatile; we hope\nthe increased stats target (from the default=100 to 1000 for this\ncolumn) and frequent autovacuum and autoanalyze helps in keeping the\nestimates correct.\n\nIt seems that it did find some other records (<> 'PRINTED), as is\ndemonstrated in the stats where there was only one value in the MCV\nlist: the frequency was 0.996567 and the fraction of nulls was 0,\ntherefore leaving 0.03+ for other values. But because none of them\nwere in the MCV and MCF list, they were all treated as equals. They\nare certainly not equal.\n\nI not know why some values were found (they are mentioned in the\nhistogram_bounds), but are not part of the MCV list, as you say, the\nlikeliness of only 1 item being found is very small.\n\nDoes anyone know the criteria for a value to be included in the MCV list?\n\n> The problem seems to be rapidly changing stats, not too small of a target\n> size (unless your original target size was way below the current default\n> value, forgive me if you already reported that, I didn't see it anywhere).\n> Maybe it would work better if you built the partial index where status =\n> 'NOT_YET_PRINTED', instead of !='PRINTED'.\n\nThanks, we did create a partial index on 'NOT_YET_PRINTED' today to\nhelp aiding these kind of queries.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Mar 2015 21:00:44 +0100",
"msg_from": "Feike Steenbergen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "On Wed, Mar 25, 2015 at 1:00 PM, Feike Steenbergen <\[email protected]> wrote:\n\n> On 25 March 2015 at 19:07, Jeff Janes <[email protected]> wrote:\n>\n> > Also, I doubt that that is the problem in the first place. If you\n> collect a\n> > sample of 30,000 (which the default target size of 100 does), and the\n> > frequency of the second most common is really 0.00307333 at the time you\n> > sampled it, you would expect to find it 92 times in the sample. The\n> chances\n> > against actually finding 1 instead of around 92 due to sampling error are\n> > astronomical.\n>\n> It can be that the distribution of values is very volatile; we hope\n> the increased stats target (from the default=100 to 1000 for this\n> column) and frequent autovacuum and autoanalyze helps in keeping the\n> estimates correct.\n>\n> It seems that it did find some other records (<> 'PRINTED), as is\n> demonstrated in the stats where there was only one value in the MCV\n> list: the frequency was 0.996567 and the fraction of nulls was 0,\n> therefore leaving 0.03+ for other values. But because none of them\n> were in the MCV and MCF list, they were all treated as equals. They\n> are certainly not equal.\n>\n\nNow that I look back at the first post you made, it certainly looks like\nthe statistics target was set to 1 when that was analyzed, not to 100. But\nit doesn't look quite correct for that, either.\n\nWhat version of PostgreSQL are running? 'select version();'\n\nWhat do you get when to do \"analyze verbose print_list\"?\n\nHow can the avg_width be 4 when the vast majority of entries are 7\ncharacters long?\n\nCheers,\n\nJeff\n",

"msg_date": "Wed, 25 Mar 2015 14:45:47 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "On Wed, Mar 25, 2015 at 1:00 PM, Feike Steenbergen <\[email protected]> wrote:\n\n> On 25 March 2015 at 19:07, Jeff Janes <[email protected]> wrote:\n>\n> > Also, I doubt that that is the problem in the first place. If you\n> collect a\n> > sample of 30,000 (which the default target size of 100 does), and the\n> > frequency of the second most common is really 0.00307333 at the time you\n> > sampled it, you would expect to find it 92 times in the sample. The\n> chances\n> > against actually finding 1 instead of around 92 due to sampling error are\n> > astronomical.\n>\n> It can be that the distribution of values is very volatile; we hope\n> the increased stats target (from the default=100 to 1000 for this\n> column) and frequent autovacuum and autoanalyze helps in keeping the\n> estimates correct.\n>\n> It seems that it did find some other records (<> 'PRINTED), as is\n> demonstrated in the stats where there was only one value in the MCV\n> list: the frequency was 0.996567 and the fraction of nulls was 0,\n> therefore leaving 0.03+ for other values. But because none of them\n> were in the MCV and MCF list, they were all treated as equals. They\n> are certainly not equal.\n>\n> I not know why some values were found (they are mentioned in the\n> histogram_bounds), but are not part of the MCV list, as you say, the\n> likeliness of only 1 item being found is very small.\n>\n> Does anyone know the criteria for a value to be included in the MCV list?\n>\n\nOK, this is starting to look like a long-standing bug to me.\n\nIf it only sees 3 distinct values, and all three are present at least\ntwice, it throws\nall of them into the MCV list. But if one of those 3 were present just\nonce, then it\ntests them to see if they qualify. 
The test for inclusion is that it has\nto be present more than once, and that it must be \"over-represented\" by 25%.\n\nLets say it sampled 30000 rows and found 29,900 of one value, 99 of\nanother, and 1 of a third.\n\nBut that turns into the second one needing to be present 12,500 times. The\naverage value is present 10,000 times (30,000 samples with 3\ndistinct values) and 25 more than that is 12,500. So it excluded.\n\nIt seems to me that a more reasonable criteria is that it must be\nover-represented 25% compared to the average of all the remaining values\nnot yet accepted into the MCV list. I.e. all the greater ones should be\nsubtracted out before computing the over-representation threshold.\n\nIt is also grossly inconsistent with the other behavior. If they are\n\"29900; 98; 2\" then all three go into the MCV.\nIf they are \"29900; 99; 1\" then only the highest one goes in. The second\none gets evicted for being slightly *more* popular.\n\nThis is around line 2605 of src/backend/commands/analyze.c in head.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 26 Mar 2015 00:48:34 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
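Jeff's description of the inclusion test can be sketched to show the inconsistency he points out. This is a simplified model of the rule he describes near line 2605 of src/backend/commands/analyze.c, not the actual C code:

```python
# Simplified MCV inclusion rule as described above: if every sampled distinct
# value appeared at least twice, keep them all; otherwise keep only values
# seen more than once AND at least 25% above the average count per distinct
# value (mincount = 1.25 * samplerows / ndistinct).
def mcv_values(counts):
    if min(counts) >= 2:
        return sorted(counts, reverse=True)
    mincount = 1.25 * sum(counts) / len(counts)
    return sorted((c for c in counts if c > 1 and c >= mincount), reverse=True)

print(mcv_values([29_900, 98, 2]))  # all three make the list
print(mcv_values([29_900, 99, 1]))  # only the top value survives; 99 is evicted
```

The second call reproduces the thread's pathology: with counts 29900/99/1, mincount is 1.25 * 30000 / 3 = 12500, so the value seen 99 times is excluded even though it is more popular than the value seen 98 times in the first call.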
{
"msg_contents": "On 25 March 2015 at 22:45, Jeff Janes <[email protected]> wrote:\n\n> How can the avg_width be 4 when the vast majority of entries are 7\n> characters long?\n\nThe datatype is an enum, as I understand it, an enum type always\noccupies 4 bytes\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Mar 2015 09:17:55 +0100",
"msg_from": "Feike Steenbergen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "Sorry, didn't respond to all your questions:\n\n> What version of PostgreSQL are running? 'select version();'\n\nPostgreSQL 9.3.4 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\n> What do you get when to do \"analyze verbose print_list\"?\n\n# analyze verbose print_list ;\nINFO: analyzing \"print_list\"\nINFO: \"print_list\": scanned 53712 of 53712 pages, containing 3626950\nlive rows and 170090 dead rows; 300000 rows in sample, 3626950\nestimated total rows\nANALYZE\nTime: 6656.037 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Mar 2015 09:26:11 +0100",
"msg_from": "Feike Steenbergen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "On 26.3.2015 08:48, Jeff Janes wrote:\n>\n> OK, this is starting to look like a long-standing bug to me.\n> \n> If it only sees 3 distinct values, and all three are present at least\n> twice, it throws all of them into the MCV list. But if one of those 3\n> were present just once, then it tests them to see if they qualify.\n> The test for inclusion is that it has to be present more than once,\n> and that it must be \"over-represented\" by 25%.\n> \n> Lets say it sampled 30000 rows and found 29,900 of one value, 99 of\n> another, and 1 of a third.\n> \n> But that turns into the second one needing to be present 12,500 times. \n> The average value is present 10,000 times (30,000 samples with 3\n> distinct values) and 25 more than that is 12,500. So it excluded.\n> \n> It seems to me that a more reasonable criteria is that it must be\n> over-represented 25% compared to the average of all the remaining values\n> not yet accepted into the MCV list. I.e. all the greater ones should be\n> subtracted out before computing the over-representation threshold.\n\nThat might work IMO, but maybe we should increase the coefficient a bit\n(say, from 1.25 to 2), not to produce needlessly long MCV lists.\n\n\n> It is also grossly inconsistent with the other behavior. If they are\n> \"29900; 98; 2\" then all three go into the MCV.\n\nIsn't the mincount still 12500? How could all three get into the MCV?\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Mar 2015 13:44:19 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "On Thu, Mar 26, 2015 at 5:44 AM, Tomas Vondra <[email protected]>\nwrote:\n\n> On 26.3.2015 08:48, Jeff Janes wrote:\n> >\n> > OK, this is starting to look like a long-standing bug to me.\n> >\n> > If it only sees 3 distinct values, and all three are present at least\n> > twice, it throws all of them into the MCV list. But if one of those 3\n> > were present just once, then it tests them to see if they qualify.\n> > The test for inclusion is that it has to be present more than once,\n> > and that it must be \"over-represented\" by 25%.\n> >\n> > Lets say it sampled 30000 rows and found 29,900 of one value, 99 of\n> > another, and 1 of a third.\n> >\n> > But that turns into the second one needing to be present 12,500 times.\n> > The average value is present 10,000 times (30,000 samples with 3\n> > distinct values) and 25 more than that is 12,500. So it excluded.\n> >\n> > It seems to me that a more reasonable criteria is that it must be\n> > over-represented 25% compared to the average of all the remaining values\n> > not yet accepted into the MCV list. I.e. all the greater ones should be\n> > subtracted out before computing the over-representation threshold.\n>\n> That might work IMO, but maybe we should increase the coefficient a bit\n> (say, from 1.25 to 2), not to produce needlessly long MCV lists.\n>\n\nThat wouldn't work here, because at the point of decision the value present\n99 times contributes half the average, so the average is 50, and of course\nit can't possibly be twice of that.\n\nI have a patch, but is there a way to determine how it affects a wide\nvariety of situations? I guess run `make installcheck`, then analyze, then\ndump pg_stats, with the patch and without the patch, and then compare the\ndumpsj?\n\n\n\n>\n> > It is also grossly inconsistent with the other behavior. If they are\n> > \"29900; 98; 2\" then all three go into the MCV.\n>\n> Isn't the mincount still 12500? 
How could all three get into the MCV?\n>\n\nIf all observed values are observed at least twice, it takes a different\npath through the code. It just keeps them all in the MCV list. That is\nwhat is causing the instability for the OP. If the 3rd most common is seen\ntwice, then all three are kept. If it is seen once, then only the most\ncommon is kept. See the if statements at lines 2494 and 2585:\n\nelse if (toowide_cnt == 0 && nmultiple == ndistinct)\n\n if (track_cnt == ndistinct ....\n\nCheers,\n\nJeff",
"msg_date": "Thu, 26 Mar 2015 09:35:57 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
},
{
"msg_contents": "On 26.3.2015 17:35, Jeff Janes wrote:\n> On Thu, Mar 26, 2015 at 5:44 AM, Tomas Vondra\n> <[email protected] <mailto:[email protected]>> wrote:\n> \n>> That might work IMO, but maybe we should increase the coefficient a\n>> bit (say, from 1.25 to 2), not to produce needlessly long MCV\n>> lists.\n> \n> That wouldn't work here, because at the point of decision the value \n> present 99 times contributes half the average, so the average is 50, \n> and of course it can't possibly be twice of that.\n\nOh, right. How could I miss that? ;-)\n\n> I have a patch, but is there a way to determine how it affects a\n> wide variety of situations? I guess run `make installcheck`, then\n> analyze, then dump pg_stats, with the patch and without the patch,\n> and then compare the dumpsj?\n\nI doubt there's such way. I'd argue that if you can show this always\ngenerates longer MCV lists, we can assume the stats are probably more\naccurate, and thus the plans should be better.\n\nOf course, there's always the possibility that the plan was good by\nluck, and improving the estimates will result in a worse plan. But I\ndon't think we can really fix that :-(\n\n>>> It is also grossly inconsistent with the other behavior. If they\n>>> are \"29900; 98; 2\" then all three go into the MCV.\n>> \n>> Isn't the mincount still 12500? How could all three get into the\n>> MCV?\n> \n> If all observed values are observed at least twice, it takes a \n> different path through the code. It just keeps them all in the MCV \n> list. That is what is causing the instability for the OP. If the 3rd \n> most common is seen twice, then all three are kept. If it is seen \n> once, then only the most common is kept. See if statements at 2494 \n> and 2585\n> \n> else if (toowide_cnt == 0 && nmultiple == ndistinct)\n> \n> if (track_cnt == ndistinct ....\n\nAha, I think I see it now. 
I've been concentrating on this code:\n\n avgcount = (double) samplerows / ndistinct;\n /* set minimum threshold count to store a value */\n mincount = avgcount * 1.25;\n if (mincount < 2)\n mincount = 2;\n\nbut this is actually too late, because first we do this:\n\n else if (toowide_cnt == 0 && nmultiple == ndistinct)\n {\n stats->stadistinct = ndistinct;\n }\n\nand that only happens if each item is observed at least 2x in the sample\n(and the actual Haas and Stokes estimator it not used).\n\nAnd then we do this:\n\n if (track_cnt == ndistinct && toowide_cnt == 0 &&\n stats->stadistinct > 0 && track_cnt <= num_mcv)\n {\n num_mcv = track_cnt;\n }\n\nso that we track everything.\n\nIf at least one value is seen only 1x, it works differently, and we use\nthe code with (1.25*avgcount) threshold.\n\nI wonder where the 1.25x threshold comes from - whether it's something\nwe came up with, or if it comes from some paper. I guess the former.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Mar 2015 23:19:07 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index only scan sometimes switches to sequential scan\n for small amount of rows"
}
] |
[
{
"msg_contents": "Hello,\n\nI have an executing time problem for a query : this time is very \ndifferent as I used a local table or a foreign table : 20 times faster \nfor the foreign table\nOn a server 9.4.1, I have 2 spatial bases b1 (size 5.4 Go) et b2 (size \n19Mo) and in the base b1 the table tmp_obs_coordgps (61 Mo, 502982 \nlignes).\n\nWhen I use a JOIN construct beetwen this table tmp_obs_coordgps and a \nforeign table fao_areas (table in b2), the performance are best than \nwith a local table fao_aires_local (in b1).\nThese 2 tables fao_areas and fao_aires_local are identical (build with \n\"select * from\" or with pg_dump : the results are the same)\n\nThe links to \"explain analyze\" are\n* foreign table fao_areas : http://explain.depesz.com/s/4hO\nselect count(*) from tmp_obs_coordgps o, fao_areas f where \no.code_fao=f.f_code and st_contains(f.the_geom, o.geom);\n* local table fao_aires_local : http://explain.depesz.com/s/BvDb\nselect count(*) from tmp_obs_coordgps o, fao_aires_local f where \no.code_fao=f.f_code and st_contains(f.the_geom, o.geom);\n\nThanks by advance\n\n-- \nDominique Vallée\nUMS 3468 Bases de données sur la Biodiversité, Ecologie, Environnement et Sociétés (BBEES)\nMNHN - Muséum national d'Histoire naturelle\n01 40 79 53 70\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Mar 2015 11:46:55 +0100",
"msg_from": "=?UTF-8?B?RG9taW5pcXVlIFZhbGzDqWU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "query faster with a foreign table"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nI have an issue with a rather large CASE WHEN and I cannot figure out why\nit is so slow...\n\n \n\nFirst, here is my test query :\n\n \n\nSELECT CASE WHEN dwh_company_id = 1\n\n \nTHEN CASE\n\n \n\n \nWHEN wv.source ~ '^$' THEN 'Not tracked'\n\n \nWHEN wv.source ~ '^1$' THEN 'Not tracked1'\n\n \nWHEN wv.source ~ '^2$' THEN 'Not tracked2'\n\n \nWHEN wv.source ~ '^3$' THEN 'Not tracked3'\n\n \nWHEN wv.source ~ '^4$' THEN 'Not tracked4'\n\n \nWHEN wv.source ~ '^5$' THEN 'Not tracked5'\n\n \nWHEN wv.source ~ '^6$' THEN 'Not tracked6'\n\n \nWHEN wv.source ~ '^7$' THEN 'Not tracked7'\n\n \nWHEN wv.source ~ '^8$' THEN 'Not tracked8'\n\n \nWHEN wv.source ~ '^9$' THEN 'Not tracked9'\n\n \nWHEN wv.source ~ '^10$' THEN 'Not tracked10'\n\n \nWHEN wv.source ~ '^11$' THEN 'Not tracked11'\n\n \nWHEN wv.source ~ '^12$' THEN 'Not tracked12'\n\n \nWHEN wv.source ~ '^13$' THEN 'Not tracked13'\n\n \nWHEN wv.source ~ '^14$' THEN 'Not tracked14'\n\n \nWHEN wv.source ~ '^15$' THEN 'Not tracked15'\n\n \nWHEN wv.source ~ '^16$' THEN 'Not tracked16'\n\n \nWHEN wv.source ~ '^17$' THEN 'Not tracked17'\n\n \nWHEN wv.source ~ '^18$' THEN 'Not tracked18'\n\n \nWHEN wv.source ~ '^19$' THEN 'Not tracked19'\n\n \nWHEN wv.source ~ '^20$' THEN 'Not tracked20'\n\n \nWHEN wv.source ~ '^21$' THEN 'Not tracked21'\n\n \nWHEN wv.source ~ '^22$' THEN 'Not tracked22'\n\n \nWHEN wv.source ~ '^23$' THEN 'Not tracked23'\n\n \nWHEN wv.source ~ '^24$' THEN 'Not tracked24'\n\n \nWHEN wv.source ~ '^25$' THEN 'Not tracked25'\n\n \nWHEN wv.source ~ '^26$' THEN 'Not tracked26'\n\n \nWHEN wv.source ~ '^27$' THEN 'Not tracked27'\n\n \nWHEN wv.source ~ '^28$' THEN 'Not tracked28'\n\n \n--WHEN wv.source ~ '^29$' THEN 'Not tracked29'\n\n \nWHEN wv.source ~ '^30$' THEN 'Not tracked30'\n\n \nWHEN wv.source ~ '^31$' THEN 'Not tracked31'\n\n \nWHEN wv.source ~ '^32$' THEN 'Not tracked32'\n\n \nEND\n\n ELSE\n\n 'Others'\n\n END as channel\n\nFROM (\n\n SELECT wv.id,\n\n wv.ga_id, \n\n 
split_part(wv.ga_source_medium, ' / ',\n1) as source,\n\n ga.dwh_source_id,\n\n s.dwh_company_id\n\n FROM marketing.web_visits wv \n\n INNER JOIN dwh_metadata.google_analytics ga\nON ga.ga_id = wv.ga_id\n\n INNER JOIN dwh_manager.sources s ON\nga.dwh_source_id =s.dwh_source_id\n\n --WHERE s.dwh_company_id = 1\n\n LIMIT 100000\n\n ) wv\n\n \n\n \n\nThis is a pretty simple case, my subquery (or CTE when using WITH\nstatement) should return 5 fields with more or less this structure :\n\nId : character(32)\n\nGa_id : bigint\n\nSource : character(32)\n\nMedium : character(32)\n\ndwh_company_id : bigint\n\n \n\nOn top of which I apply a case when statement.\n\n \n\nNow the weird thing is, using this query I notice a significant drop in\nperformance as the \"case when\" is getting bigger. If I run the query as if,\nI get the following exec plain and execution time:\n\nSubquery Scan on wv (cost=6.00..29098.17 rows=100000 width=36) (actual\ntime=0.828..22476.917 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual\ntime=0.209..133.429 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58)\n(actual time=0.208..119.297 rows=100000 loops=1) \n\n Hash Cond: (wv_1.ga_id = ga.ga_id) \n\n Buffers: shared hit=3136 \n\n -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78\nrows=20587078 width=50) (actual time=0.004..18.412 rows=100000 loops=1) \n\n Buffers: shared hit=3133 \n\n -> Hash (cost=5.50..5.50 rows=40 width=12) (actual\ntime=0.184..0.184 rows=111 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage: 5kB \n\n Buffers: shared hit=3 \n\n -> Hash Join (cost=1.88..5.50 rows=40 width=12)\n(actual time=0.056..0.148 rows=111 loops=1) \n\n Hash Cond: (ga.dwh_source_id = s.dwh_source_id) \n\n Buffers: shared hit=3 \n\n -> Seq Scan on google_analytics ga\n(cost=0.00..2.89 rows=89 width=8) (actual time=0.007..0.028 rows=111\nloops=1) \n\n Buffers: shared hit=2 \n\n 
-> Hash (cost=1.39..1.39 rows=39 width=8)\n(actual time=0.042..0.042 rows=56 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage:\n3kB \n\n Buffers: shared hit=1 \n\n -> Seq Scan on sources s (cost=0.00..1.39\nrows=39 width=8) (actual time=0.005..0.020 rows=56 loops=1) \n\n Buffers: shared hit=1 \n\n Planning time: 0.599 ms \n\n Execution time: 22486.216 ms\n\n \n\nThen try commenting out only one line in the case when and the query run 10x\nfaster :\n\n \n\nSubquery Scan on wv (cost=6.00..28598.17 rows=100000 width=36) (actual\ntime=0.839..2460.002 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual\ntime=0.210..112.043 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58)\n(actual time=0.209..99.513 rows=100000 loops=1) \n\n Hash Cond: (wv_1.ga_id = ga.ga_id) \n\n Buffers: shared hit=3136 \n\n -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78\nrows=20587078 width=50) (actual time=0.004..14.048 rows=100000 loops=1) \n\n Buffers: shared hit=3133 \n\n -> Hash (cost=5.50..5.50 rows=40 width=12) (actual\ntime=0.184..0.184 rows=111 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage: 5kB \n\n Buffers: shared hit=3 \n\n -> Hash Join (cost=1.88..5.50 rows=40 width=12)\n(actual time=0.058..0.146 rows=111 loops=1) \n\n Hash Cond: (ga.dwh_source_id = s.dwh_source_id) \n\n Buffers: shared hit=3 \n\n -> Seq Scan on google_analytics ga\n(cost=0.00..2.89 rows=89 width=8) (actual time=0.007..0.025 rows=111\nloops=1) \n\n Buffers: shared hit=2 \n\n -> Hash (cost=1.39..1.39 rows=39 width=8)\n(actual time=0.042..0.042 rows=56 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage:\n3kB \n\n Buffers: shared hit=1 \n\n -> Seq Scan on sources s (cost=0.00..1.39\nrows=39 width=8) (actual time=0.006..0.021 rows=56 loops=1) \n\n Buffers: shared hit=1 \n\n Planning time: 0.583 ms \n\n Execution time: 2467.484 ms\n\n \n\nWhy this drop in performance for only one (in 
this simple example) condition?\nI do not really understand it. If I add more conditions to the query (let's\nsay 1 or 2) it gets even slower. And it's not a few ms, it is around 5\nsec or so, which is huge considering I only take 1/500 of my data in my\nexample with LIMIT.\n\n \n\nBefore we deviate from the problem I have (which is why the sudden drop of\nperformance), let me clarify a few things about this query:\n\n- The purpose is not to rewrite it, with a join or whatever; the\ncase when actually comes from a function which is auto-generated by another\napp we have\n\n- My example is pretty simple and the regex expressions could be\nreplaced by equals; the real case when query contains way more complicated\nregexes\n\n- This is a subset of my CASE WHEN, it is much bigger; I cut it at\nthe \"bottleneck\" point for this post.\n\n \n\nThanks a lot.\n\n \n\nBest Regards,\n\n \n\nKevin\n",
"msg_date": "Tue, 31 Mar 2015 10:53:35 +0200",
"msg_from": "\"Kevin Viraud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
{
"msg_contents": "Hi\n\nlong CASE can be problem. Why you don't use a dictionary table and join?\n\nRegards\n\nPavel\n\n2015-03-31 10:53 GMT+02:00 Kevin Viraud <[email protected]>:\n\n> Hi,\n>\n>\n>\n> I have an issue with a rather large CASE WHEN and I cannot figure out why\n> it is so slow...\n>\n>\n>\n> First, here is my test query :\n>\n>\n>\n> SELECT CASE WHEN dwh_company_id = 1\n>\n>\n> THEN CASE\n>\n>\n>\n>\n> WHEN wv.source ~ '^$' THEN 'Not tracked'\n>\n>\n>\n> WHEN wv.source ~ '^1$' THEN 'Not tracked1'\n>\n>\n> WHEN wv.source ~ '^2$' THEN 'Not tracked2'\n>\n>\n> WHEN wv.source ~ '^3$' THEN 'Not tracked3'\n>\n>\n> WHEN wv.source ~ '^4$' THEN 'Not tracked4'\n>\n>\n> WHEN wv.source ~ '^5$' THEN 'Not tracked5'\n>\n>\n> WHEN wv.source ~ '^6$' THEN 'Not tracked6'\n>\n>\n> WHEN wv.source ~ '^7$' THEN 'Not tracked7'\n>\n>\n> WHEN wv.source ~ '^8$' THEN 'Not tracked8'\n>\n>\n> WHEN wv.source ~ '^9$' THEN 'Not tracked9'\n>\n>\n> WHEN wv.source ~ '^10$' THEN 'Not tracked10'\n>\n>\n> WHEN wv.source ~ '^11$' THEN 'Not tracked11'\n>\n>\n> WHEN wv.source ~ '^12$' THEN 'Not tracked12'\n>\n>\n> WHEN wv.source ~ '^13$' THEN 'Not tracked13'\n>\n>\n> WHEN wv.source ~ '^14$' THEN 'Not tracked14'\n>\n>\n> WHEN wv.source ~ '^15$' THEN 'Not tracked15'\n>\n>\n> WHEN wv.source ~ '^16$' THEN 'Not tracked16'\n>\n>\n> WHEN wv.source ~ '^17$' THEN 'Not tracked17'\n>\n>\n> WHEN wv.source ~ '^18$' THEN 'Not tracked18'\n>\n>\n> WHEN wv.source ~ '^19$' THEN 'Not tracked19'\n>\n>\n> WHEN wv.source ~ '^20$' THEN 'Not tracked20'\n>\n>\n> WHEN wv.source ~ '^21$' THEN 'Not tracked21'\n>\n>\n> WHEN wv.source ~ '^22$' THEN 'Not tracked22'\n>\n>\n> WHEN wv.source ~ '^23$' THEN 'Not tracked23'\n>\n>\n> WHEN wv.source ~ '^24$' THEN 'Not tracked24'\n>\n>\n> WHEN wv.source ~ '^25$' THEN 'Not tracked25'\n>\n>\n> WHEN wv.source ~ '^26$' THEN 'Not tracked26'\n>\n>\n> WHEN wv.source ~ '^27$' THEN 'Not tracked27'\n>\n>\n> WHEN wv.source ~ '^28$' THEN 'Not tracked28'\n>\n>\n> --WHEN wv.source ~ '^29$' THEN 
'Not tracked29'\n>\n>\n> WHEN wv.source ~ '^30$' THEN 'Not tracked30'\n>\n>\n> WHEN wv.source ~ '^31$' THEN 'Not tracked31'\n>\n>\n> WHEN wv.source ~ '^32$' THEN 'Not tracked32'\n>\n>\n> END\n>\n> ELSE\n>\n> 'Others'\n>\n> END as channel\n>\n> FROM (\n>\n> SELECT wv.id,\n>\n> wv.ga_id,\n>\n> split_part(wv.ga_source_medium, ' /\n> ', 1) as source,\n>\n> ga.dwh_source_id,\n>\n> s.dwh_company_id\n>\n> FROM marketing.web_visits wv\n>\n> INNER JOIN dwh_metadata.google_analytics ga\n> ON ga.ga_id = wv.ga_id\n>\n> INNER JOIN dwh_manager.sources s ON\n> ga.dwh_source_id =s.dwh_source_id\n>\n> --WHERE s.dwh_company_id = 1\n>\n> LIMIT 100000\n>\n> ) wv\n>\n>\n>\n>\n>\n> This is a pretty simple case, my subquery (or CTE when using WITH\n> statement) should return 5 fields with more or less this structure :\n>\n> Id : character(32)\n>\n> Ga_id : bigint\n>\n> Source : character(32)\n>\n> Medium : character(32)\n>\n> dwh_company_id : bigint\n>\n>\n>\n> On top of which I apply a case when statement…\n>\n>\n>\n> Now the weird thing is, using this query I notice a significant drop in\n> performance as the “case when” is getting bigger. 
If I run the query as if,\n> I get the following exec plain and execution time:\n>\n> Subquery Scan on wv (cost=6.00..29098.17 rows=100000 width=36) (actual\n> time=0.828..22476.917 rows=100000 loops=1)\n>\n> Buffers: shared hit=3136\n>\n> -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual\n> time=0.209..133.429 rows=100000 loops=1)\n>\n> Buffers: shared hit=3136\n>\n> -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58)\n> (actual time=0.208..119.297 rows=100000 loops=1)\n>\n> Hash Cond: (wv_1.ga_id = ga.ga_id)\n>\n> Buffers: shared hit=3136\n>\n> -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78\n> rows=20587078 width=50) (actual time=0.004..18.412 rows=100000 loops=1)\n>\n> Buffers: shared hit=3133\n>\n> -> Hash (cost=5.50..5.50 rows=40 width=12) (actual\n> time=0.184..0.184 rows=111 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 5kB\n>\n> Buffers: shared hit=3\n>\n> -> Hash Join (cost=1.88..5.50 rows=40 width=12)\n> (actual time=0.056..0.148 rows=111 loops=1)\n>\n> Hash Cond: (ga.dwh_source_id = s.dwh_source_id)\n>\n> Buffers: shared hit=3\n>\n> -> Seq Scan on google_analytics ga\n> (cost=0.00..2.89 rows=89 width=8) (actual time=0.007..0.028 rows=111\n> loops=1)\n>\n> Buffers: shared hit=2\n>\n> -> Hash (cost=1.39..1.39 rows=39 width=8)\n> (actual time=0.042..0.042 rows=56 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage:\n> 3kB\n>\n> Buffers: shared hit=1\n>\n> -> Seq Scan on sources s\n> (cost=0.00..1.39 rows=39 width=8) (actual time=0.005..0.020 rows=56\n> loops=1)\n>\n> Buffers: shared hit=1\n>\n> Planning time: 0.599 ms\n>\n> Execution time: 22486.216 ms\n>\n>\n>\n> Then try commenting out only one line in the case when and the query run\n> 10x faster :\n>\n>\n>\n> Subquery Scan on wv (cost=6.00..28598.17 rows=100000 width=36) (actual\n> time=0.839..2460.002 rows=100000 loops=1)\n>\n> Buffers: shared hit=3136\n>\n> -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual\n> time=0.210..112.043 rows=100000 loops=1)\n>\n> 
Buffers: shared hit=3136\n>\n> -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58)\n> (actual time=0.209..99.513 rows=100000 loops=1)\n>\n> Hash Cond: (wv_1.ga_id = ga.ga_id)\n>\n> Buffers: shared hit=3136\n>\n> -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78\n> rows=20587078 width=50) (actual time=0.004..14.048 rows=100000 loops=1)\n>\n> Buffers: shared hit=3133\n>\n> -> Hash (cost=5.50..5.50 rows=40 width=12) (actual\n> time=0.184..0.184 rows=111 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 5kB\n>\n> Buffers: shared hit=3\n>\n> -> Hash Join (cost=1.88..5.50 rows=40 width=12)\n> (actual time=0.058..0.146 rows=111 loops=1)\n>\n> Hash Cond: (ga.dwh_source_id = s.dwh_source_id)\n>\n> Buffers: shared hit=3\n>\n> -> Seq Scan on google_analytics ga\n> (cost=0.00..2.89 rows=89 width=8) (actual time=0.007..0.025 rows=111\n> loops=1)\n>\n> Buffers: shared hit=2\n>\n> -> Hash (cost=1.39..1.39 rows=39 width=8)\n> (actual time=0.042..0.042 rows=56 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage:\n> 3kB\n>\n> Buffers: shared hit=1\n>\n> -> Seq Scan on sources s\n> (cost=0.00..1.39 rows=39 width=8) (actual time=0.006..0.021 rows=56\n> loops=1)\n>\n> Buffers: shared hit=1\n>\n> Planning time: 0.583 ms\n>\n> Execution time: 2467.484 ms\n>\n>\n>\n> Why this drop in performance for only one (in this simple example)\n> condition ? I do not really understand it. If I add more conditions to the\n> query (let say 1 or 2) it is also getting slower. And it’s not a few ms, it\n> is around 5 sec or so. 
(which is huge considering I only take in my example\n> 1/500 of my data with LIMIT.\n>\n>\n>\n> Before we deviate from the problem I have (which is why the sudden drop of\n> performance) let me clarify a few things about this query :\n>\n> - The purpose is not to rewrite it, with a join or whatever, the\n> case when actually comes from a function which is auto-generated by another\n> app we have\n>\n> - My example is pretty simple and regex expressions could be\n> replaced by equals, the real case when query contains way more complicated\n> regex\n>\n> - This is subset of my CASE WHEN, it is much bigger, I cut it at\n> the “bottleneck” point for this post.\n>\n>\n>\n> Thanks a lot.\n>\n>\n>\n> Best Regards,\n>\n>\n>\n> Kevin\n>\n>\n",
"msg_date": "Tue, 31 Mar 2015 11:08:58 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
{
"msg_contents": "Hi Pavel,\n\n \n\nThanks for your answer.\n\n \n\nYes sure, I could do that, but like I wrote the purpose is not to find a way to rewrite it. But to understand why at a certain point it is totally going off. I’m aware that the longer my case when will be the longest the query will run. But 10x slower for adding one condition, something feels wrong here.\n\n \n\nPlus, the case when is part of a function so basically I use it this way :\n\nSELECT col1, col2, get_channel(company_id, source_id, …)\n\nFROM mytable;\n\n \n\nGet_channel is coming from another app. And even though I have, I need to assume that I don’t have the control over this one and that I’m using it as if.\n\n \n\nThis is only my debugging query.\n\n \n\nBest regards,\n\n \n\nKevin\n\n \n\nFrom: Pavel Stehule [mailto:[email protected]] \nSent: Dienstag, 31. März 2015 11:09\nTo: Kevin Viraud\nCc: [email protected]\nSubject: Re: [PERFORM] Weird CASE WHEN behaviour causing query to be suddenly very slow\n\n \n\nHi\n\nlong CASE can be problem. 
Why you don't use a dictionary table and join?\n\nRegards\n\nPavel\n\n \n\n2015-03-31 10:53 GMT+02:00 Kevin Viraud <[email protected] <mailto:[email protected]> >:\n\nHi,\n\n \n\nI have an issue with a rather large CASE WHEN and I cannot figure out why it is so slow...\n\n \n\nFirst, here is my test query :\n\n \n\nSELECT CASE WHEN dwh_company_id = 1\n\n THEN CASE\n\n \n\n WHEN wv.source ~ '^$' THEN 'Not tracked'\n\n WHEN wv.source ~ '^1$' THEN 'Not tracked1'\n\n WHEN wv.source ~ '^2$' THEN 'Not tracked2'\n\n WHEN wv.source ~ '^3$' THEN 'Not tracked3'\n\n WHEN wv.source ~ '^4$' THEN 'Not tracked4'\n\n WHEN wv.source ~ '^5$' THEN 'Not tracked5'\n\n WHEN wv.source ~ '^6$' THEN 'Not tracked6'\n\n WHEN wv.source ~ '^7$' THEN 'Not tracked7'\n\n WHEN wv.source ~ '^8$' THEN 'Not tracked8'\n\n WHEN wv.source ~ '^9$' THEN 'Not tracked9'\n\n WHEN wv.source ~ '^10$' THEN 'Not tracked10'\n\n WHEN wv.source ~ '^11$' THEN 'Not tracked11'\n\n WHEN wv.source ~ '^12$' THEN 'Not tracked12'\n\n WHEN wv.source ~ '^13$' THEN 'Not tracked13'\n\n WHEN wv.source ~ '^14$' THEN 'Not tracked14'\n\n WHEN wv.source ~ '^15$' THEN 'Not tracked15'\n\n WHEN wv.source ~ '^16$' THEN 'Not tracked16'\n\n WHEN wv.source ~ '^17$' THEN 'Not tracked17'\n\n WHEN wv.source ~ '^18$' THEN 'Not tracked18'\n\n WHEN wv.source ~ '^19$' THEN 'Not tracked19'\n\n WHEN wv.source ~ '^20$' THEN 'Not tracked20'\n\n WHEN wv.source ~ '^21$' THEN 'Not tracked21'\n\n WHEN wv.source ~ '^22$' THEN 'Not tracked22'\n\n WHEN wv.source ~ '^23$' THEN 'Not tracked23'\n\n WHEN wv.source ~ '^24$' THEN 'Not tracked24'\n\n WHEN wv.source ~ '^25$' THEN 'Not tracked25'\n\n WHEN wv.source ~ '^26$' THEN 'Not tracked26'\n\n WHEN wv.source ~ '^27$' THEN 'Not tracked27'\n\n WHEN wv.source ~ '^28$' THEN 'Not tracked28'\n\n --WHEN wv.source ~ '^29$' THEN 'Not tracked29'\n\n WHEN wv.source ~ '^30$' THEN 'Not tracked30'\n\n WHEN wv.source ~ '^31$' THEN 'Not tracked31'\n\n WHEN wv.source ~ '^32$' THEN 'Not tracked32'\n\n END\n\n ELSE\n\n 
'Others'\n\n END as channel\n\nFROM (\n\n SELECT wv.id <http://wv.id> ,\n\n wv.ga_id, \n\n split_part(wv.ga_source_medium, ' / ', 1) as source,\n\n ga.dwh_source_id,\n\n s.dwh_company_id\n\n FROM marketing.web_visits wv \n\n INNER JOIN dwh_metadata.google_analytics ga ON ga.ga_id = wv.ga_id\n\n INNER JOIN dwh_manager.sources s ON ga.dwh_source_id =s.dwh_source_id\n\n --WHERE s.dwh_company_id = 1\n\n LIMIT 100000\n\n ) wv\n\n \n\n \n\nThis is a pretty simple case, my subquery (or CTE when using WITH statement) should return 5 fields with more or less this structure :\n\nId : character(32)\n\nGa_id : bigint\n\nSource : character(32)\n\nMedium : character(32)\n\ndwh_company_id : bigint\n\n \n\nOn top of which I apply a case when statement…\n\n \n\nNow the weird thing is, using this query I notice a significant drop in performance as the “case when” is getting bigger. If I run the query as if, I get the following exec plain and execution time:\n\nSubquery Scan on wv (cost=6.00..29098.17 rows=100000 width=36) (actual time=0.828..22476.917 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual time=0.209..133.429 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58) (actual time=0.208..119.297 rows=100000 loops=1) \n\n Hash Cond: (wv_1.ga_id = ga.ga_id) \n\n Buffers: shared hit=3136 \n\n -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78 rows=20587078 width=50) (actual time=0.004..18.412 rows=100000 loops=1) \n\n Buffers: shared hit=3133 \n\n -> Hash (cost=5.50..5.50 rows=40 width=12) (actual time=0.184..0.184 rows=111 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage: 5kB \n\n Buffers: shared hit=3 \n\n -> Hash Join (cost=1.88..5.50 rows=40 width=12) (actual time=0.056..0.148 rows=111 loops=1) \n\n Hash Cond: (ga.dwh_source_id = s.dwh_source_id) \n\n Buffers: shared hit=3 \n\n -> Seq Scan on google_analytics ga (cost=0.00..2.89 rows=89 width=8) 
(actual time=0.007..0.028 rows=111 loops=1) \n\n Buffers: shared hit=2 \n\n -> Hash (cost=1.39..1.39 rows=39 width=8) (actual time=0.042..0.042 rows=56 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage: 3kB \n\n Buffers: shared hit=1 \n\n -> Seq Scan on sources s (cost=0.00..1.39 rows=39 width=8) (actual time=0.005..0.020 rows=56 loops=1) \n\n Buffers: shared hit=1 \n\n Planning time: 0.599 ms \n\n Execution time: 22486.216 ms\n\n \n\nThen try commenting out only one line in the case when and the query run 10x faster :\n\n \n\nSubquery Scan on wv (cost=6.00..28598.17 rows=100000 width=36) (actual time=0.839..2460.002 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual time=0.210..112.043 rows=100000 loops=1) \n\n Buffers: shared hit=3136 \n\n -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58) (actual time=0.209..99.513 rows=100000 loops=1) \n\n Hash Cond: (wv_1.ga_id = ga.ga_id) \n\n Buffers: shared hit=3136 \n\n -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78 rows=20587078 width=50) (actual time=0.004..14.048 rows=100000 loops=1) \n\n Buffers: shared hit=3133 \n\n -> Hash (cost=5.50..5.50 rows=40 width=12) (actual time=0.184..0.184 rows=111 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage: 5kB \n\n Buffers: shared hit=3 \n\n -> Hash Join (cost=1.88..5.50 rows=40 width=12) (actual time=0.058..0.146 rows=111 loops=1) \n\n Hash Cond: (ga.dwh_source_id = s.dwh_source_id) \n\n Buffers: shared hit=3 \n\n -> Seq Scan on google_analytics ga (cost=0.00..2.89 rows=89 width=8) (actual time=0.007..0.025 rows=111 loops=1) \n\n Buffers: shared hit=2 \n\n -> Hash (cost=1.39..1.39 rows=39 width=8) (actual time=0.042..0.042 rows=56 loops=1) \n\n Buckets: 1024 Batches: 1 Memory Usage: 3kB \n\n Buffers: shared hit=1 \n\n -> Seq Scan on sources s (cost=0.00..1.39 rows=39 width=8) (actual time=0.006..0.021 rows=56 loops=1) \n\n Buffers: shared hit=1 \n\n Planning time: 0.583 ms \n\n Execution time: 
2467.484 ms\n\n \n\nWhy this drop in performance for only one (in this simple example) condition ? I do not really understand it. If I add more conditions to the query (let say 1 or 2) it is also getting slower. And it’s not a few ms, it is around 5 sec or so. (which is huge considering I only take in my example 1/500 of my data with LIMIT.\n\n \n\nBefore we deviate from the problem I have (which is why the sudden drop of performance) let me clarify a few things about this query :\n\n- The purpose is not to rewrite it, with a join or whatever, the case when actually comes from a function which is auto-generated by another app we have\n\n- My example is pretty simple and regex expressions could be replaced by equals, the real case when query contains way more complicated regex\n\n- This is subset of my CASE WHEN, it is much bigger, I cut it at the “bottleneck” point for this post.\n\n \n\nThanks a lot.\n\n \n\nBest Regards,\n\n \n\nKevin",
"msg_date": "Tue, 31 Mar 2015 11:19:18 +0200",
"msg_from": "\"Kevin Viraud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
{
"msg_contents": "2015-03-31 11:19 GMT+02:00 Kevin Viraud <[email protected]>:\n\n> Hi Pavel,\n>\n>\n>\n> Thanks for your answer.\n>\n>\n>\n> Yes sure, I could do that, but like I wrote the purpose is not to find a\n> way to rewrite it. But to understand why at a certain point it is totally\n> going off. I’m aware that the longer my case when will be the longest the\n> query will run. But 10x slower for adding one condition, something feels\n> wrong here.\n>\n\nIt is slow due lot of expressions evaluation. It is CPU expensive.\nPostgreSQL uses interpreted expression evaluation - and if you have lot of\nexpressions, then you have problem.\n\nRegards\n\nPavel\n\n\n>\n>\n> Plus, the case when is part of a function so basically I use it this way :\n>\n> SELECT col1, col2, get_channel(company_id, source_id, …)\n>\n> FROM mytable;\n>\n>\n>\n> Get_channel is coming from another app. And even though I have, I need to\n> assume that I don’t have the control over this one and that I’m using it as\n> if.\n>\n>\n>\n> This is only my debugging query.\n>\n>\n>\n> Best regards,\n>\n>\n>\n> Kevin\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]]\n> *Sent:* Dienstag, 31. März 2015 11:09\n> *To:* Kevin Viraud\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Weird CASE WHEN behaviour causing query to be\n> suddenly very slow\n>\n>\n>\n> Hi\n>\n> long CASE can be problem. 
Why you don't use a dictionary table and join?\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n> 2015-03-31 10:53 GMT+02:00 Kevin Viraud <[email protected]>:\n>\n> Hi,\n>\n>\n>\n> I have an issue with a rather large CASE WHEN and I cannot figure out why\n> it is so slow...\n>\n>\n>\n> First, here is my test query :\n>\n>\n>\n> SELECT CASE WHEN dwh_company_id = 1\n>\n>\n> THEN CASE\n>\n>\n>\n>\n> WHEN wv.source ~ '^$' THEN 'Not tracked'\n>\n>\n>\n> WHEN wv.source ~ '^1$' THEN 'Not tracked1'\n>\n>\n> WHEN wv.source ~ '^2$' THEN 'Not tracked2'\n>\n>\n> WHEN wv.source ~ '^3$' THEN 'Not tracked3'\n>\n>\n> WHEN wv.source ~ '^4$' THEN 'Not tracked4'\n>\n>\n> WHEN wv.source ~ '^5$' THEN 'Not tracked5'\n>\n>\n> WHEN wv.source ~ '^6$' THEN 'Not tracked6'\n>\n>\n> WHEN wv.source ~ '^7$' THEN 'Not tracked7'\n>\n>\n> WHEN wv.source ~ '^8$' THEN 'Not tracked8'\n>\n>\n> WHEN wv.source ~ '^9$' THEN 'Not tracked9'\n>\n>\n> WHEN wv.source ~ '^10$' THEN 'Not tracked10'\n>\n>\n> WHEN wv.source ~ '^11$' THEN 'Not tracked11'\n>\n>\n> WHEN wv.source ~ '^12$' THEN 'Not tracked12'\n>\n>\n> WHEN wv.source ~ '^13$' THEN 'Not tracked13'\n>\n>\n> WHEN wv.source ~ '^14$' THEN 'Not tracked14'\n>\n>\n> WHEN wv.source ~ '^15$' THEN 'Not tracked15'\n>\n>\n> WHEN wv.source ~ '^16$' THEN 'Not tracked16'\n>\n>\n> WHEN wv.source ~ '^17$' THEN 'Not tracked17'\n>\n>\n> WHEN wv.source ~ '^18$' THEN 'Not tracked18'\n>\n>\n> WHEN wv.source ~ '^19$' THEN 'Not tracked19'\n>\n>\n> WHEN wv.source ~ '^20$' THEN 'Not tracked20'\n>\n>\n> WHEN wv.source ~ '^21$' THEN 'Not tracked21'\n>\n>\n> WHEN wv.source ~ '^22$' THEN 'Not tracked22'\n>\n>\n> WHEN wv.source ~ '^23$' THEN 'Not tracked23'\n>\n>\n> WHEN wv.source ~ '^24$' THEN 'Not tracked24'\n>\n>\n> WHEN wv.source ~ '^25$' THEN 'Not tracked25'\n>\n>\n> WHEN wv.source ~ '^26$' THEN 'Not tracked26'\n>\n>\n> WHEN wv.source ~ '^27$' THEN 'Not tracked27'\n>\n>\n> WHEN wv.source ~ '^28$' THEN 'Not tracked28'\n>\n>\n> --WHEN wv.source ~ '^29$' THEN 'Not tracked29'\n>\n>\n> WHEN 
wv.source ~ '^30$' THEN 'Not tracked30'\n>\n>\n> WHEN wv.source ~ '^31$' THEN 'Not tracked31'\n>\n>\n> WHEN wv.source ~ '^32$' THEN 'Not tracked32'\n>\n>\n> END\n>\n> ELSE\n>\n> 'Others'\n>\n> END as channel\n>\n> FROM (\n>\n> SELECT wv.id,\n>\n> wv.ga_id,\n>\n> split_part(wv.ga_source_medium, ' /\n> ', 1) as source,\n>\n> ga.dwh_source_id,\n>\n> s.dwh_company_id\n>\n> FROM marketing.web_visits wv\n>\n> INNER JOIN dwh_metadata.google_analytics ga\n> ON ga.ga_id = wv.ga_id\n>\n> INNER JOIN dwh_manager.sources s ON\n> ga.dwh_source_id =s.dwh_source_id\n>\n> --WHERE s.dwh_company_id = 1\n>\n> LIMIT 100000\n>\n> ) wv\n>\n>\n>\n>\n>\n> This is a pretty simple case, my subquery (or CTE when using WITH\n> statement) should return 5 fields with more or less this structure :\n>\n> Id : character(32)\n>\n> Ga_id : bigint\n>\n> Source : character(32)\n>\n> Medium : character(32)\n>\n> dwh_company_id : bigint\n>\n>\n>\n> On top of which I apply a case when statement…\n>\n>\n>\n> Now the weird thing is, using this query I notice a significant drop in\n> performance as the “case when” is getting bigger. 
If I run the query as if,\n> I get the following exec plain and execution time:\n>\n> Subquery Scan on wv (cost=6.00..29098.17 rows=100000 width=36) (actual\n> time=0.828..22476.917 rows=100000 loops=1)\n>\n> Buffers: shared hit=3136\n>\n> -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual\n> time=0.209..133.429 rows=100000 loops=1)\n>\n> Buffers: shared hit=3136\n>\n> -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58)\n> (actual time=0.208..119.297 rows=100000 loops=1)\n>\n> Hash Cond: (wv_1.ga_id = ga.ga_id)\n>\n> Buffers: shared hit=3136\n>\n> -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78\n> rows=20587078 width=50) (actual time=0.004..18.412 rows=100000 loops=1)\n>\n> Buffers: shared hit=3133\n>\n> -> Hash (cost=5.50..5.50 rows=40 width=12) (actual\n> time=0.184..0.184 rows=111 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 5kB\n>\n> Buffers: shared hit=3\n>\n> -> Hash Join (cost=1.88..5.50 rows=40 width=12)\n> (actual time=0.056..0.148 rows=111 loops=1)\n>\n> Hash Cond: (ga.dwh_source_id = s.dwh_source_id)\n>\n> Buffers: shared hit=3\n>\n> -> Seq Scan on google_analytics ga\n> (cost=0.00..2.89 rows=89 width=8) (actual time=0.007..0.028 rows=111\n> loops=1)\n>\n> Buffers: shared hit=2\n>\n> -> Hash (cost=1.39..1.39 rows=39 width=8)\n> (actual time=0.042..0.042 rows=56 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage:\n> 3kB\n>\n> Buffers: shared hit=1\n>\n> -> Seq Scan on sources s\n> (cost=0.00..1.39 rows=39 width=8) (actual time=0.005..0.020 rows=56\n> loops=1)\n>\n> Buffers: shared hit=1\n>\n> Planning time: 0.599 ms\n>\n> Execution time: 22486.216 ms\n>\n>\n>\n> Then try commenting out only one line in the case when and the query run\n> 10x faster :\n>\n>\n>\n> Subquery Scan on wv (cost=6.00..28598.17 rows=100000 width=36) (actual\n> time=0.839..2460.002 rows=100000 loops=1)\n>\n> Buffers: shared hit=3136\n>\n> -> Limit (cost=6.00..11598.17 rows=100000 width=58) (actual\n> time=0.210..112.043 rows=100000 loops=1)\n>\n> 
Buffers: shared hit=3136\n>\n> -> Hash Join (cost=6.00..1069811.24 rows=9228690 width=58)\n> (actual time=0.209..99.513 rows=100000 loops=1)\n>\n> Hash Cond: (wv_1.ga_id = ga.ga_id)\n>\n> Buffers: shared hit=3136\n>\n> -> Seq Scan on web_visits wv_1 (cost=0.00..877005.78\n> rows=20587078 width=50) (actual time=0.004..14.048 rows=100000 loops=1)\n>\n> Buffers: shared hit=3133\n>\n> -> Hash (cost=5.50..5.50 rows=40 width=12) (actual\n> time=0.184..0.184 rows=111 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 5kB\n>\n> Buffers: shared hit=3\n>\n> -> Hash Join (cost=1.88..5.50 rows=40 width=12)\n> (actual time=0.058..0.146 rows=111 loops=1)\n>\n> Hash Cond: (ga.dwh_source_id = s.dwh_source_id)\n>\n> Buffers: shared hit=3\n>\n> -> Seq Scan on google_analytics ga\n> (cost=0.00..2.89 rows=89 width=8) (actual time=0.007..0.025 rows=111\n> loops=1)\n>\n> Buffers: shared hit=2\n>\n> -> Hash (cost=1.39..1.39 rows=39 width=8)\n> (actual time=0.042..0.042 rows=56 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage:\n> 3kB\n>\n> Buffers: shared hit=1\n>\n> -> Seq Scan on sources s\n> (cost=0.00..1.39 rows=39 width=8) (actual time=0.006..0.021 rows=56\n> loops=1)\n>\n> Buffers: shared hit=1\n>\n> Planning time: 0.583 ms\n>\n> Execution time: 2467.484 ms\n>\n>\n>\n> Why this drop in performance for only one (in this simple example)\n> condition ? I do not really understand it. If I add more conditions to the\n> query (let say 1 or 2) it is also getting slower. And it’s not a few ms, it\n> is around 5 sec or so. 
(which is huge considering I only take in my example\n> 1/500 of my data with LIMIT.\n>\n>\n>\n> Before we deviate from the problem I have (which is why the sudden drop of\n> performance) let me clarify a few things about this query :\n>\n> - The purpose is not to rewrite it, with a join or whatever, the\n> case when actually comes from a function which is auto-generated by another\n> app we have\n>\n> - My example is pretty simple and regex expressions could be\n> replaced by equals, the real case when query contains way more complicated\n> regex\n>\n> - This is subset of my CASE WHEN, it is much bigger, I cut it at\n> the “bottleneck” point for this post.\n>\n>\n>\n> Thanks a lot.\n>\n>\n>\n> Best Regards,\n>\n>\n>\n> Kevin\n>\n>\n>\n>\n",
"msg_date": "Tue, 31 Mar 2015 11:41:48 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
{
"msg_contents": "\"Kevin Viraud\" <[email protected]> writes:\n> I have an issue with a rather large CASE WHEN and I cannot figure out why\n> it is so slow...\n\nDo all the arms of the CASE usually fail, leaving you at the ELSE?\n\nI suspect what's happening is that you're running into the MAX_CACHED_RES\nlimit in src/backend/utils/adt/regexp.c, so that instead of just compiling\neach regexp once and then re-using 'em, the regexps are constantly falling\nout of cache and then having to be recompiled. They'd have to be used in\na nearly perfect round robin in order for the behavior to have such a big\ncliff as you describe, though. In this CASE structure, that suggests that\nyou're nearly always testing every regexp because they're all failing.\n\nI have to think there's probably a better way to do whatever you're trying\nto do, but there's not enough info here about your underlying goal to\nsuggest a better approach. At the very least, if you need a many-armed\nCASE, it behooves you to make sure the common cases appear early.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 31 Mar 2015 09:58:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
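Tom's round-robin theory is easy to model outside the database. The sketch below uses a hypothetical Python `RegexCache` class as a stand-in for the server's bounded cache (the real one lives in src/backend/utils/adt/regexp.c; MAX_CACHED_RES was 32 at the time, if memory serves). With an LRU cache just one slot smaller than the pattern set, probing the patterns in a fixed order (exactly what a many-armed CASE whose arms all fail does on every row) drives the hit rate to zero:

```python
import re
from collections import OrderedDict

class RegexCache:
    """Hypothetical model of a fixed-size LRU cache of compiled regexes,
    standing in for PostgreSQL's cache in src/backend/utils/adt/regexp.c."""

    def __init__(self, max_cached):
        self.max_cached = max_cached
        self.cache = OrderedDict()   # pattern -> compiled regex, LRU order
        self.compiles = 0            # how many times we had to (re)compile

    def match(self, pattern, s):
        if pattern in self.cache:
            self.cache.move_to_end(pattern)        # cache hit: just reuse
        else:
            self.compiles += 1                     # miss: compile again
            if len(self.cache) >= self.max_cached:
                self.cache.popitem(last=False)     # evict least recently used
            self.cache[pattern] = re.compile(pattern)
        return self.cache[pattern].match(s) is not None

# One more pattern than the cache holds, probed in a fixed order on every
# "row" -- a perfect round robin, so each entry is evicted just before reuse.
patterns = ["source_%d" % i for i in range(33)]
cache = RegexCache(max_cached=32)
for row in range(100):
    for p in patterns:
        cache.match(p, "no_match_here")

print(cache.compiles)  # 3300: all 33 patterns recompiled for all 100 rows
```

With 32 patterns the same loop would compile each regex exactly once (32 compiles total); adding the 33rd arm turns every probe into a recompile, which is the kind of cliff described in the thread.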
{
"msg_contents": "Touche ! Thanks a lot.\n\nLooking more at the data, yes, it goes very often to the ELSE clause, and\ntherefore reaches the MAX_CACHED_RES limit.\n\nIs there any way to increase that value?\n\nBasically, I have several tables containing millions of rows and, let's say, 5\ncolumns. Those five columns, depending on their combination, give me a 6th\nvalue.\nWe have complex patterns to match and using simple LIKE / EQUAL and so on\nwouldn't be enough. This can be applied to any number of tables, so we\nrefactored this process into a function that we can use in the SELECT\nstatement, by giving only the 5 values each time.\n\nI wouldn't mind using a table and mapping it through a join if it were for\nmy own use.\nBut the final query has to be readable and usable for an almost-non-initiated\nSQL user... So using a function with an encapsulated CASE WHEN seemed to be a\ngood idea and so far has worked nicely.\n\nBut we might consider changing it if we have no other choice...\n\nRegards,\n\nKevin\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Dienstag, 31. März 2015 15:59\nTo: Kevin Viraud\nCc: [email protected]\nSubject: Re: [PERFORM] Weird CASE WHEN behaviour causing query to be\nsuddenly very slow\n\n\"Kevin Viraud\" <[email protected]> writes:\n> I have an issue with a rather large CASE WHEN and I cannot figure out \n> why it is so slow...\n\nDo all the arms of the CASE usually fail, leaving you at the ELSE?\n\nI suspect what's happening is that you're running into the MAX_CACHED_RES\nlimit in src/backend/utils/adt/regexp.c, so that instead of just compiling\neach regexp once and then re-using 'em, the regexps are constantly falling\nout of cache and then having to be recompiled. They'd have to be used in a\nnearly perfect round robin in order for the behavior to have such a big\ncliff as you describe, though. 
In this CASE structure, that suggests that\nyou're nearly always testing every regexp because they're all failing.\n\nI have to think there's probably a better way to do whatever you're trying\nto do, but there's not enough info here about your underlying goal to\nsuggest a better approach. At the very least, if you need a many-armed\nCASE, it behooves you to make sure the common cases appear early.\n\n\t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 31 Mar 2015 17:58:57 +0200",
"msg_from": "\"Kevin Viraud\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
{
"msg_contents": "On Tue, Mar 31, 2015 at 8:58 AM, Kevin Viraud <\[email protected]> wrote:\n\n> Touche ! Thanks a lot.\n>\n> Looking more at the data yes it goes very often to ELSE Clause. And\n> therefore reaching the MAX_CACHED_RES.\n>\n> In there anyway to increase that value ?\n>\n> Basically, I have several tables containing millions of rows and let say 5\n> columns. Those five columns, depending of their combination give me a 6th\n> value.\n> We have complex patterns to match and using simple LIKE / EQUAL and so on\n> wouldn't be enough. This can be applied to N number of table so we\n> refactored this process into a function that we can use in the SELECT\n> statement, by giving only the 5 values each time.\n>\n> I wouldn't mind using a table and mapping it through a join if it were for\n> my own use.\n> But the final query has to be readable and usable for almost-non-initiated\n> SQL user... So using a function with encapsulated case when seemed to be a\n> good idea and so far worked nicely.\n>\n> But we might consider changing it if we have no other choice...\n>\n> Regards,\n>\n> Kevin\n>\n>\nThoughts...\n\nMemoization: http://en.wikipedia.org/wiki/Memoization\n\nRewrite the function in pl/perl and compare performance\n\nHierarchy of CASE statements allowing you to reduce the number of\npossibilities in exchange for manually pre-processing the batches on a\nsignificantly less complicated condition probably using only 1 or 2 columns\ninstead of all five.\n\nI'm not familiar with the caching constraint or the data so its hard to\nmake more specific suggestions.\n\nDavid J.",
"msg_date": "Tue, 31 Mar 2015 10:03:10 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
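To make the memoization idea concrete for this workload (millions of rows but typically far fewer distinct column combinations), here is a minimal Python sketch; the regex rules are invented stand-ins for the auto-generated CASE arms:

```python
import re
from functools import lru_cache

# Invented stand-in rules; the real CASE arms are far more complicated regexes.
RULES = [
    (re.compile(r"google|bing|yahoo"), "search"),
    (re.compile(r"facebook|twitter"), "social"),
    (re.compile(r"newsletter"), "email"),
]

def classify(source, medium):
    """The expensive path: try every arm, fall through to the ELSE value."""
    for rx, label in RULES:
        if rx.search(source) or rx.search(medium):
            return label
    return "other"

@lru_cache(maxsize=None)
def classify_memo(source, medium):
    # Same answer, but each distinct (source, medium) pair pays the regex
    # cost only once, no matter how many million rows repeat it.
    return classify(source, medium)

rows = [("google.com", "cpc"), ("direct", "(none)"), ("google.com", "cpc")] * 1000
labels = [classify_memo(s, m) for s, m in rows]
print(classify_memo.cache_info().misses)  # 2: only two distinct pairs hit the regexes
```

The same effect is available in SQL by classifying the SELECT DISTINCT combinations once and joining the result back, at the cost of the readability the function approach was chosen for.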
{
"msg_contents": "On 3/31/15 10:58 AM, Kevin Viraud wrote:\n> Touche ! Thanks a lot.\n>\n> Looking more at the data yes it goes very often to ELSE Clause. And\n> therefore reaching the MAX_CACHED_RES.\n>\n> In there anyway to increase that value ?\n\nSure, change it and re-compile. But be aware that increasing it will \nprobably increase the cost of some other stuff, so it's a tradeoff.\n\nIf this is that complex though you very likely would do better in \nplperl, especially if you could pre-compile the RE's. AFAIK there's no \nway to do that in Postgres, though it might be interesting to add that \nability.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Apr 2015 18:38:38 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
},
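Jim's point about pre-compiling the REs has a direct analogue in Python, whose `re` module also keeps a bounded cache of compiled patterns (the private `re._MAXCACHE`, 512 in recent CPython). A rough sketch of the thrash-versus-precompile difference, with invented patterns:

```python
import re
import timeit

# More patterns than re's internal cache holds, probed round-robin: the
# module-level re.search() path keeps missing its bounded pattern cache,
# much like MAX_CACHED_RES in the server.
patterns = [r"pattern_%d_\d+" % i for i in range(600)]
text = "no match here"

def uncompiled():
    for p in patterns:
        re.search(p, text)      # goes through re's bounded pattern cache

compiled = [re.compile(p) for p in patterns]   # compile once, like qr// in Perl

def precompiled():
    for rx in compiled:
        rx.search(text)         # no cache lookup, no recompilation

t_thrash = timeit.timeit(uncompiled, number=20)
t_pre = timeit.timeit(precompiled, number=20)
print(t_thrash > t_pre)  # True: precompiled objects win once the cache thrashes
```

Postgres has no user-visible equivalent of the `compiled` list, which is exactly the ability Jim suggests might be interesting to add.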
{
"msg_contents": "We have to confirm the theory first: a 'perf top' sampling during two\nruns shall give enough information.\n\nRegards,\nQingqing\n\nOn Tue, Mar 31, 2015 at 8:58 AM, Kevin Viraud\n<[email protected]> wrote:\n> Touche ! Thanks a lot.\n>\n> Looking more at the data yes it goes very often to ELSE Clause. And\n> therefore reaching the MAX_CACHED_RES.\n>\n> In there anyway to increase that value ?\n>\n> Basically, I have several tables containing millions of rows and let say 5\n> columns. Those five columns, depending of their combination give me a 6th\n> value.\n> We have complex patterns to match and using simple LIKE / EQUAL and so on\n> wouldn't be enough. This can be applied to N number of table so we\n> refactored this process into a function that we can use in the SELECT\n> statement, by giving only the 5 values each time.\n>\n> I wouldn't mind using a table and mapping it through a join if it were for\n> my own use.\n> But the final query has to be readable and usable for almost-non-initiated\n> SQL user... So using a function with encapsulated case when seemed to be a\n> good idea and so far worked nicely.\n>\n> But we might consider changing it if we have no other choice...\n>\n> Regards,\n>\n> Kevin\n>\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Dienstag, 31. 
März 2015 15:59\n> To: Kevin Viraud\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Weird CASE WHEN behaviour causing query to be\n> suddenly very slow\n>\n> \"Kevin Viraud\" <[email protected]> writes:\n>> I have an issue with a rather large CASE WHEN and I cannot figure out\n>> why it is so slow...\n>\n> Do all the arms of the CASE usually fail, leaving you at the ELSE?\n>\n> I suspect what's happening is that you're running into the MAX_CACHED_RES\n> limit in src/backend/utils/adt/regexp.c, so that instead of just compiling\n> each regexp once and then re-using 'em, the regexps are constantly falling\n> out of cache and then having to be recompiled. They'd have to be used in a\n> nearly perfect round robin in order for the behavior to have such a big\n> cliff as you describe, though. In this CASE structure, that suggests that\n> you're nearly always testing every regexp because they're all failing.\n>\n> I have to think there's probably a better way to do whatever you're trying\n> to do, but there's not enough info here about your underlying goal to\n> suggest a better approach. At the very least, if you need a many-armed\n> CASE, it behooves you to make sure the common cases appear early.\n>\n> regards, tom lane\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Apr 2015 16:50:53 -0700",
"msg_from": "Qingqing Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird CASE WHEN behaviour causing query to be suddenly very slow"
}
] |
[
{
"msg_contents": "All,\n\nI currently have access to a matched pair of 20-core, 128GB RAM servers\nwith SSD-PCI storage, for about 2 weeks before they go into production.\n Are there any performance tests people would like to see me run on\nthese? Otherwise, I'll just do some pgbench and DVDStore.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 31 Mar 2015 12:52:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some performance testing?"
},
{
"msg_contents": "It would be interesting to get raw performance benchmarks in addition to\r\nPG specific benchmarks. I’ve been measuring raw I/O performance of a few\r\nof our systems and run the following tests as well:\r\n\r\n1. 10 runs of bonnie++\r\n2. 5 runs of hdparm -Tt\r\n3. Using a temp file created on the SSD, dd if=tempfile of=/dev/null bs=1M\r\ncount=1024 && echo 3 > /proc/sys/vm/drop_caches; dd\r\nif=tempfile of=/dev/null bs=1M count=1024\r\n4. Using phoronix benchmarks -> stream / ramspeed / compress-7zip\r\n\r\n\r\nI was curious to measure the magnitude of difference between HDD -> SSD. I\r\nwould expect significant differences between SSD -> PCI-E Flash.\r\n\r\nI’ve included some metrics from some previous runs vs. different types of\r\nSSDs (OWC Mercury Extreme 6G which is our standard SSD, an Intel S3700\r\nSSD, a Samsung SSD 840 PRO) vs. some standard HDD from Western Digital\r\nand HGST. I put in a req for a 960Gb Mercury Excelsior PCI-E SSD which\r\nhasn’t yet materialized ...\r\n\r\nThanks, M.\r\n\r\nMel Llaguno • Staff Engineer – Team Lead\r\nOffice: +1.403.264.9717 x310\r\nwww.coverity.com <http://www.coverity.com/> • Twitter: @coverity\r\nCoverity by Synopsys\r\n\r\n\r\nOn 3/31/15, 1:52 PM, \"Josh Berkus\" <[email protected]> wrote:\r\n\r\n>All,\r\n>\r\n>I currently have access to a matched pair of 20-core, 128GB RAM servers\r\n>with SSD-PCI storage, for about 2 weeks before they go into production.\r\n> Are there any performance tests people would like to see me run on\r\n>these? 
Otherwise, I'll just do some pgbench and DVDStore.\r\n>\r\n>-- \r\n>Josh Berkus\r\n>PostgreSQL Experts Inc.\r\n>http://pgexperts.com\r\n>\r\n>\r\n>-- \r\n>Sent via pgsql-performance mailing list ([email protected])\r\n>To make changes to your subscription:\r\n>http://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 31 Mar 2015 20:41:55 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "Maybe you will find time to benchmark xfs vs ext4 (with and without\njournaling enabled on ext4).\n\nA nice comparison could also be rhel 6.5 with its newest kernel 2.6.32-X vs\nRHEL 7.0 and kernel 3.10.\n\nI was looking for some guidance on what to choose and there is very little\ninformation about such things.\n\n-- \nPrzemysław Deć\nSenior Solutions Architect\nLinux Polska Sp. z o.o\n\n2015-04-01 10:37 GMT+02:00 Przemysław Deć <[email protected]>:\n\n> Maybe you will find time to benchmark xfs vs ext4 (with and without\n> journaling enabled on ext4).\n>\n> A nice comparison could also be rhel 6.5 with its newest kernel 2.6.32-X vs\n> RHEL 7.0 and kernel 3.10.\n>\n> I was looking for some guidance on what to choose and there is very little\n> information about such things.\n>\n> --\n> Przemysław Deć\n> Senior Solutions Architect\n> Linux Polska Sp. z o.o\n>\n>\n> 2015-03-31 22:41 GMT+02:00 Mel Llaguno <[email protected]>:\n>\n>> It would be interesting to get raw performance benchmarks in addition to\n>> PG specific benchmarks. I’ve been measuring raw I/O performance of a few\n>> of our systems and run the following tests as well:\n>>\n>> 1. 10 runs of bonnie++\n>> 2. 5 runs of hdparm -Tt\n>> 3. Using a temp file created on the SSD, dd if=tempfile of=/dev/null bs=1M\n>> count=1024 && echo 3 > /proc/sys/vm/drop_caches; dd\n>> if=tempfile of=/dev/null bs=1M count=1024\n>> 4. Using phoronix benchmarks -> stream / ramspeed / compress-7zip\n>>\n>>\n>> I was curious to measure the magnitude of difference between HDD -> SSD. I\n>> would expect significant differences between SSD -> PCI-E Flash.\n>>\n>> I’ve included some metrics from some previous runs vs. different types of\n>> SSDs (OWC Mercury Extreme 6G which is our standard SSD, an Intel S3700\n>> SSD, a Samsung SSD 840 PRO) vs. some standard HDD from Western Digital\n>> and HGST. 
I put in a req for a 960Gb Mercury Excelsior PCI-E SSD which\n>> hasn’t yet materialized ...\n>>\n>> Thanks, M.\n>>\n>> Mel Llaguno • Staff Engineer – Team Lead\n>> Office: +1.403.264.9717 x310\n>> www.coverity.com <http://www.coverity.com/> • Twitter: @coverity\n>> Coverity by Synopsys\n>>\n>>\n>> On 3/31/15, 1:52 PM, \"Josh Berkus\" <[email protected]> wrote:\n>>\n>> >All,\n>> >\n>> >I currently have access to a matched pair of 20-core, 128GB RAM servers\n>> >with SSD-PCI storage, for about 2 weeks before they go into production.\n>> > Are there any performance tests people would like to see me run on\n>> >these? Otherwise, I'll just do some pgbench and DVDStore.\n>> >\n>> >--\n>> >Josh Berkus\n>> >PostgreSQL Experts Inc.\n>> >http://pgexperts.com\n>> >\n>> >\n>> >--\n>> >Sent via pgsql-performance mailing list (\n>> [email protected])\n>> >To make changes to your subscription:\n>> >http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>",
"msg_date": "Wed, 1 Apr 2015 11:02:02 +0200",
"msg_from": "=?UTF-8?B?UHJ6ZW15c8WCYXcgRGXEhw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "Although the results only focus on SATA2 HDD, these may be useful for a comparison of ext4 vs. xfs : http://www.fuzzy.cz/bench/compare-pgbench.php?type[]=btrfs-datacow-barrier:1&type[]=btrfs-datacow-nobarrier:1&type[]=btrfs-nodatacow-barrier:1&type[]=btrfs-nodatacow-nobarrier:1&type[]=ext4-writeback-barrier:1&type[]=ext4-writeback-nobarrier:1&type[]=xfs-barrier:1&type[]=xfs-nobarrier:1?\n\n\nM.\n\n________________________________\nFrom: Przemysław Deć <[email protected]>\nSent: Wednesday, April 01, 2015 3:02 AM\nTo: Mel Llaguno\nCc: Josh Berkus; [email protected]\nSubject: Re: [PERFORM] Some performance testing?\n\nMaybe you will find time to benchamark xfs vs ext4 (with and without journaling enabled on ext4).\n\nNice comparison also could be rhel 6.5 with its newest kernel 2.6.32-X vs RHEL 7.0 and kernel 3.10.\n\nI was looking for some guidance what to choose and there is very poor information about such things.\n--\nPrzemysław Deć\nSenior Solutions Architect\nLinux Polska Sp. z o.o\n\n2015-04-01 10:37 GMT+02:00 Przemysław Deć <[email protected]<mailto:[email protected]>>:\nMaybe you will find time to benchamark xfs vs ext4 (with and without journaling enabled on ext4).\n\nNice comparison also could be rhel 6.5 with its newest kernel 2.6.32-X vs RHEL 7.0 and kernel 3.10.\n\nI was looking for some guidance what to choose and there is very poor information about such things.\n\n--\nPrzemysław Deć\nSenior Solutions Architect\nLinux Polska Sp. z o.o\n\n\n2015-03-31 22:41 GMT+02:00 Mel Llaguno <[email protected]<mailto:[email protected]>>:\nIt would be interesting to get raw performance benchmarks in addition to\nPG specific benchmarks. I've been measuring raw I/O performance of a few\nof our systems and run the following tests as well:\n\n1. 10 runs of bonnie++\n2. 5 runs of hdparm -Tt\n3. Using a temp file created on the SSD, dd if=tempfile of=/dev/null bs=1M\ncount=1024 && echo 3 > /proc/sys/vm/drop_caches; dd\nif=tempfileof=/dev/null bs=1M count=1024\n4. Using phoronix benchmarks -> stream / ramspeed / compress-7zip\n\n\nI was curious to measure the magnitude of difference between HDD -> SSD. I\nwould expect significant differences between SSD -> PCI-E Flash.\n\nI've included some metrics from some previous runs vs. different types of\nSSDs (OWC Mercury Extreme 6G which is our standard SSD, an Intel S3700\nSSD, a Samsung SSD 840 PRO) vs. some standard HDD from Western Digital\nand HGST. I put in a req for a 960Gb Mercury Excelsior PCI-E SSD which\nhasn't yet materialized ...\n\nThanks, M.\n\nMel Llaguno * Staff Engineer - Team Lead\nOffice: +1.403.264.9717 x310\nwww.coverity.com<http://www.coverity.com> <http://www.coverity.com/> * Twitter: @coverity\nCoverity by Synopsys\n\n\nOn 3/31/15, 1:52 PM, \"Josh Berkus\" <[email protected]<mailto:[email protected]>> wrote:\n\n>All,\n>\n>I currently have access to a matched pair of 20-core, 128GB RAM servers\n>with SSD-PCI storage, for about 2 weeks before they go into production.\n> Are there any performance tests people would like to see me run on\n>these? Otherwise, I'll just do some pgbench and DVDStore.\n>\n>--\n>Josh Berkus\n>PostgreSQL Experts Inc.\n>http://pgexperts.com\n>\n>\n>--\n>Sent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 3 Apr 2015 17:44:30 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "On 04/01/2015 01:37 AM, Przemysław Deć wrote:\n> Maybe you will find time to benchamark xfs vs ext4 (with and without\n> journaling enabled on ext4).\n> \n> Nice comparison also could be rhel 6.5 with its newest kernel 2.6.32-X\n> vs RHEL 7.0 and kernel 3.10.\n\nDue to how these are hosted, I can't swap out kernels.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 06 Apr 2015 13:51:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "FYI - all my tests were conducted using Ubuntu 12.04 x64 LTS (which I\r\nbelieve ships a 3.x series kernel).\r\n\r\nMel Llaguno • Staff Engineer – Team Lead\r\nOffice: +1.403.264.9717 x310\r\nwww.coverity.com <http://www.coverity.com/> • Twitter: @coverity\r\nCoverity by Synopsys\r\n\r\n\r\n\r\n\r\nOn 4/6/15, 2:51 PM, \"Josh Berkus\" <[email protected]> wrote:\r\n\r\n>On 04/01/2015 01:37 AM, Przemysław Deć wrote:\r\n>> Maybe you will find time to benchamark xfs vs ext4 (with and without\r\n>> journaling enabled on ext4).\r\n>> \r\n>> Nice comparison also could be rhel 6.5 with its newest kernel 2.6.32-X\r\n>> vs RHEL 7.0 and kernel 3.10.\r\n>\r\n>Due to how these are hosted, I can't swap out kernels.\r\n>\r\n>-- \r\n>Josh Berkus\r\n>PostgreSQL Experts Inc.\r\n>http://pgexperts.com\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Apr 2015 16:46:05 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "On 04/07/2015 09:46 AM, Mel Llaguno wrote:\n> FYI - all my tests were conducted using Ubuntu 12.04 x64 LTS (which I\n> believe are all 3.xx series kernels).\n\nIf it's 3.2 or 3.5, then your tests aren't useful, I'm afraid. Both of\nthose kernels have known, severe, memory management issues.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Apr 2015 10:41:42 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "Care to elaborate? We usually do not recommend specific kernel versions\r\nfor our customers (who run on a variety of distributions). Thanks, M.\r\n\r\nMel Llaguno • Staff Engineer – Team Lead\r\nOffice: +1.403.264.9717 x310\r\nwww.coverity.com <http://www.coverity.com/> • Twitter: @coverity\r\nCoverity by Synopsys\r\n\r\n\r\nOn 4/7/15, 11:41 AM, \"Josh Berkus\" <[email protected]> wrote:\r\n\r\n>On 04/07/2015 09:46 AM, Mel Llaguno wrote:\r\n>> FYI - all my tests were conducted using Ubuntu 12.04 x64 LTS (which I\r\n>> believe are all 3.xx series kernels).\r\n>\r\n>If it's 3.2 or 3.5, then your tests aren't useful, I'm afraid. Both of\r\n>those kernels have known, severe, memory management issues.\r\n>\r\n>-- \r\n>Josh Berkus\r\n>PostgreSQL Experts Inc.\r\n>http://pgexperts.com\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Apr 2015 18:07:12 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "On 04/07/2015 11:07 AM, Mel Llaguno wrote:\n> Care to elaborate? We usually do not recommend specific kernel versions\n> for our customers (who run on a variety of distributions). Thanks, M.\n\nYou should.\n\nhttp://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n\nPerformance is literally 2X to 5X different between kernels.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 08 Apr 2015 12:05:19 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "Cool. Good to know. I'll see if I can replicate these results in my environment. Thanks, M.\n\n________________________________________\nFrom: Josh Berkus <[email protected]>\nSent: Wednesday, April 08, 2015 1:05 PM\nTo: Mel Llaguno; Przemysław Deć\nCc: [email protected]\nSubject: Re: [PERFORM] Some performance testing?\n\nOn 04/07/2015 11:07 AM, Mel Llaguno wrote:\n> Care to elaborate? We usually do not recommend specific kernel versions\n> for our customers (who run on a variety of distributions). Thanks, M.\n\nYou should.\n\nhttp://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n\nPerformance is literally 2X to 5X different between kernels.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 8 Apr 2015 21:29:49 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "On Wed, Apr 8, 2015 at 3:05 PM, Josh Berkus <[email protected]> wrote:\n\n> On 04/07/2015 11:07 AM, Mel Llaguno wrote:\n> > Care to elaborate? We usually do not recommend specific kernel versions\n> > for our customers (who run on a variety of distributions). Thanks, M.\n>\n> You should.\n>\n>\n> http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\n>\n> Performance is literally 2X to 5X different between kernels.\n>\n>\nJosh, there seems to be an inconsistency in your blog. You say 3.10.X is\nsafe, but the graph you show with the poor performance seems to be from\n3.13.X which as I understand it is a later kernel. Can you clarify which\n3.X kernels are good to use and which are not?\n--\nMike Nolan\n",
"msg_date": "Wed, 8 Apr 2015 18:09:05 -0400",
"msg_from": "Michael Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "Hey Mike,\r\n\r\nWhat those graphs are showing is that the new kernel reduces the IO required for the same DB load. At least, that’s how we’re supposed to interpret it.\r\n\r\nI’d be curious to see a measure of the database load for both of those so we can verify that the new kernel does in fact provide better performance.\r\n\r\n-Wes\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Michael Nolan\r\nSent: Wednesday, April 08, 2015 5:09 PM\r\nTo: Josh Berkus\r\nCc: Mel Llaguno; Przemysław Deć; [email protected]\r\nSubject: Re: [PERFORM] Some performance testing?\r\n\r\n\r\n\r\nOn Wed, Apr 8, 2015 at 3:05 PM, Josh Berkus <[email protected]<mailto:[email protected]>> wrote:\r\nOn 04/07/2015 11:07 AM, Mel Llaguno wrote:\r\n> Care to elaborate? We usually do not recommend specific kernel versions\r\n> for our customers (who run on a variety of distributions). Thanks, M.\r\n\r\nYou should.\r\n\r\nhttp://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html\r\n\r\nPerformance is literally 2X to 5X different between kernels.\r\n\r\n\r\nJosh, there seems to be an inconsistency in your blog. You say 3.10.X is safe, but the graph you show with the poor performance seems to be from 3.13.X which as I understand it is a later kernel. Can you clarify which 3.X kernels are good to use and which are not?\r\n--\r\nMike Nolan\r\n",
"msg_date": "Thu, 9 Apr 2015 15:20:10 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "On Thu, Apr 9, 2015 at 11:20 AM, Wes Vaske (wvaske) <[email protected]>\nwrote:\n\n> Hey Mike,\n>\n>\n>\n> What those graphs are showing is that the new kernel reduces the IO\n> required for the same DB load. At least, that’s how we’re supposed to\n> interpret it.\n>\n>\n>\n> I’d be curious to see a measure of the database load for both of those so\n> we can verify that the new kernel does in fact provide better performance.\n>\n>\n>\n> *-Wes *\n>\n>\n>\n\nI think you're correct that I mis-interpreted the graph. I hope I was the\nonly one to do so.\n--\nMike Nolan\n",
"msg_date": "Fri, 10 Apr 2015 09:21:04 -0400",
"msg_from": "Michael Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
}
] |
[
{
"msg_contents": "This question was posted originally on http://dba.stackexchange.com/questions/96444/cant-get-dell-pe-t420-perc-h710-perform-better-than-a-macmini-with-postgresql and they suggested to post it on this mailing list.\n\nFor months I've been trying to solve a performance issue with PostgreSQL. I’m able to give you all the technical details needed.\n\nSYSTEM CONFIGURATION\n\nOur deployment machine is a Dell PowerEdge T420 with a Perc H710 RAID controller configured in this way:\n\nVD0: two 15k SAS disks (ext4, OS partition, WAL partition, RAID1)\nVD1: ten 10k SAS disks (XFS, Postgres data partition, RAID5)\n\nThis system has the following configuration:\n\nUbuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64)\n128GB RAM (DDR3, 8x16GB @1600Mhz)\ntwo Intel Xeon E5-2640 v2 @2Ghz\nDell Perc H710 with 512MB RAM (Write cache: \"WriteBack\", Read cache: \"ReadAhead\", Disk cache: \"disabled\"):\nVD0 (OS and WAL partition): two 15k SAS disks (ext4, RAID1)\nVD1 (Postgres data partition): ten 10k SAS disks (XFS, RAID5)\nPostgreSQL 9.4 (updated to the latest available version)\nmoved pg_stat_tmp to RAM disk\n\nMy personal low-cost and low-profile development machine is a MacMini configured in this way:\n\nOS X Server 10.7.5\n8GB RAM (DDR3, 2x4GB @1333Mhz)\none Intel i7 @2.2Ghz\ntwo internal 500GB 7.2k SAS HDD (non RAID) for OS partition\nexternal Promise Pegasus R1 connected with Thunderbolt v1 (512MB RAM, four 1TB 7.2k SAS HDD 32MB cache, RAID5, Write cache: \"WriteBack\", Read cache: \"ReadAhead\", Disk cache: \"enabled\", NCQ: \"enabled\")\nPostgreSQL 9.0.13 (the original built-in shipped with OS X Server)\nmoved pg_stat_tmp to RAM disk\n\nSo far I've made a lot of tuning adjustments to both machines, including the kernel ones recommended on the official Postgres doc site.\n\nAPPLICATION\n\nThe deployment machine runs a web platform which instructs Postgres to run big transactions over billions of records.
It's a platform designed for one user because system resources have to be dedicated as much as possible to a single job due to the data size (I don't like to call it big data because big data is on the order of tens of billions).\n\nISSUES\n\nI've found the deployment machine to be a lot slower than the development machine. This is paradoxical because the two machines really differ in many aspects. I've run many queries to investigate this strange behaviour and have made a lot of tuning adjustments.\n\nDuring the last two months I've prepared and executed two types of query sets:\n\nA: these sets make use of SELECT ... INTO, CREATE INDEX, CLUSTER and VACUUM ANALYZE.\nB: these sets come from our application's generated transactions and make use of SELECT over the tables created with set A.\n\nA and B were always slower on the T420. The only type of operation that was faster is VACUUM ANALYZE.\n\nRESULTS\n\nA type set:\n\nT420: went from 311 seconds (default postgresql.conf) to 195 seconds after tuning adjustments to the RAID, kernel and postgresql.conf;\nMacMini: 40 seconds.\n\nB type set:\n\nT420: 141 seconds;\nMacMini: 101 seconds.\n\nI should also mention that we adjusted the BIOS on the T420, setting all possible parameters to \"performance\" and disabling low-energy profiles.
This lowered the execution time of a type A set from 240 seconds to 211 seconds.\n\nWe have also upgraded all firmware and BIOS to the latest available versions.\n\n\nHere are two benchmarks generated using pg_test_fsync:\n\nT420 pg_test_fsync\n\n60 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 23358.758 ops/sec 43 usecs/op\n fdatasync 21417.018 ops/sec 47 usecs/op\n fsync 21112.662 ops/sec 47 usecs/op\n fsync_writethrough n/a\n open_sync 23082.764 ops/sec 43 usecs/op\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 11737.746 ops/sec 85 usecs/op\n fdatasync 19222.074 ops/sec 52 usecs/op\n fsync 18608.405 ops/sec 54 usecs/op\n fsync_writethrough n/a\n open_sync 11510.074 ops/sec 87 usecs/op\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB\nin different write open_sync sizes.)\n 1 * 16kB open_sync write 21484.546 ops/sec 47 usecs/op\n 2 * 8kB open_sync writes 11478.119 ops/sec 87 usecs/op\n 4 * 4kB open_sync writes 5885.149 ops/sec 170 usecs/op\n 8 * 2kB open_sync writes 3027.676 ops/sec 330 usecs/op\n 16 * 1kB open_sync writes 1512.922 ops/sec 661 usecs/op\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written\non a different descriptor.)\n write, fsync, close 17946.690 ops/sec 56 usecs/op\n write, close, fsync 17976.202 ops/sec 56 usecs/op\n\nNon-Sync'ed 8kB writes:\n write 343202.937 ops/sec 3 usecs/op\n\nMacMini pg_test_fsync\n\n60 seconds per test\nDirect I/O is not supported on this platform.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 3780.341 ops/sec 265 usecs/op\n fdatasync 3117.094 ops/sec 321 usecs/op\n fsync 3156.298 ops/sec 317 usecs/op\n fsync_writethrough 110.300 ops/sec 9066 usecs/op\n open_sync 3077.932 ops/sec 325 usecs/op\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 1522.400 ops/sec 657 usecs/op\n fdatasync 2700.055 ops/sec 370 usecs/op\n fsync 2670.652 ops/sec 374 usecs/op\n fsync_writethrough 98.462 ops/sec 10156 usecs/op\n open_sync 1532.235 ops/sec 653 usecs/op\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB\nin different write open_sync sizes.)\n 1 * 16kB open_sync write 2634.754 ops/sec 380 usecs/op\n 2 * 8kB open_sync writes 1547.801 ops/sec 646 usecs/op\n 4 * 4kB open_sync writes 801.542 ops/sec 1248 usecs/op\n 8 * 2kB open_sync writes 405.515 ops/sec 2466 usecs/op\n 16 * 1kB open_sync writes 204.095 ops/sec 4900 usecs/op\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written\non a different descriptor.)\n write, fsync, close 2747.345 ops/sec 364 usecs/op\n write, close, fsync 3070.877 ops/sec 326 usecs/op\n\nNon-Sync'ed 8kB writes:\n write 3275.716 ops/sec 305 usecs/op\n\nThis confirms the hardware IO capabilities of the T420 but doesn't explain why the MacMini is MUCH FASTER.\n\nNow let’s propose some query profiling times.\n\nB type sets are transactions, so it's impossible for me to post EXPLAIN ANALYZE results. I've extracted two queries from a single transaction and executed them on both systems.
Here are the results:\n\nT420\n\nQuery B_1 [55999.649 ms + 0.639 ms] http://explain.depesz.com/s/LbM\n\nQuery B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n\nMacMini\n\nQuery B_1 [56315.614 ms] http://explain.depesz.com/s/uZTx\n\nQuery B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n\n\nCOMPILING PGSQL\n\nI compiled and tested all the latest pgsql versions (9.0.19, 9.1.15, 9.2.10, 9.3.6 and 9.4.1) using different combinations of parameters for gcc-4.9.1 (gcc 4.7 for pgsql 9.0.19) and Postgres (I've also tried the clang compiler with different optimization flags, with no benefit). I followed this article but I was unable to test the -flto option due to several errors returned by make. After two days of testing I went down from 195 to 189 seconds on the T420, where the MacMini is still at 40 seconds (A set); and from 141 to 129 seconds, where the MacMini takes 101 seconds (B set). On the MacMini I used the built-in pgsql 9.0.13 version, while on the T420 I used the following optimal compile options:\n./configure CFLAGS=\"-O3 -fno-inline-functions -march=native\" --with-openssl --with-libxml --with-libxslt --with-wal-blocksize=64 --with-blocksize=32 --with-wal-segsize=64 --with-segsize=1\n\nI've also tried disabling Hyper-Threading with echo 0 > /sys/devices/system/cpu/cpuN/online, where cpuN is the N-th logical CPU, but nothing changed over the B set queries. We have 2 CPUs with 8 cores each, for a total of 16 physical cores and 16 logical cores.\n\nIt seems like the T420 doesn't push hard on a single transaction, while it is probably able to manage multiple connections much better than the MacMini.
I can’t figure out why it’s much much much slower than the MacMini on any kind of query (from data loading to data selection).",
"msg_date": "Wed, 1 Apr 2015 15:56:59 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can't get Dell PE T420 (Perc H710) perform better than a MacMini with\n PostgreSQL"
},
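One sanity check worth making on the pg_test_fsync output quoted above: the ops/sec and usecs/op columns are reciprocals of each other, so the two machines' figures can be cross-checked mechanically. A small sketch (the `usecs_per_op` helper name is ours, not part of pg_test_fsync):

```shell
#!/bin/sh
# Convert a pg_test_fsync ops/sec figure to microseconds per operation:
# usecs/op = 1,000,000 / (ops/sec), rounded to the nearest microsecond.
usecs_per_op() {
    awk -v rate="$1" 'BEGIN { printf "%.0f\n", 1000000 / rate }'
}

# T420 open_datasync (23358.758 ops/sec) should work out to ~43 usecs/op,
# MacMini open_datasync (3780.341 ops/sec) to ~265 usecs/op.
usecs_per_op 23358.758
usecs_per_op 3780.341
```

For the T420's open_datasync this prints 43 and for the MacMini's 265, matching the reported columns; the roughly 6x fsync-latency gap in the T420's favour is internally consistent, yet the MacMini still wins the benchmarks overall, which points away from fsync as the bottleneck.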
{
"msg_contents": "Hi Pietro,\n\nOn Wed, Apr 1, 2015 at 3:56 PM, Pietro Pugni <[email protected]> wrote:\n> T420: went from 311seconds (default postgresql.conf) to 195seconds doing\n> tuning adjustments over RAID, kernel and postgresql.conf;\n> MacMini: 40seconds.\n\nI'm afraid the matter is that PostgreSQL is not configured properly\n(and neither are the operating system and probably the controller;\nhowever, pg_test_fsync shows that things are not as bad there as with\npostgresql.conf).\n\nIt is pretty useless to benchmark a database using an out-of-the-box\nconfiguration. You need to configure at least the shared-memory-related,\ncheckpoint-related and autovacuum-related settings. And as a first\nstep, please compare postgresql.conf on the Mac and on the server:\nsometimes (with some Mac installers) the default postgresql.conf is\nnot the same as on the server.\n\nBest regards,\nIlya\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n",
"msg_date": "Wed, 1 Apr 2015 16:27:20 +0200",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "On Wed, Apr 1, 2015 at 9:56 AM, Pietro Pugni <[email protected]> wrote:\n\n\n> *Now let’s propose some query profiling times.*\n>\n> B type set are transactions, so it's impossible for me to post EXPLAIN\n> ANALYZE results. I've extracted two querys from a single transactions and\n> executed the twos on both system. Here are the results:\n>\n> *T420*\n>\n> Query B_1 [55999.649 ms + 0.639 ms] http://explain.depesz.com/s/LbM\n>\n> Query B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n>\n> *MacMini*\n>\n> Query B_1 [56315.614 ms] http://explain.depesz.com/s/uZTx\n>\n> Query B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n>\n\nLooking at the 2 B_2 queries (since they are so drastically different), the\nin-memory quicksorts stand out on the Dell as being *drastically* slower\nthan the disk-based sorts on your mac-mini....\n",
"msg_date": "Wed, 1 Apr 2015 10:32:00 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "Just looking at the 2 B_2 queries, I'm curious as to why the execution\nplan is different between the 2 machines. Are the optimiser stats updated on\nboth databases?\n\nRegards,\nWei Shan\n\nOn 1 April 2015 at 22:32, Aidan Van Dyk <[email protected]> wrote:\n\n> On Wed, Apr 1, 2015 at 9:56 AM, Pietro Pugni <[email protected]>\n> wrote:\n>\n>\n>> *Now let’s propose some query profiling times.*\n>>\n>> B type set are transactions, so it's impossible for me to post EXPLAIN\n>> ANALYZE results. I've extracted two querys from a single transactions\n>> and executed the twos on both system. Here are the results:\n>>\n>> *T420*\n>>\n>> Query B_1 [55999.649 ms + 0.639 ms] http://explain.depesz.com/s/LbM\n>>\n>> Query B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n>>\n>> *MacMini*\n>>\n>> Query B_1 [56315.614 ms] http://explain.depesz.com/s/uZTx\n>>\n>> Query B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n>>\n>\n> Looking at the 2 B_2 queries (since they are so drastically different),\n> the in-memory quicksorts stand out on the Dell as being *drastically*\n> slower than the disk-based sorts on your mac-mini....\n>\n>\n\n\n-- \nRegards,\nAng Wei Shan\n",
"msg_date": "Wed, 1 Apr 2015 22:44:15 +0800",
"msg_from": "Wei Shan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "On Wed, Apr 1, 2015 at 6:56 AM, Pietro Pugni <[email protected]> wrote:\n\n> This question was posted originally on\n> http://dba.stackexchange.com/questions/96444/cant-get-dell-pe-t420-perc-h710-perform-better-than-a-macmini-with-postgresql\n> and they suggested to post it on this mailing list.\n>\n> It's months that I'm trying to solve a performance issue with PostgreSQL.\n> I’m able to give you all the technical details needed.\n> *SYSTEM CONFIGURATION*\n>\n> Our deployment machine is a Dell PowerEdge T420 with a Perc H710 RAID\n> controller configured in this way:\n>\n> - two Intel Xeon E5-2640 v2 @2Ghz\n> - PostgreSQL 9.4 (updated to the latest available version)\n>\n> My personal low cost and low profile development machine is a MacMini\n> configured in this way:\n>\n> - one Intel i7 @2.2Ghz\n> - PostgreSQL 9.0.13 (the original built-in shipped with OS X Server)\n>\n>\nUsing such different versions of PostgreSQL seems like a recipe for\nfrustration.\n\n\n> Here are two benchmarks generated using pg_test_fsync:\n>\n\nThis is unlikely to be important for the type of workload you describe.\nFsyncs are the bottleneck for many short transactions, but not often the\nbottleneck for very large transactions.\n\n\n> *T420*\n>\n> Query B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n>\n> *MacMini*\n>\n> Query B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n>\n\n\nWhat collation is used for both databases? Perhaps the T420 is using a\nmuch slower collation.\n\nHow can you sort 2,951,191 but then materialize 4,458,971 rows out of\nthat? I've never seen that before. 
(Or, in the other plan, put 2,951,191\nrows into the sort from the CTE but get 4,458,971 out of the sort?\n\nCheers,\n\nJeff",
"msg_date": "Wed, 1 Apr 2015 09:38:14 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "Ok, a quick view on the system, and some things that may be important to note:\n\n> Our deployment machine is a Dell PowerEdge T420 with a Perc H710 RAID\n> controller configured in this way:\n> \n> * VD0: two 15k SAS disks (ext4, OS partition, WAL partition,\n> RAID1)\n> * VD1: ten 10k SAS disks (XFS, Postgres data partition, RAID5)\n> \n\nWell...usually RAID5 have the worst performance in writing...EVER!!! Have you tested this in another raid configuration? RAID10 is usually the best bet.\n\n> \n> \n> This system has the following configuration:\n> \n> * Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64)\n> * 128GB RAM (DDR3, 8x16GB @1600Mhz)\n> * two Intel Xeon E5-2640 v2 @2Ghz\n> * Dell Perc H710 with 512MB RAM (Write cache: \"WriteBack\", Read\n> cache: \"ReadAhead\", Disk cache: \"disabled\"):\n> * VD0 (OS and WAL partition): two 15k SAS disks (ext4, RAID1)\n> * VD1 (Postgres data partition): ten 10k SAS disks (XFS,\n> RAID5)\n> * PostgreSQL 9.4 (updated to the latest available version)\n> * moved pg_stat_tmp to RAM disk\n> \n> \n[...]> versions.\n> \nYou did not mention any \"postgres\" configuration at all. If you let the default checkpoint_segments=3, that would be an IO hell for your disk controler...and the RAID5 making things worst...Can you show us the values of:\n\ncheckpoint_segments\nshared_buffers\nwork_mem\nmaintenance_work_mem\neffective_io_concurrency\n\nI would start from there, few changes, and check again. I would change the RAID first of all things, and try those tests again.\n\nCheers.\nGerardo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 Apr 2015 23:19:38 -0300 (ART)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better\n than a MacMini with PostgreSQL"
},
{
"msg_contents": "Hi Gerardo,\nthank you for your response.\nAt the moment I can’t switch to RAID10. I know it has best performance, but both systems have RAID5 and MacMini has a consumer desktop RAID solution while T420 has a server-grade one.\nAnyway, I used two configurations for each system: one for data loading operations and the other one for any other kind of operation (SELECT etc.). These configurations were made studying different combinations. I’ve changed kernel parameters as stated in the official Postgres documentation ( www.postgresql.org/docs/9.4/static/kernel-resources.html ).\nI copy and paste here the various postgresql.conf involved:\n\nT420\nNormal operations\nautovacuum = on\nmaintenance_work_mem = 512MB\nwork_mem = 512MB\nwal_buffers = 64MB\neffective_cache_size = 64GB # this helps A LOT in disk write speed when creating indexes\nshared_buffers = 32GB\ncheckpoint_segments = 2000\ncheckpoint_completion_target = 1.0\neffective_io_concurrency = 0 # 1 doesn’t make any substantial difference\nmax_connections = 10 # 20 doesn’t make any difference\n\nData loading (same as above with the following changes):\nautovacuum = off\nmaintenance_work_mem = 64GB\n\n\nMacMini\nNormal operations\nautovacuum = on\nmaintenance_work_mem = 128MB\nwork_mem = 32MB\nwal_buffers = 32MB\neffective_cache_size = 800MB\nshared_buffers = 512MB\ncheckpoint_segments = 32\ncheckpoint_completion_target = 1.0\neffective_io_concurrency = 1\nmax_connections = 20\n\nData loading (same as above with the following changes):\nautovacuum = off\nmaintenance_work_mem = 6GB\n\n\nBest regards,\n Pietro\n\n\nIl giorno 02/apr/2015, alle ore 04:19, Gerardo Herzig <[email protected]> ha scritto:\n\n> Ok, a quick view on the system, and some things that may be important to note:\n> \n>> Our deployment machine is a Dell PowerEdge T420 with a Perc H710 RAID\n>> controller configured in this way:\n>> \n>> * VD0: two 15k SAS disks (ext4, OS partition, WAL partition,\n>> RAID1)\n>> * VD1: ten 10k SAS 
disks (XFS, Postgres data partition, RAID5)\n>> \n> \n> Well...usually RAID5 have the worst performance in writing...EVER!!! Have you tested this in another raid configuration? RAID10 is usually the best bet.\n> \n>> \n>> \n>> This system has the following configuration:\n>> \n>> * Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64)\n>> * 128GB RAM (DDR3, 8x16GB @1600Mhz)\n>> * two Intel Xeon E5-2640 v2 @2Ghz\n>> * Dell Perc H710 with 512MB RAM (Write cache: \"WriteBack\", Read\n>> cache: \"ReadAhead\", Disk cache: \"disabled\"):\n>> * VD0 (OS and WAL partition): two 15k SAS disks (ext4, RAID1)\n>> * VD1 (Postgres data partition): ten 10k SAS disks (XFS,\n>> RAID5)\n>> * PostgreSQL 9.4 (updated to the latest available version)\n>> * moved pg_stat_tmp to RAM disk\n>> \n>> \n> [...]> versions.\n>> \n> You did not mention any \"postgres\" configuration at all. If you let the default checkpoint_segments=3, that would be an IO hell for your disk controler...and the RAID5 making things worst...Can you show us the values of:\n> \n> checkpoint_segments\n> shared_buffers\n> work_mem\n> maintenance_work_mem\n> effective_io_concurrency\n> \n> I would start from there, few changes, and check again. I would change the RAID first of all things, and try those tests again.\n> \n> Cheers.\n> Gerardo",
"msg_date": "Thu, 2 Apr 2015 12:33:15 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
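[Editor's note] The `checkpoint_segments` values quoted above directly bound how much WAL can accumulate on disk. A quick sketch of that relationship, using the pre-9.5 rule of thumb from the PostgreSQL docs ((2 + checkpoint_completion_target) × checkpoint_segments × 16MB) with the exact settings from the message:

```python
def max_wal_size_gb(checkpoint_segments, checkpoint_completion_target, segment_mb=16):
    """Rule-of-thumb upper bound on pg_xlog size for pre-9.5 PostgreSQL:
    (2 + checkpoint_completion_target) * checkpoint_segments segments of 16MB."""
    segments = (2 + checkpoint_completion_target) * checkpoint_segments
    return segments * segment_mb / 1024.0

# Settings quoted in the thread
print(max_wal_size_gb(2000, 1.0))  # T420: 93.75 GB of WAL can accumulate
print(max_wal_size_gb(32, 1.0))    # MacMini: 1.5 GB
```

So the T420 configuration allows WAL to grow to roughly 94GB on the RAID1 pair holding the WAL partition, which is worth keeping in mind when sizing that volume.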
{
"msg_contents": "Hi Jeff,\nthank you for your response.\nI’m using Postgres 9.0 on MacMini because I’ve noticed that it’s quite fast compared to different Ubuntu machines on which I’ve worked with different (and more performant) hardware.\nThe built-in Postgres version on OS X Server is impossible to update. I should stop it and install a parallel and independent distribution which has not been optimized by Apple. On opensource.appel.com they have different Postgres versions but the latest one is 9.2.x. They stopped updating it in 2012.\npg_test_fsync tells me that T420 disk iops are ~7 times faster than MacMini, which is ok, but queries run ~2-5 times slower (for brevity I didn’t report all test results in my first mail).\n\nI’ve searched just now what a collation is because I’ve never explicitly used one before, so I think it uses the default one.\n\nB_2 query is of the form:\nWITH soggetti AS (\n SELECT ... FROM ... GROUP BY ...)\nSELECT ... INTO ... FROM soggetti, ... WHERE ... \n\n(I omit the … part because they’re not relevant)\n\nBest regards,\n Pietro\n\nPS it’s the first time for me on this list so I don’t know if you read the other answers. I reported the postgresql.conf for both systems\n\n\n\n\nIl giorno 01/apr/2015, alle ore 18:38, Jeff Janes <[email protected]> ha scritto:\n\n> On Wed, Apr 1, 2015 at 6:56 AM, Pietro Pugni <[email protected]> wrote:\n> This question was posted originally on http://dba.stackexchange.com/questions/96444/cant-get-dell-pe-t420-perc-h710-perform-better-than-a-macmini-with-postgresql and they suggested to post it on this mailing list.\n> \n> It's months that I'm trying to solve a performance issue with PostgreSQL. 
I’m able to give you all the technical details needed.\n> \n> SYSTEM CONFIGURATION\n> \n> Our deployment machine is a Dell PowerEdge T420 with a Perc H710 RAID controller configured in this way:\n> \n> two Intel Xeon E5-2640 v2 @2Ghz\n> PostgreSQL 9.4 (updated to the latest available version)\n> My personal low cost and low profile development machine is a MacMini configured in this way:\n> \n> one Intel i7 @2.2Ghz\n> PostgreSQL 9.0.13 (the original built-in shipped with OS X Server)\n> \n> Using such different versions of PostgreSQL seems like a recipe for frustration.\n> \n> \n> Here are two benchmarks generated using pg_test_fsync:\n> \n> \n> This is unlikely to be important for the type of workload you describe. Fsyncs are the bottleneck for many short transactions, but not often the bottleneck for very large transactions.\n> \n> \n> \n> T420\n> \n> Query B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n> \n> MacMini\n> \n> Query B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n> \n> \n> \n> What collation is used for both databases? Perhaps the T420 is using a much slower collation.\n> \n> How can you sort 2,951,191 but then materialize 4,458,971 rows out of that? I've never seen that before. (Or, in the other plan, put 2,951,191 rows into the sort from the CTE but get 4,458,971 out of the sort?\n> \n> Cheers,\n> \n> Jeff",
"msg_date": "Thu, 2 Apr 2015 12:47:38 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "Hi Wei Shan,\nThank you for your response.\nQuery B was run after initializing the DB ex-novo, doing VACUUM ANALYZE before and after creating and clustering indexes.\nBy the way, these results are consistent through time and are reproducible, so it’s not a matter of the statistics collector (I guess).\nYour observation is the same as the one made at dba.stackexchange.com, and this makes me think that the built-in Postgres of OS X Server is truly optimized. \n\nBest regards,\n Pietro\n\nPS on the other response I reported both postgresql.conf \n\n\nOn 01/apr/2015, at 16:44, Wei Shan <[email protected]> wrote:\n\n> Just looking at the 2 B_2 queries, I'm curious as to why is the execution plan different between the 2 machines. Is the optimiser stats updated on both databases?\n> \n> Regards,\n> Wei Shan\n> \n> On 1 April 2015 at 22:32, Aidan Van Dyk <[email protected]> wrote:\n> On Wed, Apr 1, 2015 at 9:56 AM, Pietro Pugni <[email protected]> wrote:\n> \n> Now let’s propose some query profiling times.\n> \n> B type set are transactions, so it's impossible for me to post EXPLAIN ANALYZE results. I've extracted two querys from a single transactions and executed the twos on both system. 
Here are the results:\n> \n> T420\n> \n> Query B_1 [55999.649 ms + 0.639 ms] http://explain.depesz.com/s/LbM\n> \n> Query B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n> \n> MacMini\n> \n> Query B_1 [56315.614 ms] http://explain.depesz.com/s/uZTx\n> \n> Query B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n> \n> \n> Looking at the 2 B_2 queries (since they are so drastically different), the in-memory quicksorts stand out on the Dell as being *drastically* slower than the disk-based sorts on your mac-mini....\n> \n> \n> \n> \n> -- \n> Regards,\n> Ang Wei Shan",
"msg_date": "Thu, 2 Apr 2015 12:51:33 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "Hi Ilya,\nthank your for your response.\nBoth system were configured for each test I’ve done. On T420 I’ve optimized the kernel following the official Postgres documentation ( http://www.postgresql.org/docs/9.4/static/kernel-resources.html ):\nkernel.shmmax=68719476736\nkernel.shmall=16777216\nvm.overcommit_memory=2\nvm.overcommit_ratio=90\n\n\nRAID controllers were configured as following:\n- Write cache: WriteBack\n- Read cache: ReadAhead\n- Disk cache (only T420): disabled to take full advantage of WriteBack cache (BBU is charged and working)\n- NCQ (only MacMini because it’s a SATA option): enabled (this affects a lot the overall performance)\n\nFor postgresql.conf:\n\nT420\nNormal operations\nautovacuum = on\nmaintenance_work_mem = 512MB\nwork_mem = 512MB\nwal_buffers = 64MB\neffective_cache_size = 64GB # this helps A LOT in disk write speed when creating indexes\nshared_buffers = 32GB\ncheckpoint_segments = 2000\ncheckpoint_completion_target = 1.0\neffective_io_concurrency = 0 # 1 doesn’t make any substantial difference\nmax_connections = 10 # 20 doesn’t make any difference\n\nData loading (same as above with the following changes):\nautovacuum = off\nmaintenance_work_mem = 64GB\n\n\nMacMini\nNormal operations\nautovacuum = on\nmaintenance_work_mem = 128MB\nwork_mem = 32MB\nwal_buffers = 32MB\neffective_cache_size = 800MB\nshared_buffers = 512MB\ncheckpoint_segments = 32\ncheckpoint_completion_target = 1.0\neffective_io_concurrency = 1\nmax_connections = 20\n\nData loading (same as above with the following changes):\nautovacuum = off\nmaintenance_work_mem = 6GB\n\n\nBest regards,\n Pietro\n\n\n\nIl giorno 01/apr/2015, alle ore 16:27, Ilya Kosmodemiansky <[email protected]> ha scritto:\n\n> Hi Pietro,\n> \n> On Wed, Apr 1, 2015 at 3:56 PM, Pietro Pugni <[email protected]> wrote:\n>> T420: went from 311seconds (default postgresql.conf) to 195seconds doing\n>> tuning adjustments over RAID, kernel and postgresql.conf;\n>> MacMini: 40seconds.\n> 
\n> I'am afraid, the matter is, that PostgreSQL is not configured properly\n> (and so do operating system and probably controller, however\n> pg_test_fsync shows that things are not so bad there as with\n> postgresql.conf).\n> \n> It is pretty useless to benchmark a database using out-of-the-box\n> configuration. You need at least configure shared memory related,\n> checkpoints-related and autovacuum-related settings. And as a first\n> step, please compare postgresql.conf on Mac and on the server:\n> sometimes (with some mac installers) default postgresql.conf can be\n> not the same as on server.\n> \n> Best regards,\n> Ilya\n> \n> \n> -- \n> Ilya Kosmodemiansky,\n> \n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]",
"msg_date": "Thu, 2 Apr 2015 12:57:22 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
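[Editor's note] The two kernel shm settings quoted above are expressed in different units (`shmmax` in bytes, `shmall` in 4KiB pages), which is easy to get wrong. A consistency check of the exact values from the message:

```python
PAGE_SIZE = 4096  # bytes; typical page size on x86-64 Linux (an assumption)

shmmax = 68719476736  # kernel.shmmax: max size of one shared memory segment, bytes
shmall = 16777216     # kernel.shmall: total shared memory limit, in pages

print(shmmax / 2**30)              # 64.0 GiB per-segment limit
print(shmall * PAGE_SIZE / 2**30)  # 64.0 GiB total -> consistent with shmmax
```

Both limits come out to 64GiB, i.e. half the machine's 128GB of RAM, which is consistent with the `vm.overcommit` settings shown.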
{
"msg_contents": "Hi Pietro,\n\nModern CPUs try to be too smart.\n\nTry to run this code to disable the CPUs' C-states:\n\n----> setcpulatency.c \n\n#include <stdio.h>\n#include <stdlib.h>\n#include <fcntl.h>\n#include <stdint.h>\n#include <unistd.h>\n\nint main(int argc, char **argv) {\n int32_t l;\n int fd;\n\n if (argc != 2) {\n fprintf(stderr, \"Usage: %s <latency in us>\\n\", argv[0]);\n return 2;\n }\n\n l = atoi(argv[1]);\n printf(\"setting latency to %d us\\n\", l);\n fd = open(\"/dev/cpu_dma_latency\", O_WRONLY);\n if (fd < 0) {\n perror(\"open /dev/cpu_dma_latency\");\n return 1;\n }\n \n if (write(fd, &l, sizeof(l)) != sizeof(l)) {\n perror(\"write to /dev/cpu_dma_latency\");\n return 1;\n }\n \n while (1) pause();\n}\n\n\n---->\n\nYou can use i7z (https://code.google.com/p/i7z/) to see the percentage of CPU power being used.\nChanging a CPU from C1 to C0 takes quite some time and is not optimal for a DB workload (if you need \nhigh throughput at any given moment).\n\nI see a ~65% boost when running './setcpulatency 0'.\n\nTigran.\n\n----- Original Message -----\n> From: \"Pietro Pugni\" <[email protected]>\n> To: [email protected]\n> Cc: \"pgsql-performance\" <[email protected]>\n> Sent: Thursday, April 2, 2015 12:57:22 PM\n> Subject: Re: [PERFORM] Can't get Dell PE T420 (Perc H710) perform better than a MacMini with PostgreSQL\n\n> Hi Ilya,\n> thank your for your response.\n> Both system were configured for each test I’ve done. 
On T420 I’ve optimized the\n> kernel following the official Postgres documentation (\n> http://www.postgresql.org/docs/9.4/static/kernel-resources.html ):\n> kernel.shmmax=68719476736\n> kernel.shmall=16777216\n> vm.overcommit_memory=2\n> vm.overcommit_ratio=90\n> \n> \n> RAID controllers were configured as following:\n> - Write cache: WriteBack\n> - Read cache: ReadAhead\n> - Disk cache (only T420): disabled to take full advantage of WriteBack cache\n> (BBU is charged and working)\n> - NCQ (only MacMini because it’s a SATA option): enabled (this affects a lot the\n> overall performance)\n> \n> For postgresql.conf:\n> \n> T420\n> Normal operations\n> autovacuum = on\n> maintenance_work_mem = 512MB\n> work_mem = 512MB\n> wal_buffers = 64MB\n> effective_cache_size = 64GB # this helps A LOT in disk write speed when creating\n> indexes\n> shared_buffers = 32GB\n> checkpoint_segments = 2000\n> checkpoint_completion_target = 1.0\n> effective_io_concurrency = 0 # 1 doesn’t make any substantial difference\n> max_connections = 10 # 20 doesn’t make any difference\n> \n> Data loading (same as above with the following changes):\n> autovacuum = off\n> maintenance_work_mem = 64GB\n> \n> \n> MacMini\n> Normal operations\n> autovacuum = on\n> maintenance_work_mem = 128MB\n> work_mem = 32MB\n> wal_buffers = 32MB\n> effective_cache_size = 800MB\n> shared_buffers = 512MB\n> checkpoint_segments = 32\n> checkpoint_completion_target = 1.0\n> effective_io_concurrency = 1\n> max_connections = 20\n> \n> Data loading (same as above with the following changes):\n> autovacuum = off\n> maintenance_work_mem = 6GB\n> \n> \n> Best regards,\n> Pietro\n> \n> \n> \n> Il giorno 01/apr/2015, alle ore 16:27, Ilya Kosmodemiansky\n> <[email protected]> ha scritto:\n> \n>> Hi Pietro,\n>> \n>> On Wed, Apr 1, 2015 at 3:56 PM, Pietro Pugni <[email protected]> wrote:\n>>> T420: went from 311seconds (default postgresql.conf) to 195seconds doing\n>>> tuning adjustments over RAID, kernel and 
postgresql.conf;\n>>> MacMini: 40seconds.\n>> \n>> I'am afraid, the matter is, that PostgreSQL is not configured properly\n>> (and so do operating system and probably controller, however\n>> pg_test_fsync shows that things are not so bad there as with\n>> postgresql.conf).\n>> \n>> It is pretty useless to benchmark a database using out-of-the-box\n>> configuration. You need at least configure shared memory related,\n>> checkpoints-related and autovacuum-related settings. And as a first\n>> step, please compare postgresql.conf on Mac and on the server:\n>> sometimes (with some mac installers) default postgresql.conf can be\n>> not the same as on server.\n>> \n>> Best regards,\n>> Ilya\n>> \n>> \n>> --\n>> Ilya Kosmodemiansky,\n>> \n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n> > [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Apr 2015 14:22:57 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better\n than a MacMini with PostgreSQL"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 2, 2015 at 12:47 PM, Pietro Pugni <[email protected]> wrote:\n> Hi Jeff,\n> thank you for your response.\n> I’m using Postgres 9.0 on MacMini because I’ve noticed that it’s quite fast\n> compared to different Ubuntu machines on which I’ve worked with different\n> (and more performant) hardware.\n> The built-in Postgres version on OS X Server is impossible to update. I\n> should stop it and install a parallel and independent distribution which has\n> not been optimized by Apple. On opensource.appel.com they have different\n> Postgres versions but the latest one is 9.2.x. They stopped updating it in\n> 2012.\nIf you want you can compile 9.0 on OSX and double check.\nI don't remember well but ITSM that a fsync used by psql was a noop on OSX.\n\n> pg_test_fsync tells me that T420 disk iops are ~7 times faster than MacMini,\n> which is ok, but queries run ~2-5 times slower (for brevity I didn’t report\n> all test results in my first mail).\n\n>\n> I’ve searched just now what a collation is because I’ve never explicitly\n> used one before, so I think it uses the default one.\n\nWhat's the output of free and sysctl -a | grep vm.zone_reclaim_mode\n\nSearch the mailing list for zone_reclaim_mode there's some tips.\n\n\nFor testing you can also use the mac mini config with the dell, at\nleast it should give you the same plan.\nWith your example disks don't seem to matter, it's all in memory.\n\nKeep in mind that a psql query is still single thread so the mac and\nthe dell should get more or less the same speed for in memory queries.\n\n>\n> B_2 query is of the form:\n> WITH soggetti AS (\n> SELECT ... FROM ... GROUP BY ...)\n> SELECT ... INTO ... FROM soggetti, ... WHERE ...\n>\n> (I omit the … part because they’re not relevant)\n>\n> Best regards,\n> Pietro\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Apr 2015 14:29:09 +0200",
"msg_from": "didier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "On Thu, Apr 2, 2015 at 6:33 AM, Pietro Pugni <[email protected]> wrote:\n\n\n> *T420*\n> work_mem = 512MB\n>\n\n\n> *MacMini*\n> work_mem = 32MB\n>\n\nSo that is why the T420 does memory sorts and the mini does disk sorts.\n\nI'd start looking at why memory sorts on the T420 is so slow. Check your\nnuma settings, etc (as already mentioned).\n\nFor a drastic test, disable the 2nd socket on the dell, and just use one\n(eliminate any numa/QPI costs) and see how it compares to the no-numa\nMacMini.\n\nIf you want to see how bad the NUMA/QPI is, play with stream to benchmark\nmemory performance.\n\na.\n\nOn Thu, Apr 2, 2015 at 6:33 AM, Pietro Pugni <[email protected]> wrote: T420work_mem = 512MB MacMiniwork_mem = 32MBSo that is why the T420 does memory sorts and the mini does disk sorts.I'd start looking at why memory sorts on the T420 is so slow. Check your numa settings, etc (as already mentioned).For a drastic test, disable the 2nd socket on the dell, and just use one (eliminate any numa/QPI costs) and see how it compares to the no-numa MacMini.If you want to see how bad the NUMA/QPI is, play with stream to benchmark memory performance.a.",
"msg_date": "Thu, 2 Apr 2015 08:51:14 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "Hi Aidan,\n\n> \n> T420\n> work_mem = 512MB\n> \n> MacMini\n> work_mem = 32MB\n> \n> So that is why the T420 does memory sorts and the mini does disk sorts.\n> \n> I'd start looking at why memory sorts on the T420 is so slow. Check your numa settings, etc (as already mentioned).\n> \n> For a drastic test, disable the 2nd socket on the dell, and just use one (eliminate any numa/QPI costs) and see how it compares to the no-numa MacMini.\n> \n\nthe command \ndmesg | grep -i numa\ndoesn’t display me anything. I think T420 hasn’t NUMA on it. Is there a way to enable it from Ubuntu? I don’t have immediate access to BIOS (server is in another location).\nFor QPI I don’t know what to do. Please, can you give me more details?\n\n> If you want to see how bad the NUMA/QPI is, play with stream to benchmark memory performance.\n\n\nWith stream you refer to this: https://sites.utexas.edu/jdm4372/tag/stream-benchmark/ ? Do you suggest me some way to do this kind of tests?\n\nThank you very much \n Pietro\n\n\nHi Aidan, T420work_mem = 512MB MacMiniwork_mem = 32MBSo that is why the T420 does memory sorts and the mini does disk sorts.I'd start looking at why memory sorts on the T420 is so slow. Check your numa settings, etc (as already mentioned).For a drastic test, disable the 2nd socket on the dell, and just use one (eliminate any numa/QPI costs) and see how it compares to the no-numa MacMini.the command dmesg | grep -i numadoesn’t display me anything. I think T420 hasn’t NUMA on it. Is there a way to enable it from Ubuntu? I don’t have immediate access to BIOS (server is in another location).For QPI I don’t know what to do. Please, can you give me more details?If you want to see how bad the NUMA/QPI is, play with stream to benchmark memory performance.\nWith stream you refer to this: https://sites.utexas.edu/jdm4372/tag/stream-benchmark/ ? Do you suggest me some way to do this kind of tests?Thank you very much Pietro",
"msg_date": "Thu, 2 Apr 2015 15:23:37 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "Il giorno 02/apr/2015, alle ore 14:29, didier <[email protected]> ha scritto:\n\n> Hi,\n> \n> On Thu, Apr 2, 2015 at 12:47 PM, Pietro Pugni <[email protected]> wrote:\n>> Hi Jeff,\n>> thank you for your response.\n>> I’m using Postgres 9.0 on MacMini because I’ve noticed that it’s quite fast\n>> compared to different Ubuntu machines on which I’ve worked with different\n>> (and more performant) hardware.\n>> The built-in Postgres version on OS X Server is impossible to update. I\n>> should stop it and install a parallel and independent distribution which has\n>> not been optimized by Apple. On opensource.appel.com they have different\n>> Postgres versions but the latest one is 9.2.x. They stopped updating it in\n>> 2012.\n> If you want you can compile 9.0 on OSX and double check.\n> I don't remember well but ITSM that a fsync used by psql was a noop on OSX.\n> \nYou’re referring to disk scheduler? I’ve tried to change it on T420 with no significant variations over performance.\nI’ve also tried different fsync options with no improvements.\n\n>> pg_test_fsync tells me that T420 disk iops are ~7 times faster than MacMini,\n>> which is ok, but queries run ~2-5 times slower (for brevity I didn’t report\n>> all test results in my first mail).\n> \n>> \n>> I’ve searched just now what a collation is because I’ve never explicitly\n>> used one before, so I think it uses the default one.\n> \n> What's the output of free and sysctl -a | grep vm.zone_reclaim_mode\n> \n> Search the mailing list for zone_reclaim_mode there's some tips.\n> \nvm.zone_reclaim_mode = 0\n\nI’ve also set these options in /etc/sysctl.conf:\nkernel.shmmax=68719476736\nkernel.shmall=16777216\nvm.overcommit_memory=2\nvm.overcommit_ratio=90\n\nI’ll search the mailing list.\n\n> For testing you can also use the mac mini config with the dell, at\n> least it should give you the same plan.\n> With your example disks don't seem to matter, it's all in memory.\nThe same transaction took 106s on 
MacMini; 129s on T420 with my optimized configuration; 180s on T420 using MacMini configuration.\nQuery plans for B_1 and B_2 queries with the two configurations on T420:\n\nT420 with optimal postgresql.conf\nQuery B_1 [55999.649 ms + 0.639 ms] http://explain.depesz.com/s/LbM\nQuery B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n\n\nT420 with MacMini postgresql.conf\nQuery B_1 [51280.208ms + 0.699ms] http://explain.depesz.com/s/wlb\nQuery B_2 [177278.205ms + 0.428ms] http://explain.depesz.com/s/rzr\n\nMacMini\nQuery B_1 [56315.614 ms] http://explain.depesz.com/s/uZTx\nQuery B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n\n\n> Keep in mind that a psql query is still single thread so the mac and\n> the dell should get more or less the same speed for in memory queries.\nYes I know ;) With 128GB I try to maximize RAM usage, but it’s difficult to fully understand how to achieve this.\n\nThank you again,\n Pietro\nIl giorno 02/apr/2015, alle ore 14:29, didier <[email protected]> ha scritto:Hi,On Thu, Apr 2, 2015 at 12:47 PM, Pietro Pugni <[email protected]> wrote:Hi Jeff,thank you for your response.I’m using Postgres 9.0 on MacMini because I’ve noticed that it’s quite fastcompared to different Ubuntu machines on which I’ve worked with different(and more performant) hardware.The built-in Postgres version on OS X Server is impossible to update. Ishould stop it and install a parallel and independent distribution which hasnot been optimized by Apple. On opensource.appel.com they have differentPostgres versions but the latest one is 9.2.x. They stopped updating it in2012.If you want you can compile 9.0 on OSX and double check.I don't remember well but ITSM that a fsync used by psql was a noop on OSX.You’re referring to disk scheduler? 
I’ve tried to change it on T420 with no significant variations over performance.I’ve also tried different fsync options with no improvements.pg_test_fsync tells me that T420 disk iops are ~7 times faster than MacMini,which is ok, but queries run ~2-5 times slower (for brevity I didn’t reportall test results in my first mail).I’ve searched just now what a collation is because I’ve never explicitlyused one before, so I think it uses the default one.What's the output of free and sysctl -a | grep vm.zone_reclaim_modeSearch the mailing list for zone_reclaim_mode there's some tips.vm.zone_reclaim_mode = 0I’ve also set these options in /etc/sysctl.conf:kernel.shmmax=68719476736kernel.shmall=16777216vm.overcommit_memory=2vm.overcommit_ratio=90I’ll search the mailing list.For testing you can also use the mac mini config with the dell, atleast it should give you the same plan.With your example disks don't seem to matter, it's all in memory.The same transaction took 106s on MacMini; 129s on T420 with my optimized configuration; 180s on T420 using MacMini configuration.Query plans for B_1 and B_2 queries with the two configurations on T420:T420 with optimal postgresql.confQuery B_1 [55999.649 ms + 0.639 ms] http://explain.depesz.com/s/LbMQuery B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06T420 with MacMini postgresql.confQuery B_1 [51280.208ms + 0.699ms] http://explain.depesz.com/s/wlbQuery B_2 [177278.205ms + 0.428ms] http://explain.depesz.com/s/rzrMacMiniQuery B_1 [56315.614 ms] http://explain.depesz.com/s/uZTxQuery B_2 [44890.813 ms] http://explain.depesz.com/s/y7DkKeep in mind that a psql query is still single thread so the mac andthe dell should get more or less the same speed for in memory queries.Yes I know ;) With 128GB I try to maximize RAM usage, but it’s difficult to fully understand how to achieve this.Thank you again, Pietro",
"msg_date": "Thu, 2 Apr 2015 15:52:10 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "On Thu, Apr 2, 2015 at 9:23 AM, Pietro Pugni <[email protected]> wrote:\n\n\n> the command\n> dmesg | grep -i numa\n> doesn’t display me anything. I think T420 hasn’t NUMA on it. Is there a\n> way to enable it from Ubuntu? I don’t have immediate access to BIOS (server\n> is in another location).\n>\n\nNUMA stands for \"Non-Uniform-Memory-Access\" . It's basically the \"label\"\nfor systems which have memory attached to different cpu sockets, such that\naccessing all of the memory from a paritciular cpu thread has different\ncosts based on where the actual memory is located (i.e. on some other\nsocket, or the local socket).\n\n\n> For QPI I don’t know what to do. Please, can you give me more details?\n>\n\nQPI is the the intel \"QuickPath Interconnect\". It's the communication path\nbetween CPU sockets. Memory ready by one cpu thread that has to come from\nanother cpu socket's memory controller goes through QPI.\n\nGoogle has lots of info on these, and how they impact performance, etc.\n\n> If you want to see how bad the NUMA/QPI is, play with stream to benchmark\n> memory performance.\n>\n>\n> With stream you refer to this:\n> https://sites.utexas.edu/jdm4372/tag/stream-benchmark/ ? Do you suggest\n> me some way to do this kind of tests?\n>\n\nYa, that's the one. I don't have specific tests in mind.\n\nA more simple \"overview\" might be \"numactl --hardware\"\n\na.\n\nOn Thu, Apr 2, 2015 at 9:23 AM, Pietro Pugni <[email protected]> wrote: the command dmesg | grep -i numadoesn’t display me anything. I think T420 hasn’t NUMA on it. Is there a way to enable it from Ubuntu? I don’t have immediate access to BIOS (server is in another location).NUMA stands for \"Non-Uniform-Memory-Access\" . It's basically the \"label\" for systems which have memory attached to different cpu sockets, such that accessing all of the memory from a paritciular cpu thread has different costs based on where the actual memory is located (i.e. 
on some other socket, or the local socket). For QPI I don’t know what to do. Please, can you give me more details?QPI is the the intel \"QuickPath Interconnect\". It's the communication path between CPU sockets. Memory ready by one cpu thread that has to come from another cpu socket's memory controller goes through QPI.Google has lots of info on these, and how they impact performance, etc.If you want to see how bad the NUMA/QPI is, play with stream to benchmark memory performance.\nWith stream you refer to this: https://sites.utexas.edu/jdm4372/tag/stream-benchmark/ ? Do you suggest me some way to do this kind of tests?Ya, that's the one. I don't have specific tests in mind.A more simple \"overview\" might be \"numactl --hardware\"a.",
"msg_date": "Thu, 2 Apr 2015 10:00:39 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "Hi Tigran,\n\n> The modern CPUs trying to be too smart.\n> \n> try to run this code to disable CPUs c-states:\n> \n> ----> setcpulatency.c \n> \n> #include <stdio.h>\n> #include <fcntl.h>\n> #include <stdint.h>\n> \n> int main(int argc, char **argv) {\n> int32_t l;\n> int fd;\n> \n> if (argc != 2) {\n> fprintf(stderr, \"Usage: %s <latency in us>\\n\", argv[0]);\n> return 2;\n> }\n> \n> l = atoi(argv[1]);\n> printf(\"setting latency to %d us\\n\", l);\n> fd = open(\"/dev/cpu_dma_latency\", O_WRONLY);\n> if (fd < 0) {\n> perror(\"open /dev/cpu_dma_latency\");\n> return 1;\n> }\n> \n> if (write(fd, &l, sizeof(l)) != sizeof(l)) {\n> perror(\"write to /dev/cpu_dma_latency\");\n> return 1;\n> }\n> \n> while (1) pause();\n> }\n> \n> \n> ——>\n> \nyour C code should be equivalent to the following:\necho 0 > /dev/cpu_dma_latency\nRight?\nI executed the above command but time execution increase of about 2 seconds over 129seconds (I’ve executed the transaction several times repeating the procedure of restarting db and redoing transaction). 
With setting echo 1 > /dev/cpu_dma_latency it returns to 129seconds.\n\n> you can use i7z (https://code.google.com/p/i7z/) to see the percentage of CPU power to be used.\n\nI’ve installed i7z-GUI but it reports the following and crashes with segmentation fault (T420 has Intel Xeon, not i-series):\ni7z DEBUG: i7z version: svn-r77-(20-Nov-2011)\ni7z DEBUG: Found Intel Processor\ni7z DEBUG: Stepping 4\ni7z DEBUG: Model e\ni7z DEBUG: Family 6\ni7z DEBUG: Processor Type 0\ni7z DEBUG: Extended Model 3\ni7z DEBUG: msr = Model Specific Register\ni7z DEBUG: detected a newer model of ivy bridge processor\ni7z DEBUG: my coder doesn't know about it, can you send the following info to him?\ni7z DEBUG: model e, extended model 3, proc_family 6\ni7z DEBUG: msr device files DONOT exist, trying out a makedev script\ni7z DEBUG: modprobbing for msr\n[1]+ Segmentation fault (core dumped) i7z_GUI\n\n\n> Changing CPU from C1 to C0 takes quite some time and for DB workload not optimal (if you need a \n> high throughout and any given moment).\n> \n> I see ~65% boost when run './setcpulatency 0'.\n> \n> Tigran.\n> \nWith “takes quite some time” you mean that it will take some time to take effect?\n\nThank you a lot for your help.\nBest regards,\n Pietro\n\n\nHi Tigran,The modern CPUs trying to be too smart.try to run this code to disable CPUs c-states:----> setcpulatency.c #include <stdio.h>#include <fcntl.h>#include <stdint.h>int main(int argc, char **argv) { int32_t l; int fd; if (argc != 2) { fprintf(stderr, \"Usage: %s <latency in us>\\n\", argv[0]); return 2; } l = atoi(argv[1]); printf(\"setting latency to %d us\\n\", l); fd = open(\"/dev/cpu_dma_latency\", O_WRONLY); if (fd < 0) { perror(\"open /dev/cpu_dma_latency\"); return 1; } if (write(fd, &l, sizeof(l)) != sizeof(l)) { perror(\"write to /dev/cpu_dma_latency\"); return 1; } while (1) pause();}——>your C code should be equivalent to the following:echo 0 > /dev/cpu_dma_latencyRight?I executed the above command but time execution 
increase of about 2 seconds over 129seconds (I’ve executed the transaction several times repeating the procedure of restarting db and redoing transaction). With setting echo 1 > /dev/cpu_dma_latency it returns to 129seconds.you can use i7z (https://code.google.com/p/i7z/) to see the percentage of CPU power to be used.I’ve installed i7z-GUI but it reports the following and crashes with segmentation fault (T420 has Intel Xeon, not i-series):i7z DEBUG: i7z version: svn-r77-(20-Nov-2011)i7z DEBUG: Found Intel Processori7z DEBUG: Stepping 4i7z DEBUG: Model ei7z DEBUG: Family 6i7z DEBUG: Processor Type 0i7z DEBUG: Extended Model 3i7z DEBUG: msr = Model Specific Registeri7z DEBUG: detected a newer model of ivy bridge processori7z DEBUG: my coder doesn't know about it, can you send the following info to him?i7z DEBUG: model e, extended model 3, proc_family 6i7z DEBUG: msr device files DONOT exist, trying out a makedev scripti7z DEBUG: modprobbing for msr[1]+ Segmentation fault (core dumped) i7z_GUIChanging CPU from C1 to C0 takes quite some time and for DB workload not optimal (if you need a high throughout and any given moment).I see ~65% boost when run './setcpulatency 0'.Tigran.With “takes quite some time” you mean that it will take some time to take effect?Thank you a lot for your help.Best regards, Pietro",
"msg_date": "Thu, 2 Apr 2015 16:25:03 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "Hi\n\nOn Thu, Apr 2, 2015 at 3:52 PM, Pietro Pugni <[email protected]> wrote:\n\n>\n> I’ve searched just now what a collation is because I’ve never explicitly\n> used one before, so I think it uses the default one.\n>\n>\n> What's the output of free and sysctl -a | grep vm.zone_reclaim_mode\n>\n> Search the mailing list for zone_reclaim_mode there's some tips.\n>\n> vm.zone_reclaim_mode = 0\nIn my understanding it's the rigth value\nthere's also huge page\n/sys/kernel/mm/transparent_hugepage/enabled\ncan you try to disable it?\n\nAlso test on the dell:\nselect tmp.cf, tmp.dt from grep_studi.tmp;\nand\nselect tmp.cf, tmp.dt from grep_studi.tmp order by tmp.cf;\nin Query B_2\nthe sort is 9 time slower on the dell, you have to find why...\n\n>\n> For testing you can also use the mac mini config with the dell, at\n> least it should give you the same plan.\n> With your example disks don't seem to matter, it's all in memory.\n\n> T420 with optimal postgresql.conf\n> Query B_1 [55999.649 ms + 0.639 ms] http://explain.depesz.com/s/LbM\n> Query B_2 [95664.832 ms + 0.523 ms] http://explain.depesz.com/s/v06\n>\n>\n> T420 with MacMini postgresql.conf\n> Query B_1 [51280.208ms + 0.699ms] http://explain.depesz.com/s/wlb\n> Query B_2 [177278.205ms + 0.428ms] http://explain.depesz.com/s/rzr\n>\n32 GB for buffers is too high for the queries in your test but it\ndoesn't matter.\n\n> MacMini\n> Query B_1 [56315.614 ms] http://explain.depesz.com/s/uZTx\n> Query B_2 [44890.813 ms] http://explain.depesz.com/s/y7Dk\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Apr 2015 17:11:41 +0200",
"msg_from": "didier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "Hi didier,\nthank you for your time.\nI forgot to display before the output of free. I’ve looked into it before and I found difficult to fully understand if there was something wrong.\n\nBefore starting Postgres:\n total used free shared buffers cached\nMem: 125G 9G 115G 15M 362M 8.1G\n-/+ buffers/cache: 1.5G 124G\nSwap: 127G 0B 127G\n\nHere’s an example of free output when queries B_1 and B_2 are running (they’re part of the same transaction). Generally values remains the same. For what I can understand, RAM isn’t used at all (there’s a lot of unused RAM).\n\n total used free shared buffers cached\nMem: 125G 13G 112G 3.1G 362M 11G\n-/+ buffers/cache: 1.9G 123G\nSwap: 127G 0B 127G\n\nWith Postgres running after transaction has been executed:\n total used free shared buffers cached\nMem: 125G 13G 112G 3.1G 362M 11G\n-/+ buffers/cache: 1.5G 124G\nSwap: 127G 0B 127G\n\n\n> there's also huge page\n> /sys/kernel/mm/transparent_hugepage/enabled\n> can you try to disable it?\nIt was enabled and after disabling it nothing changed: time execution is practically the same (131s for the same transaction tested in previous emails, which is composed by queries B_1 and B_2).\n\n\n> Also test on the dell:\n> select tmp.cf, tmp.dt from grep_studi.tmp;\n> and\n> select tmp.cf, tmp.dt from grep_studi.tmp order by tmp.cf;\n> in Query B_2\n> the sort is 9 time slower on the dell, you have to find why…\nHere’s the output for the two queries:\n\n> select tmp.cf, tmp.dt from grep_studi.tmp;\n\n\"Seq Scan on grep_studi.tmp (cost=0.00..11007.74 rows=1346868 width=72) (actual time=0.082..618.709 rows=2951191 loops=1)\"\n\" Output: cf, dt\"\n\" Buffers: shared hit=512 read=7802 dirtied=8314\"\n\"Planning time: 0.087 ms\"\n\"Execution time: 745.505 ms\"\n\n> select tmp.cf, tmp.dt from grep_studi.tmp;\n\n\"Sort (cost=38431.55..39104.99 rows=1346868 width=72) (actual time=3146.548..3306.179 rows=2951191 loops=1)\"\n\" Output: cf, dt\"\n\" Sort Key: tmp.cf\"\n\" Sort Method: 
quicksort Memory: 328866kB\"\n\" Buffers: shared hit=8317\"\n\" -> Seq Scan on grep_studi.tmp (cost=0.00..11007.74 rows=1346868 width=72) (actual time=0.012..373.346 rows=2951191 loops=1)\"\n\" Output: cf, dt\"\n\" Buffers: shared hit=8314\"\n\"Planning time: 0.034 ms\"\n\"Execution time: 3459.065 ms\"\n\n\n> 32 GB for buffers is too high for the queries in your test but it\n> doesn't matter.\n\nI’ve set shared_buffers to be 1/4 of the total RAM. I’ve changed kernel values to accomodate this value. Lowering to smaller values doesn’t improve the transaction results. Here’s the results with 1 run for each level of shared_buffers:\n\n32GB:\t131s\n16GB:\t132s\n8GB:\t133s\n4GB:\t132s\n2GB:\t143s\n1GB:\t148s\n512MB:\t183s\n256MB:\t192s\n\nProbably I can keep 4GB but I make use of several partitions with tens of millions of records each. This is why I keep shared_buffers high. My applications is also similar to a DWH solution with one user. Like you said, big values of shared_buffers shouldn’t be a issue.\n\nI’ve done some tests with sysbench on Dell T420 and MacMini.\n\nT420 - RAM READ - 16GB / 1MB\nsh-4.3# sysbench --test=memory --memory-oper=read --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.4.12: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nDoing memory operations speed test\nMemory block size: 1024K\n\nMemory transfer size: 16384M\n\nMemory operations type: read\nMemory scope type: global\nThreads started!\nDone.\n\nOperations performed: 16384 (3643025.32 ops/sec)\n\n16384.00 MB transferred (3643025.32 MB/sec)\n\n\nTest execution summary:\n total time: 0.0045s\n total number of events: 16384\n total time taken by event execution: 0.0031\n per-request statistics:\n min: 0.00ms\n avg: 0.00ms\n max: 0.02ms\n approx. 
95 percentile: 0.00ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 0.0031/0.00\n\nMacMini - RAM READ - 16GB / 1MB\nserver:sysbench Pietro$ ./sysbench --test=memory --memory-oper=read --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.5: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\nRandom number generator seed is 0 and will be ignored\n\n\nThreads started!\n\nOperations performed: 16384 ( 5484.50 ops/sec)\n\n16384.00 MB transferred (5484.50 MB/sec)\n\n\nGeneral statistics:\n total time: 2.9873s\n total number of events: 16384\n total time taken by event execution: 2.9836s\n response time:\n min: 0.18ms\n avg: 0.18ms\n max: 0.24ms\n approx. 95 percentile: 0.19ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 2.9836/0.00\n\nT420 - RAM WRITE - 16GB / 1MB\nsh-4.3# sysbench --test=memory --memory-oper=write --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.4.12: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nDoing memory operations speed test\nMemory block size: 1024K\n\nMemory transfer size: 16384M\n\nMemory operations type: write\nMemory scope type: global\nThreads started!\nDone.\n\nOperations performed: 16384 ( 8298.97 ops/sec)\n\n16384.00 MB transferred (8298.97 MB/sec)\n\n\nTest execution summary:\n total time: 1.9742s\n total number of events: 16384\n total time taken by event execution: 1.9723\n per-request statistics:\n min: 0.12ms\n avg: 0.12ms\n max: 0.25ms\n approx. 
95 percentile: 0.12ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 1.9723/0.00\n\n\n\nMacMini - RAM WRITE - 16GB / 1MB\nserver:sysbench Pietro$ ./sysbench --test=memory --memory-oper=write --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.5: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\nRandom number generator seed is 0 and will be ignored\n\n\nThreads started!\n\nOperations performed: 16384 ( 5472.90 ops/sec)\n\n16384.00 MB transferred (5472.90 MB/sec)\n\n\nGeneral statistics:\n total time: 2.9937s\n total number of events: 16384\n total time taken by event execution: 2.9890s\n response time:\n min: 0.18ms\n avg: 0.18ms\n max: 0.32ms\n approx. 95 percentile: 0.19ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 2.9890/0.00\n\n\nT420 - CPU\nsh-4.3# sysbench --test=cpu run\nsysbench 0.4.12: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nDoing CPU performance benchmark\n\nThreads started!\nDone.\n\nMaximum prime number checked in CPU test: 10000\n\n\nTest execution summary:\n total time: 13.0683s\n total number of events: 10000\n total time taken by event execution: 13.0674\n per-request statistics:\n min: 1.30ms\n avg: 1.31ms\n max: 1.44ms\n approx. 
95 percentile: 1.35ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 13.0674/0.00\n\n\nMacMini - CPU\nserver:sysbench Pietro$ ./sysbench --test=cpu run\nsysbench 0.5: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\nRandom number generator seed is 0 and will be ignored\n\n\nPrimer numbers limit: 10000\n\nThreads started!\n\n\nGeneral statistics:\n total time: 11.5728s\n total number of events: 10000\n total time taken by event execution: 11.5703s\n response time:\n min: 1.15ms\n avg: 1.16ms\n max: 2.17ms\n approx. 95 percentile: 1.17ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 11.5703/0.00\n\n\n\n\n\nI’ve done these tests because someone else on this discussion asked me to investigate on memory bandwidth and because I found this interesting article about Intel Xeon vs Intel i5 with different Postgres versions: http://blog.pgaddict.com/posts/performance-since-postgresql-7-4-to-9-4-pgbench\nHope this helps to better understand the problem.\n\nThank you very much.\nBest regards,\n Pietro\nHi didier,thank you for your time.I forgot to display before the output of free. I’ve looked into it before and I found difficult to fully understand if there was something wrong.Before starting Postgres: total used free shared buffers cachedMem: 125G 9G 115G 15M 362M 8.1G-/+ buffers/cache: 1.5G 124GSwap: 127G 0B 127GHere’s an example of free output when queries B_1 and B_2 are running (they’re part of the same transaction). Generally values remains the same. For what I can understand, RAM isn’t used at all (there’s a lot of unused RAM). 
total used free shared buffers cachedMem: 125G 13G 112G 3.1G 362M 11G-/+ buffers/cache: 1.9G 123GSwap: 127G 0B 127GWith Postgres running after transaction has been executed: total used free shared buffers cachedMem: 125G 13G 112G 3.1G 362M 11G-/+ buffers/cache: 1.5G 124GSwap: 127G 0B 127Gthere's also huge page/sys/kernel/mm/transparent_hugepage/enabledcan you try to disable it?It was enabled and after disabling it nothing changed: time execution is practically the same (131s for the same transaction tested in previous emails, which is composed by queries B_1 and B_2).Also test on the dell:select tmp.cf, tmp.dt from grep_studi.tmp;andselect tmp.cf, tmp.dt from grep_studi.tmp order by tmp.cf;in Query B_2the sort is 9 time slower on the dell, you have to find why…Here’s the output for the two queries:select tmp.cf, tmp.dt from grep_studi.tmp;\"Seq Scan on grep_studi.tmp (cost=0.00..11007.74 rows=1346868 width=72) (actual time=0.082..618.709 rows=2951191 loops=1)\"\" Output: cf, dt\"\" Buffers: shared hit=512 read=7802 dirtied=8314\"\"Planning time: 0.087 ms\"\"Execution time: 745.505 ms\"select tmp.cf, tmp.dt from grep_studi.tmp;\"Sort (cost=38431.55..39104.99 rows=1346868 width=72) (actual time=3146.548..3306.179 rows=2951191 loops=1)\"\" Output: cf, dt\"\" Sort Key: tmp.cf\"\" Sort Method: quicksort Memory: 328866kB\"\" Buffers: shared hit=8317\"\" -> Seq Scan on grep_studi.tmp (cost=0.00..11007.74 rows=1346868 width=72) (actual time=0.012..373.346 rows=2951191 loops=1)\"\" Output: cf, dt\"\" Buffers: shared hit=8314\"\"Planning time: 0.034 ms\"\"Execution time: 3459.065 ms\"32 GB for buffers is too high for the queries in your test but itdoesn't matter.I’ve set shared_buffers to be 1/4 of the total RAM. I’ve changed kernel values to accomodate this value. Lowering to smaller values doesn’t improve the transaction results. 
",
"msg_date": "Fri, 3 Apr 2015 15:43:02 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
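An aside on the sysbench figures quoted above: the T420 "read" result (16384 MB transferred in 0.0045 s, i.e. roughly 3.6 million MB/sec) is far beyond any DDR3 memory bandwidth, which suggests the read loop in sysbench 0.4.12 was optimized away by the compiler rather than measuring real transfers. A quick sanity check of the arithmetic, using only the numbers reported in the thread:

```python
# Sanity-check the reported sysbench memory numbers: compute the
# bandwidth implied by total transfer / total time and compare it
# against what DDR3 hardware can plausibly deliver.

def implied_bandwidth_mb_s(total_mb: float, total_time_s: float) -> float:
    """Bandwidth (MB/s) implied by a sysbench memory run."""
    return total_mb / total_time_s

# T420 "read" run: 16384 MB in 0.0045 s (as reported)
t420_read = implied_bandwidth_mb_s(16384, 0.0045)

# T420 "write" run: 16384 MB in 1.9742 s (as reported)
t420_write = implied_bandwidth_mb_s(16384, 1.9742)

print(f"T420 read:  {t420_read:12.0f} MB/s  (implausible -> read loop likely optimized out)")
print(f"T420 write: {t420_write:12.0f} MB/s  (plausible for DDR3)")
```

The write figure (~8299 MB/s) matches the reported 8298.97 ops/sec and is believable; the read figure is not, so the read results should not be used to compare the two machines.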
{
"msg_contents": "Hi Aidan,\nthank you again for your support.\nI found an interesting article showing better performance from a Intel i5 vs a Intel Xeon on different Postgres versions: http://blog.pgaddict.com/posts/performance-since-postgresql-7-4-to-9-4-pgbench\nI have to say that MacMini has a 2011 CPU ( http://ark.intel.com/it/products/53463/Intel-Core-i7-2635QM-Processor-6M-Cache-up-to-2_90-GHz ) while Dell has two 2013 CPU ( http://ark.intel.com/products/75267/Intel-Xeon-Processor-E5-2640-v2-20M-Cache-2_00-GHz ), both at 2.0Ghz.\n\n> NUMA stands for \"Non-Uniform-Memory-Access\" . It's basically the \"label\" for systems which have memory attached to different cpu sockets, such that accessing all of the memory from a paritciular cpu thread has different costs based on where the actual memory is located (i.e. on some other socket, or the local socket).\nThanks, good to know.\n\n> QPI is the the intel \"QuickPath Interconnect\". It's the communication path between CPU sockets. Memory ready by one cpu thread that has to come from another cpu socket's memory controller goes through QPI.\n> Google has lots of info on these, and how they impact performance, etc.\nWhen I’ll get access to BIOS (probably next week or later) I’ll try to disable QPI (if possible). Meanwhile I’ll document on Internet about QPI vs performance.\n\n>> If you want to see how bad the NUMA/QPI is, play with stream to benchmark memory performance.\n> \n> \n> With stream you refer to this: https://sites.utexas.edu/jdm4372/tag/stream-benchmark/ ? Do you suggest me some way to do this kind of tests?\n> \n> Ya, that's the one. 
I don't have specific tests in mind.\nI’ve done some tests with sysbench on Dell T420 (via apt-get install) and MacMini (I’ve compiled the latest available sources at https://github.com/akopytov/sysbench ).\nHere are some results with 16GB RAM read and written at 1MB block size (I don’t know if this makes sense, but I’ve no problem in changing these parameters).\n\nT420 - RAM READ - 16GB / 1MB\nsh-4.3# sysbench --test=memory --memory-oper=read --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.4.12: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nDoing memory operations speed test\nMemory block size: 1024K\n\nMemory transfer size: 16384M\n\nMemory operations type: read\nMemory scope type: global\nThreads started!\nDone.\n\nOperations performed: 16384 (3643025.32 ops/sec)\n\n16384.00 MB transferred (3643025.32 MB/sec)\n\n\nTest execution summary:\n total time: 0.0045s\n total number of events: 16384\n total time taken by event execution: 0.0031\n per-request statistics:\n min: 0.00ms\n avg: 0.00ms\n max: 0.02ms\n approx. 95 percentile: 0.00ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 0.0031/0.00\n\nMacMini - RAM READ - 16GB / 1MB\nserver:sysbench Pietro$ ./sysbench --test=memory --memory-oper=read --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.5: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\nRandom number generator seed is 0 and will be ignored\n\n\nThreads started!\n\nOperations performed: 16384 ( 5484.50 ops/sec)\n\n16384.00 MB transferred (5484.50 MB/sec)\n\n\nGeneral statistics:\n total time: 2.9873s\n total number of events: 16384\n total time taken by event execution: 2.9836s\n response time:\n min: 0.18ms\n avg: 0.18ms\n max: 0.24ms\n approx. 
95 percentile: 0.19ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 2.9836/0.00\n\nT420 - RAM WRITE - 16GB / 1MB\nsh-4.3# sysbench --test=memory --memory-oper=write --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.4.12: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nDoing memory operations speed test\nMemory block size: 1024K\n\nMemory transfer size: 16384M\n\nMemory operations type: write\nMemory scope type: global\nThreads started!\nDone.\n\nOperations performed: 16384 ( 8298.97 ops/sec)\n\n16384.00 MB transferred (8298.97 MB/sec)\n\n\nTest execution summary:\n total time: 1.9742s\n total number of events: 16384\n total time taken by event execution: 1.9723\n per-request statistics:\n min: 0.12ms\n avg: 0.12ms\n max: 0.25ms\n approx. 95 percentile: 0.12ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 1.9723/0.00\n\n\n\nMacMini - RAM WRITE - 16GB / 1MB\nserver:sysbench Pietro$ ./sysbench --test=memory --memory-oper=write --memory-block-size=1MB --memory-total-size=16GB run\nsysbench 0.5: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\nRandom number generator seed is 0 and will be ignored\n\n\nThreads started!\n\nOperations performed: 16384 ( 5472.90 ops/sec)\n\n16384.00 MB transferred (5472.90 MB/sec)\n\n\nGeneral statistics:\n total time: 2.9937s\n total number of events: 16384\n total time taken by event execution: 2.9890s\n response time:\n min: 0.18ms\n avg: 0.18ms\n max: 0.32ms\n approx. 
95 percentile: 0.19ms\n\nThreads fairness:\n events (avg/stddev): 16384.0000/0.00\n execution time (avg/stddev): 2.9890/0.00\n\n\nT420 - CPU\nsh-4.3# sysbench --test=cpu run\nsysbench 0.4.12: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nDoing CPU performance benchmark\n\nThreads started!\nDone.\n\nMaximum prime number checked in CPU test: 10000\n\n\nTest execution summary:\n total time: 13.0683s\n total number of events: 10000\n total time taken by event execution: 13.0674\n per-request statistics:\n min: 1.30ms\n avg: 1.31ms\n max: 1.44ms\n approx. 95 percentile: 1.35ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 13.0674/0.00\n\n\nMacMini - CPU\nserver:sysbench Pietro$ ./sysbench --test=cpu run\nsysbench 0.5: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\nRandom number generator seed is 0 and will be ignored\n\n\nPrimer numbers limit: 10000\n\nThreads started!\n\n\nGeneral statistics:\n total time: 11.5728s\n total number of events: 10000\n total time taken by event execution: 11.5703s\n response time:\n min: 1.15ms\n avg: 1.16ms\n max: 2.17ms\n approx. 
95 percentile: 1.17ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 11.5703/0.00\n\n\n\n> A more simple \"overview\" might be \"numactl --hardware\"\nIt returns the following output:\n\nsh-4.3# numactl --hardware\navailable: 2 nodes (0-1)\nnode 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30\nnode 0 size: 64385 MB\nnode 0 free: 56487 MB\nnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31\nnode 1 size: 64508 MB\nnode 1 free: 62201 MB\nnode distances:\nnode 0 1 \n 0: 10 20 \n 1: 20 10 \n\nThank you so much for your help, really appreciate it.\nBest regards,\n Pietro",
"msg_date": "Fri, 3 Apr 2015 15:55:04 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
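The `node distances` matrix in the `numactl --hardware` output above encodes relative memory access cost: 10 for a node reaching its own memory, 20 for reaching the other socket's memory over QPI. A small sketch (the helper function is hypothetical, not part of numactl) of what that matrix implies:

```python
# Compute the remote-vs-local access penalty from the "node distances"
# matrix reported by `numactl --hardware`. The matrix below is copied
# verbatim from the T420 output in this thread.

distances = [
    [10, 20],  # cost from node 0 to nodes 0, 1
    [20, 10],  # cost from node 1 to nodes 0, 1
]

def remote_penalty(dist):
    """Worst remote distance divided by the local distance."""
    n = len(dist)
    local = min(dist[i][i] for i in range(n))
    remote = max(dist[i][j] for i in range(n) for j in range(n) if i != j)
    return remote / local

print(f"remote/local penalty: {remote_penalty(distances):.1f}x")
```

A 2.0x nominal penalty is why `zone_reclaim_mode` and process placement matter on two-socket boxes like this one, even though the single-threaded tests here turned out not to be memory-bound.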
{
"msg_contents": "A more simple \"overview\" might be \"numactl —hardware”\n\n> It returns the following output:\n>\n> sh-4.3# numactl --hardware\n> available: 2 nodes (0-1)\n> node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30\n> node 0 size: 64385 MB\n> node 0 free: 56487 MB\n> node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31\n> node 1 size: 64508 MB\n> node 1 free: 62201 MB\n> node distances:\n> node 0 1\n> 0: 10 20\n> 1: 20 10\n>\n\nDid you already post the results of:\ncat /proc/sys/vm/zone_reclaim_mode\n\nAlso, how big did you say your dataset is? Based on the output of free,\nyou're certainly not using all the memory you have. That could be just\nbecause you haven't accessed that much of your dataset, or it could be\nbecause zone reclaim is preventing you from using your entire amount of RAM\nas file system cache.\n\nA more simple \"overview\" might be \"numactl —hardware”\nIt returns the following output:sh-4.3# numactl --hardwareavailable: 2 nodes (0-1)node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30node 0 size: 64385 MBnode 0 free: 56487 MBnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31node 1 size: 64508 MBnode 1 free: 62201 MBnode distances:node 0 1 0: 10 20 1: 20 10 Did you already post the results of:cat /proc/sys/vm/zone_reclaim_modeAlso, how big did you say your dataset is? Based on the output of free, you're certainly not using all the memory you have. That could be just because you haven't accessed that much of your dataset, or it could be because zone reclaim is preventing you from using your entire amount of RAM as file system cache.",
"msg_date": "Fri, 3 Apr 2015 11:00:09 -0400",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "Hi Josh,\n\n> Did you already post the results of:\n> cat /proc/sys/vm/zone_reclaim_mode\n\nzone_reclaim_mode was set on 0 for all my tests. I’ve also set it to the other values (1, 2, 4) but there was no improvement. Tests results are the following (1 run for each test):\necho 0 > /proc/sys/vm/zone_reclaim_mode\n130s\necho 1 > /proc/sys/vm/zone_reclaim_mode\n129s\necho 2 > /proc/sys/vm/zone_reclaim_mode\n134s\necho 4 > /proc/sys/vm/zone_reclaim_mode\n131s\n\n\n> Also, how big did you say your dataset is? Based on the output of free, you're certainly not using all the memory you have. That could be just because you haven't accessed that much of your dataset, or it could be because zone reclaim is preventing you from using your entire amount of RAM as file system cache. \nThe table I use for this test is about 20milion row, has less than 10 columns (a small table) and has 4 indexes. Other tables I use are partitioned and consists of a total of 1.6bilion rows, 757 million rows ad so on descending.\n\nThank you for your help.\nBest regards,\n Pietro\n\n\nHi Josh,Did you already post the results of:cat /proc/sys/vm/zone_reclaim_modezone_reclaim_mode was set on 0 for all my tests. I’ve also set it to the other values (1, 2, 4) but there was no improvement. Tests results are the following (1 run for each test):echo 0 > /proc/sys/vm/zone_reclaim_mode130secho 1 > /proc/sys/vm/zone_reclaim_mode129secho 2 > /proc/sys/vm/zone_reclaim_mode134secho 4 > /proc/sys/vm/zone_reclaim_mode131sAlso, how big did you say your dataset is? Based on the output of free, you're certainly not using all the memory you have. That could be just because you haven't accessed that much of your dataset, or it could be because zone reclaim is preventing you from using your entire amount of RAM as file system cache. \nThe table I use for this test is about 20milion row, has less than 10 columns (a small table) and has 4 indexes. 
Other tables I use are partitioned and consists of a total of 1.6bilion rows, 757 million rows ad so on descending.Thank you for your help.Best regards, Pietro",
"msg_date": "Fri, 3 Apr 2015 17:10:47 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "Sorry, how much disk space is actually used by the tables, indexes, etc\ninvolved in your queries? Or it that's a bit much to get, how much disk\nspace is occupied by your database in total?\n\nSorry, how much disk space is actually used by the tables, indexes, etc involved in your queries? Or it that's a bit much to get, how much disk space is occupied by your database in total?",
"msg_date": "Fri, 3 Apr 2015 11:21:26 -0400",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than\n a MacMini with PostgreSQL"
},
{
"msg_contents": "Hi Josh,\nat the moment the server is unreachable so I can’t calculate sizes. I run all of my test both with all data loaded into Postgres and with no data loaded (except from the single 20mln rows table with relative indexes).\nTo give you an idea, with all data loaded into Postgres with indexes the space occupied is approximately 1.2-1.5TB and free space on is about 800GB.\n\nMany thanks.\nBest regads,\n Pietro\n\nIl giorno 03/apr/2015, alle ore 17:21, Josh Krupka <[email protected]> ha scritto:\n\n> Sorry, how much disk space is actually used by the tables, indexes, etc involved in your queries? Or it that's a bit much to get, how much disk space is occupied by your database in total?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Apr 2015 17:33:20 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "> On Tue, Apr 7, 2015 at 6:27 AM, Pietro Pugni <[email protected]> wrote:\n> Hi Jeff,\n> sorry for the latency but server was down due to a error I made in the sysctl.conf file.\n>> Yes, but are the defaults for those two systems? on psql, use \\l to see.\n>> \n> \n> \\l returns the following:\n> \n> T420 (Postgres 9.4.1)\n> List of databases\n> Name | Owner | Encoding | Collate | Ctype | Access privileges \n> -----------+----------+----------+-------------+-------------+-----------------------\n> grep | grep | UTF8 | en_US.UTF-8 | en_US.UTF-8 | \n> postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | \n> template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +\n> | | | | | postgres=CTc/postgres\n> template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +\n> | | | | | postgres=CTc/postgres\n> (4 rows)\n> \n> \n> \n> MacMini (Postgres 9.0.13)\n> List of databases\n> Name | Owner | Encoding | Collation | Ctype | Access privileges \n> -------------------+------------+----------+-----------+-------+-------------------------\n> caldav | caldav | UTF8 | C | C | \n> collab | collab | UTF8 | C | C | \n> device_management | _devicemgr | UTF8 | C | C | \n> pen | pen | UTF8 | C | C | \n> postgres | _postgres | UTF8 | C | C | \n> roundcubemail | roundcube | UTF8 | C | C | \n> template0 | _postgres | UTF8 | C | C | =c/_postgres +\n> | | | | | _postgres=CTc/_postgres\n> template1 | _postgres | UTF8 | C | C | =c/_postgres +\n> | | | | | _postgres=CTc/_postgres\n> (8 rows)\n> \n> \n> The difference between the \"C\" and the \"en_US\" collation is entirely sufficient to explain the difference in performance. \"C\" is the fastest possible collation as it never needs to look ahead or consult tables, it just compares raw bytes.\n> \n> Cheers,\n> \n> Jeff\n\nHi Jeff,\nis there a way to set a default collection during compiling or in the configuration file? 
I have never specified one, so I suppose that somewhere on MacMini “C” collation type is set as the default one.\n\nThank you a lot.\nBest regards,\n Pietro",
"msg_date": "Tue, 7 Apr 2015 18:49:58 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
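Jeff's point about the "C" collation can be illustrated outside Postgres. "C" compares raw bytes, so no lookup tables are needed, while a linguistic collation such as en_US.UTF-8 applies locale rules. A sketch in Python (its default string sort is code-point order, which for these ASCII inputs behaves like the C collation; the locale-aware half is skipped if en_US.UTF-8 is not installed on the machine running it):

```python
import locale

words = ["banana", "Apple", "cherry", "apple"]

# Bytewise / code-point order, as with Postgres' "C" collation:
# every uppercase letter sorts before every lowercase one.
c_order = sorted(words)
print("C-like order:     ", c_order)

# Locale-aware order (letters interleaved regardless of case), as in
# an en_US.UTF-8 database. Guarded, since the locale may be absent.
try:
    locale.setlocale(locale.LC_COLLATE, "en_US.UTF-8")
    print("en_US.UTF-8 order:", sorted(words, key=locale.strxfrm))
except locale.Error:
    print("en_US.UTF-8 locale not installed; skipping locale-aware sort")
```

The per-character table lookups behind `strxfrm`-style collation are the extra work that made the en_US.UTF-8 cluster on the T420 slower than the C-collated cluster on the MacMini for sort-heavy queries.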
{
"msg_contents": "I meant “collation”, not “collection”.\n\n Pietro\n\nIl giorno 07/apr/2015, alle ore 18:49, Pietro Pugni <[email protected]> ha scritto:\n\n> \t\n>> On Tue, Apr 7, 2015 at 6:27 AM, Pietro Pugni <[email protected]> wrote:\n>> Hi Jeff,\n>> sorry for the latency but server was down due to a error I made in the sysctl.conf file.\n>>> Yes, but are the defaults for those two systems? on psql, use \\l to see.\n>>> \n>> \n>> \\l returns the following:\n>> \n>> T420 (Postgres 9.4.1)\n>> List of databases\n>> Name | Owner | Encoding | Collate | Ctype | Access privileges \n>> -----------+----------+----------+-------------+-------------+-----------------------\n>> grep | grep | UTF8 | en_US.UTF-8 | en_US.UTF-8 | \n>> postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | \n>> template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +\n>> | | | | | postgres=CTc/postgres\n>> template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +\n>> | | | | | postgres=CTc/postgres\n>> (4 rows)\n>> \n>> \n>> \n>> MacMini (Postgres 9.0.13)\n>> List of databases\n>> Name | Owner | Encoding | Collation | Ctype | Access privileges \n>> -------------------+------------+----------+-----------+-------+-------------------------\n>> caldav | caldav | UTF8 | C | C | \n>> collab | collab | UTF8 | C | C | \n>> device_management | _devicemgr | UTF8 | C | C | \n>> pen | pen | UTF8 | C | C | \n>> postgres | _postgres | UTF8 | C | C | \n>> roundcubemail | roundcube | UTF8 | C | C | \n>> template0 | _postgres | UTF8 | C | C | =c/_postgres +\n>> | | | | | _postgres=CTc/_postgres\n>> template1 | _postgres | UTF8 | C | C | =c/_postgres +\n>> | | | | | _postgres=CTc/_postgres\n>> (8 rows)\n>> \n>> \n>> The difference between the \"C\" and the \"en_US\" collation is entirely sufficient to explain the difference in performance. 
\"C\" is the fastest possible collation as it never needs to look ahead or consult tables, it just compares raw bytes.\n>> \n>> Cheers,\n>> \n>> Jeff\n> \n> Hi Jeff,\n> is there a way to set a default collection during compiling or in the configuration file? I have never specified one, so I suppose that somewhere on MacMini “C” collation type is set as the default one.\n> \n> Thank you a lot.\n> Best regards,\n> Pietro\n\n\nI meant “collation”, not “collection”. PietroIl giorno 07/apr/2015, alle ore 18:49, Pietro Pugni <[email protected]> ha scritto: On Tue, Apr 7, 2015 at 6:27 AM, Pietro Pugni <[email protected]> wrote:Hi Jeff,sorry for the latency but server was down due to a error I made in the sysctl.conf file.Yes, but are the defaults for those two systems? on psql, use \\l to see.\\l returns the following:T420 (Postgres 9.4.1) List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- grep | grep | UTF8 | en_US.UTF-8 | en_US.UTF-8 | postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres(4 rows)MacMini (Postgres 9.0.13) List of databases Name | Owner | Encoding | Collation | Ctype | Access privileges -------------------+------------+----------+-----------+-------+------------------------- caldav | caldav | UTF8 | C | C | collab | collab | UTF8 | C | C | device_management | _devicemgr | UTF8 | C | C | pen | pen | UTF8 | C | C | postgres | _postgres | UTF8 | C | C | roundcubemail | roundcube | UTF8 | C | C | template0 | _postgres | UTF8 | C | C | =c/_postgres + | | | | | _postgres=CTc/_postgres template1 | _postgres | UTF8 | C | C | =c/_postgres + | | | | | _postgres=CTc/_postgres(8 rows)The difference between the \"C\" and the \"en_US\" collation is 
entirely sufficient to explain the difference in performance. \"C\" is the fastest possible collation as it never needs to look ahead or consult tables, it just compares raw bytes.Cheers,Jeff\nHi Jeff,is there a way to set a default collection during compiling or in the configuration file? I have never specified one, so I suppose that somewhere on MacMini “C” collation type is set as the default one.Thank you a lot.Best regards, Pietro",
"msg_date": "Tue, 7 Apr 2015 19:17:40 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
"msg_contents": "Hi Jeff\n\n\n> The default collation for the database cluster is set when you create the cluster with initdb (the package you used to install postgresql might provide scripts that wrap initdb and call it something else, sorry I can't be much use with those). \n> You can set it with --lc-collate flag to initdb, otherwise it is set based on the environment variables (LANG or LC_* variables) set in the shell you use to run initdb.\n> \n> Note that you can create a new database in the cluster which has its own default which is different from the cluster's default.\n\nguess what? it worked! Now I run the following command:\n/usr/local/pgsql/bin/initdb -D /mnt/raid5/pg_data --no-locale --encoding=UTF8\n\nTime execution for my reference transaction went from 2m9s to 1m18s where Mac Mini is 1m43s.\nThis is the best improvement after modifying BIOS settings.\nI’ll do some testing. In the meanwhile I’ve made some kernel changes which may help in heavy workloads (at the moment they don't affect performance).\n\nI should offer you a dinner…\nThank you a lot. Now I’m looking to push it faster (I think it can goes faster than this!).\n\nBest regards,\n Pietro\nHi JeffThe default collation for the database cluster is set when you create the cluster with initdb (the package you used to install postgresql might provide scripts that wrap initdb and call it something else, sorry I can't be much use with those). You can set it with --lc-collate flag to initdb, otherwise it is set based on the environment variables (LANG or LC_* variables) set in the shell you use to run initdb.Note that you can create a new database in the cluster which has its own default which is different from the cluster's default.guess what? it worked! 
Now I run the following command:/usr/local/pgsql/bin/initdb -D /mnt/raid5/pg_data --no-locale --encoding=UTF8Time execution for my reference transaction went from 2m9s to 1m18s where Mac Mini is 1m43s.This is the best improvement after modifying BIOS settings.I’ll do some testing. In the meanwhile I’ve made some kernel changes which may help in heavy workloads (at the moment they don't affect performance).I should offer you a dinner…Thank you a lot. Now I’m looking to push it faster (I think it can goes faster than this!).Best regards, Pietro",
"msg_date": "Tue, 7 Apr 2015 19:45:58 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
},
{
    "msg_contents": "> Ciao Pietro,\n> I was following the thread on the Postgresql mailing list.\n> \n> Can you give me a short summary of the conclusions, because I'm not sure I understood everything?\n\n\nCiao Domenico,\nyes, the mailing list is indeed a bit scattered.\nUsing the “C” collation, the Dell T420 takes less time than the Mac Mini to execute any query. The difference so far is roughly 20%-30%. I'm now running a massive data load that took about 3 days on the Dell while the Mac Mini takes about a day and a half, and I'll let you know the execution time.\n\n> So in the end, is the performance difference between the DELL R420 and the Mac MINI due to the type of \"collate\" used when initializing the DB?\n\n\nSeveral collations can be used within the same DB. Each table can have a different one; even each attribute can have a collation different from the other attributes of the same table or of the same SELECT.\n\nThe command:\ninitdb --no-locale\n\nsets the “C” collation as the default for the created DB.\nHowever, the postgresql.conf file contains the following configuration variables, which define the default collation for the running Postgres instance:\n# These settings are initialized by initdb, but they can be changed.\nlc_messages = 'C' # locale for system error message\n # strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\nIn any case, I expected better performance from this Dell, especially considering the class of the machine (a fairly serious RAID controller, 10 SAS 10k rpm hard drives for the data and 2 SAS 15k rpm disks for the operating system and WAL, 128GB of RAM and 2 Xeon CPUs for a total of 16 physical and 16 logical cores).\nBesides the collation, it is worth adjusting the BIOS and RAID controller settings: the former because these systems ship with power-saving options that cut performance, while for the RAID it is best to choose a WriteBack write cache and a ReadAhead read cache, and then check that the write-cache battery is charged, otherwise no cache is used at all.\nObviously the RAID configuration matters a lot too; for now we use RAID5, which is not at all fast for writes but is for reads.\nIt all depends on the specific application. In our case it is a sort of hybrid data warehouse, a single-user setup working with large volumes of data (we are around 1.8TB of database): few transactions, but massive ones. Kernel settings mostly affect performance in multi-user environments (thousands of connections), and with the latest versions of Postgres and the Linux kernel the system is already fairly well self-balanced.\n\nI hope this was sufficiently clear and thorough.\nBest regards,\n Pietro",
"msg_date": "Wed, 8 Apr 2015 12:37:38 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can't get Dell PE T420 (Perc H710) perform better than a MacMini\n with PostgreSQL"
}
] |
[
{
    "msg_contents": "-- tables\n-- New column \"span\" added and new index created on both tables.\nCREATE TABLE customer(\n    uid bigserial PRIMARY KEY,\n    name character varying(50) NOT NULL,\n    start_time timestamp without time zone,\n    end_time timestamp without time zone,\n    span tsrange,\n    comment text,\n    created timestamp without time zone DEFAULT now()\n);\n\nCREATE INDEX sidx_customer ON customer USING GiST (uid, span);\n\nCREATE TABLE customer_log (\n    uid SERIAL PRIMARY KEY,\n    action character varying(32) NOT NULL,\n    start_time timestamp without time zone,\n    end_time timestamp without time zone,\n    customer_uid bigint,\n    span tsrange,\n    comment text,\n    created timestamp without time zone DEFAULT now()\n);\n\nCREATE INDEX sidx_customer_log ON customer_log USING GiST (customer_uid, span);\n\n-- current query\nEXPLAIN (analyze, buffers)\n  SELECT * FROM CUSTOMER t JOIN CUSTOMER_LOG tr ON t.uid = tr.customer_uid\n  WHERE t.start_time <= '2050-01-01 00:00:00'::timestamp without time zone AND t.end_time >= '1970-01-01 00:00:00'::timestamp without time zone\n  AND tr.start_time <= '2050-01-01 00:00:00'::timestamp without time zone AND tr.end_time >= '1970-01-01 00:00:00'::timestamp without time zone\n  AND tr.action like 'LOGIN'\n  ORDER BY t.uid asc limit 1000;\n\nQuestion/Problem:\n\nHow to rewrite this query to leverage tsrange?\n\ni.e.\n\n  SELECT *\n  FROM customer t JOIN customer_log tr ON t.uid = tr.customer_uid\n  WHERE t.span @> tsrange('1970-01-01 00:00:00', '2050-01-01 00:00:00', '[]')\n  AND tr.span @> tsrange('1970-01-01 00:00:00', '2050-01-01 00:00:00', '[]')\n  AND tr.action like 'LOGIN'\n  ORDER BY t.uid asc limit 1000;\n\nThanks in advance for any assistance with this query.",
"msg_date": "Thu, 2 Apr 2015 03:26:48 +0000",
"msg_from": "\"Burgess, Freddie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performing query re-write using tsrange index"
}
] |
[
{
"msg_contents": "> \n> Josh, there seems to be an inconsistency in your blog. You say 3.10.X is\n> safe, but the graph you show with the poor performance seems to be from\n> 3.13.X which as I understand it is a later kernel. Can you clarify which\n> 3.X kernels are good to use and which are not?\n\nSorry to cut in - \n\nSo far we've found kernel 3.18 to be excellent for postgres 9.3 performance (pgbench + our own queries run much faster than with the 2.6.32-504 centos 6 kernel, and we haven't encountered random stalls or slowness).\n\nWe use elrepo to get prebuilt rpms of the latest mainline stable kernel (kernel-ml).\n\nhttp://elrepo.org/tiki/kernel-ml\n\nGraeme Bell\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Apr 2015 09:04:08 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some performance testing?"
},
{
    "msg_contents": "Can you say how much faster it was?\n\nPrzemek Deć\n\n2015-04-09 11:04 GMT+02:00 Graeme B. Bell <[email protected]>:\n\n> >\n> > Josh, there seems to be an inconsistency in your blog. You say 3.10.X is\n> > safe, but the graph you show with the poor performance seems to be from\n> > 3.13.X which as I understand it is a later kernel. Can you clarify which\n> > 3.X kernels are good to use and which are not?\n>\n> Sorry to cut in -\n>\n> So far we've found kernel 3.18 to be excellent for postgres 9.3\n> performance (pgbench + our own queries run much faster than with the\n> 2.6.32-504 centos 6 kernel, and we haven't encountered random stalls or\n> slowness).\n>\n> We use elrepo to get prebuilt rpms of the latest mainline stable kernel\n> (kernel-ml).\n>\n> http://elrepo.org/tiki/kernel-ml\n>\n> Graeme Bell\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Thu, 9 Apr 2015 12:39:27 +0200",
"msg_from": "=?UTF-8?B?UHJ6ZW15c8WCYXcgRGXEhw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "\r\nFrom a measurement I took back when we did the upgrade:\r\n\r\nperformance with 2.6: (pgbench, size 100, 32 clients)\r\n\r\n48 651 transactions per second (read only)\r\n6 504 transactions per second (read-write)\r\n\r\n\r\nperformance with 3.18 (pgbench, size 100, 32 clients)\r\n\r\n129 303 transactions per second (read only)\r\n16 895 transactions (read-write)\r\n\r\n\r\nSo that looks like 2.6x improvement to reads and writes. That was an 8 core xeon server with H710P and 4x crucial M550 SSDs in RAID, pg9.3. \r\n\r\nGraeme Bell\r\n\r\n\r\n\r\n\r\n\r\nOn 09 Apr 2015, at 12:39, Przemysław Deć <[email protected]> wrote:\r\n\r\n> Can you say how much faster it was?\r\n> \r\n> Przemek Deć\r\n> \r\n> 2015-04-09 11:04 GMT+02:00 Graeme B. Bell <[email protected]>:\r\n> >\r\n> > Josh, there seems to be an inconsistency in your blog. You say 3.10.X is\r\n> > safe, but the graph you show with the poor performance seems to be from\r\n> > 3.13.X which as I understand it is a later kernel. Can you clarify which\r\n> > 3.X kernels are good to use and which are not?\r\n> \r\n> Sorry to cut in -\r\n> \r\n> So far we've found kernel 3.18 to be excellent for postgres 9.3 performance (pgbench + our own queries run much faster than with the 2.6.32-504 centos 6 kernel, and we haven't encountered random stalls or slowness).\r\n> \r\n> We use elrepo to get prebuilt rpms of the latest mainline stable kernel (kernel-ml).\r\n> \r\n> http://elrepo.org/tiki/kernel-ml\r\n> \r\n> Graeme Bell\r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n> \r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Apr 2015 11:01:51 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "Wow, thats huge performance gain.\nAnd it was on ext4?\n\n-- \nLinux Polska Sp. z o.o.\nPrzemysław Deć - Senior Solutions Architect\nRHCSA, RHCJA, PostgreSQL Professional Certification\nmob: +48 519 130 141\nemail: [email protected]\nwww.linuxpolska.pl\n___________________________________________________________________________________________________________________________\nLinux Polska Sp. z o. o. Al. Jerozolimskie 123A (26 p.); 02-017 Warszawa;\ntel. (+48) 222139571; fax (+48)222139671\nKRS - 0000326158 Sąd Rejonowy dla M. St. Warszawy w Warszawie, XII Wydział\nGospodarczy KRS\nKapitał zakładowy wpłacony 1 000 500PLN; NIP 7010181018; REGON 141791601\n\n[image: Open Source Day 2015] <http://opensourceday.pl/>\n\n2015-04-09 13:01 GMT+02:00 Graeme B. Bell <[email protected]>:\n\n>\n> From a measurement I took back when we did the upgrade:\n>\n> performance with 2.6: (pgbench, size 100, 32 clients)\n>\n> 48 651 transactions per second (read only)\n> 6 504 transactions per second (read-write)\n>\n>\n> performance with 3.18 (pgbench, size 100, 32 clients)\n>\n> 129 303 transactions per second (read only)\n> 16 895 transactions (read-write)\n>\n>\n> So that looks like 2.6x improvement to reads and writes. That was an 8\n> core xeon server with H710P and 4x crucial M550 SSDs in RAID, pg9.3.\n>\n> Graeme Bell\n>\n>\n>\n>\n>\n> On 09 Apr 2015, at 12:39, Przemysław Deć <[email protected]>\n> wrote:\n>\n> > Can you say how much faster it was?\n> >\n> > Przemek Deć\n> >\n> > 2015-04-09 11:04 GMT+02:00 Graeme B. Bell <[email protected]>:\n> > >\n> > > Josh, there seems to be an inconsistency in your blog. You say 3.10.X\n> is\n> > > safe, but the graph you show with the poor performance seems to be from\n> > > 3.13.X which as I understand it is a later kernel. 
Can you clarify\n> which\n> > > 3.X kernels are good to use and which are not?\n> >\n> > Sorry to cut in -\n> >\n> > So far we've found kernel 3.18 to be excellent for postgres 9.3\n> performance (pgbench + our own queries run much faster than with the\n> 2.6.32-504 centos 6 kernel, and we haven't encountered random stalls or\n> slowness).\n> >\n> > We use elrepo to get prebuilt rpms of the latest mainline stable kernel\n> (kernel-ml).\n> >\n> > http://elrepo.org/tiki/kernel-ml\n> >\n> > Graeme Bell\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n>\n>",
"msg_date": "Thu, 9 Apr 2015 13:56:21 +0200",
"msg_from": "=?UTF-8?B?UHJ6ZW15c8WCYXcgRGXEhw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "ext4 settings\r\n\r\next4, nobarrier\r\nnoatime+nodatime, \r\nstripe&stride aligned between raid10 & ext4 correctly.\r\n\r\n\r\nSome other useful things to know\r\n\r\n-- h710p\r\nreadahead disabled on H710P\r\nwriteback cache enabled on H710P \r\nDirect IO enabled on H710P\r\n\r\n-- os filesystem settings\r\nlinux readahead enabled (16384), \r\nnr_requests=975\r\nNOOP scheduler\r\nnon-NUMA\r\n\r\n-- pg\r\nio_concurrency on\r\nasync commit.*** see below!\r\n\r\nAll settings were kept identical on the server before and after the kernel change, so this performance increase can be entirely attributed to the newer kernel and its synergies with our configuration. 3.18 contains about 5-10 years of linux kernel development vs. 2.6 kernels (except where backported).\r\n\r\nI have conducted quite a lot of plug-pull testing with diskchecker.pl, and rather a lot of testing of scheduling/IO/RAID controller/etc parameters. The OS/RAID controller/file system settings are as fast as I've been able to achieve without compromising database integrity (please note: this server can run async_commit because of the work we use it for, but we do not use that setting on our other main production servers). \r\n\r\nOur local DBs run extremely nicely for all our normal queries which involve quite a mix of random small IO and full-table operations on e.g. 20GB+ tables , so they're not optimised for pgbench specifically.\r\n\r\nGraeme Bell\r\n\r\n\r\n\r\nOn 09 Apr 2015, at 13:56, Przemysław Deć <[email protected]> wrote:\r\n\r\n> Wow, thats huge performance gain.\r\n> And it was on ext4?\r\n> \r\n> -- \r\n> Linux Polska Sp. z o.o.\r\n> Przemysław Deć - Senior Solutions Architect\r\n> RHCSA, RHCJA, PostgreSQL Professional Certification\r\n> mob: +48 519 130 141\r\n> email: [email protected]\r\n> www.linuxpolska.pl \r\n> ___________________________________________________________________________________________________________________________\r\n> Linux Polska Sp. z o. o. Al. 
Jerozolimskie 123A (26 p.); 02-017 Warszawa; tel. (+48) 222139571; fax (+48)222139671\r\n> KRS - 0000326158 Sąd Rejonowy dla M. St. Warszawy w Warszawie, XII Wydział Gospodarczy KRS\r\n> Kapitał zakładowy wpłacony 1 000 500PLN; NIP 7010181018; REGON 141791601\r\n> <Mail Attachment.jpeg>\r\n> \r\n> \r\n> 2015-04-09 13:01 GMT+02:00 Graeme B. Bell <[email protected]>:\r\n> \r\n> From a measurement I took back when we did the upgrade:\r\n> \r\n> performance with 2.6: (pgbench, size 100, 32 clients)\r\n> \r\n> 48 651 transactions per second (read only)\r\n> 6 504 transactions per second (read-write)\r\n> \r\n> \r\n> performance with 3.18 (pgbench, size 100, 32 clients)\r\n> \r\n> 129 303 transactions per second (read only)\r\n> 16 895 transactions (read-write)\r\n> \r\n> \r\n> So that looks like 2.6x improvement to reads and writes. That was an 8 core xeon server with H710P and 4x crucial M550 SSDs in RAID, pg9.3.\r\n> \r\n> Graeme Bell\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> On 09 Apr 2015, at 12:39, Przemysław Deć <[email protected]> wrote:\r\n> \r\n> > Can you say how much faster it was?\r\n> >\r\n> > Przemek Deć\r\n> >\r\n> > 2015-04-09 11:04 GMT+02:00 Graeme B. Bell <[email protected]>:\r\n> > >\r\n> > > Josh, there seems to be an inconsistency in your blog. You say 3.10.X is\r\n> > > safe, but the graph you show with the poor performance seems to be from\r\n> > > 3.13.X which as I understand it is a later kernel. 
Can you clarify which\r\n> > > 3.X kernels are good to use and which are not?\r\n> >\r\n> > Sorry to cut in -\r\n> >\r\n> > So far we've found kernel 3.18 to be excellent for postgres 9.3 performance (pgbench + our own queries run much faster than with the 2.6.32-504 centos 6 kernel, and we haven't encountered random stalls or slowness).\r\n> >\r\n> > We use elrepo to get prebuilt rpms of the latest mainline stable kernel (kernel-ml).\r\n> >\r\n> > http://elrepo.org/tiki/kernel-ml\r\n> >\r\n> > Graeme Bell\r\n> >\r\n> > --\r\n> > Sent via pgsql-performance mailing list ([email protected])\r\n> > To make changes to your subscription:\r\n> > http://www.postgresql.org/mailpref/pgsql-performance\r\n> >\r\n> \r\n> \r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Apr 2015 13:35:42 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "On Thu, Apr 9, 2015 at 7:35 AM, Graeme B. Bell <[email protected]> wrote:\n> ext4 settings\n>\n> ext4, nobarrier\n> noatime+nodatime,\n> stripe&stride aligned between raid10 & ext4 correctly.\n>\n>\n> Some other useful things to know\n>\n> -- h710p\n> readahead disabled on H710P\n> writeback cache enabled on H710P\n> Direct IO enabled on H710P\n>\n> -- os filesystem settings\n> linux readahead enabled (16384),\n> nr_requests=975\n> NOOP scheduler\n> non-NUMA\n>\n> -- pg\n> io_concurrency on\n> async commit.*** see below!\n>\n> All settings were kept identical on the server before and after the kernel change, so this performance increase can be entirely attributed to the newer kernel and its synergies with our configuration. 3.18 contains about 5-10 years of linux kernel development vs. 2.6 kernels (except where backported).\n>\n> I have conducted quite a lot of plug-pull testing with diskchecker.pl, and rather a lot of testing of scheduling/IO/RAID controller/etc parameters. The OS/RAID controller/file system settings are as fast as I've been able to achieve without compromising database integrity (please note: this server can run async_commit because of the work we use it for, but we do not use that setting on our other main production servers).\n>\n> Our local DBs run extremely nicely for all our normal queries which involve quite a mix of random small IO and full-table operations on e.g. 20GB+ tables , so they're not optimised for pgbench specifically.\n\nIt would be handy to see a chart comparing 3.11, 3.13 and 3.18 as\nwell, to see if most / any of those performance gains came in earlier\nkernels, but after 2.6 or 3.2 etc.\n\nCan confirm that for pg purposes, 3.2 is basically broken in some not\nto great ways. 
We've had servers that were overloaded at load factors\nof 20 or 30 drop down to 5 or less with just a change from 3.2 to\n3.11/3.13 on ubuntu 12.04\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Apr 2015 09:15:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "\nThe 3.10 mainline had a bug which crashes on boot for us, I think it was the network card driver for the R620. Can't recall. \nThe equipment is in continual use so unfortunately we can't test other kernels at the moment but hopefully someone else may be interested.\nIt's probably valuable to compare against 3.19 too for anyone who doesn't want to delve into kernel archeology.\n\nKernel performance gains will be sensitive to things like VM settings, scheduler settings, NUMA, hardware choice, BIOS settings etc, use of virtualisation, so it depends which codepaths you end up running. So it may not be meaningful to compare kernels without a range of system configurations to measure against. ie. Your mileage will probably vary depending on how far you've tuned your disk configuration, memory, etc. \n\nGraeme Bell\n\nOn 09 Apr 2015, at 17:15, Scott Marlowe <[email protected]> wrote:\n\n> On Thu, Apr 9, 2015 at 7:35 AM, Graeme B. Bell <[email protected]> wrote:\n>> ext4 settings\n>> \n>> ext4, nobarrier\n>> noatime+nodatime,\n>> stripe&stride aligned between raid10 & ext4 correctly.\n>> \n>> \n>> Some other useful things to know\n>> \n>> -- h710p\n>> readahead disabled on H710P\n>> writeback cache enabled on H710P\n>> Direct IO enabled on H710P\n>> \n>> -- os filesystem settings\n>> linux readahead enabled (16384),\n>> nr_requests=975\n>> NOOP scheduler\n>> non-NUMA\n>> \n>> -- pg\n>> io_concurrency on\n>> async commit.*** see below!\n>> \n>> All settings were kept identical on the server before and after the kernel change, so this performance increase can be entirely attributed to the newer kernel and its synergies with our configuration. 3.18 contains about 5-10 years of linux kernel development vs. 2.6 kernels (except where backported).\n>> \n>> I have conducted quite a lot of plug-pull testing with diskchecker.pl, and rather a lot of testing of scheduling/IO/RAID controller/etc parameters. 
The OS/RAID controller/file system settings are as fast as I've been able to achieve without compromising database integrity (please note: this server can run async_commit because of the work we use it for, but we do not use that setting on our other main production servers).\n>> \n>> Our local DBs run extremely nicely for all our normal queries which involve quite a mix of random small IO and full-table operations on e.g. 20GB+ tables , so they're not optimised for pgbench specifically.\n> \n> It would be handy to see a chart comparing 3.11, 3.13 and 3.18 as\n> well, to see if most / any of those performance gains came in earlier\n> kernels, but after 2.6 or 3.2 etc.\n> \n> Can confirm that for pg purposes, 3.2 is basically broken in some not\n> to great ways. We've had servers that were overloaded at load factors\n> of 20 or 30 drop down to 5 or less with just a change from 3.2 to\n> 3.11/3.13 on ubuntu 12.04\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 Apr 2015 13:29:38 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some performance testing?"
},
{
"msg_contents": "Scott,\n\n> Can confirm that for pg purposes, 3.2 is basically broken in some not\n> to great ways. We've had servers that were overloaded at load factors\n> of 20 or 30 drop down to 5 or less with just a change from 3.2 to\n> 3.11/3.13 on ubuntu 12.04\n\nThat's correct, and 3.5 shares the same problems. The underlying issue\nwas that 3.X was tweaked to be MUCH more aggressive about\ncache-clearing, to the point where it would be evicting data from the FS\ncache which had just been read in and hadn't even been used yet. For\nsome reason, this aggressive eviction got worse the more processes on\nthe system which were using the FS cache, so where you really see it is\nwhen you have more processes with cache than you have cores.\n\nIt's pretty easy to demonstrate just using pgbench, with a database\nlarger than RAM, and 2X as many clients as cores. You'll see that\nkernels 3.2 and 3.5 will do 3X to 5X as much IO for the same workload as\n3.10 and later will do.\n\nGreame,\n\nOn 04/09/2015 04:01 AM, Graeme B. Bell wrote:> performance with 2.6:\n(pgbench, size 100, 32 clients)\n>\n> 48 651 transactions per second (read only)\n> 6 504 transactions per second (read-write)\n>\n>\n> performance with 3.18 (pgbench, size 100, 32 clients)\n>\n> 129 303 transactions per second (read only)\n> 16 895 transactions (read-write)\n\nThanks for that data! I'm glad to see that 3.18 has improved so much.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 Apr 2015 12:58:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some performance testing?"
}
] |
[
{
"msg_contents": "A tangent to the performance testing thread here, but an important issue that you will see come up in your work this year or next. \n\n\"PCIe SSD\" may include AHCI PCI SSD or NVMe PCI SSD.\n\nAHCI = old style, basically it's faster than SATA3 but quite similar in terms of how the operating system sees the flash device.\nNVMe = new style, requires a very new motherboard, operating system & drivers, but extremely fast and low latency, very high IOPS. \nFor example, Macbooks have PCIe SSDs in them, but not NVMe (currently).\n\nThe difference is very important since NVMe offers multiples of performance in terms of everything we love: lower latency, higher IOPS, lower CPU overhead and higher throughput. \nhttp://www.anandtech.com/show/7843/testing-sata-express-with-asus/4\nscroll down to the \"App to SSD IO Read Latency\" graph. Look at the two bottom lines.\n\nSo I'd suggest it's probably worth noting in any benchmark if you are using NVMe and if so which driver version, since development is ongoing.\n\nOn the topic of PCIe NVMe SSDs, some interesting reading:\n\n- http://www.tweaktown.com/reviews/6773/samsung-xs1715-1-6tb-2-5-inch-nvme-pcie-enterprise-ssd-review/index.html\n\"it can deliver 750,000 random read IOPS and 115,000 write IOPS \"\n\n- or any of these nice toys... \nhttp://imagescdn.tweaktown.com/content/6/7/6773_11777_samsung_xs1715_1_6tb_2_5_inch_nvme_pcie_enterprise_ssd_review.png\n\nall with capacitor backing (which you should plug-pull test, of course).\n\nGraeme.\n\n\n> I currently have access to a matched pair of 20-core, 128GB RAM servers\n> with SSD-PCI storage, for about 2 weeks before they go into production.\n> Are there any performance tests people would like to see me run on\n> these? Otherwise, I'll just do some pgbench and DVDStore.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Apr 2015 14:45:24 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "NVMe or AHCI PCI-express? A comment for people benchmarking... "
}
] |
[
{
"msg_contents": "Hi all. Using PG-9.4 I have a query which goes something like this The \nsimplified schema is like this: CREATE TABLE message( id serial PRIMARY KEY, \nsubject varchar NOT NULL, folder_id integer NOT NULL REFERENCES folder(id), \nreceived timestamp not null, fts_all tsvector ); create index \nmessage_fts_all_folder_idx ON message using gin (fts_all, folder_id); SELECT \nm.id, m.subject FROM message m WHERE m.folder_id = 1 AND m.fts_all @@ \nto_tsquery('simple', 'foo') ORDER BY received DESC LIMIT 10; ... On my \ndataset it uses an index I have on (folder_id, received DESC), then filters the \nresult, which is not optimal when searching in > 1million messages and the \nresult is large and I'm only after the first (newest) 10. What I'd like is to \nhave an index like this: create index message_fts_all_folder_idx ON message \nusing gin (fts_all, folder_id, received DESC); but GIN doesn't allow ASC/DESC \nmodifiers. Any hints on how to optimize this? Thanks. -- Andreas Joseph \nKrogh CTO / Partner - Visena AS Mobile: +47 909 56 963 [email protected] \n<mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Thu, 9 Apr 2015 23:39:26 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cannot get query to use btree-gin index when ORDER BY"
}
] |
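On why the planner's choice matters here: with only the (folder_id, received DESC) btree the rows come out pre-sorted but must all be filtered, while with only the GIN index the matches come out unsorted and the newest 10 must still be selected. The second half of that trade-off, top-N selection from an unordered match set, can be shown in miniature (plain Python with made-up data; this is an illustration, not PostgreSQL internals):

```python
import heapq
from datetime import datetime, timedelta

# Hypothetical candidate rows as (received, id, subject) tuples, i.e. what a
# GIN scan on (fts_all, folder_id) would return: matching but unordered.
base = datetime(2015, 4, 9)
candidates = [(base - timedelta(minutes=i * 7 % 1000), i, "msg %d" % i)
              for i in range(100_000)]

# A full sort costs O(n log n); a bounded heap keeps only the newest 10,
# which is the work ORDER BY received DESC LIMIT 10 implies on unordered input.
newest_ten = heapq.nlargest(10, candidates)

assert len(newest_ten) == 10
assert newest_ten == sorted(candidates, reverse=True)[:10]
```

Either way the engine pays for the large unordered match set, which is why the question of a combined, ordered index comes up at all.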
[
{
"msg_contents": "Hi all. I have a pg_largeobject of ~300GB size and when I run \"vacuumlo -n \n<dbname>\", I get: Would remove 82172 large objects from database \"<dbname>\". \nSo I'm running without \"-n\" to do the actual work, but it seems to take \nforever. The disks are 8 SAS 10K HDD drives in RAID5. Any hints on how long \nthis is supposed to take? Thanks. -- Andreas Joseph Krogh CTO / Partner - \nVisena AS Mobile: +47 909 56 963 [email protected] <mailto:[email protected]> \nwww.visena.com <https://www.visena.com> <https://www.visena.com>",
"msg_date": "Wed, 15 Apr 2015 03:46:34 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of vacuumlo"
}
] |
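For context on what the long-running step is doing: vacuumlo's approach is to build the set of large-object OIDs still referenced from user-table oid/lo columns and unlink every stored object not in that set. A toy Python stand-in for that set difference (hypothetical numbers chosen to match the "Would remove 82172" report; not vacuumlo's actual code):

```python
# Toy stand-in for vacuumlo's core step: large objects whose OIDs are not
# referenced from any user-table oid/lo column are orphans to unlink.
stored_lo_oids = set(range(10_000, 110_000))            # all OIDs in pg_largeobject
still_referenced = stored_lo_oids - set(range(10_000, 92_172))
# hypothetical: 82,172 objects are no longer referenced anywhere

orphans = stored_lo_oids - still_referenced
assert len(orphans) == 82_172   # matches "Would remove 82172 large objects"

# vacuumlo then issues one lo_unlink() per orphan, committing in batches,
# which is why a large backlog on slow rotating storage can take a long time.
```

The set computation itself is cheap; the hours go into the per-object unlink and the resulting random I/O on pg_largeobject.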
[
{
"msg_contents": "Greetings,\n\nWe have been having some pretty painful site outages related to heavy swapping, even though the system may appear to have as much as 10GB of free memory.\nIt seems that as soon as the system hits ~22GB (of 32GB) of memory usage it starts to swap. As soon as we go below ~22GB, swap is released. \n\nDuring the worst outages we see:\n\nheavy swapping (10-15GB)\nheavy disk IO (up to 600ms)\nheavy CPU load: 4.0 (load over 100+ with 26 cores)\navailable memory: (6-8GB)\n\nHere's the host information, it's a dedicated physical host, not a VM.\n\nubuntu-14.04.1 LTS\nLinux db004 3.13.0-34-generic #60-Ubuntu SMP Wed Aug 13 15:45:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux\n24 cores, 32GB RAM, 32GB swapfile\nRAID 10 with SSDs\n\nPostgres version:\n\npostgresql-9.3.5\npostgis-2.1.4 extension\n\nHere's our /etc/sysctl.conf:\n\nkernel.shmmax = 16850395136\nkernel.shmall = 4113866\nvm.overcommit_memory = 2\nvm.overcommit_ratio = 50\nvm.swappiness = 0 \nvm.dirty_ratio = 10 ## maximum 3.2GB dirty cache size \nvm.dirty_background_ratio = 5 ## ratio at which disk begins to flush cache (1.6GB)\n\nWe are setting our shared_buffers, work_mem and maintenance_work_mem very high which I suspected is reason for swapping.\nThe huge mystery to me is that during heavy swap there appears to be ~8-10GB free memory in cache.\nWe wondered whether that unused memory in cache was being consumed by our memory settings in postgresql.conf:\n\nshared_buffers = 8GB ## 25% of system memory (32GB)\nmaintenance_work_mem = 8GB ## autovacuuming on, 3 workers (defaults)\nwork_mem = 256MB ## we have as many as 170 connections\n\nWe have average around 170 connections and have been moving everything to pgbouncer to reduce this count.\nRough estimate of cost with work_mem set to 256MB: 170 x 256MB = 43510MB (43GB)\n\nIs it possible that the high work_mem setting is causing the connections to hold on to the extra available memory?\nThis is our assumption so we plan on dialing down 
work_mem and maintenance_work_mem to sane values.\nWe have also ordered more RAM (increase from 32GB to 96GB) but I would like to understand what is happening.\n\nHere are the values we are going to change:\n\nmaintenance_work_mem = 1GB\nwork_mem = 64MB\n\nOne last thing, we disabled THP (transparent huge pages) because we were seeing compaction errors:\n\n[db003.prod:~] root% egrep 'compact_(fail|stall)' /proc/vmstat\ncompact_stall 34682729\ncompact_fail 32915396\n\nHere are our competing theories of why we are swapping as much as 10-15GB with 6GB-10GB of free memory:\n\n1.) maintenance_work_mem and work_mem are set too high causing postgres to allocate too much memory\n2.) The 6GB - 10GB of free memory observed during swapping is postgresql's shared buffer \n3.) Linux filesystem caching is tuned incorrectly, maybe SSD related?\n4.) This is a NUMA related issue similar to the Mysql Swap Insanity issue:\n\nhttp://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/\nhttp://frosty-postgres.blogspot.com/2012/08/postgresql-numa-and-zone-reclaim-mode.html\nhttp://www.postgresql.org/message-id/CAGTBQpacrSDcN10rTwRbH+AGm2_y0Qao6CJDoyvEp504iFbdrw@mail.gmail.com\n\nNote, zone_reclaim_mode appears to be disabled on our database host.\n\nroot% cat /proc/sys/vm/zone_reclaim_mode\n0\n\nHoping someone here can help us sort this out because it's a huge mystery for us.\nWe are going to add more RAM but I'm trying to understand what is happening.\n\nThank you kindly.\n\nChristian Gough\n",
"msg_date": "Fri, 17 Apr 2015 14:25:15 -0400",
"msg_from": "Christian Gough <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql Host Swapping Hard With Abundant Free Memory"
},
{
"msg_contents": "On 4/17/15 1:25 PM, Christian Gough wrote:\n> Here are our competing theories of why we are swapping as much as 10-15GB with 6GB-10GB of free memory:\n\nHow are you determining that you're swapping?\nHow are you measuring 'free memory'?\n\n> 1.) maintenance_work_mem and work_mem are set too high causing postgres to allocate too much memory\n\nThere's not much that uses maintenance_work_mem. (AUTO)VACUUM, CREATE \nINDEX... I think that's it. Also, I believe it's internally limited to 1G.\n\nAs for work_mem, it's hard to say. As your math shows, if *every* \nconnection suddenly allocated a full work_mem then you'd be in trouble. \nBut that would be a pretty extreme situation. Even though a backend can \nactually have multiple work_mem consuming operations in use at once, so \nit could theoretically use several times work_mem, in reality I think \nit's very rare for every backend to use even a fraction of work_mem.\n\nBut, there's no need to theorize here; work_mem will be reported as \nallocated to each backend and will not be considered free memory by the OS.\n\n> 2.) The 6GB - 10GB of free memory observed during swapping is postgresql's shared buffer\n\nIf anything, I'd expect the OS to avoid swapping a shared memory segment \n(if not flat-out refuse to swap it).\n\nShared memory certainly shouldn't be counted as free either. The only \ntricky bit here is many OSes will report shared memory as part of the \nmemory footprint *for each backend*. So if you have 170 backends and 8GB \nshared buffers, it could look like you're using 13.6TB of memory (which \nyou're obviously not).\n\n> 3.) Linux filesystem caching is tuned incorrectly, maybe SSD related?\n\nPossibly, though I don't think it'd be SSD related...\n\n> 4.) This is a NUMA related issue similar to the Mysql Swap Insanity issue:\n\nSince you have that turned off, I don't think so.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n",
"msg_date": "Wed, 22 Apr 2015 18:33:11 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Host Swapping Hard With Abundant Free Memory"
}
] |
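The worst-case arithmetic behind the original post's concern can be spelled out explicitly. work_mem is a per-sort/per-hash ceiling rather than a per-connection allocation (and, as noted in the thread, backends rarely all hit it at once), so this is an upper bound, not an expectation. A sketch using the figures from the post:

```python
GB = 1024  # MB per GB

shared_buffers_mb = 8 * GB         # shared_buffers = 8GB
maintenance_work_mem_mb = 8 * GB   # maintenance_work_mem = 8GB, 3 av workers
work_mem_mb = 256                  # work_mem = 256MB
connections = 170

# Worst case: every backend runs one work_mem-sized operation at once.
worst_case_work_mem = connections * work_mem_mb
assert worst_case_work_mem == 43_520   # MB, ~42.5 GB

# Add shared buffers plus three autovacuum workers at full maintenance_work_mem:
worst_case_total = (worst_case_work_mem + shared_buffers_mb
                    + 3 * maintenance_work_mem_mb)
assert worst_case_total // GB == 74    # ~74 GB of ceiling against 32 GB of RAM
```

With vm.overcommit_memory = 2 and overcommit_ratio = 50, ceilings that far above physical RAM leave very little headroom before the kernel starts refusing or swapping allocations.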
[
{
"msg_contents": "Given the table:\n\nCREATE TABLE dates (id SERIAL, d DATE NOT NULL, t TEXT NOT NULL)\n\nWith an *index* on field d. The following two queries are functionally\nequivalent:\n\n1. SELECT * FROM dates WHERE d >= '1900-01-01'\n2. SELECT * FROM dates WHERE EXTRACT(year from d) >= 1900'\n\nBy functionally equivalent, they will return the same result set.\n\nQuery 2 does not use the index, adding a performance cost. It seems\nthere is an opportunity for optimization to handle these two queries\nequivalently to take advantage of the index.\n\nSome database abstraction layers have attempted to workaround this\nlimitation by rewriting EXTRACT(year ...) queries into a query more\nlike query 1. For example: Django's ORM does exctly this. Rather than\nall abstraction layers trying to optimize this case, maybe it could be\npushed to the database layer.\n\nI have written a test script that demonstrates that these functionally\nequivalent queries have different performance characteristics. 
The\nscript and results are provide below:\n\nRESULTS:\n\n----\nEXPLAIN SELECT * FROM dates WHERE d >= '1900-01-01'\n QUERY PLAN\n----------------------------------------------------------------------------\n Bitmap Heap Scan on dates (cost=9819.23..26390.15 rows=524233 width=40)\n Recheck Cond: (d >= '1900-01-01'::date)\n -> Bitmap Index Scan on d_idx (cost=0.00..9688.17 rows=524233 width=0)\n Index Cond: (d >= '1900-01-01'::date)\n(4 rows)\n\nEXPLAIN SELECT * FROM dates WHERE EXTRACT(year from d) >= 1900\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Seq Scan on dates (cost=0.00..37540.25 rows=524233 width=40)\n Filter: (date_part('year'::text, (d)::timestamp without time zone)\n>= 1900::double precision)\n(2 rows)\n\nTiming\nselect_without_extract: 284.233350s\nselect_with_extract: 323.106491s\n----\n\nSCRIPT:\n\n----\n#!/usr/bin/python3\n\nimport datetime\nimport subprocess\nimport random\nimport timeit\nimport sys\n\n\nsubprocess.check_call(['psql', 'postgres', '-c', 'DROP DATABASE IF\nEXISTS datetest'], stdout=subprocess.DEVNULL)\nsubprocess.check_call(['psql', 'postgres', '-c', 'CREATE DATABASE\ndatetest'], stdout=subprocess.DEVNULL)\nsubprocess.check_call(['psql', 'datetest', '-c', 'CREATE TABLE dates\n(id SERIAL, d DATE NOT NULL, t TEXT NOT NULL)'],\nstdout=subprocess.DEVNULL)\n\n\ndef chunks(n, l):\n i = 0\n while i < len(l):\n yield l[i:i+n]\n i += n\n\nd = datetime.date(1800, 1, 1)\ntoday = datetime.date.today()\nvalues = []\nwhile d < today:\n values.extend('(\\'%s\\', \\'%s\\')' % (d, d) for i in range(20))\n d += datetime.timedelta(days=1)\nrandom.shuffle(values)\nfor chunk in chunks(1000, values):\n s = ','.join(chunk)\n subprocess.check_call(['psql', 'datetest', '-c', 'INSERT INTO\ndates (d, t) VALUES %s' % s], stdout=subprocess.DEVNULL)\n\n\nsubprocess.check_call(['psql', 'datetest', '-c', 'CREATE INDEX d_idx\nON dates (d)'], stdout=subprocess.DEVNULL)\nprint('EXPLAIN SELECT * FROM 
dates WHERE d >= \\'1900-01-01\\'')\nsys.stdout.flush()\nsubprocess.check_call(['psql', 'datetest', '-c', 'EXPLAIN SELECT *\nFROM dates WHERE d >= \\'1900-01-01\\''])\nprint('EXPLAIN SELECT * FROM dates WHERE EXTRACT(year from d) >= 1900')\nsys.stdout.flush()\nsubprocess.check_call(['psql', 'datetest', '-c', 'EXPLAIN SELECT *\nFROM dates WHERE EXTRACT(year from d) >= 1900'])\n\n\ndef select_without_extract():\n subprocess.check_call(['psql', 'datetest', '-c', 'SELECT * FROM\ndates WHERE d >= \\'1900-01-01\\''], stdout=subprocess.DEVNULL)\n\ndef select_with_extract():\n subprocess.check_call(['psql', 'datetest', '-c', 'SELECT * FROM\ndates WHERE EXTRACT(year from d) >= 1900'], stdout=subprocess.DEVNULL)\n\nprint('Timing')\nsys.stdout.flush()\n\nv = timeit.timeit('select_without_extract()', setup='from __main__\nimport select_without_extract', number=100)\nprint('select_without_extract: %fs' % v)\nsys.stdout.flush()\n\nv = timeit.timeit('select_with_extract()', setup='from __main__ import\nselect_with_extract', number=100)\nprint('select_with_extract: %fs' % v)\nsys.stdout.flush()\n---\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Apr 2015 10:16:29 -0700",
"msg_from": "Jon Dufresne <[email protected]>",
"msg_from_op": true,
"msg_subject": "extract(year from date) doesn't use index but maybe could?"
},
{
"msg_contents": "\n\nOn 04/19/15 19:16, Jon Dufresne wrote:\n> Given the table:\n>\n> CREATE TABLE dates (id SERIAL, d DATE NOT NULL, t TEXT NOT NULL)\n>\n> With an *index* on field d. The following two queries are functionally\n> equivalent:\n>\n> 1. SELECT * FROM dates WHERE d >= '1900-01-01'\n> 2. SELECT * FROM dates WHERE EXTRACT(year from d) >= 1900'\n>\n> By functionally equivalent, they will return the same result set.\n>\n> Query 2 does not use the index, adding a performance cost. It seems\n> there is an opportunity for optimization to handle these two queries\n> equivalently to take advantage of the index.\n\nOr you might try creating an expression index ...\n\nCREATE INDEX date_year_idx ON dates((extract(year from d)));\n\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Apr 2015 19:42:12 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe\n could?"
},
{
"msg_contents": "On Sun, Apr 19, 2015 at 10:42 AM, Tomas Vondra\n<[email protected]> wrote:\n>\n>\n> On 04/19/15 19:16, Jon Dufresne wrote:\n>>\n>> Given the table:\n>>\n>> CREATE TABLE dates (id SERIAL, d DATE NOT NULL, t TEXT NOT NULL)\n>>\n>> With an *index* on field d. The following two queries are functionally\n>> equivalent:\n>>\n>> 1. SELECT * FROM dates WHERE d >= '1900-01-01'\n>> 2. SELECT * FROM dates WHERE EXTRACT(year from d) >= 1900'\n>>\n>> By functionally equivalent, they will return the same result set.\n>>\n>> Query 2 does not use the index, adding a performance cost. It seems\n>> there is an opportunity for optimization to handle these two queries\n>> equivalently to take advantage of the index.\n>\n>\n> Or you might try creating an expression index ...\n>\n> CREATE INDEX date_year_idx ON dates((extract(year from d)));\n>\n\nCertainly, but won't this add additional overhead in the form of two\nindexes; one for the column and one for the expression?\n\nMy point is, why force the user to take these extra steps or add\noverhead when the the two queries (or two indexes) are functionally\nequivalent. Shouldn't this is an optimization handled by the database\nso the user doesn't need to hand optimize these differences?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Apr 2015 13:10:42 -0700",
"msg_from": "Jon Dufresne <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe could?"
},
{
"msg_contents": "On Sun, 2015-04-19 at 13:10 -0700, Jon Dufresne wrote:\n> On Sun, Apr 19, 2015 at 10:42 AM, Tomas Vondra\n> <[email protected]> wrote:\n> > On 04/19/15 19:16, Jon Dufresne wrote:\n> >> Given the table:\n> >> CREATE TABLE dates (id SERIAL, d DATE NOT NULL, t TEXT NOT NULL)\n> >> With an *index* on field d. The following two queries are functionally\n> >> equivalent:\n> >> 1. SELECT * FROM dates WHERE d >= '1900-01-01'\n> >> 2. SELECT * FROM dates WHERE EXTRACT(year from d) >= 1900'\n> >> By functionally equivalent, they will return the same result set.\n> >> Query 2 does not use the index, adding a performance cost. It seems\n> >> there is an opportunity for optimization to handle these two queries\n> >> equivalently to take advantage of the index.\n> > Or you might try creating an expression index ..\n> > CREATE INDEX date_year_idx ON dates((extract(year from d)));\n> Certainly, but won't this add additional overhead in the form of two\n> indexes; one for the column and one for the expression?\n> My point is, why force the user to take these extra steps or add\n> overhead when the the two queries (or two indexes) are functionally\n> equivalent. \n\nBut they aren't functionally equivalent. One is an index on a\ndatetime/date, the other is an index just on the year [a DOUBLE].\nDate/datetimes potentially have time zones, integer values do not - in\ngeneral time values are an order of magnitude more complicated than\npeople expect.\n\n> Shouldn't this is an optimization handled by the database\n> so the user doesn't need to hand optimize these differences?\n\nSometimes \"d >= '1900-01-01'\" and \"EXTRACT(year from d) >= 1900\" may be\nequivalent; but not always.\n\n-- \nAdam Tauno Williams <mailto:[email protected]> GPG D95ED383\nSystems Administrator, Python Developer, LPI / NCLA\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Apr 2015 17:03:47 -0400",
"msg_from": "Adam Tauno Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe\n could?"
},
{
"msg_contents": "\n\nOn 04/19/15 22:10, Jon Dufresne wrote:\n> On Sun, Apr 19, 2015 at 10:42 AM, Tomas Vondra\n> <[email protected]> wrote:\n>\n>> Or you might try creating an expression index ...\n>>\n>> CREATE INDEX date_year_idx ON dates((extract(year from d)));\n>>\n>\n> Certainly, but won't this add additional overhead in the form of two\n> indexes; one for the column and one for the expression?\n\nIt will, but it probably will be more efficient than poorly performing \nqueries. Another option is to use the first type of queries with \nexplicit date ranges, thus making it possible to use a single index.\n\n>\n> My point is, why force the user to take these extra steps or add\n> overhead when the the two queries (or two indexes) are functionally\n> equivalent. Shouldn't this is an optimization handled by the\n> database so the user doesn't need to hand optimize these differences?\n\nTheoretically yes.\n\nBut currently the \"extract\" function call is pretty much a black box for \nthe planner, just like any other function - it has no idea what happens \ninside, what fields are extracted and so on. It certainly is unable to \ninfer the date range as you propose.\n\nIt's possible that in the future someone will implement an optimization \nlike this, but I'm not aware of anyone working on that and I wouldn't \nhold my breath.\n\nUntil then you either have to create an expression index, or use queries \nwith explicit date ranges (without \"extract\" calls).\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Apr 2015 23:12:28 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe\n could?"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 04/19/15 22:10, Jon Dufresne wrote:\n>> My point is, why force the user to take these extra steps or add\n>> overhead when the the two queries (or two indexes) are functionally\n>> equivalent. Shouldn't this is an optimization handled by the\n>> database so the user doesn't need to hand optimize these differences?\n\n> Theoretically yes.\n\n> But currently the \"extract\" function call is pretty much a black box for \n> the planner, just like any other function - it has no idea what happens \n> inside, what fields are extracted and so on. It certainly is unable to \n> infer the date range as you propose.\n\n> It's possible that in the future someone will implement an optimization \n> like this, but I'm not aware of anyone working on that and I wouldn't \n> hold my breath.\n\nYeah. In principle you could make the planner do this. As Adam Williams\nnotes nearby, there's a problem with lack of exact consistency between\nextract() semantics and straight timestamp comparisons; but you could\nhandle that by extracting indexable expressions that are considered lossy,\nmuch as we do with anchored LIKE and regexp patterns. The problem is that\nthis would add significant overhead to checking for indexable clauses.\nWith \"x LIKE 'foo%'\" you just need to make a direct check whether x is\nan indexed column; this is exactly parallel to noting whether x is indexed\nin \"x >= 'foo'\", and it doesn't require much additional machinery or\ncycles to reject the common case that there's no match to x. 
But if you\nwant to notice whether d is indexed in \"extract(year from d) = 2015\",\nthat requires digging down another level in the expression, so it's going\nto add overhead that's not there now, even in cases that have nothing to\ndo with extract() let alone have any chance of benefiting.\n\nWe might still be willing to do it if there were a sufficiently wide range\nof examples that could be handled by the same extra machinery, but this\ndoesn't look too promising from that angle: AFAICS only the \"year\" case\ncould yield a useful index restriction.\n\nSo the short answer is that whether it's worth saving users from\nhand-optimizing such cases depends a lot on what it's going to cost in\nadded planning time for queries that don't get any benefit. This example\ndoesn't look like a case that's going to win that cost/benefit tradeoff\ncomparison.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Apr 2015 17:33:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe could?"
},
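Tom's lossy-transform idea — treat the extract() condition like an anchored LIKE, derive an indexable but slightly-too-wide bound, and recheck the exact predicate on the rows the index returns — can be sketched concretely. A hypothetical illustration over plain date values (plain Python, not planner code; the widened bound stands in for the time-zone boundary slop he describes):

```python
from datetime import date, timedelta

def lossy_bound(year):
    # Indexable, lossy lower bound for extract(year from d) >= year:
    # widened by a day to stay correct across boundary/time-zone cases.
    return date(year, 1, 1) - timedelta(days=1)

def exact(d, year):
    return d.year >= year   # the original, non-indexable predicate

rows = [date(1899, 12, 30), date(1899, 12, 31), date(1900, 1, 1), date(1950, 6, 15)]
bound = lossy_bound(1900)

index_hits = [d for d in rows if d >= bound]         # what the index scan returns
final = [d for d in index_hits if exact(d, 1900)]    # recheck, as with lossy LIKE

assert index_hits == [date(1899, 12, 31), date(1900, 1, 1), date(1950, 6, 15)]
assert final == [date(1900, 1, 1), date(1950, 6, 15)]
```

The index admits one extra boundary row, and the recheck discards it; correctness never depends on the bound being tight, only on it never excluding a true match.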
{
"msg_contents": "On 2015-04-19 15:33, Tom Lane wrote:\n> \n>> It's possible that in the future someone will implement an optimization \n>> like this, but I'm not aware of anyone working on that and I wouldn't \n>> hold my breath.\n> \n> Yeah. In principle you could make the planner do this. As Adam Williams\n> notes nearby, there's a problem with lack of exact consistency between\n> extract() semantics and straight timestamp comparisons; but you could\n> handle that by extracting indexable expressions that are considered lossy,\n\nWhat about functions that are simpler such as upper()/lower()?\n\nOn 9.3, this:\n`select email from users where lower(first_name) = 'yves';\n\nis not using the index on first_name (Seq Scan on first_name). This should be\neasy to implement?\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Apr 2015 16:05:14 -0600",
"msg_from": "Yves Dorfsman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe\n could?"
},
{
"msg_contents": "Yves Dorfsman <[email protected]> writes:\n> What about functions that are simpler such as upper()/lower()?\n\nIf you think those are simpler, you're much mistaken :-(. For instance,\n\"lower(first_name) = 'yves'\" would have to be translated to something\nlike \"first_name IN ('yves', 'yveS', 'yvEs', 'yvES', ..., 'YVES')\"\n-- 16 possibilities altogether, or 2^N for an N-character string.\n(And that's just assuming ASCII up/down-casing, never mind the interesting\nrules in some non-English languages.) In a case-sensitive index, those\nvarious strings aren't going to sort consecutively, so we'd end up needing\na separate index probe for each possibility.\n\nextract(year from date) agrees with timestamp comparison up to boundary\ncases, that is a few hours either way at a year boundary depending on the\ntimezone situation. So you could translate it to a lossy-but-indexable\ntimestamp comparison condition and not expect to scan too many index items\nthat don't satisfy the original extract() condition. But I don't see how\nto make something like that work for mapping case-insensitive searches\nonto case-sensitive indexes.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Apr 2015 18:29:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe could?"
},
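Tom's 2^N point is easy to verify: mapping lower(first_name) = 'yves' onto a case-sensitive index would mean probing every casing of the string, and those casings don't sort together. A quick check (plain Python, ASCII-only as in his example):

```python
from itertools import product

def case_variants(s):
    # All ASCII case spellings that lower() maps onto s: 2^N for N letters.
    return [''.join(chars)
            for chars in product(*((c.lower(), c.upper()) for c in s))]

variants = case_variants('yves')
assert len(variants) == 2 ** 4 == 16
assert 'YVES' in variants and 'yvEs' in variants

# In a case-sensitive (ASCII) sort, unrelated strings land between the
# variants, so a btree could not cover all 16 probes with one range scan:
domain = sorted(variants + ['alice', 'bob'])
assert domain.index('YVES') < domain.index('alice') < domain.index('yves')
```

This is exactly why the practical answer is an expression index on lower(first_name) rather than a planner rewrite.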
{
"msg_contents": "On 20/04/15 10:29, Tom Lane wrote:\n> Yves Dorfsman <[email protected]> writes:\n>> What about functions that are simpler such as upper()/lower()?\n> If you think those are simpler, you're much mistaken :-(. For instance,\n> \"lower(first_name) = 'yves'\" would have to be translated to something\n> like \"first_name IN ('yves', 'yveS', 'yvEs', 'yvES', ..., 'YVES')\"\n> -- 16 possibilities altogether, or 2^N for an N-character string.\n> (And that's just assuming ASCII up/down-casing, never mind the interesting\n> rules in some non-English languages.) In a case-sensitive index, those\n> various strings aren't going to sort consecutively, so we'd end up needing\n> a separate index probe for each possibility.\n>\n> extract(year from date) agrees with timestamp comparison up to boundary\n> cases, that is a few hours either way at a year boundary depending on the\n> timezone situation. So you could translate it to a lossy-but-indexable\n> timestamp comparison condition and not expect to scan too many index items\n> that don't satisfy the original extract() condition. But I don't see how\n> to make something like that work for mapping case-insensitive searches\n> onto case-sensitive indexes.\n>\n> \t\t\tregards, tom lane\n>\n>\nYeah, an event that happened at 2 am Thursday January 1st 2015 in New \nZealand, will be in the year 2014 for people of London in England!\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Apr 2015 12:13:12 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe\n could?"
},
{
"msg_contents": "On 4/19/15 4:33 PM, Tom Lane wrote:\n> We might still be willing to do it if there were a sufficiently wide range\n> of examples that could be handled by the same extra machinery, but this\n> doesn't look too promising from that angle: AFAICS only the \"year\" case\n> could yield a useful index restriction.\n\n\"date_trunc() op val\" is where we'd want to do this. Midnight, month and \nquarter boundaries are the cases I've commonly seen. The first is \nespecially bad, because people got in the habit of doing \ntimestamptz::date = '2015-1-1' and then got very irritated when that \ncast became volatile.\n\nPerhaps this could be better handled by having a way to translate \ndate/time specifications into a range? So something like \n\"date_trunc('...', timestamptz) = val\" would become \"date_trunk('...', \ntimestamptz) <@ [val, val+<correct interval based on truncation>)\". \n<,<=,>= and > wouldn't even need to be a range. To cover the broadest \nbase, we'd want to recognize that extract(year) could transform to \ndate_trunk('year',...), and timestamp/timestamptz::date transforms to \ndate_trunc('day',...). I think these transforms would be lossless, so \nthey could always be made, though we'd not want to do the transformation \nif there was already an index on something like date_trunc(...).\n\nI don't readily see any other data types where this would be useful, but \nthis is so common with timestamps that I think it's worth handling just \nthat case.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Apr 2015 11:15:12 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(year from date) doesn't use index but maybe\n could?"
}
] |
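Jim Nasby's proposed transform — turning a date_trunc equality into a half-open range that a plain btree can serve — can be written out concretely for the 'month' case. It is lossless for equality, which is what would make it safe to apply automatically. A sketch (plain Python over datetimes; the function name is hypothetical, and this is not planner code):

```python
from datetime import datetime

def month_trunc_to_range(val):
    # date_trunc('month', ts) = val  <=>  val <= ts < first instant of next month
    truncated = val.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if val != truncated:
        raise ValueError("val is not month-truncated: predicate is always false")
    nxt = (val.replace(year=val.year + 1, month=1) if val.month == 12
           else val.replace(month=val.month + 1))
    return val, nxt   # half-open interval [val, nxt)

lo, hi = month_trunc_to_range(datetime(2015, 4, 1))
ts = datetime(2015, 4, 22, 11, 15)

# Both forms agree; the range form can use an ordinary btree index on ts.
trunc_form = ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0) == datetime(2015, 4, 1)
assert (lo <= ts < hi) == trunc_form
assert (lo, hi) == (datetime(2015, 4, 1), datetime(2015, 5, 1))
```

The same pattern covers midnight ('day') and quarter boundaries; only the "width" of the interval changes with the truncation unit.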
[
{
"msg_contents": "This is a question about how to read \"explain analyze\". I've anonymized \ncolumn names and table names.\n\nIn the output of \"explain analyze\" below, what was the query doing \nbetween actual time 1.426 and 17.077?\n\nKind regards,\nAndomar\n\n\n HashAggregate (cost=862.02..862.62 rows=48 width=90) (actual \ntime=17.077..17.077 rows=0 loops=1)\n Group Key: col, col, col\n Buffers: shared hit=6018\n -> Nested Loop (cost=1.52..861.18 rows=48 width=90) (actual \ntime=17.077..17.077 rows=0 loops=1)\n Buffers: shared hit=6018\n -> Nested Loop (cost=1.09..26.74 rows=303 width=41) (actual \ntime=0.033..1.426 rows=384 loops=1)\n Buffers: shared hit=845\n -> Index Scan using ind on tbl (cost=0.42..8.44 rows=1 \nwidth=8) (actual time=0.010..0.011 rows=1 loops=1)\n Index Cond: (col = 123)\n Buffers: shared hit=4\n -> Index Scan using ind on tbl (cost=0.67..18.28 rows=2 \nwidth=49) (actual time=0.020..1.325 rows=384 loops=1)\n Index Cond: (col = col)\n Filter: (col = 'value')\n Rows Removed by Filter: 2720\n Buffers: shared hit=841\n -> Index Scan using index on tbl (cost=0.42..2.74 rows=1 \nwidth=57) (actual time=0.040..0.040 rows=0 loops=384)\n Index Cond: (col = col)\n Filter: (col = ANY (ARRAY[func('value1'::text), \nfunc('value2'::text)]))\n Rows Removed by Filter: 1\n Buffers: shared hit=5173\n Planning time: 0.383 ms\n Execution time: 17.128 ms\n\n\nVersion: PostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc \n(GCC) 4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Apr 2015 19:37:49 +0200",
"msg_from": "Andomar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan with missing timespans"
},
{
"msg_contents": "Andomar <[email protected]> wrote:\n\n> In the output of \"explain analyze\" below, what was the query\n> doing between actual time 1.426 and 17.077?\n\nLooping through 384 index scans of tbl, each taking 0.040 ms.\nThat's 15.36 ms. That leaves 0.291 ms unaccounted for, which means\nthat's about how much time the top level nested loop took to do its\nwork.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Apr 2015 19:47:03 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan with missing timespans"
},
{
"msg_contents": "> Looping through 384 index scans of tbl, each taking 0.040 ms.\n> That's 15.36 ms. That leaves 0.291 ms unaccounted for, which means\n> that's about how much time the top level nested loop took to do its\n> work.\n>\n\nThanks for your reply, interesting! I'd have thought that this line \nactually implied 0 ms:\n\n actual time=0.040..0.040\n\nBut based on your reply this means, it took between 0.040 and 0.040 ms \nfor each loop?\n\nIs there a way to tell postgres that a function will always return the \nsame result for the same parameter, within the same transaction?\n\nKind regards,\nAndomar\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Apr 2015 21:59:27 +0200",
"msg_from": "Andomar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan with missing timespans"
},
{
"msg_contents": "> On Apr 22, 2015, at 1:59 PM, Andomar <[email protected]> wrote:\n> \n> Is there a way to tell postgres that a function will always return the same result for the same parameter, within the same transaction?\n\n\nYup… read over the Function Volatility Categories <http://www.postgresql.org/docs/9.4/static/xfunc-volatility.html> page and decide which you need. What you’re describing is STABLE (or slightly stricter than STABLE, since STABLE makes that guarantee only for a single statement within a transaction).\n\n--\nJason Petersen\nSoftware Engineer | Citus Data\n303.736.9255\[email protected]",
"msg_date": "Wed, 22 Apr 2015 14:18:40 -0600",
"msg_from": "Jason Petersen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan with missing timespans"
},
{
"msg_contents": "Hello,\r\n\r\nAt Wed, 22 Apr 2015 21:59:27 +0200, Andomar <[email protected]> wrote in <[email protected]>\r\n> > Looping through 384 index scans of tbl, each taking 0.040 ms.\r\n> > That's 15.36 ms. That leaves 0.291 ms unaccounted for, which means\r\n> > that's about how much time the top level nested loop took to do its\r\n> > work.\r\n> >\r\n> \r\n> Thanks for your reply, interesting! I'd have thought that this line\r\n> actually implied 0 ms:\r\n> \r\n> actual time=0.040..0.040\r\n> \r\n> But based on your reply this means, it took between 0.040 and 0.040 ms\r\n> for each loop?\r\n\r\nYou might mistake how to read it (besides the scale:). The index\r\nscan took 40ms as the average through all loops. The number at\r\nthe left of '..' is \"startup time\".\r\n\r\nhttp://www.postgresql.org/docs/9.4/static/sql-explain.html\r\n\r\n# Mmm.. this doesn't explain about \"startup time\".. It's the time\r\n# taken from execution start to returning the first result.\r\n\r\nAt Wed, 22 Apr 2015 14:18:40 -0600, Jason Petersen <[email protected]> wrote in <[email protected]>\r\n> > On Apr 22, 2015, at 1:59 PM, Andomar <[email protected]> wrote:\r\n> > \r\n> > Is there a way to tell postgres that a function will always return the same result for the same parameter, within the same transaction?\r\n> \r\n> Yup… read over the Function Volatility Categories\r\n> <http://www.postgresql.org/docs/9.4/static/xfunc-volatility.html>\r\n> page and decide which you need. 
What you’re describing is\r\n> STABLE (or slightly stricter than STABLE, since STABLE makes\r\n> that guarantee only for a single statement within a\r\n> transaction).\r\n\r\nAnd you will see what volatility category does a function go\r\nunder in pg_proc system catalog.\r\n\r\n=# select proname, provolatile from pg_proc where oid = 'random'::regproc;\r\n proname | provolatile \r\n---------+-------------\r\n random | v\r\n\r\nrandom() is a volatile funciton.\r\n\r\nhttp://www.postgresql.org/docs/9.4/static/catalog-pg-proc.html\r\n\r\nregards,\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 Apr 2015 13:40:40 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan with missing timespans"
}
] |
[
{
"msg_contents": "I am using PostgreSQL to log data in my application. A number of rows are added periodically, but there are no updates or deletes. There are several applications that log to different databases.\n\nThis causes terrible disk fragmentation which again causes performance degradation when retrieving data from the databases. The table files are getting more than 50000 fragments over time (max table size about 1 GB).\n\nThe problem seems to be that PostgreSQL grows the database with only the room it need for the new data each time it is added. Because several applications are adding data to different databases, the additions are never contiguous.\n\nI think that preallocating lumps of a given, configurable size, say 4 MB, for the tables would remove this problem. The max number of fragments on a 1 GB file would then be 250, which is no problem. Is this possible to configure in PostgreSQL? If not, how difficult is it to implement in the database?\n\nThank you,\nJG\n\n\n\n\n\n\n\n\n\nI am using PostgreSQL to log data in my application. A number of rows are added periodically, but there are no updates or deletes.\n There are several applications that log to different databases.\n\nThis causes terrible disk fragmentation which again causes performance degradation when retrieving data from the databases. The table files are getting more than 50000 fragments over time (max table size about 1 GB).\n\nThe problem seems to be that PostgreSQL grows the database with only the room it need for the new data each time it is added. Because several applications are adding data to different databases, the additions are never contiguous.\n\nI think that preallocating lumps of a given, configurable size, say 4 MB, for the tables would remove this problem. The max number of fragments on a 1 GB file would then be 250, which is no problem. Is this possible to configure\n in PostgreSQL? If not, how difficult is it to implement in the database?\n \nThank you,\nJG",
"msg_date": "Thu, 23 Apr 2015 19:47:06 +0000",
"msg_from": "Jan Gunnar Dyrset <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL disk fragmentation causes performance problems on Windows"
},
{
"msg_contents": "Hi,\n\nOn 2015-04-23 19:47:06 +0000, Jan Gunnar Dyrset wrote:\n> I am using PostgreSQL to log data in my application. A number of rows\n> are added periodically, but there are no updates or deletes. There are\n> several applications that log to different databases.\n> \n> This causes terrible disk fragmentation which again causes performance\n> degradation when retrieving data from the databases. The table files\n> are getting more than 50000 fragments over time (max table size about\n> 1 GB).\n> \n> The problem seems to be that PostgreSQL grows the database with only\n> the room it need for the new data each time it is added. Because\n> several applications are adding data to different databases, the\n> additions are never contiguous.\n\nWhich OS and filesystem is this done on? Because many halfway modern\nsystems, like e.g ext4 and xfs, implement this in the background as\n'delayed allocation'.\n\nIs it possible that e.g. you're checkpointing very frequently - which\nincludes fsyncing dirty files - and that that causes delayed allocation\nnot to work? How often did you checkpoint?\n\nHow did you measure the fragmentation? Using filefrag? If so, could you\nperhaps send its output?\n\n> I think that preallocating lumps of a given, configurable size, say 4\n> MB, for the tables would remove this problem. The max number of\n> fragments on a 1 GB file would then be 250, which is no problem. Is\n> this possible to configure in PostgreSQL? If not, how difficult is it\n> to implement in the database?\n\nIt's not impossible, but there are complexities because a) extension\nhappens under a sometimes contended lock, and doing more there will have\npossible negative scalability implications. we need to restructure the\nlogging first to make that more realistic. 
b) postgres also\ntruncates files, and we need to make sure that happens only in the right\ncircumstances.\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Apr 2015 10:06:39 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance\n problems on Windows"
},
{
"msg_contents": "On 2015-04-29 10:06:39 +0200, Andres Freund wrote:\n> Hi,\n> \n> On 2015-04-23 19:47:06 +0000, Jan Gunnar Dyrset wrote:\n> > I am using PostgreSQL to log data in my application. A number of rows\n> > are added periodically, but there are no updates or deletes. There are\n> > several applications that log to different databases.\n> > \n> > This causes terrible disk fragmentation which again causes performance\n> > degradation when retrieving data from the databases. The table files\n> > are getting more than 50000 fragments over time (max table size about\n> > 1 GB).\n> > \n> > The problem seems to be that PostgreSQL grows the database with only\n> > the room it need for the new data each time it is added. Because\n> > several applications are adding data to different databases, the\n> > additions are never contiguous.\n> \n> Which OS and filesystem is this done on? Because many halfway modern\n> systems, like e.g ext4 and xfs, implement this in the background as\n> 'delayed allocation'.\n\nOh, it's in the subject. Stupid me, sorry for that. I'd consider testing\nhow much better this behaves under a different operating system, as a\nshorter term relief.\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Apr 2015 10:08:54 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance\n problems on Windows"
},
{
"msg_contents": "\nOn 04/29/2015 01:08 AM, Andres Freund wrote:\n\n>> Which OS and filesystem is this done on? Because many halfway modern\n>> systems, like e.g ext4 and xfs, implement this in the background as\n>> 'delayed allocation'.\n>\n> Oh, it's in the subject. Stupid me, sorry for that. I'd consider testing\n> how much better this behaves under a different operating system, as a\n> shorter term relief.\n\nThis is a known issue on the Windows platform. It is part of the \nlimitations of that environment. Linux/Solaris/FreeBSD do not suffer \nfrom this issue in nearly the same manner.\n\njD\n\n\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Apr 2015 07:07:04 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance problems\n on Windows"
},
{
"msg_contents": "On Wed, Apr 29, 2015 at 07:07:04AM -0700, Joshua D. Drake wrote:\n> \n> On 04/29/2015 01:08 AM, Andres Freund wrote:\n> \n> >>Which OS and filesystem is this done on? Because many halfway modern\n> >>systems, like e.g ext4 and xfs, implement this in the background as\n> >>'delayed allocation'.\n> >\n> >Oh, it's in the subject. Stupid me, sorry for that. I'd consider testing\n> >how much better this behaves under a different operating system, as a\n> >shorter term relief.\n> \n> This is a known issue on the Windows platform. It is part of the\n> limitations of that environment. Linux/Solaris/FreeBSD do not suffer\n> from this issue in nearly the same manner.\n> \n> jD\n> \n\nYou might consider a CLUSTER or VACUUM FULL to re-write the table with\nless fragmentation.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Apr 2015 09:35:43 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance\n problems on Windows"
},
{
"msg_contents": "\nOn 04/29/2015 10:35 AM, [email protected] wrote:\n> On Wed, Apr 29, 2015 at 07:07:04AM -0700, Joshua D. Drake wrote:\n>> On 04/29/2015 01:08 AM, Andres Freund wrote:\n>>\n>>>> Which OS and filesystem is this done on? Because many halfway modern\n>>>> systems, like e.g ext4 and xfs, implement this in the background as\n>>>> 'delayed allocation'.\n>>> Oh, it's in the subject. Stupid me, sorry for that. I'd consider testing\n>>> how much better this behaves under a different operating system, as a\n>>> shorter term relief.\n>> This is a known issue on the Windows platform. It is part of the\n>> limitations of that environment. Linux/Solaris/FreeBSD do not suffer\n>> from this issue in nearly the same manner.\n>>\n>> jD\n>>\n> You might consider a CLUSTER or VACUUM FULL to re-write the table with\n> less fragmentation.\n>\n\nOr pg_repack if you can't handle the lockup time that these involve.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Apr 2015 10:56:12 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance problems\n on Windows"
},
{
"msg_contents": "On 04/23/2015 12:47 PM, Jan Gunnar Dyrset wrote:\n> I think that preallocating lumps of a given, configurable size, say 4\n> MB, for the tables would remove this problem. The max number of\n> fragments on a 1 GB file would then be 250, which is no problem. Is\n> this possible to configure in PostgreSQL? If not, how difficult is it to\n> implement in the database?\n\nIt is not currently possible to configure.\n\nThis has been talked about as a feature, but would require major work on\nPostgreSQL to make it possible. You'd be looking at several months of\neffort by a really good hacker, and then a whole bunch of performance\ntesting. If you have the budget for this, then please let's talk about\nit because right now nobody is working on it.\n\nNote that this could be a dead end; it's possible that preallocating\nlarge extents could cause worse problems than the current fragmentation\nissues.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 May 2015 11:54:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance problems\n on Windows"
},
{
"msg_contents": "On 2015-05-21 11:54:40 -0700, Josh Berkus wrote:\n> This has been talked about as a feature, but would require major work on\n> PostgreSQL to make it possible. You'd be looking at several months of\n> effort by a really good hacker, and then a whole bunch of performance\n> testing. If you have the budget for this, then please let's talk about\n> it because right now nobody is working on it.\n\nI think this is overestimating the required effort quite a bit. While\nnot trivial, it's also not that complex to make this work.\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 May 2015 22:39:13 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance\n problems on Windows"
},
{
"msg_contents": "It may be even easier. AFAIR, it's possible just to tell OS expected\nallocation without doing it. This way nothing changes for general code,\nit's only needed to specify expected file size on creation.\n\nPlease see FILE_ALLOCATION_INFO:\nhttps://msdn.microsoft.com/en-us/library/windows/desktop/aa364214(v=vs.85).aspx\n\nЧт, 21 трав. 2015 16:39 Andres Freund <[email protected]> пише:\n\n> On 2015-05-21 11:54:40 -0700, Josh Berkus wrote:\n> > This has been talked about as a feature, but would require major work on\n> > PostgreSQL to make it possible. You'd be looking at several months of\n> > effort by a really good hacker, and then a whole bunch of performance\n> > testing. If you have the budget for this, then please let's talk about\n> > it because right now nobody is working on it.\n>\n> I think this is overestimating the required effort quite a bit. While\n> not trivial, it's also not that complex to make this work.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIt may be even easier. AFAIR, it's possible just to tell OS expected allocation without doing it. This way nothing changes for general code, it's only needed to specify expected file size on creation.\nPlease see FILE_ALLOCATION_INFO: https://msdn.microsoft.com/en-us/library/windows/desktop/aa364214(v=vs.85).aspx\n\nЧт, 21 трав. 2015 16:39 Andres Freund <[email protected]> пише:On 2015-05-21 11:54:40 -0700, Josh Berkus wrote:\n> This has been talked about as a feature, but would require major work on\n> PostgreSQL to make it possible. You'd be looking at several months of\n> effort by a really good hacker, and then a whole bunch of performance\n> testing. 
If you have the budget for this, then please let's talk about\n> it because right now nobody is working on it.\n\nI think this is overestimating the required effort quite a bit. While\nnot trivial, it's also not that complex to make this work.\n\nGreetings,\n\nAndres Freund\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 21 May 2015 23:16:07 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance\n problems on Windows"
},
{
"msg_contents": "On 05/21/2015 01:39 PM, Andres Freund wrote:\n> On 2015-05-21 11:54:40 -0700, Josh Berkus wrote:\n>> This has been talked about as a feature, but would require major work on\n>> PostgreSQL to make it possible. You'd be looking at several months of\n>> effort by a really good hacker, and then a whole bunch of performance\n>> testing. If you have the budget for this, then please let's talk about\n>> it because right now nobody is working on it.\n> \n> I think this is overestimating the required effort quite a bit. While\n> not trivial, it's also not that complex to make this work.\n\nWell, then, maybe someone should hack and test it ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 May 2015 10:34:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL disk fragmentation causes performance problems\n on Windows"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have a query which finds the latest row_id for a particular code.\n\nWe've found a backwards index scan is much slower than a forward one, to\nthe extent that disabling indexscan altogether actually improves the query\ntime.\n\nCan anyone suggest why this might be, and what's best to do to improve the\nquery time?\n\n\n\ndev=> \\d table\n Table \"public.table\"\n Column | Type | Modifiers\n--------------+--------------------------------+-----------\n row_id | integer |\n code | character(2) |\nIndexes:\n \"table_code_idx\" btree (code)\n \"table_row_idx\" btree (row_id)\n\ndev=> select count(*) from table;\n count\n---------\n 6090254\n(1 row)\n\ndev=> select count(distinct(row_id)) from table;\n count\n---------\n 5421022\n(1 row)\n\ndev=> select n_distinct from pg_stats where tablename='table' and\nattname='row_id';\n n_distinct\n------------\n -0.762951\n(1 row)\n\ndev=> show work_mem;\n work_mem\n-----------\n 1249105kB\n(1 row)\n\ndev=> select version();\n version\n\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3\n20120306 (Red Hat 4.6.3-2), 64-bit\n(1 row)\n\n\nThe query in question:\n\ndev=> explain (analyse,buffers) select row_id as last_row_id from table\nwhere code='XX' order by row_id desc limit 1;\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..1.67 rows=1 width=4) (actual time=835.281..835.282\nrows=1 loops=1)\n Buffers: shared hit=187961\n -> Index Scan Backward using table_row_idx on table\n (cost=0.43..343741.98 rows=278731 width=4) (actual time=835.278..835.278\nrows=1 loops=1)\n Filter: (code = 'XX'::bpchar)\n Rows Removed by Filter: 4050971\n Buffers: shared hit=187961\n Total runtime: 835.315 ms\n(7 
rows)\n\nhttp://explain.depesz.com/s/uGC\n\n\nSo we can see it's doing a backwards index scan. Out of curiosity I tried a\nforward scan and it was MUCH quicker:\n\ndev=> explain (analyse,buffers) select row_id as first_row_id from table\nwhere code='XX' order by row_id asc limit 1;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..1.67 rows=1 width=4) (actual time=19.473..19.474 rows=1\nloops=1)\n Buffers: shared hit=26730\n -> Index Scan using table_row_idx on table (cost=0.43..343741.98\nrows=278731 width=4) (actual time=19.470..19.470 rows=1 loops=1)\n Filter: (code = 'XX'::bpchar)\n Rows Removed by Filter: 62786\n Buffers: shared hit=26730\n Total runtime: 19.509 ms\n(7 rows)\n\nhttp://explain.depesz.com/s/ASxD\n\n\nI thought adding a index on row_id desc might be the answer but it has\nlittle effect:\n\ndev=> create index row_id_desc_idx on table(row_id desc);\nCREATE INDEX\nTime: 5293.812 ms\n\ndev=> explain (analyse,buffers) select row_id as last_row_id from table\nwhere code='XX' order by row_id desc limit 1;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..1.66 rows=1 width=4) (actual time=944.666..944.667\nrows=1 loops=1)\n Buffers: shared hit=176711 read=11071\n -> Index Scan using row_id_desc_idx on table (cost=0.43..342101.98\nrows=278731 width=4) (actual time=944.663..944.663 rows=1 loops=1)\n Filter: (code = 'XX'::bpchar)\n Rows Removed by Filter: 4050971\n Buffers: shared hit=176711 read=11071\n Total runtime: 944.699 ms\n(7 rows)\n\nhttp://explain.depesz.com/s/JStM\n\nIn fact, disabling the index scan completely improves matters considerably:\n\ndev=> drop index row_id_desc_idx;\nDROP INDEX\ndev=> set enable_indexscan to off;\nSET\n\ndev=> explain (analyse,buffers) select 
row_id as last_row_id from table\nwhere code='XX' order by row_id desc limit 1;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=74006.39..74006.39 rows=1 width=4) (actual\ntime=183.997..183.998 rows=1 loops=1)\n Buffers: shared hit=14723\n -> Sort (cost=74006.39..74703.22 rows=278731 width=4) (actual\ntime=183.995..183.995 rows=1 loops=1)\n Sort Key: row_id\n Sort Method: top-N heapsort Memory: 25kB\n Buffers: shared hit=14723\n -> Bitmap Heap Scan on table (cost=5276.60..72612.74 rows=278731\nwidth=4) (actual time=25.533..119.320 rows=275909 loops=1)\n Recheck Cond: (code = 'XX'::bpchar)\n Buffers: shared hit=14723\n -> Bitmap Index Scan on table_code_idx (cost=0.00..5206.91\nrows=278731 width=0) (actual time=23.298..23.298 rows=275909 loops=1)\n Index Cond: (code = 'XX'::bpchar)\n Buffers: shared hit=765\n Total runtime: 184.043 ms\n(13 rows)\n\nhttp://explain.depesz.com/s/E9VE\n\nThanks in advance for any help.\n\nRegards,\n-- \nDavid Osborne\nQcode Software Limited\nhttp://www.qcode.co.uk",
"msg_date": "Fri, 1 May 2015 11:54:33 +0100",
"msg_from": "David Osborne <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index Scan Backward Slow"
},
{
"msg_contents": "\n> On 01 May 2015, at 13:54, David Osborne <[email protected]> wrote:\n> \n> Hi,\n> \n> We have a query which finds the latest row_id for a particular code.\n> \n> We've found a backwards index scan is much slower than a forward one, to the extent that disabling indexscan altogether actually improves the query time.\n> \n> Can anyone suggest why this might be, and what's best to do to improve the query time?\n> \n> \n> \n> dev=> \\d table\n> Table \"public.table\"\n> Column | Type | Modifiers \n> --------------+--------------------------------+-----------\n> row_id | integer | \n> code | character(2) | \n> Indexes:\n> \"table_code_idx\" btree (code)\n> \"table_row_idx\" btree (row_id)\n> \n> dev=> select count(*) from table;\n> count \n> ---------\n> 6090254\n> (1 row)\n> \n> dev=> select count(distinct(row_id)) from table;\n> count \n> ---------\n> 5421022\n> (1 row)\n> \n> dev=> select n_distinct from pg_stats where tablename='table' and attname='row_id';\n> n_distinct \n> ------------\n> -0.762951\n> (1 row)\n> \n> dev=> show work_mem;\n> work_mem \n> -----------\n> 1249105kB\n> (1 row)\n> \n> dev=> select version();\n> version \n> --------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bit\n> (1 row)\n> \n> \n> The query in question:\n> \n> dev=> explain (analyse,buffers) select row_id as last_row_id from table where code='XX' order by row_id desc limit 1;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.43..1.67 rows=1 width=4) (actual time=835.281..835.282 rows=1 loops=1)\n> Buffers: shared hit=187961\n> -> Index Scan Backward using table_row_idx on table (cost=0.43..343741.98 rows=278731 width=4) (actual time=835.278..835.278 rows=1 
loops=1)\n> Filter: (code = 'XX'::bpchar)\n> Rows Removed by Filter: 4050971\n> Buffers: shared hit=187961\n> Total runtime: 835.315 ms\n> (7 rows)\n> \n> http://explain.depesz.com/s/uGC\n> \n> \n> So we can see it's doing a backwards index scan. Out of curiosity I tried a forward scan and it was MUCH quicker:\n> \n> dev=> explain (analyse,buffers) select row_id as first_row_id from table where code='XX' order by row_id asc limit 1;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.43..1.67 rows=1 width=4) (actual time=19.473..19.474 rows=1 loops=1)\n> Buffers: shared hit=26730\n> -> Index Scan using table_row_idx on table (cost=0.43..343741.98 rows=278731 width=4) (actual time=19.470..19.470 rows=1 loops=1)\n> Filter: (code = 'XX'::bpchar)\n> Rows Removed by Filter: 62786\n> Buffers: shared hit=26730\n> Total runtime: 19.509 ms\n> (7 rows)\n> \n> http://explain.depesz.com/s/ASxD\n> \n> \n> I thought adding a index on row_id desc might be the answer but it has little effect:\n> \n> dev=> create index row_id_desc_idx on table(row_id desc);\n> CREATE INDEX\n> Time: 5293.812 ms\n> \n> dev=> explain (analyse,buffers) select row_id as last_row_id from table where code='XX' order by row_id desc limit 1;\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.43..1.66 rows=1 width=4) (actual time=944.666..944.667 rows=1 loops=1)\n> Buffers: shared hit=176711 read=11071\n> -> Index Scan using row_id_desc_idx on table (cost=0.43..342101.98 rows=278731 width=4) (actual time=944.663..944.663 rows=1 loops=1)\n> Filter: (code = 'XX'::bpchar)\n> Rows Removed by Filter: 4050971\n> Buffers: shared hit=176711 read=11071\n> Total runtime: 944.699 ms\n> (7 rows)\n> \n> http://explain.depesz.com/s/JStM\n> \n> In fact, 
disabling the index scan completely improves matters considerably:\n> \n> dev=> drop index row_id_desc_idx;\n> DROP INDEX\n> dev=> set enable_indexscan to off;\n> SET \n> \n> dev=> explain (analyse,buffers) select row_id as last_row_id from table where code='XX' order by row_id desc limit 1;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=74006.39..74006.39 rows=1 width=4) (actual time=183.997..183.998 rows=1 loops=1)\n> Buffers: shared hit=14723\n> -> Sort (cost=74006.39..74703.22 rows=278731 width=4) (actual time=183.995..183.995 rows=1 loops=1)\n> Sort Key: row_id\n> Sort Method: top-N heapsort Memory: 25kB\n> Buffers: shared hit=14723\n> -> Bitmap Heap Scan on table (cost=5276.60..72612.74 rows=278731 width=4) (actual time=25.533..119.320 rows=275909 loops=1)\n> Recheck Cond: (code = 'XX'::bpchar)\n> Buffers: shared hit=14723\n> -> Bitmap Index Scan on table_code_idx (cost=0.00..5206.91 rows=278731 width=0) (actual time=23.298..23.298 rows=275909 loops=1)\n> Index Cond: (code = 'XX'::bpchar)\n> Buffers: shared hit=765\n> Total runtime: 184.043 ms\n> (13 rows)\n> \n> http://explain.depesz.com/s/E9VE\n\nYour queries are slow not because of backward scan.\nThey are slow because of Rows Removed by Filter: 4050971\n\nTry creating index on (code, row_id).\n\n> \n> Thanks in advance for any help.\n> \n> Regards,\n> -- \n> David Osborne\n> Qcode Software Limited\n> http://www.qcode.co.uk\n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 1 May 2015 13:59:02 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Scan Backward Slow"
},
{
"msg_contents": "Simple... that did it... thanks!\n\ndev=> create index on table(code,row_id);\nCREATE INDEX\nTime: 38088.482 ms\ndev=> explain (analyse,buffers) select row_id as last_row_id from table\nwhere code='XX' order by row_id desc limit 1;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..0.46 rows=1 width=4) (actual time=0.070..0.071 rows=1\nloops=1)\n Buffers: shared hit=2 read=3\n -> Index Only Scan Backward using table_code_row_id_idx on table\n (cost=0.43..7999.28 rows=278743 width=4) (actual time=0.067..0.067 rows=1\nloops=1)\n Index Cond: (code = 'XX'::bpchar)\n Heap Fetches: 1\n Buffers: shared hit=2 read=3\n Total runtime: 0.097 ms\n(7 rows)\n\n\nOn 1 May 2015 at 11:59, Evgeniy Shishkin <[email protected]> wrote:\n\n>\n> > On 01 May 2015, at 13:54, David Osborne <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > We have a query which finds the latest row_id for a particular code.\n> >\n> > We've found a backwards index scan is much slower than a forward one, to\n> the extent that disabling indexscan altogether actually improves the query\n> time.\n> >\n> > Can anyone suggest why this might be, and what's best to do to improve\n> the query time?\n> >\n> >\n> >\n> > dev=> \\d table\n> > Table \"public.table\"\n> > Column | Type | Modifiers\n> > --------------+--------------------------------+-----------\n> > row_id | integer |\n> > code | character(2) |\n> > Indexes:\n> > \"table_code_idx\" btree (code)\n> > \"table_row_idx\" btree (row_id)\n> >\n> > dev=> select count(*) from table;\n> > count\n> > ---------\n> > 6090254\n> > (1 row)\n> >\n> > dev=> select count(distinct(row_id)) from table;\n> > count\n> > ---------\n> > 5421022\n> > (1 row)\n> >\n> > dev=> select n_distinct from pg_stats where tablename='table' and\n> attname='row_id';\n> > n_distinct\n> > ------------\n> > 
-0.762951\n> > (1 row)\n> >\n> > dev=> show work_mem;\n> > work_mem\n> > -----------\n> > 1249105kB\n> > (1 row)\n> >\n> > dev=> select version();\n> > version\n> >\n> --------------------------------------------------------------------------------------------------------------\n> > PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n> 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bit\n> > (1 row)\n> >\n> >\n> > The query in question:\n> >\n> > dev=> explain (analyse,buffers) select row_id as last_row_id from table\n> where code='XX' order by row_id desc limit 1;\n> >\n> QUERY PLAN\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.43..1.67 rows=1 width=4) (actual time=835.281..835.282\n> rows=1 loops=1)\n> > Buffers: shared hit=187961\n> > -> Index Scan Backward using table_row_idx on table\n> (cost=0.43..343741.98 rows=278731 width=4) (actual time=835.278..835.278\n> rows=1 loops=1)\n> > Filter: (code = 'XX'::bpchar)\n> > Rows Removed by Filter: 4050971\n> > Buffers: shared hit=187961\n> > Total runtime: 835.315 ms\n> > (7 rows)\n> >\n> > http://explain.depesz.com/s/uGC\n> >\n> >\n> > So we can see it's doing a backwards index scan. 
Out of curiosity I\n> tried a forward scan and it was MUCH quicker:\n> >\n> > dev=> explain (analyse,buffers) select row_id as first_row_id from\n> table where code='XX' order by row_id asc limit 1;\n> >\n> QUERY PLAN\n> >\n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.43..1.67 rows=1 width=4) (actual time=19.473..19.474\n> rows=1 loops=1)\n> > Buffers: shared hit=26730\n> > -> Index Scan using table_row_idx on table (cost=0.43..343741.98\n> rows=278731 width=4) (actual time=19.470..19.470 rows=1 loops=1)\n> > Filter: (code = 'XX'::bpchar)\n> > Rows Removed by Filter: 62786\n> > Buffers: shared hit=26730\n> > Total runtime: 19.509 ms\n> > (7 rows)\n> >\n> > http://explain.depesz.com/s/ASxD\n> >\n> >\n> > I thought adding a index on row_id desc might be the answer but it has\n> little effect:\n> >\n> > dev=> create index row_id_desc_idx on table(row_id desc);\n> > CREATE INDEX\n> > Time: 5293.812 ms\n> >\n> > dev=> explain (analyse,buffers) select row_id as last_row_id from table\n> where code='XX' order by row_id desc limit 1;\n> >\n> QUERY PLAN\n> >\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.43..1.66 rows=1 width=4) (actual time=944.666..944.667\n> rows=1 loops=1)\n> > Buffers: shared hit=176711 read=11071\n> > -> Index Scan using row_id_desc_idx on table (cost=0.43..342101.98\n> rows=278731 width=4) (actual time=944.663..944.663 rows=1 loops=1)\n> > Filter: (code = 'XX'::bpchar)\n> > Rows Removed by Filter: 4050971\n> > Buffers: shared hit=176711 read=11071\n> > Total runtime: 944.699 ms\n> > (7 rows)\n> >\n> > http://explain.depesz.com/s/JStM\n> >\n> > In fact, disabling the index scan completely improves matters\n> considerably:\n> >\n> > dev=> drop index row_id_desc_idx;\n> > DROP INDEX\n> > dev=> set 
enable_indexscan to off;\n> > SET\n> >\n> > dev=> explain (analyse,buffers) select row_id as last_row_id from table\n> where code='XX' order by row_id desc limit 1;\n> >\n> QUERY PLAN\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=74006.39..74006.39 rows=1 width=4) (actual\n> time=183.997..183.998 rows=1 loops=1)\n> > Buffers: shared hit=14723\n> > -> Sort (cost=74006.39..74703.22 rows=278731 width=4) (actual\n> time=183.995..183.995 rows=1 loops=1)\n> > Sort Key: row_id\n> > Sort Method: top-N heapsort Memory: 25kB\n> > Buffers: shared hit=14723\n> > -> Bitmap Heap Scan on table (cost=5276.60..72612.74\n> rows=278731 width=4) (actual time=25.533..119.320 rows=275909 loops=1)\n> > Recheck Cond: (code = 'XX'::bpchar)\n> > Buffers: shared hit=14723\n> > -> Bitmap Index Scan on table_code_idx\n> (cost=0.00..5206.91 rows=278731 width=0) (actual time=23.298..23.298\n> rows=275909 loops=1)\n> > Index Cond: (code = 'XX'::bpchar)\n> > Buffers: shared hit=765\n> > Total runtime: 184.043 ms\n> > (13 rows)\n> >\n> > http://explain.depesz.com/s/E9VE\n>\n> Your queries are slow not because of backward scan.\n> They are slow because of Rows Removed by Filter: 4050971\n>\n> Try creating index on (code, row_id).\n>\n> >\n> > Thanks in advance for any help.\n> >\n> > Regards,\n> > --\n> > David Osborne\n> > Qcode Software Limited\n> > http://www.qcode.co.uk\n> >\n>\n>\n\n\n-- \nDavid Osborne\nQcode Software Limited\nhttp://www.qcode.co.uk\nT: +44 (0)1463 896484",
"msg_date": "Fri, 1 May 2015 12:06:45 +0100",
"msg_from": "David Osborne <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index Scan Backward Slow"
},
{
"msg_contents": "On 01.05.2015 13:06, David Osborne wrote:\n> Simple... that did it... thanks!\n>\n> dev=> create index on table(code,row_id);\n> CREATE INDEX\n> Time: 38088.482 ms\n> dev=> explain (analyse,buffers) select row_id as last_row_id from table\n> where code='XX' order by row_id desc limit 1;\n\nJust out of curiosity: Is there a particular reason why you do not use\n\nselect max(row_id) as last_row_id\nfrom table\nwhere code='XX'\n\n?\n\nKind regards\n\n\trobert\n",
"msg_date": "Sat, 02 May 2015 10:48:40 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Scan Backward Slow"
}
] |
[
{
"msg_contents": "Hello guru of postgres, it's possoble to tune query with join on random\nstring ?\ni know that it is not real life example, but i need it for tests.\n\nsoe=# explain\nsoe-# SELECT ADDRESS_ID,\nsoe-# CUSTOMER_ID,\nsoe-# DATE_CREATED,\nsoe-# HOUSE_NO_OR_NAME,\nsoe-# STREET_NAME,\nsoe-# TOWN,\nsoe-# COUNTY,\nsoe-# COUNTRY,\nsoe-# POST_CODE,\nsoe-# ZIP_CODE\nsoe-# FROM ADDRESSES\nsoe-# WHERE customer_id = trunc( random()*45000) ;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------\n Seq Scan on addresses (cost=0.00..165714.00 rows=22500 width=84)\n Filter: ((customer_id)::double precision = trunc((random() *\n45000::double precision)))\n(2 rows)\n\nsoe=# \\d addresses;\nsoe=# \\d addresses;\n Table\n\"public.addresses\"\n\n Column | Type |\nModifiers\n\n------------------+-----------------------------+-----------\n\n address_id | bigint | not\nnull\n customer_id | bigint | not\nnull\n date_created | timestamp without time zone | not\nnull\n house_no_or_name | character varying(60)\n|\n\n street_name | character varying(60)\n|\n\n town | character varying(60)\n|\n\n county | character varying(60)\n|\n\n country | character varying(60)\n|\n\n post_code | character varying(12)\n|\n\n zip_code | character varying(12)\n|\n\nIndexes:\n\n \"addresses_pkey\" PRIMARY KEY, btree\n(address_id)\n\n \"addresses_cust_ix\" btree\n(customer_id)\n\nForeign-key\nconstraints:\n\n \"add_cust_fk\" FOREIGN KEY (customer_id) REFERENCES\ncustomers(customer_id) DEFERRABLE\n\n\n\nsame query in oracle same query use index access path:\n\n00:05:23 (1)c##bushmelev_aa@orcl> explain plan for\n SELECT ADDRESS_ID,\n CUSTOMER_ID,\n DATE_CREATED,\n HOUSE_NO_OR_NAME,\n STREET_NAME,\n TOWN,\n COUNTY,\n COUNTRY,\n POST_CODE,\n ZIP_CODE\n FROM soe.ADDRESSES\n* WHERE customer_id = dbms_random.value ();*\n\nExplained.\n\nElapsed: 00:00:00.05\n00:05:29 (1)c##bushmelev_aa@orcl> 
@utlxpls\n\nPLAN_TABLE_OUTPUT\n------------------------------------------------------------------------------\nPlan hash value: 317664678\n\n-----------------------------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes |\nCost (%CPU)| Time |\n-----------------------------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 2 | 150 |\n5 (0)| 00:00:01 |\n| 1 | TABLE ACCESS BY INDEX ROWID| ADDRESSES | 2 | 150 |\n5 (0)| 00:00:01 |\n|* 2 | *INDEX RANGE SCAN * | ADDRESS_CUST_IX | 2 |\n| 3 (0)| 00:00:01 |\n-----------------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n 2 - access(\"CUSTOMER_ID\"=\"DBMS_RANDOM\".\"VALUE\"())",
"msg_date": "Mon, 4 May 2015 00:23:28 +0300",
"msg_from": "Anton Bushmelev <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimization join on random value"
},
{
"msg_contents": "On 05/04/2015 12:23 AM, Anton Bushmelev wrote:\n> Hello guru of postgres, it's possoble to tune query with join on random\n> string ?\n> i know that it is not real life example, but i need it for tests.\n>\n> soe=# explain\n> soe-# SELECT ADDRESS_ID,\n> soe-# CUSTOMER_ID,\n> soe-# DATE_CREATED,\n> soe-# HOUSE_NO_OR_NAME,\n> soe-# STREET_NAME,\n> soe-# TOWN,\n> soe-# COUNTY,\n> soe-# COUNTRY,\n> soe-# POST_CODE,\n> soe-# ZIP_CODE\n> soe-# FROM ADDRESSES\n> soe-# WHERE customer_id = trunc( random()*45000) ;\n> QUERY\n> PLAN\n> -------------------------------------------------------------------------------------------\n> Seq Scan on addresses (cost=0.00..165714.00 rows=22500 width=84)\n> Filter: ((customer_id)::double precision = trunc((random() *\n> 45000::double precision)))\n> (2 rows)\n>\n\nThere are two problems here that prohibit the index from being used:\n\n1. random() is volatile, so it's recalculated for each row.\n2. For the comparison, customer_id is cast to a float, and the index is \non the bigint value.\n\nTo work around the first problem, put the random() call inside a \nsubquery. And for the second problem, cast to bigint.\n\nSELECT ... FROM addresses\nWHERE customer_id = (SELECT random()*45000)::bigint\n\n- Heikki\n",
"msg_date": "Mon, 04 May 2015 00:28:46 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization join on random value"
}
] |
[
{
"msg_contents": "Hi Group,\n\nFacing a problem where pg_catalog.pg_largetobject has been growing fast recently, in last two weeks. The actual data itself, in user tables, is about 60GB, but pg_catalog.pg_largeobject table is 200GB plues. Please let me know how to clean/truncate this table without losing any user data in other table.\n\nWith regards to this pg_largeobject, I have the following questions:\n\n\n- What is this pg_largetobject ?\n\n- what does it contain ? tried PostgreSQL documentation and lists, but could not get much from it.\n\n- why does it grow ?\n\n- Was there any configuration change that may have triggered this to grow? For last one year or so, there was no problem, but it started growing all of sudden in last two weeks. The only change we had in last two weeks was that we have scheduled night base-backup for it and auto-vacuum feature enabled.\n\n- pg_largeobject contains so many duplicate rows (loid). Though there are only about 0.6 million rows (LOIDs), but the total number of rows including duplicates are about 59million records. What are all these ?\n\nKindly help getting this information and getting this issue cleared, and appreciate your quick help on this.\n\nThanks and Regards\nM.Shiva",
"msg_date": "Mon, 11 May 2015 15:25:06 +0530",
"msg_from": "\"Muthusamy, Sivaraman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to clean/truncate / VACUUM FULL pg_largeobject without (much)\n downtime?"
},
{
"msg_contents": "On 5/11/15 4:55 AM, Muthusamy, Sivaraman wrote:\n> Hi Group,\n>\n> Facing a problem where pg_catalog.pg_largetobject has been growing fast\n> recently, in last two weeks. The actual data itself, in user tables, is\n> about 60GB, but pg_catalog.pg_largeobject table is 200GB plues. Please\n> let me know how to clean/truncate this table without losing any user\n> data in other table.\n\nAutovacuum should be taking care of it for you, though you could also \ntry a manual vacuum (VACUUM pg_largeobject;).\n\n> With regards to this pg_largeobject, I have the following questions:\n>\n> -What is this pg_largetobject ?\n\nIt stores large objects \nhttp://www.postgresql.org/docs/9.4/static/lo-interfaces.html\n\n> -what does it contain ? tried PostgreSQL documentation and lists, but\n> could not get much from it.\n>\n> -why does it grow ?\n>\n> -Was there any configuration change that may have triggered this to\n> grow? For last one year or so, there was no problem, but it started\n> growing all of sudden in last two weeks. The only change we had in last\n> two weeks was that we have scheduled night base-backup for it and\n> auto-vacuum feature enabled.\n\nChanges to autovacuum settings could certainly cause changes. \nLong-running transactions would prevent cleanup, as would any prepared \ntransactions (which should really be disabled unless you explicitly need \nthem).\n\n> -pg_largeobject contains so many duplicate rows (loid). Though there are\n> only about 0.6 million rows (LOIDs), but the total number of rows\n> including duplicates are about 59million records. What are all these ?\n\nEach row can only be ~2KB wide, so any LO that's larger than that will \nbe split into multiple rows.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n",
"msg_date": "Fri, 22 May 2015 16:17:44 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to clean/truncate / VACUUM FULL pg_largeobject\n without (much) downtime?"
}
] |
[
{
"msg_contents": "Hi,\ni stumbled over something i cant seem to find a workaround. I create a view like\n\ncreate view v_test as\n\tselect\ta,b\n\tfrom\tbig_table\n\tunion all\n\tselect\ta,b\n\tfrom\tsmall_table;\n\nWhen i now use the view like\n\n\tselect * from v_test where a = 42;\n\nI can see an index scan happening on big_table. When i issue\nsomething like \n\n\tselect * from v_test where a in ( select 42 );\n\nor joining to another table i see that there will be seq scan on big\ntable. First the union will be executed and later the filter e.g. a in (\nselect 42 ) will be done on the huge result. My use case is that\nbig_table is >70mio entries growing fast and small_table is like 4\nentries, growing little. The filter e.g. \"a in ( select 42 )\" will\ntypically select 50-1000 entries of the 70mio. So i now create a union\nwith 70mio + 4 entries to then filter all with a = 42.\n\nIt seems the planner is not able to rewrite a union all e.g. the above\nstatement could be rewritten from:\n\n\tselect\t*\n\tfrom\t(\n\t\tselect\ta,b\n\t\tfrom\tbig_table\n\t\tunion all\n\t\tselect\ta,b\n\t\tfrom\tsmall_table;\n\t\t) foo\n\twhere\ta in ( select 42 );\n\nto \n\n\tselect\t*\n\tfrom\t(\n\t\tselect\ta,b\n\t\tfrom\tbig_table\n\t\twhere a in ( select 42 )\n\t\tunion all\n\t\tselect\ta,b\n\t\tfrom\tsmall_table\n\t\twhere a in ( select 42 )\n\t\t) foo\n\nwhich would then use an index scan not a seq scan and execution times\nwould be acceptable. \n\nI have now tried to wrap my head around the problem for 2 days and i am \nunable to find a workaround to using a union but the filter optimisation\nis impossible with a view construct.\n\nFlo\nPS: Postgres 9.1 - I tried 9.4 on Debian/jessie with IIRC same results.\n-- \nFlorian Lohoff [email protected]\n We need to self-defense - GnuPG/PGP enable your email today!",
"msg_date": "Thu, 21 May 2015 12:41:03 +0200",
"msg_from": "Florian Lohoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "union all and filter / index scan -> seq scan "
},
{
    "msg_contents": "It looks pretty much like partitioning. You should check partitioning\nrecipes.\n\nThu, 21 May 2015 06:41 Florian Lohoff <[email protected]> wrote:\n\n\n> Hi,\n> i stumbled over something i cant seem to find a workaround. I create a\n> view like\n>\n> create view v_test as\n> select a,b\n> from big_table\n> union all\n> select a,b\n> from small_table;\n>\n> When i now use the view like\n>\n> select * from v_test where a = 42;\n>\n> I can see an index scan happening on big_table. When i issue\n> something like\n>\n> select * from v_test where a in ( select 42 );\n>\n> or joining to another table i see that there will be seq scan on big\n> table. First the union will be executed and later the filter e.g. a in (\n> select 42 ) will be done on the huge result. My use case is that\n> big_table is >70mio entries growing fast and small_table is like 4\n> entries, growing little. The filter e.g. \"a in ( select 42 )\" will\n> typically select 50-1000 entries of the 70mio. So i now create a union\n> with 70mio + 4 entries to then filter all with a = 42.\n>\n> It seems the planner is not able to rewrite a union all e.g. the above\n> statement could be rewritten from:\n>\n> select *\n> from (\n> select a,b\n> from big_table\n> union all\n> select a,b\n> from small_table;\n> ) foo\n> where a in ( select 42 );\n>\n> to\n>\n> select *\n> from (\n> select a,b\n> from big_table\n> where a in ( select 42 )\n> union all\n> select a,b\n> from small_table\n> where a in ( select 42 )\n> ) foo\n>\n> which would then use an index scan not a seq scan and execution times\n> would be acceptable.\n>\n> I have now tried to wrap my head around the problem for 2 days and i am\n> unable to find a workaround to using a union but the filter optimisation\n> is impossible with a view construct.\n>\n> Flo\n> PS: Postgres 9.1 - I tried 9.4 on Debian/jessie with IIRC same results.\n> --\n> Florian Lohoff [email protected]\n> We need to self-defense - GnuPG/PGP enable your email today!\n>",
"msg_date": "Thu, 21 May 2015 15:09:32 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: union all and filter / index scan -> seq scan"
},
{
"msg_contents": "Florian Lohoff <[email protected]> writes:\n> It seems the planner is not able to rewrite a union all\n\nI do not see any problems with pushing indexable conditions down through a\nUNION ALL when I try it. I speculate that either you are using a very old\n9.1.x minor release, or the actual view is more complex than you've let on.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 May 2015 11:28:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: union all and filter / index scan -> seq scan"
}
] |
[
{
    "msg_contents": "I'm trying to call MAX() on the first value of a multi-column index of a\npartitioned table and the planner is choosing to do a sequential scan\ninstead of an index scan. Is there something I can do to fix this?\n\nHere's a simplified version of our schema:\nCREATE TABLE data ( tutci DOUBLE PRECISION, tutcf DOUBLE PRECISION, value\nINTEGER );\nCREATE TABLE data1 ( CHECK ( tutci >= 1000 AND tutci < 2000 ) ) INHERITS\n(data);\nCREATE TABLE data2 ( CHECK ( tutci >= 2000 AND tutci < 3000 ) ) INHERITS\n(data);\nWith the following indexes:\nCREATE INDEX data_tutc_index ON data(tutci, tutcf);\nCREATE INDEX data1_tutc_index ON data1(tutci, tutcf);\nCREATE INDEX data2_tutc_index ON data2(tutci, tutcf);\n\nNo data is stored in the parent table (only in the partitions) and the\nexplain is as follows after doing a CLUSTER on the index and a VACUUM\nANALYZE after populating with simple test data:\nEXPLAIN SELECT MAX(tutci) FROM data;\n QUERY PLAN\n----------------------------------------------------------------------------\n Aggregate (cost=408.53..408.54 rows=1 width=8)\n -> Append (cost=0.00..354.42 rows=21642 width=8)\n -> Seq Scan on data (cost=0.00..26.30 rows=1630 width=8)\n -> Seq Scan on data1 data (cost=0.00..164.11 rows=10011 width=8)\n -> Seq Scan on data2 data (cost=0.00..164.01 rows=10001 width=8)\n\nThanks,\nDave",
"msg_date": "Fri, 22 May 2015 15:27:29 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "MAX() and multi-column index on a partitioned table?"
},
{
    "msg_contents": "Dave Johansen <[email protected]> writes:\n> I'm trying to call MAX() on the first value of a multi-column index of a\n> partitioned table and the planner is choosing to do a sequential scan\n> instead of an index scan. Is there something I can do to fix this?\n\nWhat PG version are you using? 9.1 or newer should know how to do this\nwith a merge append of several indexscans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 May 2015 18:42:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAX() and multi-column index on a partitioned table?"
},
{
    "msg_contents": "On Fri, May 22, 2015 at 3:42 PM, Tom Lane <[email protected]> wrote:\n\n> Dave Johansen <[email protected]> writes:\n> > I'm trying to call MAX() on the first value of a multi-column index of a\n> > partitioned table and the planner is choosing to do a sequential scan\n> > instead of an index scan. Is there something I can do to fix this?\n>\n> What PG version are you using? 9.1 or newer should know how to do this\n> with a merge append of several indexscans.\n>\n\nSorry, I should have mentioned that in the original email. I'm using 8.4.20\non RHEL 6. From your reply, it sounds like this is the expected behavior\nfor 8.4 and 9.0. Is that the case?\nThanks,\nDave",
"msg_date": "Fri, 22 May 2015 16:53:19 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAX() and multi-column index on a partitioned table?"
},
{
    "msg_contents": "Dave Johansen <[email protected]> writes:\n> On Fri, May 22, 2015 at 3:42 PM, Tom Lane <[email protected]> wrote:\n>> What PG version are you using? 9.1 or newer should know how to do this\n>> with a merge append of several indexscans.\n\n> Sorry, I should have mentioned that in the original email. I'm using 8.4.20\n> on RHEL 6. From your reply, it sounds like this is the expected behavior\n> for 8.4 and 9.0. Is that the case?\n\nYeah, pre-9.1 is not very bright about that. You could factor the query\nmanually, perhaps, though it's surely a pain in the neck.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 May 2015 21:13:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAX() and multi-column index on a partitioned table?"
}
] |
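A sketch of the manual factoring Tom alludes to for pre-9.1 servers, using the child tables Dave posted (the exact rewrite is an assumption, not from the thread). Each per-child `max()` can be planned as a backwards scan of that child's `(tutci, tutcf)` index, and `greatest()` ignores the NULL from the empty parent:

```sql
-- Manual per-partition factoring for pre-9.1 (child names from the thread).
SELECT greatest(
    (SELECT max(tutci) FROM ONLY data),   -- parent is empty here, kept for safety
    (SELECT max(tutci) FROM ONLY data1),
    (SELECT max(tutci) FROM ONLY data2)
) AS max_tutci;
```

The obvious downside, as Tom notes, is that the query has to be edited every time a partition is added or dropped.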
[
{
    "msg_contents": "Greetings and salutations.\n\nI've got some weirdness.\n\nCurrent:\nPostgres 9.3.4\nSlony 2.2.3\nCentOS 6.5\n\nPrior running Postgres 9.1.2 w/slony 2.1.3 CentOS 6.2\n\nI found that if I tried to run a vacuum full on 1 table that I recently\nreindexed (out of possibly 8 tables) that I get this error:\n\n# vacuum full table.ads;\n\nERROR: missing chunk number 0 for toast value 1821556134 in pg_toast_17881\n\nIf I run a vacuum analyze it completes fine, but I can't run a vacuum full\nwithout it throwing an error. I seem to be able to query the table and I\nseem to be able to add data to the table and slony seems fine as does\npostgres.\n\nI'm unclear why the vacuum full is failing with this error. I've done some\nsearching and there are hints to prior bugs, but I didn't catch anything in\n9.3.3 to 9.3.7 that talks about this.\n\nMy next steps without your fine assistance, will be to drop the table from\nslon and re-add it (meaning it will drop the table completely from this db\nand recreate it from the master (there we can do a vacuum full without\nfailure)..\n\nI have already tried to remove the indexes and just create those, but no\nluck.\n\nIdeas?\n\nThanks\n\nTory",
"msg_date": "Wed, 27 May 2015 00:50:51 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: missing chunk number 0 for toast value 1821556134 in\n pg_toast_17881"
},
{
    "msg_contents": "On 05/27/2015 12:50 AM, Tory M Blue wrote:\n> Greetings and salutations.\n> \n> I've got some weirdness. \n> \n> Current:\n> Postgres 9.3.4 \n> Slony 2.2.3\n> CentOS 6.5\n> \n> Prior running Postgres 9.1.2 w/slony 2.1.3 CentOS 6.2\n> \n> I found that if I tried to run a vacuum full on 1 table that I recently\n> reindexed (out of possibly 8 tables) that I get this error:\n> \n> # vacuum full table.ads;\n> \n> ERROR: missing chunk number 0 for toast value 1821556134 in pg_toast_17881\n> \n> If I run a vacuum analyze it completes fine, but I can't run a vacuum\n> full without it throwing an error. I seem to be able to query the table\n> and I seem to be able to add data to the table and slony seems fine as\n> does postgres. \n\nIs this happening on the Slony master or the replica?\n\nThere have been several bugs since 9.1.2 which could have caused this\nparticular error. Are you certain that this is a recent problem?\n\nNote that this error affects just one compressed value or row, so you're\nnot losing other data, unless it's a symptom of an ongoing problem.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 27 May 2015 15:02:28 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: missing chunk number 0 for toast value 1821556134\n in pg_toast_17881"
}
] |
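Before dropping and re-adding the table in Slony, it may be worth double-checking which table the failing TOAST relation actually belongs to; a minimal sketch using the standard catalogs (the relation name is the one from the error message):

```sql
-- Which table owns the TOAST relation named in the error?
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.reltoastrelid = 'pg_toast.pg_toast_17881'::regclass;
```

If this returns a table other than the one being vacuumed, the corruption lives elsewhere than expected.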
[
{
    "msg_contents": "Hi,\n\nI'm running performance tests against a PostgreSQL database (9.4) with various hardware configurations and a couple different benchmarks (TPC-C & TPC-H).\n\nI'm currently using pg_dump and pg_restore to refresh my dataset between runs but this process seems slower than it could be.\n\nIs it possible to do a tar/untar of the entire /var/lib/pgsql tree as a backup & restore method?\n\nIf not, is there another way to restore a dataset more quickly? The database is dedicated to the test dataset so trashing & rebuilding the entire application/OS/anything is no issue for me-there's no data for me to lose.\n\nThanks!\n\nWes Vaske | Senior Storage Solutions Engineer\nMicron Technology\n101 West Louis Henna Blvd, Suite 210 | Austin, TX 78728",
"msg_date": "Wed, 27 May 2015 20:24:04 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fastest Backup & Restore for perf testing"
},
{
    "msg_contents": "\nOn 05/27/2015 04:24 PM, Wes Vaske (wvaske) wrote:\n>\n> Hi,\n>\n> I’m running performance tests against a PostgreSQL database (9.4) with \n> various hardware configurations and a couple different benchmarks \n> (TPC-C & TPC-H).\n>\n> I’m currently using pg_dump and pg_restore to refresh my dataset \n> between runs but this process seems slower than it could be.\n>\n> Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as \n> a backup & restore method?\n>\n> If not, is there another way to restore a dataset more quickly? The \n> database is dedicated to the test dataset so trashing & rebuilding the \n> entire application/OS/anything is no issue for me—there’s no data for \n> me to lose.\n>\n> Thanks!\n>\n\n\nRead all of this chapter. \n<http://www.postgresql.org/docs/current/static/backup.html>\n\ncheers\n\nandrew\n",
"msg_date": "Wed, 27 May 2015 16:37:01 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fastest Backup & Restore for perf testing"
},
{
    "msg_contents": "\n> On May 27, 2015, at 1:24 PM, Wes Vaske (wvaske) <[email protected]> wrote:\n> \n> Hi,\n> \n> I’m running performance tests against a PostgreSQL database (9.4) with various hardware configurations and a couple different benchmarks (TPC-C & TPC-H).\n> \n> I’m currently using pg_dump and pg_restore to refresh my dataset between runs but this process seems slower than it could be.\n> \n> Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as a backup & restore method?\n> \n> If not, is there another way to restore a dataset more quickly? The database is dedicated to the test dataset so trashing & rebuilding the entire application/OS/anything is no issue for me—there’s no data for me to lose.\n> \n\nDropping the database and recreating it from a template database with \"create database foo template foo_template\" is about as fast as a file copy, much faster than pg_restore tends to be.\n\nCheers,\n Steve\n",
"msg_date": "Wed, 27 May 2015 13:39:06 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fastest Backup & Restore for perf testing"
},
{
    "msg_contents": "On 5/27/15 3:39 PM, Steve Atkins wrote:\n>\n>> On May 27, 2015, at 1:24 PM, Wes Vaske (wvaske) <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> I’m running performance tests against a PostgreSQL database (9.4) with various hardware configurations and a couple different benchmarks (TPC-C & TPC-H).\n>>\n>> I’m currently using pg_dump and pg_restore to refresh my dataset between runs but this process seems slower than it could be.\n>>\n>> Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as a backup & restore method?\n>>\n>> If not, is there another way to restore a dataset more quickly? The database is dedicated to the test dataset so trashing & rebuilding the entire application/OS/anything is no issue for me—there’s no data for me to lose.\n>>\n>\n> Dropping the database and recreating it from a template database with \"create database foo template foo_template\" is about as fast as a file copy, much faster than pg_restore tends to be.\n\nAnother possibility is filesystem snapshots, which could be even faster \nthan createdb --template.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nData in Trouble? Get it in Treble! http://BlueTreble.com\n",
"msg_date": "Thu, 28 May 2015 12:40:15 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fastest Backup & Restore for perf testing"
}
] |
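Steve's template-database approach, spelled out as a sketch (the database names here are illustrative, not from the thread):

```sql
-- One-time setup: load the benchmark dataset into a template database.
CREATE DATABASE bench_template;
-- ... run the pg_restore / data load against bench_template once ...

-- Before each benchmark run: clone the template instead of restoring.
DROP DATABASE IF EXISTS bench;
CREATE DATABASE bench TEMPLATE bench_template;
```

Note that `CREATE DATABASE ... TEMPLATE` fails if any session is still connected to the template database, so disconnect (or terminate) those sessions first.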
[
{
    "msg_contents": "Hi,\n\nMy app was working just fine. A month ago postmaster started to eat all my\ncpu sometimes (not all day, but a lot of times and everyday) and then my app\ngets really slow and sometimes don't even complete the requests.\n\nWhat could it be?\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Postmaster-eating-up-all-my-cpu-tp5851428.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 28 May 2015 04:25:50 -0700 (MST)",
"msg_from": "birimblongas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postmaster eating up all my cpu"
},
{
    "msg_contents": "Hi,\n\nOn 05/28/15 13:25, birimblongas wrote:\n> Hi,\n>\n> My app was working just fine. A month ago postmaster started to eat all my\n> cpu sometimes (not all day, but a lot of times and everyday) and then my app\n> gets really slow and sometimes don't even complete the requests.\n>\n> What could it be?\n\nA lot of things. The first step should be looking at pg_stat_activity, \nwhat is the process eating the CPU doing.\n\nWe also need much more information about your system - what PostgreSQL \nversion are you using, what kind of OS, configuration etc.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Sat, 30 May 2015 22:45:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster eating up all my cpu"
}
] |
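The pg_stat_activity check Tomas suggests might look like this (a sketch; the column names below are for 9.2 and later, while older servers use `procpid` and `current_query` instead):

```sql
-- What is the CPU-hungry backend doing right now, and for how long?
SELECT pid, state, now() - query_start AS runtime, query
FROM pg_stat_activity
ORDER BY query_start;
```

Matching the `pid` column against the process ID that `top` shows at 100% CPU identifies the offending query.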
[
{
"msg_contents": "I am testing partitioning of a large table. I am doing a range\npartitioning based on a sequence col, which also acts as the primary\nkey. For inserts I am using a trigger which will redirect insert to\nthe right table based on the value of the primary key.\n\nBased on my testing, I see that the insert speed is less than 10%\ndifferent than a non partitioned table. I am using SET\nconstraint_exclusion = on and I checked that via ANALYZE that the\nplanner does not consider non qualifying child tables.\n\nyet, selects and updates based on the primary key show anywhere from\n40 to 200% slowness as compared to non partition. One thing I notice\nis that, even with partition pruning, the planner scans the base table\nand the table matching the condition. Is that the additional overhead.\n\nI am attaching below the output of analyze.\n\n===========================\nOn a non partitioned table\n\nexplain select count(*) from tstesting.account where account_row_inst = 101 ;\nAggregate (cost=8.16..8.17 rows=1 width=0)\n-> Index Only Scan using account_pkey on account (cost=0.14..8.16\nrows=1 width=0)\nIndex Cond: (account_row_inst = 101)\n(3 rows)\n\n\nWith partition pruning:\n\nAggregate (cost=8.45..8.46 rows=1 width=0)\n-> Append (cost=0.00..8.44 rows=2 width=0)\n-> Seq Scan on account (cost=0.00..0.00 rows=1 width=0)\nFilter: (account_row_inst = 101)\n-> Index Only Scan using account_part1_pkey on account_part1\n(cost=0.42..8.44 rows=1 width=0)\nIndex Cond: (account_row_inst = 101)\n(6 rows)\n\nOn a partitioned table, with no partition pruning.\n\nexplain analyze select count(*) from tstesting.account where\naccount_row_inst = 101 ;\nAggregate (cost=29.77..29.78 rows=1 width=0) (actual time=0.032..0.032\nrows=1 loops=1)\n-> Append (cost=0.00..29.76 rows=5 width=0) (actual time=0.029..0.029\nrows=0 loops=1)\n-> Seq Scan on account (cost=0.00..0.00 rows=1 width=0) (actual\ntime=0.000..0.000 rows=0 loops=1)\nFilter: (account_row_inst = 101)\n-> Index Only 
Scan using account_part1_pkey on account_part1\n(cost=0.42..4.44 rows=1 width=0) (actual time=0.008..0.008 rows=0\nloops=1)\nIndex Cond: (account_row_inst = 101)\nHeap Fetches: 0\n-> Index Only Scan using account_part2_pkey on account_part2\n(cost=0.42..8.44 rows=1 width=0) (actual time=0.007..0.007 rows=0\nloops=1)\nIndex Cond: (account_row_inst = 101)\nHeap Fetches: 0\n-> Index Only Scan using account_part3_pkey on account_part3\n(cost=0.42..8.44 rows=1 width=0) (actual time=0.007..0.007 rows=0\nloops=1)\nIndex Cond: (account_row_inst = 101)\nHeap Fetches: 0\n-> Index Only Scan using account_part4_pkey on account_part4\n(cost=0.42..8.44 rows=1 width=0) (actual time=0.006..0.006 rows=0\nloops=1)\nIndex Cond: (account_row_inst = 101)\nHeap Fetches: 0\nPlanning time: 0.635 ms\nExecution time: 0.137 ms\n(18 rows)\n",
"msg_date": "Thu, 28 May 2015 10:31:28 -0400",
"msg_from": "Ravi Krishna <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning and performance"
},
{
    "msg_contents": "On Thu, May 28, 2015 at 10:31 AM, Ravi Krishna <[email protected]> wrote:\n> I am testing partitioning of a large table. I am doing a range\n\nSorry, I forgot to clarify. I am using INHERITS for partitioning, with\ncheck constraints built for range partitioning.\n",
"msg_date": "Thu, 28 May 2015 10:41:25 -0400",
"msg_from": "Ravi Krishna <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning and performance"
},
{
    "msg_contents": "On 5/28/15 9:31 AM, Ravi Krishna wrote:\n> explain select count(*) from tstesting.account where account_row_inst = 101 ;\n> Aggregate (cost=8.16..8.17 rows=1 width=0)\n> -> Index Only Scan using account_pkey on account (cost=0.14..8.16\n> rows=1 width=0)\n> Index Cond: (account_row_inst = 101)\n\nEXPLAIN only shows what the planner thinks a query will cost. For any \nreal testing, you need EXPLAIN ANALYZE.\n\nAlso, understand that partitioning isn't a magic bullet. It can make \nsome operations drastically faster, but it's not going to help every \nscenario, and will actually slow some other operations down.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nData in Trouble? Get it in Treble! http://BlueTreble.com\n",
"msg_date": "Thu, 28 May 2015 13:05:34 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning and performance"
}
] |
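The insert-redirect trigger Ravi describes can be sketched like this (a minimal sketch; the partition boundaries and child-table names are illustrative, since the thread does not show them):

```sql
-- Route inserts on the parent to the matching range partition.
CREATE OR REPLACE FUNCTION account_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.account_row_inst < 1000000 THEN
        INSERT INTO account_part1 VALUES (NEW.*);
    ELSIF NEW.account_row_inst < 2000000 THEN
        INSERT INTO account_part2 VALUES (NEW.*);
    ELSE
        INSERT INTO account_part3 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_insert_trg
    BEFORE INSERT ON account
    FOR EACH ROW EXECUTE PROCEDURE account_insert_router();
```

The per-row plpgsql call is the usual source of the small insert overhead Ravi measured; the extra `Seq Scan on account` in his plans is the (empty) parent table, which constraint exclusion never prunes.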
[
{
"msg_contents": "wdsah=> select version();\n version \n-----------------------------------------------------------------------------------------------\n PostgreSQL 9.1.15 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\n(1 row)\n\nI plan to upgrade to Debian 8 (with Postgres 9.4) soon, so the problem\nmay go away, but I would still like to understand what is happening\nhere.\n\nIRL the queries are a bit more complicated (they involve two additional\ntables), but I can demonstrate it with just two:\n\nwdsah=> \\d facttable_stat_fta4\n Table \"public.facttable_stat_fta4\"\n Column | Type | Modifiers \n---------------------+-----------------------------+-----------\n macrobondtimeseries | character varying(255) | not null\n date | date | not null\n value | double precision | \n berechnungsart | character varying | \n einheit | character varying | \n kurzbezeichnung | character varying | \n partnerregion | character varying | \n og | character varying | \n sitcr4 | character varying | \n warenstrom | character varying | \n valid_from | timestamp without time zone | \n from_job_queue_id | integer | \n kommentar | character varying | \nIndexes:\n \"facttable_stat_fta4_pkey\" PRIMARY KEY, btree (macrobondtimeseries, date)\n \"facttable_stat_fta4_berechnungsart_idx\" btree (berechnungsart)\n \"facttable_stat_fta4_einheit_idx\" btree (einheit)\n \"facttable_stat_fta4_og_idx\" btree (og)\n \"facttable_stat_fta4_partnerregion_idx\" btree (partnerregion)\n \"facttable_stat_fta4_sitcr4_idx\" btree (sitcr4)\n \"facttable_stat_fta4_warenstrom_idx\" btree (warenstrom)\n\nwdsah=> select count(*) from facttable_stat_fta4;\n count \n----------\n 43577941\n(1 row)\n\nwdsah=> \\d term\n Table \"public.term\"\n Column | Type | Modifiers \n------------------------+-----------------------------+------------------------\n facttablename | character varying | \n columnname | character varying | \n term | character varying | \n concept_id | integer | not null\n 
language | character varying | \n register | character varying | \n hidden | boolean | \n cleansing_job_queue_id | integer | not null default (-1)\n meta_insert_dt | timestamp without time zone | not null default now()\n meta_update_dt | timestamp without time zone | \n valid_from | timestamp without time zone | \n from_job_queue_id | integer | \nIndexes:\n \"term_concept_id_idx\" btree (concept_id)\n \"term_facttablename_columnname_idx\" btree (facttablename, columnname)\n \"term_facttablename_idx\" btree (facttablename)\n \"term_facttablename_idx1\" btree (facttablename) WHERE facttablename IS NOT NULL AND columnname::text = 'macrobondtimeseries'::text\n \"term_language_idx\" btree (language)\n \"term_register_idx\" btree (register)\n \"term_term_ftidx\" gin (to_tsvector('simple'::regconfig, term::text))\n \"term_term_idx\" btree (term)\nCheck constraints:\n \"term_facttablename_needs_columnname_chk\" CHECK (facttablename IS NULL OR columnname IS NOT NULL)\nForeign-key constraints:\n \"term_concept_id_fkey\" FOREIGN KEY (concept_id) REFERENCES concept(id) DEFERRABLE\n\nwdsah=> select count(*) from term;\n count \n---------\n 6109087\n(1 row)\n\nThe purpose of the query is to find all terms which occur is a given\ncolumn of the facttable (again, IRL this is a bit more complicated),\nbasically an optimized version of select distinct.\n\nSome of my columns have very few distinct members:\n\nwdsah=> select * from pg_stats where tablename='facttable_stat_fta4' and attname in ('einheit', 'berechnungsart', 'warenstrom');\n schemaname | tablename | attname | inherited | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation \n------------+---------------------+----------------+-----------+-----------+-----------+------------+------------------+---------------------+------------------+-------------\n public | facttable_stat_fta4 | berechnungsart | f | 0 | 2 | 2 | {n,m} | {0.515167,0.484833} | | 0.509567\n public | 
facttable_stat_fta4 | einheit | f | 0 | 3 | 2 | {EUR,kg} | {0.515167,0.484833} | | 0.491197\n public | facttable_stat_fta4 | warenstrom | f | 0 | 2 | 2 | {X,M} | {0.580267,0.419733} | | -0.461344\n(3 rows)\n\n\nAnd for some of them my query is indeed very fast:\n\nwdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register \n from term t where facttablename='facttable_stat_fta4' and columnname='einheit' and exists (select 1 from facttable_stat_fta4 f where f.einheit=t.term );\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.00..384860.48 rows=1 width=81) (actual time=0.061..0.119 rows=2 loops=1)\n -> Index Scan using term_facttablename_columnname_idx on term t (cost=0.00..391.46 rows=636 width=81) (actual time=0.028..0.030 rows=3 loops=1)\n Index Cond: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'einheit'::text))\n -> Index Scan using facttable_stat_fta4_einheit_idx on facttable_stat_fta4 f (cost=0.00..384457.80 rows=21788970 width=3) (actual time=0.027..0.027 rows=1 loops=3)\n Index Cond: ((einheit)::text = (t.term)::text)\n Total runtime: 0.173 ms\n(6 rows)\n\n0.17 ms. Much faster than a plain select distinct over a table with 43\nmillion rows could ever hope to be. \n\nwarenstrom is very similar and the columns with more distinct values\naren't that bad either. 
\n\nBut for column berechnungsart the result is bad:\n\nwdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register \n from term t where facttablename='facttable_stat_fta4' and columnname='berechnungsart' and exists (select 1 from facttable_stat_fta4 f where f.berechnungsart=t.term );\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)\n Merge Cond: ((t.term)::text = (f.berechnungsart)::text)\n -> Index Scan using term_term_idx on term t (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)\n Filter: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))\n -> Index Scan using facttable_stat_fta4_berechnungsart_idx on facttable_stat_fta4 f (cost=0.00..2545748.85 rows=43577940 width=2) (actual time=0.089..16263.582 rows=21336180 loops=1)\n Total runtime: 30948.648 ms\n(6 rows)\n\nOver 30 seconds! That's almost 200'000 times slower. \n\nThe weird thing is that for this particular table einheit and\nberechnungsart actually have a 1:1 correspondence. Not only is the\nfrequency the same, every row where einheit='kg' has berechnungsart='m'\nand every row where einheit='EUR' has berechnungsart='n'. So I don't see\nwhy two different execution plans are chosen.\n\n\thp\n\n-- \n _ | Peter J. Holzer | I want to forget all about both belts and\n|_|_) | | suspenders; instead, I want to buy pants \n| | | [email protected] | that actually fit.\n__/ | http://www.hjp.at/ | -- http://noncombatant.org/",
"msg_date": "Fri, 29 May 2015 10:55:44 +0200",
"msg_from": "\"Peter J. Holzer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Different plan for very similar queries"
},
{
"msg_contents": "On 2015-05-29 10:55:44 +0200, Peter J. Holzer wrote:\n> wdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register \n> from term t where facttablename='facttable_stat_fta4' and columnname='einheit' and exists (select 1 from facttable_stat_fta4 f where f.einheit=t.term );\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Semi Join (cost=0.00..384860.48 rows=1 width=81) (actual time=0.061..0.119 rows=2 loops=1)\n> -> Index Scan using term_facttablename_columnname_idx on term t (cost=0.00..391.46 rows=636 width=81) (actual time=0.028..0.030 rows=3 loops=1)\n> Index Cond: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'einheit'::text))\n> -> Index Scan using facttable_stat_fta4_einheit_idx on facttable_stat_fta4 f (cost=0.00..384457.80 rows=21788970 width=3) (actual time=0.027..0.027 rows=1 loops=3)\n> Index Cond: ((einheit)::text = (t.term)::text)\n> Total runtime: 0.173 ms\n> (6 rows)\n> \n[...]\n> wdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register \n> from term t where facttablename='facttable_stat_fta4' and columnname='berechnungsart' and exists (select 1 from facttable_stat_fta4 f where f.berechnungsart=t.term );\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)\n> Merge Cond: ((t.term)::text = (f.berechnungsart)::text)\n> -> Index Scan using term_term_idx on term t (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)\n> Filter: (((facttablename)::text = 
'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))\n> -> Index Scan using facttable_stat_fta4_berechnungsart_idx on facttable_stat_fta4 f (cost=0.00..2545748.85 rows=43577940 width=2) (actual time=0.089..16263.582 rows=21336180 loops=1)\n> Total runtime: 30948.648 ms\n> (6 rows)\n\nA couple of additional observations:\n\nThe total cost of both queries is quite similar, so random variations\nmight push into one direction or the other. Indeed, after dropping and\nrecreating indexes (I tried GIN indexes as suggested by Heikki on [1])\nand calling analyze after each change, I have now reached a state where\nboth queries use the fast plan.\n\nIn the first case the query planner seems to add the cost of the two\nindex scans to get the total cost, despite the fact that for a semi join\nthe second index scan can be aborted after the first hit (so either the\ncost of the second scan should be a lot less than 384457.80 or it needs\nto be divided by a large factor for the semi join).\n\nIn the second case the cost of the second index scan (2545748.85) is\neither completely ignored or divided by a large factor: It doesn't seem\nto contribute much to the total cost.\n\n\thp\n\n\n[1] http://hlinnaka.iki.fi/2014/03/28/gin-as-a-substitute-for-bitmap-indexes/\n\n-- \n _ | Peter J. Holzer | I want to forget all about both belts and\n|_|_) | | suspenders; instead, I want to buy pants \n| | | [email protected] | that actually fit.\n__/ | http://www.hjp.at/ | -- http://noncombatant.org/",
"msg_date": "Fri, 29 May 2015 11:51:17 +0200",
"msg_from": "\"Peter J. Holzer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "Hi,\n\nOn 05/29/15 11:51, Peter J. Holzer wrote:\n> A couple of additional observations:\n>\n> The total cost of both queries is quite similar, so random variations\n> might push into one direction or the other. Indeed, after dropping\n> and recreating indexes (I tried GIN indexes as suggested by Heikki on\n> [1]) and calling analyze after each change, I have now reached a\n> state where both queries use the fast plan.\n\nI don't think bitmap indexes are a particularly good match for this use \ncase. The queries need to check the existence of a few records, and btree \nindexes are great for that - the first plan is very fast.\n\nWhy exactly the second query uses a much slower plan I'm not sure. I \nbelieve I've found an issue in planning semi joins (reported to \npgsql-hackers a few minutes ago), but may be wrong and the code is OK.\n\nCan you try forcing the same plan for the second query, using \"enable\" \nflags? E.g.\n\n SET enable_mergejoin = off;\n\nwill disable the merge join, and push the optimizer towards a different \njoin type. You may have to disable a few more node types until you get \nthe same plan as for the first query, i.e.\n\n nestloop semi join\n -> index scan\n -> index scan\n\nSee this for more info:\n\n http://www.postgresql.org/docs/9.1/static/runtime-config-query.html\n\nAlso, have you tuned the PostgreSQL configuration? How?\n\nCan you provide the dataset? 
Not necessarily all the columns, it should \nbe sufficient to provide the columns used in the join/where clauses:\n\n term -> facttablename, columnname, term\n facttable_stat_fta4 -> einheit, berechnungsart\n\nThat'd make reproducing the problem much easier.\n\n> In the first case the query planner seems to add the cost of the two\n> index scans to get the total cost, despite the fact that for a semi\n> join the second index scan can be aborted after the first hit (so\n> either the cost of the second scan should be a lot less than\n> 384457.80 or it needs to be divided by a large factor for the semi\n> join).\n>\n> In the second case the cost of the second index scan (2545748.85) is\n> either completely ignored or divided by a large factor: It doesn't\n> seem to contribute much to the total cost.\n\nI believe this is a consequence of the semi join semantics, because the \nexplain plan contains \"total\" costs and row counts, as if the whole \nrelation was scanned (in this case all the 43M rows), but the optimizer \nonly propagates fraction of the cost estimate (depending on how much of \nthe relation it expects to scan). In this case it expects to scan a tiny \npart of the index scan, so the impact on the total cost is small.\n\nA bit confusing, yeah.\n\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 30 May 2015 01:47:49 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Why exactly does the second query use a much slower plan I'm not sure. I \n> believe I've found an issue in planning semi joins (reported to \n> pgsql-hackers a few minutes ago), but may be wrong and the code is OK.\n\nI think you are probably right that there's a bug there: the planner is\nvastly overestimating the cost of the nestloop-with-inner-indexscan\nplan. However, the reason why the mergejoin plan gets chosen in some\ncases seems to be that an additional estimation error is needed to make\nthat happen; otherwise the nestloop still comes out looking cheaper.\nThe undesirable case looks like:\n\n>> Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)\n>> Merge Cond: ((t.term)::text = (f.berechnungsart)::text)\n>> -> Index Scan using term_term_idx on term t (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)\n>> Filter: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))\n>> -> Index Scan using facttable_stat_fta4_berechnungsart_idx on facttable_stat_fta4 f (cost=0.00..2545748.85 rows=43577940 width=2) (actual time=0.089..16263.582 rows=21336180 loops=1)\n>> Total runtime: 30948.648 ms\n\nNotice that it's estimating the cost of the join as significantly less\nthan the cost of the inner-side indexscan. This means it believes that\nthe inner indexscan will not be run to completion. That's not because of\nsemijoin semantics; there's no stop-after-first-match benefit for mergejoins.\nIt must be that it thinks the range of keys on the outer side of the join\nis much less than the range of keys on the inner. Given that it knows\nthat facttable_stat_fta4.berechnungsart only contains the values \"m\"\nand \"n\", this implies that it thinks term.term only contains \"m\" and\nnot \"n\". 
So this estimation error presumably comes from \"n\" not having\nbeen seen in ANALYZE's last sample of term.term, and raising the stats\ntarget for term.term would probably be a way to fix that.\n\nHowever, this would all be moot if the cost estimate for the nestloop\nplan were nearer to reality. Since you started a separate -hackers\nthread for that issue, let's go discuss that there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 May 2015 15:04:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "[I've seen in -hackers that you already seem to have a fix]\n\nOn 2015-05-30 15:04:34 -0400, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n> > Why exactly does the second query use a much slower plan I'm not sure. I \n> > believe I've found an issue in planning semi joins (reported to \n> > pgsql-hackers a few minutes ago), but may be wrong and the code is OK.\n> \n> I think you are probably right that there's a bug there: the planner is\n> vastly overestimating the cost of the nestloop-with-inner-indexscan\n> plan. However, the reason why the mergejoin plan gets chosen in some\n> cases seems to be that an additional estimation error is needed to make\n> that happen; otherwise the nestloop still comes out looking cheaper.\n> The undesirable case looks like:\n> \n> >> Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)\n> >> Merge Cond: ((t.term)::text = (f.berechnungsart)::text)\n> >> -> Index Scan using term_term_idx on term t (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)\n> >> Filter: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))\n\nJust noticed that this is a bit strange, too: \n\nThis scans the whole index term_term_idx and for every row found it\nchecks the table for the filter condition. So it has to read the whole\nindex and the whole table, right? 
But the planner estimates that it will\nreturn only 636 rows (out of 6.1E6), so using\nterm_facttablename_columnname_idx to extract those 636 and then sorting\nthem should be quite a bit faster (even just a plain full table scan\nand then sorting should be faster).\n\n> >> -> Index Scan using facttable_stat_fta4_berechnungsart_idx on facttable_stat_fta4 f (cost=0.00..2545748.85 rows=43577940 width=2) (actual time=0.089..16263.582 rows=21336180 loops=1)\n> >> Total runtime: 30948.648 ms\n> \n> Notice that it's estimating the cost of the join as significantly less\n> than the cost of the inner-side indexscan. This means it believes that\n> the inner indexscan will not be run to completion. That's not because of\n> semijoin semantics; there's no stop-after-first-match benefit for mergejoins.\n> It must be that it thinks the range of keys on the outer side of the join\n> is much less than the range of keys on the inner. Given that it knows\n> that facttable_stat_fta4.berechnungsart only contains the values \"m\"\n> and \"n\", this implies that it thinks term.term only contains \"m\" and\n> not \"n\". So this estimation error presumably comes from \"n\" not having\n> been seen in ANALYZE's last sample of term.term, and raising the stats\n> target for term.term would probably be a way to fix that.\n\nThe term column has a relatively flat distribution and about 3.5 million\ndistinct values (in 6.1 million rows). So it's unlikely for any specific\nvalue to be included in an ANALYZE sample, and an error to assume that a\nvalue doesn't occur at all just because it wasn't seen by ANALYZE. OTOH,\na value which is seen by ANALYZE will have its frequency overestimated\nby quite a bit. But that shouldn't influence whether the whole index\nneeds to be scanned or not.\n\nAnother thought: For the merge semi join postgresql doesn't actually\nhave to scan the whole inner index. 
It can skip from the first 'm' entry\nto the first 'n' entry reading only a few non-leaf blocks, skipping many\nleaf blocks in the process. The times (7703.917..30948.271) indicate that\nit doesn't actually do this, but maybe the planner assumes it does?\n\nI also suspected that the culprit is the \"columnname\" column. That one has a very\nskewed distribution:\n\nwdsah=> select columnname, count(*) from term group by columnname order by count(*);\n columnname | count \n---------------------+---------\n warenstrom | 3\n einheit | 3\n berechnungsart | 3\n og | 26\n berichtsregion | 242\n partnerregion | 246\n sitcr4 | 4719\n kurzbezeichnung | 1221319\n macrobondtimeseries | 1221320\n | 3661206\n(10 rows)\n\nSo random variation in the sample could throw off the estimated\nfrequencies of the least frequent columnnames by quite a bit.\n\nBut given that both plans estimated the number of rows returned by the\nouter index scan as 636, that was probably a red herring.\n\nBut there does seem to be a connection to this column: In one case\npg_stats contained n_distinct=7 and only the two most common values.\nThen the plan looked like this:\n\nwdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register\n from term t where facttablename='facttable_stat_fta4' and columnname='warenstrom' and exists (select 1 from facttable_stat_fta4 f where f.warenstrom=t.term );\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.00..386141.13 rows=1 width=81) (actual time=0.202..0.253 rows=2 loops=1)\n -> Index Scan using term_facttablename_columnname_idx on term t (cost=0.00..264.03 rows=437 width=81) (actual time=0.097..0.099 rows=3 loops=1)\n Index Cond: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'warenstrom'::text))\n -> Index Scan using 
facttable_stat_fta4_warenstrom_idx on facttable_stat_fta4 f (cost=0.00..385869.36 rows=21787688 width=2) (actual time=0.033..0.033 rows=1 loops=3)\n Index Cond: ((warenstrom)::text = (t.term)::text)\n Total runtime: 0.314 ms\n\nBut after another analyze, pg_stats contained n_distinct=5 and the 5 most\ncommon values. And now the plan looks like this (after disabling\nbitmapscan and hashagg):\n\nwdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register\n from term t where facttablename='facttable_stat_fta4' and columnname='warenstrom' and exists (select 1 from facttable_stat_fta4 f where f.warenstrom=t.term );\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.00..2124104.23 rows=1 width=81) (actual time=0.080..0.129 rows=2 loops=1)\n -> Index Scan using term_facttablename_columnname_idx on term t (cost=0.00..3.23 rows=1 width=81) (actual time=0.030..0.031 rows=3 loops=1)\n Index Cond: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'warenstrom'::text))\n -> Index Scan using facttable_stat_fta4_warenstrom_idx on facttable_stat_fta4 f (cost=0.00..2124100.90 rows=21787688 width=2) (actual time=0.029..0.029 rows=1 loops=3)\n Index Cond: ((warenstrom)::text = (t.term)::text)\n Total runtime: 0.180 ms\n(6 rows)\n\nThe estimated number of rows in the outer scan is way more accurate in\nthe second plan (1 instead of 437), but for some reason the cost for the\ninner scan is higher (2124100.90 instead of 385869.36) although it\nshould be lower (we only need to search for 1 value, not 437).\n\n(There was no analyze (automatic or manual) on\nfacttable_stat_fta4 between those two tests, so the statistics on\nfacttable_stat_fta4 shouldn't have changed - only those for term.)\n\n\thp\n\n-- \n _ | Peter J. 
Holzer | I want to forget all about both belts and\n|_|_) | | suspenders; instead, I want to buy pants \n| | | [email protected] | that actually fit.\n__/ | http://www.hjp.at/ | -- http://noncombatant.org/",
"msg_date": "Sun, 31 May 2015 13:00:10 +0200",
"msg_from": "\"Peter J. Holzer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "\"Peter J. Holzer\" <[email protected]> writes:\n>>> Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)\n>>> Merge Cond: ((t.term)::text = (f.berechnungsart)::text)\n>>> -> Index Scan using term_term_idx on term t (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)\n>>> Filter: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))\n\n> Just noticed that this is a bit strange, too: \n\n> This scans the whole index term_term_idx and for every row found it\n> checks the table for the filter condition. So it has to read the whole\n> index and the whole table, right? But the planner estimates that it will\n> return only 636 rows (out of 6.1E6), so using\n> term_facttablename_columnname_idx to extract those 636 and then sorting\n> them should be quite a bit faster (even just a plain full table scan\n> and then sorting should be faster).\n\nHm. I do not see that here with Tomas' sample data, neither on HEAD nor\n9.1: I always get a scan using term_facttablename_columnname_idx. I agree\nyour plan looks strange. Can you create some sample data that reproduces\nthat particular misbehavior?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 May 2015 11:50:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "On 05/31/15 13:00, Peter J. Holzer wrote:\n> [I've seen in -hackers that you already seem to have a fix]\n>\n> On 2015-05-30 15:04:34 -0400, Tom Lane wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> Why exactly does the second query use a much slower plan I'm not sure. I\n>>> believe I've found an issue in planning semi joins (reported to\n>>> pgsql-hackers a few minutes ago), but may be wrong and the code is OK.\n>>\n>> I think you are probably right that there's a bug there: the planner is\n>> vastly overestimating the cost of the nestloop-with-inner-indexscan\n>> plan. However, the reason why the mergejoin plan gets chosen in some\n>> cases seems to be that an additional estimation error is needed to make\n>> that happen; otherwise the nestloop still comes out looking cheaper.\n>> The undesirable case looks like:\n>>\n>>>> Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)\n>>>> Merge Cond: ((t.term)::text = (f.berechnungsart)::text)\n>>>> -> Index Scan using term_term_idx on term t (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)\n>>>> Filter: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))\n>\n> Just noticed that this is a bit strange, too:\n>\n> This scans the whole index term_term_idx and for every row found it\n> checks the table for the filter condition. So it has to read the whole\n> index and the whole table, right? But the planner estimates that it will\n> return only 636 rows (out of 6.1E6), so using\n> term_facttablename_columnname_idx to extract those 636 and then sorting\n> them should be quite a bit faster (even just a plain full table scan\n> and then sorting should be faster).\n\nThat seems a bit strange, yes. I don't see why a simple index scan (with \nIndex Cond), expected to produce 636, should be more expensive than \nscanning the whole index (with a Filter). 
Even if there's an additional \nSort node, sorting those 636 rows.\n\nBut I've been unable to reproduce that (both on 9.1 and HEAD) without \nsignificant 'SET enable_*' gymnastics, so I'm not sure why that happens. \nDon't you have some 'enable_sort=off' or something like that?\n\nA test case with a data set would help a lot, in this case.\n\n\n> Another thought: For the merge semi join postgresql doesn't actually\n> have to scan the whole inner index. It can skip from the first 'm' entry\n> to the first 'n' entry reading only a few non-leaf blocks, skipping many\n> leaf blocks in the process. The times (7703.917..30948.271) indicate that\n> it doesn't actually do this, but maybe the planner assumes it does?\n\nHow would it know how far to skip? I mean, assume you're on the first \n'n' entry - how do you know where is the first 'm' entry?\n\nIf you only really need to check existence, a nested loop with an inner \nindex scan is probably the right thing anyway, especially if the number \nof outer rows (and thus loops performed) is quite low. This is clearly \ndemonstrated by the first plan in this thread:\n\n QUERY PLAN\n------------------------------------------------------------- ...\n Nested Loop Semi Join (cost=0.00..384860.48 rows=1 width=81 ...\n -> Index Scan using term_facttablename_columnname_idx on ...\n Index Cond: (((facttablename)::text = 'facttable_sta ...\n -> Index Scan using facttable_stat_fta4_einheit_idx on fa ...\n Index Cond: ((einheit)::text = (t.term)::text)\n Total runtime: 0.173 ms\n(6 rows)\n\nThis is probably the best plan you can get in cases like this ...\n\n>\n> I also suspected that the culprit is the \"columnname\" column. 
That one has a very\n> skewed distribution:\n>\n> wdsah=> select columnname, count(*) from term group by columnname order by count(*);\n> columnname | count\n> ---------------------+---------\n> warenstrom | 3\n> einheit | 3\n> berechnungsart | 3\n> og | 26\n> berichtsregion | 242\n> partnerregion | 246\n> sitcr4 | 4719\n> kurzbezeichnung | 1221319\n> macrobondtimeseries | 1221320\n> | 3661206\n> (10 rows)\n>\n> So random variation in the sample could throw off the estimated\n> frequencies of the the least frequent columnnames by quite a bit.\n>\n> But given that both plans estimated the number of rows returned by the\n> outer index scan as 636, that was probably a red herring.\n>\n> But there does seem to be a connection to this column: In one case\n> pg_stats contained n_distinct=7 and only the two most common values.\n> Then the plan looked like this:\n>\n> wdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register\n> from term t where facttablename='facttable_stat_fta4' and columnname='warenstrom' and exists (select 1 from facttable_stat_fta4 f where f.warenstrom=t.term );\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Semi Join (cost=0.00..386141.13 rows=1 width=81) (actual time=0.202..0.253 rows=2 loops=1)\n> -> Index Scan using term_facttablename_columnname_idx on term t (cost=0.00..264.03 rows=437 width=81) (actual time=0.097..0.099 rows=3 loops=1)\n> Index Cond: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'warenstrom'::text))\n> -> Index Scan using facttable_stat_fta4_warenstrom_idx on facttable_stat_fta4 f (cost=0.00..385869.36 rows=21787688 width=2) (actual time=0.033..0.033 rows=1 loops=3)\n> Index Cond: ((warenstrom)::text = (t.term)::text)\n> Total runtime: 0.314 ms\n>\n> But after another analye, pg_stats 
contained n_distinct=5 and the 5 most\n> common values. And now the plan looks like this (after disabling\n> bitmapscan and hashagg):\n>\n> wdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register\n> from term t where facttablename='facttable_stat_fta4' and columnname='warenstrom' and exists (select 1 from facttable_stat_fta4 f where f.warenstrom=t.term );\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Semi Join (cost=0.00..2124104.23 rows=1 width=81) (actual time=0.080..0.129 rows=2 loops=1)\n> -> Index Scan using term_facttablename_columnname_idx on term t (cost=0.00..3.23 rows=1 width=81) (actual time=0.030..0.031 rows=3 loops=1)\n> Index Cond: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'warenstrom'::text))\n> -> Index Scan using facttable_stat_fta4_warenstrom_idx on facttable_stat_fta4 f (cost=0.00..2124100.90 rows=21787688 width=2) (actual time=0.029..0.029 rows=1 loops=3)\n> Index Cond: ((warenstrom)::text = (t.term)::text)\n> Total runtime: 0.180 ms\n> (6 rows)\n>\n> The estimated number of rows in the outer scan is way more accurate in\n> the second plan (1 instead of 437), but for some reason the cost for the\n> inner scan is higher (2124100.90 instead of 385869.36) although it\n> should be lower (we only need to search for 1 value, not 437)\n\nAs I explained in the pgsql-hackers thread (sorry for the confusion, it \nseemed like a more appropriate place for discussion on planner \ninternals), I believe this happens because of only comparing total costs \nof the inner paths. That is a problem, because in this case we only \nreally care about the first tuple, not about all the tuples. Because \nthat's what semijoin needs.\n\nCould you post the plan with bitmapscan enabled? 
I'd bet the cost will \nbe somewhere between 385869.36 and 2124100.90, so that small variations \nin the statistics (and thus costs) cause such plan changes. When the \nindexscan gets below bitmapscan, you get the first (good) plan, \notherwise you get the other one.\n\nAlso, this may be easily caused by variations within the same \nstatistics, e.g. between columns or between values within the same \ncolumn (so a MCV item with 51% gets one plan, item with 49% gets a \ndifferent plan).\n\nThis might be improved by using a larger sample - 9.1 uses default \nstatistics target 100, so samples with 30k rows. Try increasing that to \n1000 (SET default_statistics_target=1000) - that should give more \nconsistent statistics and hopefully stable plans (but maybe in the wrong \ndirection).\n\nAlso, you might tweak the cost variables a bit, to make the cost \ndifferences more significant. But that's secondary I guess, as the costs \n(385869 vs. 2124100) are quite far away.\n\n> (There was no analyze on facttable_stat_fta4 (automatic or manual) on\n> facttable_stat_fta4 between those two tests, so the statistics on\n> facttable_stat_fta4 shouldn't have changed - only those for term.)\n\nSo maybe there was autoanalyze, because otherwise it really should be \nthe same in both plans ...\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Sun, 31 May 2015 18:05:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 05/31/15 13:00, Peter J. Holzer wrote:\n>> (There was no analyze on facttable_stat_fta4 (automatic or manual) on\n>> facttable_stat_fta4 between those two tests, so the statistics on\n>> facttable_stat_fta4 shouldn't have changed - only those for term.)\n\n> So maybe there was autoanalyze, because otherwise it really should be \n> the same in both plans ...\n\nNo, because that's the inside of a nestloop with significantly different\nouter-side rowcount estimates. The first case gets a benefit from the\nexpectation that it will be re-executed many times (see the impact of\nloop_count on cost_index).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 31 May 2015 12:22:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "\n\nOn 05/31/15 18:22, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 05/31/15 13:00, Peter J. Holzer wrote:\n>>> (There was no analyze on facttable_stat_fta4 (automatic or manual) on\n>>> facttable_stat_fta4 between those two tests, so the statistics on\n>>> facttable_stat_fta4 shouldn't have changed - only those for term.)\n>\n>> So maybe there was autoanalyze, because otherwise it really should be\n>> the same in both plans ...\n>\n> No, because that's the inside of a nestloop with significantly different\n> outer-side rowcount estimates. The first case gets a benefit from the\n> expectation that it will be re-executed many times (see the impact of\n> loop_count on cost_index).\n\nMeh, I got confused by the plan a bit - I thought there's a problem in \nthe outer path (e.g. change of row count). But actually this is the path \nscanning the 'term' table, so the change is expected there.\n\nThe fact that the index scan cost 'suddenly' grows from 386k to 2M is \nconfusing at first, but yeah - it's caused by the 'averaging' in \ncost_index() depending on loop_count.\n\nBut I think this does not really change the problem with eliminating \ninner paths solely on the basis of total cost - in fact it probably \nmakes it slightly worse, because the cost also depends on estimates in \nthe outer path (while the bitmapscan does not).\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Sun, 31 May 2015 18:39:10 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different plan for very similar queries"
},
{
"msg_contents": "On 2015-05-29 10:55:44 +0200, Peter J. Holzer wrote:\n> wdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register \n> from term t where facttablename='facttable_stat_fta4' and columnname='einheit' and exists (select 1 from facttable_stat_fta4 f where f.einheit=t.term );\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Semi Join (cost=0.00..384860.48 rows=1 width=81) (actual time=0.061..0.119 rows=2 loops=1)\n> -> Index Scan using term_facttablename_columnname_idx on term t (cost=0.00..391.46 rows=636 width=81) (actual time=0.028..0.030 rows=3 loops=1)\n> Index Cond: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'einheit'::text))\n> -> Index Scan using facttable_stat_fta4_einheit_idx on facttable_stat_fta4 f (cost=0.00..384457.80 rows=21788970 width=3) (actual time=0.027..0.027 rows=1 loops=3)\n> Index Cond: ((einheit)::text = (t.term)::text)\n> Total runtime: 0.173 ms\n> (6 rows)\n> \n> 0.17 ms. Much faster than a plain select distinct over a table with 43\n> million rows could ever hope to be. \n> \n> warenstrom is very similar and the columns with more distinct values\n> aren't that bad either. 
\n> \n> But for column berechnungsart the result is bad:\n> \n> wdsah=> explain analyze select facttablename, columnname, term, concept_id, t.hidden, language, register \n> from term t where facttablename='facttable_stat_fta4' and columnname='berechnungsart' and exists (select 1 from facttable_stat_fta4 f where f.berechnungsart=t.term );\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)\n> Merge Cond: ((t.term)::text = (f.berechnungsart)::text)\n> -> Index Scan using term_term_idx on term t (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)\n> Filter: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))\n> -> Index Scan using facttable_stat_fta4_berechnungsart_idx on facttable_stat_fta4 f (cost=0.00..2545748.85 rows=43577940 width=2) (actual time=0.089..16263.582 rows=21336180 loops=1)\n> Total runtime: 30948.648 ms\n> (6 rows)\n> \n> Over 30 seconds! That's almost 200'000 times slower. \n\nFirst I'd like to apologize for dropping out of the thread without\nproviding a test data set. I actually had one prepared (without\nconfidential data), but I wanted to make sure that I could reproduce the\nproblem with the test data, and I didn't get around to it for a week or\ntwo and then I went on vacation ...\n\nAnyway, in the meantime you released 9.5alpha (thanks for that, I\nprobably would have compiled a snapshot sooner or later, but installing\ndebian packages is just a lot more convenient - I hope you get a lot of\nuseful feedback) and I installed that this weekend. \n\nI am happy to report that the problem appears to be solved. 
All the\nqueries of this type I threw at the database finish in a few\nmilliseconds now.\n\n\thp\n\n\n-- \n _ | Peter J. Holzer | I want to forget all about both belts and\n|_|_) | | suspenders; instead, I want to buy pants \n| | | [email protected] | that actually fit.\n__/ | http://www.hjp.at/ | -- http://noncombatant.org/",
"msg_date": "Sun, 19 Jul 2015 22:41:44 +0200",
"msg_from": "\"Peter J. Holzer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different plan for very similar queries"
}
] |
[
{
"msg_contents": "Hi All,\n\nI am using postgresDB on redhat machine which is having 4GB RAM\nmachine. As soon as it starts to update the postgres DB it will reach\n100%cpu. It will comedown to normal after 40 minutes. I tried perform\nsome tuning on the postgres DB, But result was same.I am not postgres\nDB expert. Even we are not seeing in all machine. Only few machines we\nare seeing this issue. Any help on this would be appreciated.\n\nThanks,\nAshik\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n",
"msg_date": "Fri, 29 May 2015 22:10:35 +0530",
"msg_from": "Ashik S L <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres is using 100% CPU"
},
{
"msg_contents": "Hi,\n\nOn 05/29/15 18:40, Ashik S L wrote:\n> Hi All,\n>\n> I am using postgresDB on redhat machine which is having 4GB RAM\n> machine. As soon as it starts to update the postgres DB it will reach\n> 100%cpu. It will comedown to normal after 40 minutes. I tried perform\n> some tuning on the postgres DB, But result was same.I am not postgres\n> DB expert. Even we are not seeing in all machine. Only few machines we\n> are seeing this issue. Any help on this would be appreciated.\n\nWe need to know more about what you mean by \"update the postgres DB\", \nand basic information like database size, PostgreSQL version etc.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n",
"msg_date": "Fri, 29 May 2015 19:17:29 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "\n\nOn 05/29/15 18:40, Ashik S L wrote:\n> Hi All,\n>\n> I am using postgresDB on redhat machine which is having 4GB RAM\n> machine. As soon as it starts to update the postgres DB it will reach\n> 100%cpu. It will comedown to normal after 40 minutes. I tried perform\n> some tuning on the postgres DB, But result was same.I am not postgres\n> DB expert. Even we are not seeing in all machine. Only few machines we\n> are seeing this issue. Any help on this would be appreciated.\n\n... also, this is not a bug. This is clearly a performance question, so \nplease post it to pgsql-performance list.\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n",
"msg_date": "Fri, 29 May 2015 19:18:22 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "Hi All,\n\nI am using postgresDB on redhat machine which is having 4GB RAM\nmachine. As soon as it starts to Inserting rows into the postgres DB it\nwill reach\n100%cpu. It will comedown to normal after 40 minutes. I tried perform\nsome tuning on the postgres DB, But result was same.I am not postgres\nDB expert. Even we are not seeing in all machine. Only few machines we\nare seeing this issue. Any help on this would be appreciated.\n\nThanks,\nAshik",
"msg_date": "Fri, 29 May 2015 23:27:18 +0530",
"msg_from": "Ashik S L <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Postgres is using 100% CPU"
},
{
"msg_contents": "Hi All,\n\nI am using postgresDB on redhat machine which is having 4GB RAM\nmachine. As soon as it starts to Inserting rows into the postgres DB it\nwill reach 100%cpu. It will comedown to normal after 40 minutes. I tried perform\nsome tuning on the postgres DB, But result was same.I am not postgres\nDB expert. Even we are not seeing in all machine. Only few machines we\nare seeing this issue. Any help on this would be appreciated.\n\nThanks,\nAshik\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 29 May 2015 23:40:52 +0530",
"msg_from": "Ashik S L <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres is using 100% CPU"
},
{
"msg_contents": "\n> machine. As soon as it starts to Inserting rows into the postgres DB it\n> will reach 100%cpu. It will comedown to normal after 40 minutes. I tried perform\n\nHow many rows are you inserting at once? How (sql insert? copy? \\copy? using a\ntemp or unlogged table?)?\n\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 29 May 2015 12:30:15 -0600",
"msg_from": "Yves Dorfsman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "\n\nOn 05/29/15 20:10, Ashik S L wrote:\n> Hi All,\n>\n> I am using postgresDB on redhat machine which is having 4GB RAM\n> machine. As soon as it starts to Inserting rows into the postgres DB\n> it will reach 100%cpu. It will comedown to normal after 40 minutes. I\n> tried perform some tuning on the postgres DB, But result was same.I\n> am not postgres DB expert. Even we are not seeing in all machine.\n> Only few machines we are seeing this issue. Any help on this would\n> be appreciated.\n\nAshik, before pointing you to this list, I asked for some basic \ninformation that are needed when diagnosing issues like this - database \nsize, postgres version etc. We can't really help you without this info, \nbecause right now we only know you're doing some inserts (while before \nyou mentioned updates), and it's slow.\n\nAlso, can you please provide info about the configuration and what \nchanges have you done when tuning it?\n\nHave you seen this?\n\n https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 29 May 2015 21:20:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "We are using postgres SQL version 8.4.17..\nPostgres DB szie is 900 MB and we are inserting 273 rows at once .and\neach row is of 60 bytes.Every time we insert 16380 bytes of data.\nI tried to make some config changes using above link. But I did not\nsee any improvement.\nI made following changes in postgres.conf file:\nshared_buffers = 512MB // It was 32MB\nwork_mem = 30MB\neffective_cache_size = 512MB // I tried with 128MB 256MB also\n\nPlease let me know any config changes that I can try out.\n\nThanks,\nAshik\n\nOn 5/30/15, Tomas Vondra <[email protected]> wrote:\n>\n>\n> On 05/29/15 20:10, Ashik S L wrote:\n>> Hi All,\n>>\n>> I am using postgresDB on redhat machine which is having 4GB RAM\n>> machine. As soon as it starts to Inserting rows into the postgres DB\n>> it will reach 100%cpu. It will comedown to normal after 40 minutes. I\n>> tried perform some tuning on the postgres DB, But result was same.I\n>> am not postgres DB expert. Even we are not seeing in all machine.\n>> Only few machines we are seeing this issue. Any help on this would\n>> be appreciated.\n>\n> Ashik, before pointing you to this list, I asked for some basic\n> information that are needed when diagnosing issues like this - database\n> size, postgres version etc. 
We can't really help you without this info,\n> because right now we only know you're doing some inserts (while before\n> you mentioned updates), and it's slow.\n>\n> Also, can you please provide info about the configuration and what\n> changes have you done when tuning it?\n>\n> Have you seen this?\n>\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 30 May 2015 19:16:24 +0530",
"msg_from": "Ashik S L <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "Hi,\n\nOn 05/30/15 15:46, Ashik S L wrote:\n> We are using postgres SQL version 8.4.17..\n\nFYI 8.4 is already unsupported for ~1 year, so you should consider \nupgrading to a newer release. Also, the newest version in that branch is \n8.4.22, so with 8.4.17 you're missing ~1 year of patches.\n\n> Postgres DB szie is 900 MB and we are inserting 273 rows at once .and\n> each row is of 60 bytes.Every time we insert 16380 bytes of data.\n\nSo you insert 273 rows and it gets stuck for 40 minutes? That's really \nstrange, and I find it rather unlikely even with a badly misconfigured \ninstance. It should easily insert thousands of rows per second.\n\nCan you elaborate more about the database structure, or at least the \ntable(s) you're inserting into. Are there any foreign keys (in either \ndirection), indexes or triggers?\n\n> I tried to make some config changes using above link. But I did not\n> see any improvement.\n> I made following changes in postgres.conf file:\n> shared_buffers = 512MB // It was 32MB\n> work_mem = 30MB\n> effective_cache_size = 512MB // I tried with 128MB 256MB also\n>\n> Please let me know any config changes that I can try out.\n\nI don't think this has anything to do with configuration. This seems \nlike an issue at the application level, or maybe poorly designed schema.\n\nYou mentioned you have multiple machines, and only some of them are \nhaving this issue. What are the differences between the machines? Are \nall the machines using the same schema? I assume each has a different \namount of data.\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 30 May 2015 16:20:32 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "On 05/30/2015 09:46 AM, Ashik S L wrote:\n> We are using postgres SQL version 8.4.17..\n> Postgres DB szie is 900 MB and we are inserting 273 rows at once .and\n> each row is of 60 bytes.Every time we insert 16380 bytes of data.\n\nWay back when, I was inserting a lot of rows of date (millions of rows)\nand it was taking many hours on a machine with 6 10,000 rpm Ultra/320\nSCSI hard drives and 8 GBytes of ram. Each insert was a separate\ntransaction.\n\nWhen I bunched up lots of rows (thousaands) into a single transaction,\nthe whole thing took less than an hour.\n\nIs it possible that when you insert 273 rows at once, you are doing it\nas 273 transactions instead of one?\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key:166D840A 0C610C8B Registered Machine 1935521.\n /( )\\ Shrewsbury, New Jersey http://linuxcounter.net\n ^^-^^ 09:00:01 up 3 days, 9:57, 2 users, load average: 4.89, 4.90, 4.91\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 31 May 2015 09:04:58 -0400",
"msg_from": "Jean-David Beyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
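Jean-David's advice above (bunch many rows into one transaction) can be sketched as follows. This is a hedged illustration: it assumes a psycopg2-style driver with `%s` placeholders, and the table and column names are hypothetical. The helper only builds the batched statements, so it runs without a live database:

```python
# Sketch: batch rows into multi-row INSERTs so each round-trip/commit
# covers many rows instead of one transaction per row.
# The VALUES syntax is standard PostgreSQL; driver and table names
# here are placeholders for illustration only.

def batched_inserts(table, columns, rows, batch_size=1000):
    """Yield (sql, params) pairs, one multi-row INSERT per batch."""
    placeholder = "(" + ", ".join(["%s"] * len(columns)) + ")"
    col_list = ", ".join(columns)
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        sql = "INSERT INTO {} ({}) VALUES {}".format(
            table, col_list, ", ".join([placeholder] * len(batch)))
        params = [v for row in batch for v in row]
        yield sql, params

rows = [(n, "val%d" % n) for n in range(273)]
stmts = list(batched_inserts("mytable", ["id", "payload"], rows))
print(len(stmts))  # one statement covers all 273 rows
```

In practice you would execute each `(sql, params)` pair on one connection inside a single transaction, committing once at the end, rather than committing per row.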
{
"msg_contents": "On 2015-05-31 07:04, Jean-David Beyer wrote:\n> On 05/30/2015 09:46 AM, Ashik S L wrote:\n>> We are using postgres SQL version 8.4.17..\n>> Postgres DB szie is 900 MB and we are inserting 273 rows at once .and\n>> each row is of 60 bytes.Every time we insert 16380 bytes of data.\n> \n> Way back when, I was inserting a lot of rows of date (millions of rows)\n> and it was taking many hours on a machine with 6 10,000 rpm Ultra/320\n> SCSI hard drives and 8 GBytes of ram. Each insert was a separate\n> transaction.\n> \n> When I bunched up lots of rows (thousaands) into a single transaction,\n> the whole thing took less than an hour.\n\nOr use copy, \\copy if possible, or a \"temporary\" unlogged table to copy from\nlater, etc...\n\n> Is it possible that when you insert 273 rows at once, you are doing it\n> as 273 transactions instead of one?\n\nThat's the thing, even on an old laptop with a slow IDE disk, 273 individual\ninserts should not take more than a second.\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 31 May 2015 08:23:10 -0600",
"msg_from": "Yves Dorfsman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "> On 05/30/2015 09:46 AM, Ashik S L wrote:\n>> We are using postgres SQL version 8.4.17..\n>> Postgres DB szie is 900 MB and we are inserting 273 rows at once .and\n>> each row is of 60 bytes.Every time we insert 16380 bytes of data.\n>\n> Way back when, I was inserting a lot of rows of date (millions of rows)\n> and it was taking many hours on a machine with 6 10,000 rpm Ultra/320\n> SCSI hard drives and 8 GBytes of ram. Each insert was a separate\n> transaction.\n>\n> When I bunched up lots of rows (thousaands) into a single transaction,\n> the whole thing took less than an hour.\n\nOr use copy, \\copy if possible, or a \"temporary\" unlogged table to copy from\nlater, etc...\n\n> Is it possible that when you insert 273 rows at once, you are doing it\n> as 273 transactions instead of one?\n\n>That's the thing, even on an old laptop with a slow IDE disk, 273\nindividual\n>inserts should not take more than a second.\n\nWe are inserting 273 rows at once and its taking less than 1 second. But we\nwill be updating bunch of 273 rows every time which is taking high cpu.\nIts like updating 273 rows 2000 to 3000 times. We will be running multiple\ninstances of postgres as well.\n\nOn Sun, May 31, 2015 at 7:53 PM, Yves Dorfsman <[email protected]> wrote:\n\n> On 2015-05-31 07:04, Jean-David Beyer wrote:\n> > On 05/30/2015 09:46 AM, Ashik S L wrote:\n> >> We are using postgres SQL version 8.4.17..\n> >> Postgres DB szie is 900 MB and we are inserting 273 rows at once .and\n> >> each row is of 60 bytes.Every time we insert 16380 bytes of data.\n> >\n> > Way back when, I was inserting a lot of rows of date (millions of rows)\n> > and it was taking many hours on a machine with 6 10,000 rpm Ultra/320\n> > SCSI hard drives and 8 GBytes of ram. Each insert was a separate\n> > transaction.\n> >\n> > When I bunched up lots of rows (thousaands) into a single transaction,\n> > the whole thing took less than an hour.\n>\n> Or use copy, \\copy if possible, or a \"temporary\" unlogged table to copy\n> from\n> later, etc...\n>\n> > Is it possible that when you insert 273 rows at once, you are doing it\n> > as 273 transactions instead of one?\n>\n> That's the thing, even on an old laptop with a slow IDE disk, 273\n> individual\n> inserts should not take more than a second.\n>\n> --\n> http://yves.zioup.com\n> gpg: 4096R/32B0F416\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 1 Jun 2015 11:08:33 +0530",
"msg_from": "Ashik S L <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "On Mon, Jun 1, 2015 at 12:38 AM, Ashik S L <[email protected]> wrote:\n>> On 05/30/2015 09:46 AM, Ashik S L wrote:\n>>> We are using postgres SQL version 8.4.17..\n>>> Postgres DB szie is 900 MB and we are inserting 273 rows at once .and\n>>> each row is of 60 bytes.Every time we insert 16380 bytes of data.\n>>\n>> Way back when, I was inserting a lot of rows of date (millions of rows)\n>> and it was taking many hours on a machine with 6 10,000 rpm Ultra/320\n>> SCSI hard drives and 8 GBytes of ram. Each insert was a separate\n>> transaction.\n>>\n>> When I bunched up lots of rows (thousaands) into a single transaction,\n>> the whole thing took less than an hour.\n>\n> Or use copy, \\copy if possible, or a \"temporary\" unlogged table to copy from\n> later, etc...\n>\n>> Is it possible that when you insert 273 rows at once, you are doing it\n>> as 273 transactions instead of one?\n>\n>>That's the thing, even on an old laptop with a slow IDE disk, 273\n>> individual\n>>inserts should not take more than a second.\n>\n> We are inserting 273 rows at once and its taking less than 1 second. But we\n> will be updating bunch of 273 rows every time which is taking high cpu.\n> Its like updating 273 rows 2000 to 3000 times. We will be running multiple\n> instances of postgres as well.\n\nSomething is wrong. This is not typical behavior. Let's rule out\nsome obvious things:\n*) do you have any triggers on the table\n*) is your insert based on complicated query\n*) can you 'explain analyze' the statement you think is causing the cpu issues\n*) is this truly cpu problem or iowait?\n*) are there other queries running you are not aware of? let's check\nthe contents of 'pg_stat_activity' when cpu issues are happening\n\nwhat operating system is this? if linux/unix, let's get a 'top'\nprofile and confirm (via pid) that will tell you if your problems are\nproper cpu or storate based. 
On default configuration with slow\nstorage, 273 inserts/sec will be the maximum the hardware will support\nwhile syncing every transaction (note, this is 'good', and there are\nmany techniques to work around the problem). Try flipping the\n'synchronous_commit' setting in postgresql.conf to see if that\nimproves performance.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 1 Jun 2015 08:20:45 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
},
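One way to answer Merlin's "truly cpu problem or iowait?" question on Linux is to compare two samples of the aggregate `cpu` line in `/proc/stat` (field order: user, nice, system, idle, iowait, irq, softirq, ...). A small sketch, using synthetic sample lines so it runs standalone:

```python
# Distinguish real CPU load from iowait by diffing two samples of the
# aggregate "cpu" line from Linux /proc/stat (values are in jiffies).

def cpu_iowait_fractions(stat_line_a, stat_line_b):
    """Given two samples of the 'cpu' line, return (busy_frac, iowait_frac)."""
    a = [int(x) for x in stat_line_a.split()[1:]]
    b = [int(x) for x in stat_line_b.split()[1:]]
    delta = [y - x for x, y in zip(a, b)]
    total = sum(delta)
    idle, iowait = delta[3], delta[4]   # idle is field 4, iowait field 5
    busy = total - idle - iowait
    return busy / total, iowait / total

# Synthetic samples, notionally 1 second apart:
s1 = "cpu 1000 0 500 8000 200 0 0 0"
s2 = "cpu 1090 0 520 8010 280 0 0 0"
busy, iowait = cpu_iowait_fractions(s1, s2)
print(round(busy, 2), round(iowait, 2))
```

In a real check you would read `/proc/stat`, `time.sleep(1)`, read it again, and pass in the two `cpu` lines; a high iowait fraction points at storage, a high busy fraction at genuine CPU load.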
{
"msg_contents": "On Mon, Jun 1, 2015 at 7:20 AM, Merlin Moncure <[email protected]> wrote:\n> On Mon, Jun 1, 2015 at 12:38 AM, Ashik S L <[email protected]> wrote:\n>>> On 05/30/2015 09:46 AM, Ashik S L wrote:\n>>>> We are using postgres SQL version 8.4.17..\n>>>> Postgres DB szie is 900 MB and we are inserting 273 rows at once .and\n>>>> each row is of 60 bytes.Every time we insert 16380 bytes of data.\n>>>\n>>> Way back when, I was inserting a lot of rows of date (millions of rows)\n>>> and it was taking many hours on a machine with 6 10,000 rpm Ultra/320\n>>> SCSI hard drives and 8 GBytes of ram. Each insert was a separate\n>>> transaction.\n>>>\n>>> When I bunched up lots of rows (thousaands) into a single transaction,\n>>> the whole thing took less than an hour.\n>>\n>> Or use copy, \\copy if possible, or a \"temporary\" unlogged table to copy from\n>> later, etc...\n>>\n>>> Is it possible that when you insert 273 rows at once, you are doing it\n>>> as 273 transactions instead of one?\n>>\n>>>That's the thing, even on an old laptop with a slow IDE disk, 273\n>>> individual\n>>>inserts should not take more than a second.\n>>\n>> We are inserting 273 rows at once and its taking less than 1 second. But we\n>> will be updating bunch of 273 rows every time which is taking high cpu.\n>> Its like updating 273 rows 2000 to 3000 times. We will be running multiple\n>> instances of postgres as well.\n>\n> Something is wrong. This is not typical behavior. Let's rule out\n> some obvious things: [...]\n\nOP has not convinced me there's an actual problem. How many inserts\nper second / minute / hour can these machines handle? Are they\nhandling when they're at 100% CPU. 100% CPU isn't an automatically bad\nthing. Every query is limited in some way.
If you've got some\nmonstrous IO subsystem it's not uncommon for the CPU to be the big\nlimiter.\n\nI'm not sure I've seen OP use the word slow anywhere... Just 100% CPU.\n\nI'd say we need metrics from iostat, vmstat, iotop, top, htop and\nperformance numbers before deciding there IS a problem.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 1 Jun 2015 13:35:06 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
}
] |
[
{
"msg_contents": "I have several databases that have the same schema but different amounts of data in it (let's categorize these as Small, Medium, and Large). We have a mammoth query with 13 CTEs that are LEFT JOINed against a main table. This query takes <30 mins on the Small database, <2 hours to run on Large, but on the Medium database it takes in the vicinity of 14 hours.\n\nRunning truss/strace on the backend process running this query on the Medium database reveals that for a big chunk of this time Postgres creates/reads/unlinks a very large quantity (millions?) of tiny files inside pgsql_tmp. I also ran an EXPLAIN ANALYZE and am attaching the most time-consuming parts of the plan (with names redacted). Although I'm not too familiar with the internals of Postgres' Hash implementation, it seems that having over 4 million hash batches could be what's causing the problem.\n\nI'm running PostgreSQL 9.3.5, and have work_mem set to 32MB.\n\nIs there any way I can work around this problem, other than to experiment with disabling enable_hashjoin for this query/database?\n\nAlex\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 1 Jun 2015 16:03:10 +0000",
"msg_from": "Alex Adriaanse <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow hash join performance with many batches"
},
{
"msg_contents": "Alex Adriaanse <[email protected]> writes:\n> I have several databases that have the same schema but different amounts of data in it (let's categorize these as Small, Medium, and Large). We have a mammoth query with 13 CTEs that are LEFT JOINed against a main table. This query takes <30 mins on the Small database, <2 hours to run on Large, but on the Medium database it takes in the vicinity of 14 hours.\n> Running truss/strace on the backend process running this query on the Medium database reveals that for a big chunk of this time Postgres creates/reads/unlinks a very large quantity (millions?) of tiny files inside pgsql_tmp. I also ran an EXPLAIN ANALYZE and am attaching the most time-consuming parts of the plan (with names redacted). Although I'm not too familiar with the internals of Postgres' Hash implementation, it seems that having over 4 million hash batches could be what's causing the problem.\n\n> I'm running PostgreSQL 9.3.5, and have work_mem set to 32MB.\n\nI'd try using a significantly larger work_mem setting for this query,\nso as to have fewer hash batches and more buckets per batch.\n\nIt might be unwise to raise your global work_mem setting, but perhaps\nyou could just do a \"SET work_mem\" within the session running the query.\n\nAlso, it looks like the planner is drastically overestimating the sizes\nof the CTE outputs, which is contributing to selecting unreasonably large\nnumbers of batches. If you could get those numbers closer to reality it'd\nlikely help. Hard to opine further since no details about the CTEs were\nprovided.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 01 Jun 2015 13:58:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow hash join performance with many batches"
}
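Tom's two observations above can be made concrete with a little arithmetic. The model below is a simplification of the planner's batching decision (the real logic, in `ExecChooseHashTableSize()`, also accounts for buckets and per-tuple overhead): the batch count is roughly the estimated inner input size divided by work_mem, rounded up to a power of two. Inverting it shows why 4 million batches at 32MB work_mem implies a wildly overestimated input:

```python
# Rough model: a hash join splits the inner input into enough
# power-of-two batches that one batch fits in work_mem.
# (Simplified -- ignores bucket headers, skew batches, etc.)

def est_batches(inner_bytes, work_mem_bytes):
    n = 1
    while inner_bytes / n > work_mem_bytes:
        n *= 2
    return n

def implied_inner_size(batches, work_mem_bytes):
    """Invert the model: what input size would produce this many batches?"""
    return batches * work_mem_bytes

MB, TB = 1 << 20, 1 << 40
print(est_batches(1024 * MB, 32 * MB))                    # 1GB input, 32MB work_mem
print(implied_inner_size(4 * 1024 * 1024, 32 * MB) / TB)  # size implied by ~4M batches
```

Over a hundred terabytes of estimated hash input for a query over ordinary tables is a strong sign that the CTE row estimates, not work_mem alone, are the root of the problem — which matches the advice to both raise work_mem for the session and get the CTE output estimates closer to reality.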
] |
[
{
"msg_contents": "I am connecting to a Postgres instance using SSL and seeing fairly slow connect times. I would expect there would be some overhead but it's more than I anticipated. The connection is happening over a network. I am using a wildcard SSL certificate on the server side only.\n\nUsing one of these JDBC SSL connect strings takes on average: 1060 ms to connect to the database:\njdbc:postgresql://db01-dev.pointclickcare.com:5432/testdb?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.jdbc4.LibPQFactory\n- or -\njdbc:postgresql://db01-dev.pointclickcare.com:5432/testdb?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.NonValidatingFactory\n\nUsing this JDBC non-SSL connect string takes on average: 190 ms to connect to the database:\njdbc:postgresql://db01-dev.pointclickcare.com:5432/testdb\n\nDoes this sound like a reasonable overhead that SSL would add to the connection time or does this seem high? (~870ms/~443% slower using SSL)\n\nI am using this Postgres version:\nPostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit\n\nThe Postgres JDBC driver I am using is:\npostgresql-9.4-1201-jdbc41.jar\n\nMy pg_hba.conf is below. It doesn't use DNS names, so DNS lookups shouldn't be a problem; in any case, performing an nslookup on my client IP does return very quickly. I've also tried connecting to Postgres both using a DNS name and an IP directly.\n\n# PostgreSQL Client Authentication Configuration File\n# ===================================================\n# TYPE DATABASE USER ADDRESS METHOD\n\nlocal all postgres trust\n\nlocal all all ident\n\nhost all all 127.0.0.1/32 md5\n\nhost all all ::1/128 md5\n\nhostssl testdb all 0.0.0.0/0 md5\n\nhostssl testdb all ::1/128 md5\n\n# \"local\" is for Unix domain socket connections only\nlocal all all peer\n\n\nlog_hostname in postgresql.conf is off.\n\n\nI did a search on the forums and found some older posts. One suggested SSL compression is a culprit of slowdowns, but I don't think that would apply to the connection time. Another says the authentication could be causing the slowdown, but changing md5 to either password or even trust made no difference to the connect time.",
"msg_date": "Mon, 1 Jun 2015 20:51:59 +0000",
"msg_from": "Marco Di Cesare <[email protected]>",
"msg_from_op": true,
"msg_subject": "Connection time when using SSL"
},
{
"msg_contents": "Hi\n\nOn 06/01/15 22:51, Marco Di Cesare wrote:\n> I am connecting to a Postgres instance using SSL and seeing fairly slow\n> connect times. I would expect there would be some overhead but it's more\n> than I anticipated. The connection is happening over a network. I am\n> using a wildcard SSL certificate on the server side only.\n> Using one of these JDBC SSL connect strings takes on average: 1060 ms to\n> connect to the database:\n> jdbc:postgresql://db01-dev.pointclickcare.com:5432/testdb?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.jdbc4.LibPQFactory\n> - or -\n> jdbc:postgresql://db01-dev.pointclickcare.com:5432/testdb?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.NonValidatingFactory\n> Using this JDBC non-SSL connect string takes on average: 190 ms to\n> connect to the database:\n> jdbc:postgresql://db01-dev.pointclickcare.com:5432/testdb\n> Does this sound like a reasonable overhead that SSL would add to the\n> connection time or does this seem high? (~870ms/~443% slower using SSL)\n\nWhat is the network latency (ping) between the two hosts? SSL requires a \nhandshake, exchanging a number of messages between the two hosts, and if \neach roundtrip takes a significant amount of time ...\n\nThe 190ms seems quite high. On my rather slow workstation, a local \nconnection without SSL takes ~30ms, with SSL ~70ms. So I wouldn't be \nsurprised by ~100ms roundtrips in your case, and that is going to slow \ndown the SSL handshake significantly.\n\n\nThere's very little you can do about the roundtrip time, usually, but you \ncan keep the connections open in a pool. That'll amortize the costs.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 01 Jun 2015 23:21:10 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection time when using SSL"
}
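Tomas's round-trip argument can be made concrete with a back-of-the-envelope model. All numbers below are illustrative assumptions, not measurements from this thread: the round-trip counts are rough figures for a TCP connect plus PostgreSQL startup/auth, and for a full TLS handshake on top of that.

```python
# Rough model of connect time on a high-latency link: each handshake
# round trip costs one RTT, plus a fixed CPU cost for the key exchange.
# All inputs here are assumptions for illustration, not measured values.

def connect_time_ms(rtt_ms, roundtrips, fixed_ms=0.0):
    """connect time ~= network round trips * RTT + fixed per-connect cost"""
    return roundtrips * rtt_ms + fixed_ms

rtt = 100.0                                 # assumed WAN round-trip time (ms)
plain = connect_time_ms(rtt, roundtrips=2)  # assumed TCP + startup/auth cost
with_ssl = connect_time_ms(rtt, roundtrips=2 + 4, fixed_ms=60.0)  # + assumed TLS cost

print(f"plain: {plain:.0f} ms, ssl: {with_ssl:.0f} ms")
```

The model only illustrates why pooling helps: the per-connection RTT cost is paid once per pooled connection instead of once per query.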
] |
[
{
"msg_contents": "On Sun, May 31, 2015 at 7:53 PM, Yves Dorfsman <[email protected]> wrote:\n\n>> That's the thing, even on an old laptop with a slow IDE disk, 273\n> individual\n>> inserts should not take more than a second.\n> \n\nI think that would depend on settings such as synchronous_commit, commit_delay, or whether 2-phase commit is being used. \n\nIf synchronous commit is enabled and commit_delay was not used (e.g. 0), and you have a client synchronously making individual inserts to the DB (1 transaction each), then surely you have delays due to waiting for each transaction to commit synchronously to WAL on disk? \n\nI believe yes / 0 are the default settings for synchronous commit and commit_delay. (Interestingly the manual pages do not specify.)\n\n\nAssuming a 5400RPM laptop drive (which is a typical drive - some laptop drives run < 5000RPM), and assuming you are writing a sequential log to disk (with very short gaps between entries being added, e.g. no seek time, only rotational latency) will mean 5400 transactions per minute, 1 write per rotation. \n\nThat's a maximum 90 transactions per second synchronised to WAL. It would take just over 3 seconds.\n\n\nAshik, try altering your postgresql.conf to say 'commit_delay=100' or 'synchronous_commit=off'. Let us know if that fixes the problem. Read up on the options before you change them.\n\nGraeme Bell\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Jun 2015 07:47:29 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres is using 100% CPU"
},
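The arithmetic in the message above, spelled out under the assumptions stated there: a 5400 RPM drive, at most one synchronous WAL flush per platter rotation, and one transaction per insert.

```python
# 5400 rotations/minute, at most one synchronous commit per rotation.
RPM = 5400
commits_per_second = RPM / 60        # one commit per rotation -> 90/s
inserts = 273                        # the batch size from the thread
seconds = inserts / commits_per_second

print(commits_per_second)  # 90.0
print(round(seconds, 2))   # 3.03
```

This matches the "just over 3 seconds" figure, and shows why `synchronous_commit = off` (or a `commit_delay` that batches flushes) changes the picture entirely.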
{
"msg_contents": "> I believe yes / 0 are the default settings for synchronous commit and commit_delay. ** (Interestingly the manual pages do not specify.) ** \n\nSorry, I've just spotted the settings in the text. The statement (marked **) is incorrect. \n\nDefaults are yes/0. (http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html)\n\nGraeme.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Jun 2015 07:58:54 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres is using 100% CPU"
},
{
"msg_contents": "Thanks everyone for your response. I will try these settings.\n\nOn Tue, Jun 2, 2015 at 1:28 PM, Graeme B. Bell <[email protected]>\nwrote:\n\n> > I believe yes / 0 are the default settings for synchronous commit and\n> commit_delay. ** (Interestingly the manual pages do not specify.) **\n>\n> Sorry, I've just spotted the settings in the text. The statement (marked\n> **) is incorrect.\n>\n> Defaults are yes/0. (\n> http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html)\n>\n> Graeme.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 2 Jun 2015 13:50:13 +0530",
"msg_from": "Ashik S L <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres is using 100% CPU"
}
] |
[
{
"msg_contents": "Hi all, \n\nWe have a big database (more than 300 Gb) and we run a lot of queries each\nminute. \n\nHowever, once an hour, the (very complex) query writes A LOT on the disk\n(more than 95 Gb !!!)\nWe have 64 Gb of RAM and this is our config :\n\n\nAnd my error on the query is : \n\n\nDo you know how to solve this problem ? \n\nBest regards,\nBenjamin.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 05:21:32 -0700 (MST)",
"msg_from": "\"ben.play\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "You should repost this directly and not through Nabble. It has wrapped\nyour code in raw tags which the PostgreSQL mailing list software strips.\n\nOn Wednesday, June 3, 2015, ben.play <[email protected]> wrote:\n\n> Hi all,\n>\n> We have a big database (more than 300 Gb) and we run a lot of queries each\n> minute.\n>\n> However, once an hour, the (very complex) query writes A LOT on the disk\n> (more than 95 Gb !!!)\n> We have 64 Gb of RAM and this is our config :\n>\n>\n> And my error on the query is :\n>\n>\n> Do you know how to solve this problem ?\n>\n> Best regards,\n> Benjamin.\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Wed, 3 Jun 2015 08:44:23 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "SQLSTATE[53100]: Disk full: 7 ERROR: could not write block 1099247 of\ntemporary file\n\nIt looks like there is no room to write the temporary file; try limiting\ntemporary file size by setting the temp_file_limit GUC.\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321p5852328.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 06:11:22 -0700 (MST)",
"msg_from": "amulsul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
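For reference, `temp_file_limit` can be set per session as well as in postgresql.conf. A minimal sketch; the value is an arbitrary example, not a recommendation:

```sql
-- Cap the temp file space a single session may use; a query that would
-- exceed the cap fails with an error instead of filling the whole disk.
-- (temp_file_limit is available in PostgreSQL 9.2 and later.)
SET temp_file_limit = '20GB';
```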
{
"msg_contents": "Hi Benjamin,\n\nIt looks like you are facing a disk space issue with your queries.\nIn order to avoid the disk space issue you can do the following.\n1) Increase the work_mem parameter at session level before executing the\nqueries.\n2) If you observe the disk space issue for particular user queries, increase the\nwork_mem parameter at user level.\n3) Check with the developer to tune the query.\n\nOn Wed, Jun 3, 2015 at 6:41 PM, amulsul <[email protected]> wrote:\n\n> SQLSTATE[53100]: Disk full: 7 ERROR: could not write block 1099247 of\n> temporary file\n>\n> Its looks like there is no room to write temporary file, try with limiting\n> temporary file size by setting temp_file_limit GUC.\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321p5852328.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Wed, 3 Jun 2015 18:57:11 +0530",
"msg_from": "chiru r <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "\n\nOn 06/03/15 15:27, chiru r wrote:\n> Hi Benjamin,\n>\n> It looks you are facing disk space issue for queries.\n> In order to avid the disk space issue you can do the following.\n> 1) Increase the work_mem parameter session level before executing the\n> queries.\n> 2) If you observe diskspace issue particular user queries,increase the\n> work_mem parameter user level.\n\nThe suggestion to increase work_mem is a bit naive, IMHO. The query is \nwriting ~95GB to disk; it usually takes more space to keep the same data \nin memory. They only have 64GB of RAM ...\n\nIn the good case, it will crash just like now. In the worst case, the \nOOM killer will intervene, possibly crashing the whole database.\n\n\n> 3) Check with developer to tune the query.\n\nThat's a better possibility. Sadly, we don't know what the query is \ndoing, so we can't judge how much it can be optimized.\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 03 Jun 2015 15:51:05 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "The query is (unfortunately) generated by Doctrine 2 (Symfony 2). \nWe can’t change the query easily.\n\nThis is my config : \nmax_connections = 80\nshared_buffers = 15GB\nwork_mem = 384MB\nmaintenance_work_mem = 1GB\n#temp_buffers = 8MB \n#temp_file_limit = -1 \neffective_cache_size = 44GB\n\nIf I set a temp_file_limit, will all my queries that have to write to disk crash?\n\nAs you can see… I have 64 gb of Ram, but less than 3 Gb is used !\n\nben@bdd:/home/benjamin# free -m\n total used free shared buffers cached\nMem: 64456 64141 315 15726 53 61761\n-/+ buffers/cache: 2326 62130\nSwap: 1021 63 958\n\n\nThanks guys for your help :)\n\n\n> On 3 June 2015 at 15:51, Tomas Vondra-4 [via PostgreSQL] <[email protected]> wrote:\n> \n> \n> \n> On 06/03/15 15:27, chiru r wrote: \n> > Hi Benjamin, \n> > \n> > It looks you are facing disk space issue for queries. \n> > In order to avid the disk space issue you can do the following. \n> > 1) Increase the work_mem parameter session level before executing the \n> > queries. \n> > 2) If you observe diskspace issue particular user queries,increase the \n> > work_mem parameter user level. \n> \n> The suggestion to increase work_mem is a bit naive, IMHO. The query is \n> writing ~95GB to disk, it usually takes more space to keep the same data \n> in memory. They only have 64GB of RAM ... \n> \n> In the good case, it will crash just like now. In the worse case, the \n> OOM killer will intervene, possibly crashing the whole database. \n> \n> \n> > 3) Check with developer to tune the query. \n> \n> That's a better possibility. Sadly, we don't know what the query is \n> doing, so we can't judge how much it can be optimized. \n> \n> -- \n> Tomas Vondra http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321p5852332.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Wed, 3 Jun 2015 07:06:18 -0700 (MST)",
"msg_from": "\"ben.play\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "\nOn 06/03/15 16:06, ben.play wrote:\n> The query is (unfortunately) generated by Doctrine 2 (Symfony 2).\n> We can’t change the query easily.\n\nWell, then you'll probably have to buy more RAM, apparently.\n\n> This is my config :\n>\n> max_connections = 80\n> shared_buffers = 15GB\n> work_mem = 384MB\n> maintenance_work_mem = 1GB\n> #temp_buffers = 8MB\n> #temp_file_limit = -1\n> effective_cache_size = 44GB\n>\n>\n> If I put a temp_file_limit …Are all my queries (who have to write on\n> disk) will crash ?\n>\n> As you can see… I have 64 gb of Ram, but less than 3 Gb is used !\n>\n> ben@bdd:/home/benjamin# free -m\n> total used free shared buffers cached\n> Mem: 64456 64141 315 15726 53 61761\n> -/+ buffers/cache: 2326 62130\n> Swap: 1021 63 958\n>\n>\n> Thanks guys for your help :)\n\nI don't see why you think you have less than 3GB used. The output you \nposted clearly shows there's only ~300MB memory free - there's 15GB \nshared buffers and ~45GB of page cache (file system cache).\n\nBut you still haven't shown us the query (the EXPLAIN ANALYZE of it), so \nwe can't really give you advice.\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 03 Jun 2015 16:56:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 06/03/15 16:06, ben.play wrote:\n>>\n>> The query is (unfortunately) generated by Doctrine 2 (Symfony 2).\n>> We can’t change the query easily.\n>\n>\n> Well, then you'll probably have to buy more RAM, apparently.\n>\n>> This is my config :\n>>\n>> max_connections = 80\n>> shared_buffers = 15GB\n>> work_mem = 384MB\n>> maintenance_work_mem = 1GB\n>> #temp_buffers = 8MB\n>> #temp_file_limit = -1\n>> effective_cache_size = 44GB\n>>\n>>\n>> If I put a temp_file_limit …Are all my queries (who have to write on\n>> disk) will crash ?\n>>\n>> As you can see… I have 64 gb of Ram, but less than 3 Gb is used !\n>>\n>> ben@bdd:/home/benjamin# free -m\n>> total used free shared buffers cached\n>> Mem: 64456 64141 315 15726 53 61761\n>> -/+ buffers/cache: 2326 62130\n>> Swap: 1021 63 958\n>>\n>>\n>> Thanks guys for your help :)\n>\n>\n> I don't see why you think you have less than 3GB used. The output you posted\n> clearly shows there's only ~300MB memory free - there's 15GB shared buffers\n> and ~45GB of page cache (file system cache).\n\nBecause you subtract cached from used to see how much real spare\nmemory you have. The kernel will dump cached mem as needed to free up\nspace for memory usage. So 64141-61761=2380MB used.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 09:09:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
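Scott's arithmetic from the quoted free -m output, made explicit (note Tomas's caveat elsewhere in the thread that ~15GB of the cached column is PostgreSQL's own shared_buffers):

```python
# Figures (in MB) from the "free -m" output quoted in the thread.
used, free_mem, buffers, cached = 64141, 315, 53, 61761

# Scott's view: page cache doesn't count as "used", so subtract it.
really_used = used - cached
print(really_used)                  # 2380, the figure in the message above

# free(1) itself reports the same idea on its "-/+ buffers/cache" line,
# also subtracting buffers (it shows 2326; the 1 MB gap is rounding).
print(used - buffers - cached)
```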
{
"msg_contents": "On Wed, Jun 3, 2015 at 11:56 AM, Tomas Vondra\n<[email protected]> wrote:\n> On 06/03/15 16:06, ben.play wrote:\n>>\n>> The query is (unfortunately) generated by Doctrine 2 (Symfony 2).\n>> We can’t change the query easily.\n>\n>\n> Well, then you'll probably have to buy more RAM, apparently.\n\nThere's an easy way to add disk space for this kind of thing.\n\nAdd a big fat rotational HD (temp tables are usually sequentially\nwritten and scanned so rotational performs great), format it of\ncourse, and create a tablespace pointing to it. Then set it as default\nin temp_tablespaces (postgresql.conf) or do it in the big query's\nsession (I'd recommend the global option if you don't already use a\nseparate tablespace for temporary tables).\n\nNot only it will give you the necessary space, but it will also be\nsubstantially faster.\n\nYou'll have to be careful about backups though (the move from one\nfilesystem to two filesystems always requires changes to backup\nstrategies)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 14:06:35 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
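Claudio's suggestion, sketched as SQL. The tablespace name and path are hypothetical; the directory must already exist and be owned by the postgres user:

```sql
-- Put temporary files on a dedicated rotational disk.
CREATE TABLESPACE temp_spill LOCATION '/mnt/bulk_hdd/pg_temp';

-- Route temp files/tables there for this session only ...
SET temp_tablespaces = 'temp_spill';

-- ... or make it the cluster-wide default (ALTER SYSTEM needs 9.4+;
-- on older versions set temp_tablespaces in postgresql.conf instead).
ALTER SYSTEM SET temp_tablespaces = 'temp_spill';
SELECT pg_reload_conf();
```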
{
"msg_contents": "\n\nOn 06/03/15 17:09, Scott Marlowe wrote:\n> On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra\n>>\n>>\n>> I don't see why you think you have less than 3GB used. The output you posted\n>> clearly shows there's only ~300MB memory free - there's 15GB shared buffers\n>> and ~45GB of page cache (file system cache).\n>\n> Because you subtract cached from used to see how much real spare\n> memory you have. The kernel will dump cached mem as needed to free up\n> space for memory usage. So 64141-61761=2380MB used.\n\nWell, except that 15GB of that is shared_buffers, and I wouldn't call \nthat 'free'. Also, I don't see page cache as entirely free - you \nprobably want at least some caching at this level.\n\nIn any case, even if all 64GB were free, this would not be enough for \nthe query that needs >95GB for temp files.\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 03 Jun 2015 21:24:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On Wed, Jun 3, 2015 at 1:24 PM, Tomas Vondra\n<[email protected]> wrote:\n>\n>\n> On 06/03/15 17:09, Scott Marlowe wrote:\n>>\n>> On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra\n>>>\n>>>\n>>>\n>>> I don't see why you think you have less than 3GB used. The output you\n>>> posted\n>>> clearly shows there's only ~300MB memory free - there's 15GB shared\n>>> buffers\n>>> and ~45GB of page cache (file system cache).\n>>\n>>\n>> Because you subtract cached from used to see how much real spare\n>> memory you have. The kernel will dump cached mem as needed to free up\n>> space for memory usage. So 64141-61761=2380MB used.\n>\n>\n> Well, except that 15GB of that is shared_buffers, and I wouldn't call that\n> 'free'. Also, I don't see page cache as entirely free - you probably want at\n> least some caching at this level.\n>\n> In any case, even if all 64GB were free, this would not be enough for the\n> query that needs >95GB for temp files.\n\nYou can argue all you want, but this machine has plenty of free memory\nright now, and unless the OP goes crazy and cranks up work_mem to some\nmuch higher level it'll stay that way, which is good. There's far far\nmore than 300MB free here. At the drop of a hat there can be ~60G\nfreed up as needed, either for shared_buffers or work_mem or other\nthings to happen. Cache doesn't count as \"used\" in terms of real\nmemory pressure. IE you're not gonna start getting swapping because you\nneed more memory, it'll just come from the cache.\n\nCache is free memory. If you think of it any other way when you're\nlooking at memory usage and pressure on things like swap you're gonna\nmake some bad decisions.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 15:18:24 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On Wed, Jun 3, 2015 at 6:18 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Jun 3, 2015 at 1:24 PM, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>\n>> On 06/03/15 17:09, Scott Marlowe wrote:\n>>>\n>>> On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra\n>>>>\n>>>>\n>>>>\n>>>> I don't see why you think you have less than 3GB used. The output you\n>>>> posted\n>>>> clearly shows there's only ~300MB memory free - there's 15GB shared\n>>>> buffers\n>>>> and ~45GB of page cache (file system cache).\n>>>\n>>>\n>>> Because you subtract cached from used to see how much real spare\n>>> memory you have. The kernel will dump cached mem as needed to free up\n>>> space for memory usage. So 64141-61761=2380MB used.\n>>\n>>\n>> Well, except that 15GB of that is shared_buffers, and I wouldn't call that\n>> 'free'. Also, I don't see page cache as entirely free - you probably want at\n>> least some caching at this level.\n>>\n>> In any case, even if all 64GB were free, this would not be enough for the\n>> query that needs >95GB for temp files.\n>\n> You can argue all you want, but this machine has plenty of free memory\n> right now, and unless the OP goes crazy and cranks up work_mem to some\n> much higher level it'll stay that way, which is good. There's far far\n> more than 300MB free here. At the drop of a hat there can be ~60G\n> freed up as needed, either for shared_buffers or work_mem or other\n> things to happen. Cache doesn't count as \"used\" in terms of real\n> memory pressure. IE you're not gonna start getting swapping becase you\n> need more memory, it'll just come from the cache.\n\nIn my experience, dumping the buffer cache as heavily as that counts\nas thrashing. 
Either concurrent or future queries will have to go to\ndisk and that will throw performance out the window, which is never\nquite so ok.\n\nIt is generally better to let pg use that temporary file (unless the\nin-memory strategy happens to be much faster, say using a hash instead\nof sort, which usually doesn't happen in my experience for those sizes\nanyway) and let the OS handle the pressure those dirty buffers cause.\nThe OS will usually handle it better than work_mem.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 18:58:57 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "\n\nOn 06/03/15 23:18, Scott Marlowe wrote:\n> On Wed, Jun 3, 2015 at 1:24 PM, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 06/03/15 17:09, Scott Marlowe wrote:\n>>>\n>>> On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra\n>>>\n>> Well, except that 15GB of that is shared_buffers, and I wouldn't call that\n>> 'free'. Also, I don't see page cache as entirely free - you probably want at\n>> least some caching at this level.\n>>\n>> In any case, even if all 64GB were free, this would not be enough for the\n>> query that needs >95GB for temp files.\n>\n> You can argue all you want, but this machine has plenty of free memory\n> right now, and unless the OP goes crazy and cranks up work_mem to some\n> much higher level it'll stay that way, which is good. There's far far\n> more than 300MB free here. At the drop of a hat there can be ~60G\n> freed up as needed, either for shared_buffers or work_mem or other\n> things to happen. Cache doesn't count as \"used\" in terms of real\n> memory pressure. IE you're not gonna start getting swapping becase you\n> need more memory, it'll just come from the cache.\n\nPlease, could you explain how you free 60GB 'as need' when 15GB of that \nis actually used for shared buffers? Also, we don't know how much of \nthat cache is 'dirty' which makes it more difficult to free.\n\nWhat is more important, though, is the amount of memory. OP reported the \nquery writes ~95GB of temp files (and dies because of full disk, so \nthere may be more). The on-disk format is usually more compact than the \nin-memory representation - for example on-disk sort often needs 3x less \nspace than in-memory qsort. So we can assume the query needs >95GB of \ndata. Can you explain how that's going to fit into the 64GB RAM?\n\n> Cache is free memory. 
If you think of it any other way when you're\n> looking at memory usage and pressure on theings like swap you're\n> gonna make some bad decisions.\n\nCache is not free memory - it's there for a purpose and usually plays a \nsignificant role in performance. Sure, it may be freed and used for \nother purposes, but that has consequences - e.g. it impacts performance \nof other queries etc. You generally don't want to do that on production.\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Jun 2015 00:16:10 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "\nOn 06/03/2015 03:16 PM, Tomas Vondra wrote:\n\n> What is more important, though, is the amount of memory. OP reported the\n> query writes ~95GB of temp files (and dies because of full disk, so\n> there may be more). The on-disk format is usually more compact than the\n> in-memory representation - for example on-disk sort often needs 3x less\n> space than in-memory qsort. So we can assume the query needs >95GB of\n> data. Can you explain how that's going to fit into the 64GB RAM?\n>\n>> Cache is free memory. If you think of it any other way when you're\n>> looking at memory usage and pressure on theings like swap you're\n>> gonna make some bad decisions.\n>\n> Cache is not free memory - it's there for a purpose and usually plays a\n> significant role in performance. Sure, it may be freed and used for\n> other purposes, but that has consequences - e.g. it impacts performance\n> of other queries etc. You generally don't want to do that on production.\n\nExactly. If your cache is reduced your performance is reduced because \nless things are in cache. It is not free memory. Also the command \"free\" \nis not useful in this scenario. It is almost always better to use sar so \nyou can see where the data points are that free is using.\n\nSincerely,\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 03 Jun 2015 15:29:44 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On 2015-06-03 16:29, Joshua D. Drake wrote:\n> \n> On 06/03/2015 03:16 PM, Tomas Vondra wrote:\n> \n>> What is more important, though, is the amount of memory. OP reported the\n>> query writes ~95GB of temp files (and dies because of full disk, so\n>> there may be more). The on-disk format is usually more compact than the\n>> in-memory representation - for example on-disk sort often needs 3x less\n>> space than in-memory qsort. So we can assume the query needs >95GB of\n>> data. Can you explain how that's going to fit into the 64GB RAM?\n>>\n>>> Cache is free memory. If you think of it any other way when you're\n>>> looking at memory usage and pressure on theings like swap you're\n>>> gonna make some bad decisions.\n>>\n>> Cache is not free memory - it's there for a purpose and usually plays a\n>> significant role in performance. Sure, it may be freed and used for\n>> other purposes, but that has consequences - e.g. it impacts performance\n>> of other queries etc. You generally don't want to do that on production.\n> \n> Exactly. If your cache is reduced your performance is reduced because less\n> things are in cache. It is not free memory. Also the command \"free\" is not\n> useful in this scenario. It is almost always better to use sar so you can see\n> where the data points are that free is using.\n> \n\nIt's one thing to consciously keep free memory for the OS cache, but you\nshould not take the \"free\" column from the first line output of the program\nfree as meaning that's all there is left, or that you need all that memory.\n\nYou should look at \"used\" from the second line (\"-/+ buffers/cache\"). That\nvalue is what the kernel and all the apps are using on your machine. 
Add what\never you want to have for OS cache, and this is the total amount of memory you\nwant in your machine.\n\nNote that for a machine that has run long enough, and done enough I/O ops,\n\"free\" from the first line will always be close to 0, because the OS tries to\nuse as much memory as possible for caching, do enough I/O and you'll fill that up.\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 03 Jun 2015 17:54:12 -0600",
"msg_from": "Yves Dorfsman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "\n\nOn 06/04/15 01:54, Yves Dorfsman wrote:\n> On 2015-06-03 16:29, Joshua D. Drake wrote:\n>>\n>> On 06/03/2015 03:16 PM, Tomas Vondra wrote:\n>>\n>>> What is more important, though, is the amount of memory. OP reported the\n>>> query writes ~95GB of temp files (and dies because of full disk, so\n>>> there may be more). The on-disk format is usually more compact than the\n>>> in-memory representation - for example on-disk sort often needs 3x less\n>>> space than in-memory qsort. So we can assume the query needs >95GB of\n>>> data. Can you explain how that's going to fit into the 64GB RAM?\n>>>\n>>>> Cache is free memory. If you think of it any other way when you're\n>>>> looking at memory usage and pressure on theings like swap you're\n>>>> gonna make some bad decisions.\n>>>\n>>> Cache is not free memory - it's there for a purpose and usually plays a\n>>> significant role in performance. Sure, it may be freed and used for\n>>> other purposes, but that has consequences - e.g. it impacts performance\n>>> of other queries etc. You generally don't want to do that on production.\n>>\n>> Exactly. If your cache is reduced your performance is reduced because less\n>> things are in cache. It is not free memory. Also the command \"free\" is not\n>> useful in this scenario. It is almost always better to use sar so you can see\n>> where the data points are that free is using.\n>>\n>\n> It's one thing to consciously keep free memory for the OS cache, but\n> you should not take the \"free\" column from the first line output of\n> the program free as meaning that's all there is left, or that you\n> need allthat memory.\n\nNo one suggested using the 'free' column this way, so I'm not sure what \nyou're responding to?\n\n> You should look at \"used\" from the second line (\"-/+ buffers/cache\").\n> That value is what the kernel and all the apps are using on your\n> machine. 
Add whatever you want to have for OS cache, and this is the\n> total amount ofmemory you want in your machine.\n\nExcept that the second line is not particularly helpful too, because it \ndoes not account for the shared buffers clearly, nor does it show what \npart of the page cache is dirty etc.\n\n> Note that for a machine that has run long enough, and done enough\n> I/O ops, \"free\" from the first line will always be close to 0,\n> because the OS tries to use as much memory as possible for caching,\n> do enough I/O and you'll fill that up.\n\nThat's generally true, but the assumption is that on a 300GB database \nthe page cache has a significant benefit for performance. What however \nmakes this approach utterly futile is the fact that OP has only 64GB of \nRAM (and only ~45GB of that in page cache), and the query writes >95GB \ntemp files on disk (and then fails). So even if you drop the whole page \ncache, the query will fail anyway.\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Jun 2015 02:22:49 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake <[email protected]> wrote:\n>\n> On 06/03/2015 03:16 PM, Tomas Vondra wrote:\n>\n>> What is more important, though, is the amount of memory. OP reported the\n>> query writes ~95GB of temp files (and dies because of full disk, so\n>> there may be more). The on-disk format is usually more compact than the\n>> in-memory representation - for example on-disk sort often needs 3x less\n>> space than in-memory qsort. So we can assume the query needs >95GB of\n>> data. Can you explain how that's going to fit into the 64GB RAM?\n>>\n>>> Cache is free memory. If you think of it any other way when you're\n>>> looking at memory usage and pressure on theings like swap you're\n>>> gonna make some bad decisions.\n>>\n>>\n>> Cache is not free memory - it's there for a purpose and usually plays a\n>> significant role in performance. Sure, it may be freed and used for\n>> other purposes, but that has consequences - e.g. it impacts performance\n>> of other queries etc. You generally don't want to do that on production.\n>\n>\n> Exactly. If your cache is reduced your performance is reduced because less\n> things are in cache. It is not free memory. Also the command \"free\" is not\n> useful in this scenario. It is almost always better to use sar so you can\n> see where the data points are that free is using.\n\nBut if that WAS happening he wouldn't still HAVE 60G of cache! That's\nmy whole point. He's NOT running out of memory. He's not even having\nto dump cache right now.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 18:53:45 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake <[email protected]> wrote:\n>>\n>> On 06/03/2015 03:16 PM, Tomas Vondra wrote:\n>>\n>>> What is more important, though, is the amount of memory. OP reported the\n>>> query writes ~95GB of temp files (and dies because of full disk, so\n>>> there may be more). The on-disk format is usually more compact than the\n>>> in-memory representation - for example on-disk sort often needs 3x less\n>>> space than in-memory qsort. So we can assume the query needs >95GB of\n>>> data. Can you explain how that's going to fit into the 64GB RAM?\n>>>\n>>>> Cache is free memory. If you think of it any other way when you're\n>>>> looking at memory usage and pressure on theings like swap you're\n>>>> gonna make some bad decisions.\n>>>\n>>>\n>>> Cache is not free memory - it's there for a purpose and usually plays a\n>>> significant role in performance. Sure, it may be freed and used for\n>>> other purposes, but that has consequences - e.g. it impacts performance\n>>> of other queries etc. You generally don't want to do that on production.\n>>\n>>\n>> Exactly. If your cache is reduced your performance is reduced because less\n>> things are in cache. It is not free memory. Also the command \"free\" is not\n>> useful in this scenario. It is almost always better to use sar so you can\n>> see where the data points are that free is using.\n>\n> But if that WAS happening he wouldn't still HAVE 60G of cache! That's\n> my whole point. He's NOT running out of memory. He's not even having\n> to dump cache right now.\n\nFurther if he started using a few gig here for this one it wouldn't\nhave a big impact on cache (60G-1G etc) but might make it much faster,\nas spilling to disk is a lot less intrusive when you've got a bigger\nchunk of ram to work in. 
OTOH doing something like setting work_mem to\n60G would likely be fatal.\n\nBut he's not down to 3GB of memory by any kind of imagination. Any\nworking machine will slowly, certainly fill its caches since it's not\nusing the memory for anything else. That's normal. As long as you're\nnot blowing out the cache you're fine.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jun 2015 18:58:52 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On 04/06/15 12:58, Scott Marlowe wrote:\n> On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake <[email protected]> wrote:\n>>>\n>>> On 06/03/2015 03:16 PM, Tomas Vondra wrote:\n>>>\n>>>> What is more important, though, is the amount of memory. OP reported the\n>>>> query writes ~95GB of temp files (and dies because of full disk, so\n>>>> there may be more). The on-disk format is usually more compact than the\n>>>> in-memory representation - for example on-disk sort often needs 3x less\n>>>> space than in-memory qsort. So we can assume the query needs >95GB of\n>>>> data. Can you explain how that's going to fit into the 64GB RAM?\n>>>>\n>>>>> Cache is free memory. If you think of it any other way when you're\n>>>>> looking at memory usage and pressure on theings like swap you're\n>>>>> gonna make some bad decisions.\n>>>>\n>>>>\n>>>> Cache is not free memory - it's there for a purpose and usually plays a\n>>>> significant role in performance. Sure, it may be freed and used for\n>>>> other purposes, but that has consequences - e.g. it impacts performance\n>>>> of other queries etc. You generally don't want to do that on production.\n>>>\n>>>\n>>> Exactly. If your cache is reduced your performance is reduced because less\n>>> things are in cache. It is not free memory. Also the command \"free\" is not\n>>> useful in this scenario. It is almost always better to use sar so you can\n>>> see where the data points are that free is using.\n>>\n>> But if that WAS happening he wouldn't still HAVE 60G of cache! That's\n>> my whole point. He's NOT running out of memory. He's not even having\n>> to dump cache right now.\n>\n> Further if he started using a few gig here for this one it wouldn't\n> have a big impact on cache (60G-1G etc) but might make it much faster,\n> as spilling to disk is a lot less intrusive when you've got a bigger\n> chunk of ram to work in. 
OTOH doing something like setting work_mem to\n> 60G would likely be fatal.\n>\n> But he's not down to 3GB of memory by any kind of imagination. Any\n> working machine will slowly, certainly fill its caches since it's not\n> using the memory for anything else. That's normal. As long as you're\n> not blowing out the cache you're fine.\n>\n>\n\nI agree with Scott's analysis here.\n\nIt seems to me that the issue is the query(s) using too much disk space. \nAs others have said, it may not be practical to up work_mem to the point \nwhere is all happens in memory...so probably need to:\n\n- get more disk or,\n- tweak postgres params to get a less disk hungry plan (need to see that \nexplain analyze)!\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Jun 2015 13:19:44 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "\n\nOn 06/04/15 02:58, Scott Marlowe wrote:\n> On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake <[email protected]> wrote:\n>>>\n>>> On 06/03/2015 03:16 PM, Tomas Vondra wrote:\n>>>\n>>>> Cache is not free memory - it's there for a purpose and usually\n>>>> plays a significant role in performance. Sure, it may be freed\n>>>> and used for other purposes, but that has consequences - e.g.\n>>>> it impacts performance of other queries etc. You generally\n>>>> don't want to do that onproduction.\n>>>\n>>>\n>>> Exactly. If your cache is reduced your performance is reduced\n>>> because less things are in cache. It is not free memory. Also the\n>>> command \"free\" is not useful in this scenario. It is almost\n>>> alwaysbetter to use sar so you can see where the data points are\n>>> thatfree is using.\n>>\n>> But if that WAS happening he wouldn't still HAVE 60G of cache!\n>> That's my whole point. He's NOT running out of memory. He's not\n>> even having to dump cache right now.\n\nNo one claimed he's running out of memory ...\n\nWhat I claimed is that considering page cache equal to free memory is \nnot really appropriate, because it is used for caching data, which plays \na significant role.\n\nRegarding the \"free\" output, we have no clue when the \"free\" command was \nexecuted. I might have been executed while the query was running, right \nafter it failed or long after that. That has significant impact on \ninterpretation of the output.\n\nAlso, we have no clue what happens on the machine, so it's possible \nthere are other queries competing for the page cache, quickly filling \nreusing free memory (used for large query moments ago) for page cache.\n\nAnd finally, we have no clue what plan the query is using, so we don't \nknow how much memory it's using before it starts spilling to disk. 
For \nexample it might easily be a single sort node, taking only 384MB (the \nwork_mem) of RAM before it starts spilling to disk.\n\n\n> Further if he started using a few gig here for this one it wouldn't\n> have a big impact on cache (60G-1G etc) but might make it much\n> faster, as spilling to disk is a lot less intrusive when you've got a\n> bigger chunk of ram to work in. OTOH doing something like setting\n> work_mem to 60G would likely be fatal.\n\nIt'd certainly be fatal, because this query is spilling >95G to disk, \nand keeping that in memory would easily require 2-3x more space.\n\n>\n> But he's not down to 3GB of memory by any kind of imagination. Any\n> working machine will slowly, certainly fill its caches since it's\n> not using the memory for anything else. That's normal. As long as\n> you're not blowing out the cache you're fine.\n\nOnce again, what about the 15GB shared buffers? Not that it'd change \nanything, really.\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Jun 2015 16:43:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "Hi, \n\nThank you a lot for your answer.\nI've done that (create a tablespace in another HD with POSTGRES role + put\nit as the main temp_tablespace in the conf).\n\nBut ... my command ~# df show me that all queries use the default tablespace\n... \n\n\nThis was my commands (the directory is owned by postgres) :\nCREATE TABLESPACE hddtablespace LOCATION '/media/hdd/pgsql';\nALTER TABLESPACE hddtablespace OWNER TO postgres;\n\nSHOW temp_tablespaces;\n> hddtablespace\n\nIn /media/hdd/pgsql I have only one empty directory (PG_9.3_201306121).\n\nDo you have any tips ?\n\nThanks a lot guys ...\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321p5853081.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 Jun 2015 08:58:32 -0700 (MST)",
"msg_from": "\"ben.play\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On Tue, Jun 9, 2015 at 12:58 PM, ben.play <[email protected]> wrote:\n> Hi,\n>\n> Thank you a lot for your answer.\n> I've done that (create a tablespace in another HD with POSTGRES role + put\n> it as the main temp_tablespace in the conf).\n>\n> But ... my command ~# df show me that all queries use the default tablespace\n> ...\n>\n>\n> This was my commands (the directory is owned by postgres) :\n> CREATE TABLESPACE hddtablespace LOCATION '/media/hdd/pgsql';\n> ALTER TABLESPACE hddtablespace OWNER TO postgres;\n>\n> SHOW temp_tablespaces;\n>> hddtablespace\n>\n> In /media/hdd/pgsql I have only one empty directory (PG_9.3_201306121).\n>\n> Do you have any tips ?\n\n\nYou have to grant public CREATE permissions on the tablespace,\notherwise noone will have permission to use it.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 Jun 2015 13:04:47 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "Of course ! I'm an idiot ...\n\nThank you a lot ! \n\nA question : is it possible with Postgres to change the temp_tablespace only\nfor a session (or page) ?\nI have a cron which takes a lot of memory. I would like to say to PostGreSql\nto use this temp_tablespace only on this command and not affect my user\nexperience.\n\nThank you a lot :)\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321p5853337.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Jun 2015 01:56:21 -0700 (MST)",
"msg_from": "\"ben.play\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
},
{
"msg_contents": "On Thu, Jun 11, 2015 at 5:56 AM, ben.play <[email protected]> wrote:\n> A question : is it possible with Postgres to change the temp_tablespace only\n> for a session (or page) ?\n> I have a cron which takes a lot of memory. I would like to say to PostGreSql\n> to use this temp_tablespace only on this command and not affect my user\n> experience.\n\nYou can do it with the PGOPTIONS environment variable:\n\nPGOPTIONS=\"-c temp_tablespaces=blabla\" psql some_db -f some_script.sql\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Jun 2015 09:48:07 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)"
}
]
[
{
"msg_contents": "I previously mentioned on the list that nvme drives are going to be a very big thing this year for DB performance.\n\nThis video shows what happens if you get an 'enthusiast'-class motherboard and 5 of the 400GB intel 750 drives.\nhttps://www.youtube.com/watch?v=-hE8Vg1qPSw\n\nTotal transfer speed: 10.3 GB/second.\nTotal IOPS: 2 million (!)\n\n+ nice power loss protection (Intel)\n+ lower latency too - about 20ms vs 100ms for SATA3 (http://www.anandtech.com/show/7843/testing-sata-express-with-asus/4)\n+ substantially lower CPU use per I/O (http://www.anandtech.com/show/8104/intel-ssd-dc-p3700-review-the-pcie-ssd-transition-begins-with-nvme/5)\n\nYou're probably wondering 'how much' though? \n$400 per drive! Peanuts. \n\nAssuming for the moment you're working in RAID0 or with tablespaces, and just want raw speed:\n$2400 total for 2 TB of storage, including a good quality motherboard, with 2 million battery backed IOPS and 10GB/second bulk transfers.\n\nThese drives are going to utterly wreck the profit margins on high-end DB hardware. \n\nGraeme Bell\n\np.s. No, I don't have shares in Intel, but maybe I should...\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 Jun 2015 11:07:39 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need more IOPS? This should get you drooling... (5xnvme drives)"
},
{
"msg_contents": "\nImages/data here\n\nhttp://www.pcper.com/reviews/Storage/Five-Intel-SSD-750s-Tested-Two-Million-IOPS-and-10-GBsec-Achievement-Unlocked\n\n\n\nOn 04 Jun 2015, at 13:07, Graeme Bell <[email protected]> wrote:\n\n> I previously mentioned on the list that nvme drives are going to be a very big thing this year for DB performance.\n> \n> This video shows what happens if you get an 'enthusiast'-class motherboard and 5 of the 400GB intel 750 drives.\n> https://www.youtube.com/watch?v=-hE8Vg1qPSw\n> \n> Total transfer speed: 10.3 GB/second.\n> Total IOPS: 2 million (!)\n> \n> + nice power loss protection (Intel)\n> + lower latency too - about 20ms vs 100ms for SATA3 (http://www.anandtech.com/show/7843/testing-sata-express-with-asus/4)\n> + substantially lower CPU use per I/O (http://www.anandtech.com/show/8104/intel-ssd-dc-p3700-review-the-pcie-ssd-transition-begins-with-nvme/5)\n> \n> You're probably wondering 'how much' though? \n> $400 per drive! Peanuts. \n> \n> Assuming for the moment you're working in RAID0 or with tablespaces, and just want raw speed:\n> $2400 total for 2 TB of storage, including a good quality motherboard, with 2 million battery backed IOPS and 10GB/second bulk transfers.\n> \n> These drives are going to utterly wreck the profit margins on high-end DB hardware. \n> \n> Graeme Bell\n> \n> p.s. No, I don't have shares in Intel, but maybe I should...\n> \n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 Jun 2015 11:23:50 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need more IOPS? This should get you drooling... (5xnvme drives)"
},
{
"msg_contents": "This looks great when you want in-memory (something like unlogged tables)\nand you also want replication. (meaning, I don't know of an alternative to\nget replication with unlogged than to just get faster drives + logged\ntables?)\n\nOn Thu, Jun 4, 2015 at 1:23 PM, Graeme B. Bell <[email protected]>\nwrote:\n\n>\n> Images/data here\n>\n>\n> http://www.pcper.com/reviews/Storage/Five-Intel-SSD-750s-Tested-Two-Million-IOPS-and-10-GBsec-Achievement-Unlocked\n>\n>\n>\n> On 04 Jun 2015, at 13:07, Graeme Bell <[email protected]> wrote:\n>\n> > I previously mentioned on the list that nvme drives are going to be a\n> very big thing this year for DB performance.\n> >\n> > This video shows what happens if you get an 'enthusiast'-class\n> motherboard and 5 of the 400GB intel 750 drives.\n> > https://www.youtube.com/watch?v=-hE8Vg1qPSw\n> >\n> > Total transfer speed: 10.3 GB/second.\n> > Total IOPS: 2 million (!)\n> >\n> > + nice power loss protection (Intel)\n> > + lower latency too - about 20ms vs 100ms for SATA3 (\n> http://www.anandtech.com/show/7843/testing-sata-express-with-asus/4)\n> > + substantially lower CPU use per I/O (\n> http://www.anandtech.com/show/8104/intel-ssd-dc-p3700-review-the-pcie-ssd-transition-begins-with-nvme/5\n> )\n> >\n> > You're probably wondering 'how much' though?\n> > $400 per drive! Peanuts.\n> >\n> > Assuming for the moment you're working in RAID0 or with tablespaces, and\n> just want raw speed:\n> > $2400 total for 2 TB of storage, including a good quality motherboard,\n> with 2 million battery backed IOPS and 10GB/second bulk transfers.\n> >\n> > These drives are going to utterly wreck the profit margins on high-end\n> DB hardware.\n> >\n> > Graeme Bell\n> >\n> > p.s. 
No, I don't have shares in Intel, but maybe I should...\n> >\n> >\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Thu, 4 Jun 2015 13:29:49 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need more IOPS? This should get you drooling... (5xnvme drives)"
},
{
"msg_contents": "\nNote also - these disks are close to the performance of memory from a few generations ago (e.g. >10GB/second bulk transfers)\n\nThey also have bigger/faster versions of the drives, 1.2TB each. \n\nI suspect that 5 of those would feel somewhat similar to having 6TB of memory in your db server ... :-) [better in fact, since writes are fast too]\n\nGraeme.\n\nOn 04 Jun 2015, at 13:29, Dorian Hoxha <[email protected]> wrote:\n\n> This looks great when you want in-memory (something like unlogged tables) and you also want replication. (meaning, I don't know of an alternative to get replication with unlogged than to just get faster drives + logged tables?)\n> \n> On Thu, Jun 4, 2015 at 1:23 PM, Graeme B. Bell <[email protected]> wrote:\n> \n> Images/data here\n> \n> http://www.pcper.com/reviews/Storage/Five-Intel-SSD-750s-Tested-Two-Million-IOPS-and-10-GBsec-Achievement-Unlocked\n> \n> \n> \n> On 04 Jun 2015, at 13:07, Graeme Bell <[email protected]> wrote:\n> \n> > I previously mentioned on the list that nvme drives are going to be a very big thing this year for DB performance.\n> >\n> > This video shows what happens if you get an 'enthusiast'-class motherboard and 5 of the 400GB intel 750 drives.\n> > https://www.youtube.com/watch?v=-hE8Vg1qPSw\n> >\n> > Total transfer speed: 10.3 GB/second.\n> > Total IOPS: 2 million (!)\n> >\n> > + nice power loss protection (Intel)\n> > + lower latency too - about 20ms vs 100ms for SATA3 (http://www.anandtech.com/show/7843/testing-sata-express-with-asus/4)\n> > + substantially lower CPU use per I/O (http://www.anandtech.com/show/8104/intel-ssd-dc-p3700-review-the-pcie-ssd-transition-begins-with-nvme/5)\n> >\n> > You're probably wondering 'how much' though?\n> > $400 per drive! 
Peanuts.\n> >\n> > Assuming for the moment you're working in RAID0 or with tablespaces, and just want raw speed:\n> > $2400 total for 2 TB of storage, including a good quality motherboard, with 2 million battery backed IOPS and 10GB/second bulk transfers.\n> >\n> > These drives are going to utterly wreck the profit margins on high-end DB hardware.\n> >\n> > Graeme Bell\n> >\n> > p.s. No, I don't have shares in Intel, but maybe I should...\n> >\n> >\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 Jun 2015 11:35:59 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need more IOPS? This should get you drooling...\n (5xnvme drives)"
}
] |
[
{
"msg_contents": "Postgresql 9.3 Version\n\nGuys\n Here is the issue that I'm facing for couple of weeks now. I have table (size 7GB)\n\nIf I run this query with this specific registration id it is using the wrong execution plan and takes more than a minute to complete. Total number of rows for this registration_id is only 414 in this table\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8718704208 AND response != 4;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nResult (cost=2902.98..2903.01 rows=1 width=0) (actual time=86910.730..86910.731 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.57..2902.98 rows=1 width=8) (actual time=86910.725..86910.725 rows=1 loops=1)\n -> Index Scan Backward using btdt_responses_n5 on btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual time=86910.723..86910.723 rows=1 loops=1)\n Index Cond: (last_update_date IS NOT NULL)\n Filter: ((response <> 4) AND (registration_id = 8718704208::bigint))\n Rows Removed by Filter: 52145434\nTotal runtime: 86910.766 ms\n\n\nSame query with any other registration id will come back in milli seconds\n\n\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8688546267 AND response != 4;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=529.75..529.78 rows=1 width=8) (actual time=19.723..19.723 rows=1 loops=1)\n -> Index Scan using btdt_responses_u2 on btdt_responses (cost=0.57..529.45 rows=119 width=8) (actual time=0.097..19.689 rows=72 loops=1)\n Index Cond: (registration_id = 8688546267::bigint)\n Filter: (response <> 4)\n Rows Removed by Filter: 22\nTotal runtime: 19.769 ms\n\n\nPlease let me know 
what I can do to fix this issue.\n\n\nThanks",
"msg_date": "Fri, 5 Jun 2015 17:54:39 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query running slow for only one specific id. (Postgres 9.3) version"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Sheena, Prabhjot\nSent: Friday, June 05, 2015 1:55 PM\nTo: [email protected]; [email protected]\nSubject: [PERFORM] Query running slow for only one specific id. (Postgres 9.3) version\n\nPostgresql 9.3 Version\n\nGuys\n Here is the issue that I'm facing for couple of weeks now. I have table (size 7GB)\n\nIf I run this query with this specific registration id it is using the wrong execution plan and takes more than a minute to complete. Total number of rows for this registration_id is only 414 in this table\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8718704208 AND response != 4;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nResult (cost=2902.98..2903.01 rows=1 width=0) (actual time=86910.730..86910.731 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.57..2902.98 rows=1 width=8) (actual time=86910.725..86910.725 rows=1 loops=1)\n -> Index Scan Backward using btdt_responses_n5 on btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual time=86910.723..86910.723 rows=1 loops=1)\n Index Cond: (last_update_date IS NOT NULL)\n Filter: ((response <> 4) AND (registration_id = 8718704208::bigint))\n Rows Removed by Filter: 52145434\nTotal runtime: 86910.766 ms\n\n\nSame query with any other registration id will come back in milli seconds\n\n\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8688546267 AND response != 4;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=529.75..529.78 rows=1 width=8) (actual time=19.723..19.723 rows=1 loops=1)\n -> Index Scan using btdt_responses_u2 on 
btdt_responses (cost=0.57..529.45 rows=119 width=8) (actual time=0.097..19.689 rows=72 loops=1)\n Index Cond: (registration_id = 8688546267::bigint)\n Filter: (response <> 4)\n Rows Removed by Filter: 22\nTotal runtime: 19.769 ms\n\n\nPlease let me know what I can do to fix this issue.\n\n\nThanks\n\n\nNot enough info.\nTable structure? Is registration_id - PK? If not, what is the distribution of the values for this table?\nWhen was it analyzed last time? M.b. you need to increase statistics target for this table:\n\nIndex Scan Backward using btdt_responses_n5 on btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual time=86910.723..86910.723 rows=1 loops=1)\n\nIt expects 2214 records while really getting only 1.\n\nRegards,\nIgor Neyman",
"msg_date": "Fri, 5 Jun 2015 18:05:36 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query running slow for only one specific id. (Postgres 9.3)\n version"
},
{
"msg_contents": "On 06/05/2015 10:54 AM, Sheena, Prabhjot wrote:\n>\n> Postgresql 9.3 Version\n>\n> Guys\n>\n> Here is the issue that I’m facing for couple of weeks now. \n> I have table (size 7GB)\n>\n> *If I run this query with this specific registration id it is using \n> the wrong execution plan and takes more than a minute to complete. \n> Total number of rows for this registration_id is only 414 in this table*\n>\n> explain analyze SELECT max(last_update_date) AS last_update_date FROM \n> btdt_responses WHERE registration_id = 8718704208 AND response != 4;\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Result (cost=2902.98..2903.01 rows=1 width=0) (actual \n> time=86910.730..86910.731 rows=1 loops=1)\n>\n> InitPlan 1 (returns $0)\n>\n> -> Limit (cost=0.57..2902.98 rows=1 width=8) (actual \n> time=86910.725..86910.725 rows=1 loops=1)\n>\n> -> Index Scan Backward using btdt_responses_n5 on \n> btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual \n> time=86910.723..86910.723 rows=1 loops=1)\n>\n> Index Cond: (last_update_date IS NOT NULL)\n>\n> Filter: ((response <> 4) AND (registration_id = \n> 8718704208::bigint))\n>\n> Rows Removed by Filter: 52145434\n>\n> Total runtime: 86910.766 ms\n>\n> *Same query with any other registration id will come back in milli \n> seconds *\n>\n> explain analyze SELECT max(last_update_date) AS last_update_date FROM \n> btdt_responses WHERE registration_id = 8688546267 AND response != 4;\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Aggregate (cost=529.75..529.78 rows=1 width=8) (actual \n> time=19.723..19.723 rows=1 loops=1)\n>\n> -> Index Scan using btdt_responses_u2 on btdt_responses \n> (cost=0.57..529.45 rows=119 width=8) (actual 
time=0.097..19.689 \n> rows=72 loops=1)\n>\n> Index Cond: (registration_id = 8688546267::bigint)\n>\n> Filter: (response <> 4)\n>\n> Rows Removed by Filter: 22\n>\n> Total runtime: 19.769 ms\n>\nA couple initial questions:\n\n1. Does the result change if you analyze the table and rerun the query?\n\n2. Are there any non-default settings for statistics collection on your \ndatabase?\n\n-Steve",
"msg_date": "Fri, 05 Jun 2015 11:24:34 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query running slow for only one specific id. (Postgres\n 9.3) version"
},
{
"msg_contents": "When I run vacuum analyze it fixes the problem but after 1 or 2 days the problem comes back\n\nHere is the table structure\n\n Column | Type | Modifiers | Storage | Stats target | Description\n------------------+-----------------------------+----------------------------------------------------------------------+---------+--------------+-------------\nresponse_id | integer | not null default nextval('btdt_responses_response_id_seq'::regclass) | plain | |\nregistration_id | bigint | not null | plain | |\nbtdt_id | integer | not null | plain | |\nresponse | integer | not null | plain | |\ncreation_date | timestamp without time zone | not null default now() | plain | |\nlast_update_date | timestamp without time zone | not null default now() | plain | |\nIndexes:\n \"btdt_responses_pkey\" PRIMARY KEY, btree (response_id)\n \"btdt_responses_u2\" UNIQUE, btree (registration_id, btdt_id)\n \"btdt_responses_n1\" btree (btdt_id)\n \"btdt_responses_n2\" btree (btdt_id, response)\n \"btdt_responses_n4\" btree (creation_date)\n \"btdt_responses_n5\" btree (last_update_date)\n \"btdt_responses_n6\" btree (btdt_id, last_update_date)\nForeign-key constraints:\n \"btdt_responses_btdt_id_fkey\" FOREIGN KEY (btdt_id) REFERENCES btdt_items(btdt_id)\n \"btdt_responses_fk1\" FOREIGN KEY (btdt_id) REFERENCES btdt_items(btdt_id)\nHas OIDs: no\nOptions: autovacuum_enabled=true, autovacuum_vacuum_scale_factor=0.02, autovacuum_analyze_scale_factor=0.02\n\nThanks\n\nFrom: Igor Neyman [mailto:[email protected]]\nSent: Friday, June 5, 2015 11:06 AM\nTo: Sheena, Prabhjot; [email protected]; [email protected]\nSubject: RE: Query running slow for only one specific id. 
(Postgres 9.3) version\n\n\n\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]] On Behalf Of Sheena, Prabhjot\nSent: Friday, June 05, 2015 1:55 PM\nTo: [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\nSubject: [PERFORM] Query running slow for only one specific id. (Postgres 9.3) version\n\nPostgresql 9.3 Version\n\nGuys\n Here is the issue that I'm facing for couple of weeks now. I have table (size 7GB)\n\nIf I run this query with this specific registration id it is using the wrong execution plan and takes more than a minute to complete. Total number of rows for this registration_id is only 414 in this table\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8718704208 AND response != 4;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nResult (cost=2902.98..2903.01 rows=1 width=0) (actual time=86910.730..86910.731 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.57..2902.98 rows=1 width=8) (actual time=86910.725..86910.725 rows=1 loops=1)\n -> Index Scan Backward using btdt_responses_n5 on btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual time=86910.723..86910.723 rows=1 loops=1)\n Index Cond: (last_update_date IS NOT NULL)\n Filter: ((response <> 4) AND (registration_id = 8718704208::bigint))\n Rows Removed by Filter: 52145434\nTotal runtime: 86910.766 ms\n\n\nSame query with any other registration id will come back in milli seconds\n\n\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8688546267 AND response != 4;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=529.75..529.78 rows=1 width=8) 
(actual time=19.723..19.723 rows=1 loops=1)\n -> Index Scan using btdt_responses_u2 on btdt_responses (cost=0.57..529.45 rows=119 width=8) (actual time=0.097..19.689 rows=72 loops=1)\n Index Cond: (registration_id = 8688546267::bigint)\n Filter: (response <> 4)\n Rows Removed by Filter: 22\nTotal runtime: 19.769 ms\n\n\nPlease let me know what I can do to fix this issue.\n\n\nThanks\n\n\nNot enough info.\nTable structure? Is registration_id - PK? If not, what is the distribution of the values for this table?\nWhen was it analyzed last time? M.b. you need to increase statistics target for this table:\n\nIndex Scan Backward using btdt_responses_n5 on btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual time=86910.723..86910.723 rows=1 loops=1)\n\nIt expects 2214 records while really getting only 1.\n\nRegards,\nIgor Neyman",
"msg_date": "Fri, 5 Jun 2015 18:38:13 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query running slow for only one specific id. (Postgres 9.3)\n version"
},
{
"msg_contents": "From: Sheena, Prabhjot [mailto:[email protected]]\nSent: Friday, June 05, 2015 2:38 PM\nTo: Igor Neyman; [email protected]; [email protected]\nSubject: RE: Query running slow for only one specific id. (Postgres 9.3) version\n\nWhen I run vacuum analyze it fixes the problem but after 1 or 2 days the problem comes back\n\nHere is the table structure\n\n Column | Type | Modifiers | Storage | Stats target | Description\n------------------+-----------------------------+----------------------------------------------------------------------+---------+--------------+-------------\nresponse_id | integer | not null default nextval('btdt_responses_response_id_seq'::regclass) | plain | |\nregistration_id | bigint | not null | plain | |\nbtdt_id | integer | not null | plain | |\nresponse | integer | not null | plain | |\ncreation_date | timestamp without time zone | not null default now() | plain | |\nlast_update_date | timestamp without time zone | not null default now() | plain | |\nIndexes:\n \"btdt_responses_pkey\" PRIMARY KEY, btree (response_id)\n \"btdt_responses_u2\" UNIQUE, btree (registration_id, btdt_id)\n \"btdt_responses_n1\" btree (btdt_id)\n \"btdt_responses_n2\" btree (btdt_id, response)\n \"btdt_responses_n4\" btree (creation_date)\n \"btdt_responses_n5\" btree (last_update_date)\n \"btdt_responses_n6\" btree (btdt_id, last_update_date)\nForeign-key constraints:\n \"btdt_responses_btdt_id_fkey\" FOREIGN KEY (btdt_id) REFERENCES btdt_items(btdt_id)\n \"btdt_responses_fk1\" FOREIGN KEY (btdt_id) REFERENCES btdt_items(btdt_id)\nHas OIDs: no\nOptions: autovacuum_enabled=true, autovacuum_vacuum_scale_factor=0.02, autovacuum_analyze_scale_factor=0.02\n\nThanks\n\nFrom: Igor Neyman [mailto:[email protected]]\nSent: Friday, June 5, 2015 11:06 AM\nTo: Sheena, Prabhjot; [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\nSubject: RE: Query running slow for only one specific id. 
(Postgres 9.3) version\n\n\n\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]] On Behalf Of Sheena, Prabhjot\nSent: Friday, June 05, 2015 1:55 PM\nTo: [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\nSubject: [PERFORM] Query running slow for only one specific id. (Postgres 9.3) version\n\nPostgresql 9.3 Version\n\nGuys\n Here is the issue that I'm facing for couple of weeks now. I have table (size 7GB)\n\nIf I run this query with this specific registration id it is using the wrong execution plan and takes more than a minute to complete. Total number of rows for this registration_id is only 414 in this table\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8718704208 AND response != 4;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nResult (cost=2902.98..2903.01 rows=1 width=0) (actual time=86910.730..86910.731 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.57..2902.98 rows=1 width=8) (actual time=86910.725..86910.725 rows=1 loops=1)\n -> Index Scan Backward using btdt_responses_n5 on btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual time=86910.723..86910.723 rows=1 loops=1)\n Index Cond: (last_update_date IS NOT NULL)\n Filter: ((response <> 4) AND (registration_id = 8718704208::bigint))\n Rows Removed by Filter: 52145434\nTotal runtime: 86910.766 ms\n\n\nSame query with any other registration id will come back in milli seconds\n\n\n\nexplain analyze SELECT max(last_update_date) AS last_update_date FROM btdt_responses WHERE registration_id = 8688546267 AND response != 4;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=529.75..529.78 rows=1 width=8) 
(actual time=19.723..19.723 rows=1 loops=1)\n  ->  Index Scan using btdt_responses_u2 on btdt_responses  (cost=0.57..529.45 rows=119 width=8) (actual time=0.097..19.689 rows=72 loops=1)\n        Index Cond: (registration_id = 8688546267::bigint)\n        Filter: (response <> 4)\n        Rows Removed by Filter: 22\nTotal runtime: 19.769 ms\n\n\nPlease let me know what I can do to fix this issue.\n\n\nThanks\n\n\nNot enough info.\nTable structure? Is registration_id - PK? If not, what is the distribution of the values for this table?\nWhen was it analyzed last time? M.b. you need to increase statistics target for this table:\n\nIndex Scan Backward using btdt_responses_n5 on btdt_responses  (cost=0.57..6425932.41 rows=2214 width=8) (actual time=86910.723..86910.723 rows=1 loops=1)\n\nIt expects 2214 records while really getting only 1.\n\nRegards,\nIgor Neyman\n\n\nDo you have autovacuum running?\nIf yes, maybe it's not aggressive enough and you need to adjust its parameters.\n\nRegards,\nIgor Neyman\n\n\nFrom: Sheena, Prabhjot [mailto:[email protected]]\nSent: Friday, June 05, 2015 2:38 PM\nTo: Igor Neyman; [email protected]; [email protected]\nSubject: RE: Query running slow for only one specific id. (Postgres 9.3) version\n\nWhen I run vacuum analyze it fixes the problem but after 1 or 2 days the problem comes back\n\nHere is the table structure\n\n      Column      |            Type             |                              Modifiers                               | Storage | Stats target | Description\n------------------+-----------------------------+----------------------------------------------------------------------+---------+--------------+-------------\nresponse_id      | integer                     | not null default nextval('btdt_responses_response_id_seq'::regclass) | plain   |              |\nregistration_id  | bigint                      | not null                                                              | plain   |              |\nbtdt_id          | integer                     | not null                                                              | plain   |              |\nresponse         | integer                     | not null                                                              | plain   |              |\ncreation_date    | timestamp without time zone | not null default now()                                                | plain   |              |\nlast_update_date | timestamp without time zone | not null default now()                                                | plain   |              |\nIndexes:\n    \"btdt_responses_pkey\" PRIMARY KEY, btree (response_id)\n    \"btdt_responses_u2\" UNIQUE, btree (registration_id, btdt_id)\n    \"btdt_responses_n1\" btree (btdt_id)\n    \"btdt_responses_n2\" btree (btdt_id, response)\n    \"btdt_responses_n4\" btree (creation_date)\n    \"btdt_responses_n5\" btree (last_update_date)\n    \"btdt_responses_n6\" btree (btdt_id, last_update_date)\nForeign-key constraints:\n    \"btdt_responses_btdt_id_fkey\" FOREIGN KEY (btdt_id) REFERENCES btdt_items(btdt_id)\n    \"btdt_responses_fk1\" FOREIGN KEY (btdt_id) REFERENCES btdt_items(btdt_id)\nHas OIDs: no\nOptions: autovacuum_enabled=true, autovacuum_vacuum_scale_factor=0.02, autovacuum_analyze_scale_factor=0.02\n\nThanks",
"msg_date": "Fri, 5 Jun 2015 18:46:15 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query running slow for only one specific id. (Postgres 9.3)\n version"
},
{
"msg_contents": "On 06/05/2015 11:38 AM, Sheena, Prabhjot wrote:\n>\n> When I run vacuum analyze it fixes the problem but after 1 or 2 days \n> the problem comes back\n>\n>\nIs autovacuum running and using what settings?\n\n(select name, setting from pg_settings where name ~ 'autovacuum' Konsole \noutput or name ~ 'statistics';)\n\nCheers,\nSteve\n\nP.S. The convention on the PostgreSQL mailing lists it to bottom-post, \nnot top-post replies.\nKonsole outpor name ~ 'statistics';)\n\n\n\n\n\n\nOn 06/05/2015 11:38 AM, Sheena,\n Prabhjot wrote:\n\n\n\n\n\n\nWhen I run\n vacuum analyze it fixes the problem but after 1 or 2 days\n the problem comes back\n \n\n\n\n Is autovacuum running and using what settings?\n\n (select name, setting from pg_settings where name ~ 'autovacuum'\n Konsole output\n or name ~ 'statistics';)\n\n Cheers,\n Steve\n\n P.S. The convention on the PostgreSQL mailing lists it to\n bottom-post, not top-post replies.\n\n\nKonsole outpor name ~ 'statistics';)",
"msg_date": "Fri, 05 Jun 2015 12:28:23 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Query running slow for only one specific id. (Postgres\n 9.3) version"
},
{
"msg_contents": "On 06/05/2015 12:28 PM, Steve Crawford wrote:\n> On 06/05/2015 11:38 AM, Sheena, Prabhjot wrote:\n>>\n>> When I run vacuum analyze it fixes the problem but after 1 or 2 days \n>> the problem comes back\n>>\n>>\n> Is autovacuum running and using what settings?\n>\n> (select name, setting from pg_settings where name ~ 'autovacuum' \n> Konsole output or name ~ 'statistics';)\n>\n> Cheers,\n> Steve\n>\n> P.S. The convention on the PostgreSQL mailing lists it to bottom-post, \n> not top-post replies.\n> Konsole outpor name ~ 'statistics';) \n\nAnd just to confirm, are there any table-specific overrides to the \nsystem-wide settings?\n\nCheers,\nSteve\n\n\n\n\n\n\nOn 06/05/2015 12:28 PM, Steve Crawford\n wrote:\n\n\n\nOn 06/05/2015 11:38 AM, Sheena,\n Prabhjot wrote:\n\n\n\n\n\n\nWhen I run\n vacuum analyze it fixes the problem but after 1 or 2 days\n the problem comes back\n \n\n\n\n Is autovacuum running and using what settings?\n\n (select name, setting from pg_settings where name ~ 'autovacuum'\n Konsole output\n or name ~ 'statistics';)\n\n Cheers,\n Steve\n\n P.S. The convention on the PostgreSQL mailing lists it to\n bottom-post, not top-post replies.\n\n\nKonsole outpor name ~ 'statistics';)\n\n\n And just to confirm, are there any table-specific overrides to the\n system-wide settings?\n\n Cheers,\n Steve",
"msg_date": "Fri, 05 Jun 2015 12:34:05 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Re: Query running slow for only one specific\n id. (Postgres 9.3) version"
},
{
"msg_contents": "On Fri, Jun 5, 2015 at 2:54 PM, Sheena, Prabhjot <\[email protected]> wrote:\n\n> explain analyze SELECT max(last_update_date) AS last_update_date FROM\n> btdt_responses WHERE registration_id = 8718704208 AND response != 4;\n>\n>\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Result (cost=2902.98..2903.01 rows=1 width=0) (actual\n> time=86910.730..86910.731 rows=1 loops=1)\n>\n> InitPlan 1 (returns $0)\n>\n> -> Limit (cost=0.57..2902.98 rows=1 width=8) (actual\n> time=86910.725..86910.725 rows=1 loops=1)\n>\n> -> Index Scan Backward using btdt_responses_n5 on\n> btdt_responses (cost=0.57..6425932.41 rows=2214 width=8) (actual\n> time=86910.723..86910.723 rows=1 loops=1)\n>\n> Index Cond: (last_update_date IS NOT NULL)\n>\n> Filter: ((response <> 4) AND (registration_id =\n> 8718704208::bigint))\n>\n> Rows Removed by Filter: 52145434\n>\n> Total runtime: 86910.766 ms\n>\n\nThe issue here is the \"Row Removed by Filter\", you are filtering out more\nthan 52M rows, so the index is not being much effective.\n\nWhat you want for this query is a composite index on (registration_id,\nlast_update_date). 
And if the filter always includes `response <> 4`, then\nyou can also create a partial index with that (unless it is not very\nselective, then it might not be worth it).\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Fri, 5 Jun 2015 19:34:30 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Query running slow for only one specific id. (Postgres\n 9.3) version"
}
] |
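The fix suggested in the thread above — a composite index on (registration_id, last_update_date), so that max(last_update_date) for one registration_id no longer walks the whole last_update_date index discarding 52M rows — can be sketched with a small stand-in. The thread concerns PostgreSQL 9.3; SQLite is used here only as a runnable stand-in, the table and column names come from the thread, and the data and index name are invented:

```python
import sqlite3

# In-memory stand-in for the thread's btdt_responses table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE btdt_responses (
                    registration_id  INTEGER NOT NULL,
                    response         INTEGER NOT NULL,
                    last_update_date TEXT    NOT NULL)""")
conn.executemany(
    "INSERT INTO btdt_responses VALUES (?, ?, ?)",
    [(8718704208, 1, "2015-06-01"),
     (8718704208, 4, "2015-06-09"),   # excluded by response != 4
     (8718704208, 2, "2015-06-05"),
     (8688546267, 1, "2015-06-03")])

# The composite index recommended in the thread (index name made up).
conn.execute("""CREATE INDEX btdt_responses_reg_lud
                ON btdt_responses (registration_id, last_update_date)""")

query = """SELECT max(last_update_date) FROM btdt_responses
           WHERE registration_id = ? AND response != 4"""
(max_date,) = conn.execute(query, (8718704208,)).fetchone()
print(max_date)  # -> 2015-06-05

# The plan should search via the composite index rather than scan
# an unrelated index backwards, as the slow plan in the thread did.
plan = "\n".join(row[-1] for row in
                 conn.execute("EXPLAIN QUERY PLAN " + query, (8718704208,)))
print(plan)
```

With the index in place the lookup is driven by the registration_id equality, mirroring how the fast btdt_responses_u2 plan in the thread touches only that registration's rows.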
[
{
"msg_contents": "I have stopped this query after about 16 hours. At the same time I ran a\n'explain analyze' on the same query to find out why it took so long. These\ntwo processes generated temporary files of 173GB in\n/var/lib/postgresql/9.4/main/base/pgsql_tmp.\n\nCOPY\n (SELECT A.ut,\n B.go AS funding_org,\n B.gn AS grant_no,\n C.gt AS thanks,\n D.au\n FROM isi.funding_text C,\n isi.rauthor D,\n isi.africa_uts A\n LEFT JOIN isi.funding_org B ON (B.ut = A.ut)\n WHERE (C.ut IS NOT NULL\n OR B.ut IS NOT NULL)\n AND D.rart_id = C.ut\n AND C.ut = B.ut\n GROUP BY A.ut,\n GO,\n gn,\n gt,\n au\n ORDER BY funding_org) TO '/tmp/africafunding2.csv' WITH csv quote\n'\"' DELIMITER ',';\n\n\nA modified version of this query finished in 1min 27 sek:\n\nCOPY\n (SELECT 'UT'||A.ut,\n B.go AS funding_org,\n B.gn AS grant_no,\n C.gt AS thanks\n FROM isi.africa_uts A\n LEFT JOIN isi.funding_org B ON (B.ut = A.ut)\n LEFT JOIN isi.funding_text C ON (A.ut = C.ut)\n WHERE (C.ut IS NOT NULL\n OR B.ut IS NOT NULL)\n GROUP BY A.ut,\n GO,\n gn,\n gt) TO '/tmp/africafunding.csv' WITH csv quote '\"' DELIMITER ',';\n\nAs I said, the process of 'explain analyze' of the problematic query\ncontributed to the 173GB\ntemporary files and did not finish in about 16 hours.\n\nJust explain of the query part produces this:\n\n\"Sort (cost=4781458203.46..4798118612.44 rows=6664163593 width=390)\"\n\" Output: a.ut, b.go, b.gn, c.gt, (array_to_string(array_agg(d.au),\n';'::text)), b.go, b.gn, d.au\"\n\" Sort Key: b.go\"\n\" -> GroupAggregate (cost=2293037801.73..2509623118.51\nrows=6664163593 width=390)\"\n\" Output: a.ut, b.go, b.gn, c.gt,\narray_to_string(array_agg(d.au), ';'::text), b.go, b.gn, d.au\"\n\" Group Key: a.ut, b.go, b.gn, c.gt, d.au\"\n\" -> Sort (cost=2293037801.73..2309698210.72 rows=6664163593\nwidth=390)\"\n\" Output: a.ut, c.gt, b.go, b.gn, d.au\"\n\" Sort Key: a.ut, b.go, b.gn, c.gt, d.au\"\n\" -> Merge Join (cost=4384310.92..21202716.78\nrows=6664163593 width=390)\"\n\" Output: a.ut, 
c.gt, b.go, b.gn, d.au\"\n\" Merge Cond: ((c.ut)::text = (d.rart_id)::text)\"\n\" -> Merge Join (cost=635890.84..1675389.41\nrows=6069238 width=412)\"\n\" Output: c.gt, c.ut, a.ut, b.go, b.gn, b.ut\"\n\" Merge Cond: ((c.ut)::text = (b.ut)::text)\"\n\" Join Filter: ((c.ut IS NOT NULL) OR (b.ut\nIS NOT NULL))\"\n\" -> Merge Join (cost=635476.30..675071.77\nrows=1150354 width=348)\"\n\" Output: c.gt, c.ut, a.ut\"\n\" Merge Cond: ((a.ut)::text = (c.ut)::text)\"\n\" -> Index Only Scan using\nafrica_ut_idx on isi.africa_uts a (cost=0.42..19130.19 rows=628918\nwidth=16)\"\n\" Output: a.ut\"\n\" -> Sort (cost=632211.00..640735.23\nrows=3409691 width=332)\"\n\" Output: c.gt, c.ut\"\n\" Sort Key: c.ut\"\n\" -> Seq Scan on\nisi.funding_text c (cost=0.00..262238.91 rows=3409691 width=332)\"\n\" Output: c.gt, c.ut\"\n\" -> Index Scan using funding_org_ut_idx on\nisi.funding_org b (cost=0.56..912582.50 rows=9835492 width=64)\"\n\" Output: b.id, b.ut, b.go, b.gn\"\n\" -> Materialize (cost=0.57..17914892.46\nrows=159086560 width=26)\"\n\" Output: d.id, d.rart_id, d.au, d.ro, d.ln,\nd.af, d.ras, d.ad, d.aa, d.em, d.ag, d.tsv\"\n\" -> Index Scan using rauthor_rart_id_idx on\nisi.rauthor d (cost=0.57..17517176.06 rows=159086560 width=26)\"\n\" Output: d.id, d.rart_id, d.au, d.ro,\nd.ln, d.af, d.ras, d.ad, d.aa, d.em, d.ag, d.tsv\"\n\nAny idea on why adding the rauthor table in the query is so problematic?\n\nMy systerm:\n\n768 GB RAM\nshared_ buffers: 32GB\nwork_mem: 4608MB\n\nRegards\nJohann\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\nI have stopped this query after about 16 hours. At the same time I ran a 'explain analyze' on the same query to find out why it took so long. 
These two processes generated temporary files of 173GB in /var/lib/postgresql/9.4/main/base/pgsql_tmp.COPY\n (SELECT A.ut,\n B.go AS funding_org,\n B.gn AS grant_no,\n C.gt AS thanks,\n D.au\n FROM isi.funding_text C,\n isi.rauthor D,\n isi.africa_uts A\n LEFT JOIN isi.funding_org B ON (B.ut = A.ut)\n WHERE (C.ut IS NOT NULL\n OR B.ut IS NOT NULL)\n AND D.rart_id = C.ut\n AND C.ut = B.ut\n GROUP BY A.ut,\n GO,\n gn,\n gt,\n au\n ORDER BY funding_org) TO '/tmp/africafunding2.csv' WITH csv quote '\"' DELIMITER ',';A modified version of this query finished in 1min 27 sek:COPY (SELECT 'UT'||A.ut, B.go AS funding_org, B.gn AS grant_no, C.gt AS thanks FROM isi.africa_uts A LEFT JOIN isi.funding_org B ON (B.ut = A.ut) LEFT JOIN isi.funding_text C ON (A.ut = C.ut) WHERE (C.ut IS NOT NULL OR B.ut IS NOT NULL) GROUP BY A.ut, GO, gn, gt) TO '/tmp/africafunding.csv' WITH csv quote '\"' DELIMITER ',';\n\nAs I said, the process of 'explain analyze' of the problematic query contributed to the 173GB temporary files and did not finish in about 16 hours.Just explain of the query part produces this:\"Sort (cost=4781458203.46..4798118612.44 rows=6664163593 width=390)\"\" Output: a.ut, b.go, b.gn, c.gt, (array_to_string(array_agg(d.au), ';'::text)), b.go, b.gn, d.au\"\" Sort Key: b.go\"\" -> GroupAggregate (cost=2293037801.73..2509623118.51 rows=6664163593 width=390)\"\" Output: a.ut, b.go, b.gn, c.gt, array_to_string(array_agg(d.au), ';'::text), b.go, b.gn, d.au\"\" Group Key: a.ut, b.go, b.gn, c.gt, d.au\"\" -> Sort (cost=2293037801.73..2309698210.72 rows=6664163593 width=390)\"\" Output: a.ut, c.gt, b.go, b.gn, d.au\"\" Sort Key: a.ut, b.go, b.gn, c.gt, d.au\"\" -> Merge Join (cost=4384310.92..21202716.78 rows=6664163593 width=390)\"\" Output: a.ut, c.gt, b.go, b.gn, d.au\"\" Merge Cond: ((c.ut)::text = (d.rart_id)::text)\"\" -> Merge Join (cost=635890.84..1675389.41 rows=6069238 width=412)\"\" Output: c.gt, c.ut, a.ut, b.go, b.gn, b.ut\"\" Merge Cond: ((c.ut)::text = 
(b.ut)::text)\"\" Join Filter: ((c.ut IS NOT NULL) OR (b.ut IS NOT NULL))\"\" -> Merge Join (cost=635476.30..675071.77 rows=1150354 width=348)\"\" Output: c.gt, c.ut, a.ut\"\" Merge Cond: ((a.ut)::text = (c.ut)::text)\"\" -> Index Only Scan using africa_ut_idx on isi.africa_uts a (cost=0.42..19130.19 rows=628918 width=16)\"\" Output: a.ut\"\" -> Sort (cost=632211.00..640735.23 rows=3409691 width=332)\"\" Output: c.gt, c.ut\"\" Sort Key: c.ut\"\" -> Seq Scan on isi.funding_text c (cost=0.00..262238.91 rows=3409691 width=332)\"\" Output: c.gt, c.ut\"\" -> Index Scan using funding_org_ut_idx on isi.funding_org b (cost=0.56..912582.50 rows=9835492 width=64)\"\" Output: b.id, b.ut, b.go, b.gn\"\" -> Materialize (cost=0.57..17914892.46 rows=159086560 width=26)\"\" Output: d.id, d.rart_id, d.au, d.ro, d.ln, d.af, d.ras, d.ad, d.aa, d.em, d.ag, d.tsv\"\" -> Index Scan using rauthor_rart_id_idx on isi.rauthor d (cost=0.57..17517176.06 rows=159086560 width=26)\"\" Output: d.id, d.rart_id, d.au, d.ro, d.ln, d.af, d.ras, d.ad, d.aa, d.em, d.ag, d.tsv\"Any idea on why adding the rauthor table in the query is so problematic?My systerm:768 GB RAMshared_ buffers: 32GBwork_mem: 4608MBRegardsJohann-- Because experiencing your loyal love is better than life itself, my lips will praise you. (Psalm 63:3)",
"msg_date": "Wed, 10 Jun 2015 14:39:50 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query - lots of temporary files."
},
{
"msg_contents": "On Wed, Jun 10, 2015 at 9:39 AM, Johann Spies <[email protected]> wrote:\n> COPY\n> (SELECT A.ut,\n> B.go AS funding_org,\n> B.gn AS grant_no,\n> C.gt AS thanks,\n> D.au\n> FROM isi.funding_text C,\n> isi.rauthor D,\n> isi.africa_uts A\n> LEFT JOIN isi.funding_org B ON (B.ut = A.ut)\n> WHERE (C.ut IS NOT NULL\n> OR B.ut IS NOT NULL)\n> AND D.rart_id = C.ut\n> AND C.ut = B.ut\n> GROUP BY A.ut,\n> GO,\n> gn,\n> gt,\n> au\n> ORDER BY funding_org) TO '/tmp/africafunding2.csv' WITH csv quote '\"'\n> DELIMITER ',';\n>\n>\n> A modified version of this query finished in 1min 27 sek:\n>\n> COPY\n> (SELECT 'UT'||A.ut,\n> B.go AS funding_org,\n> B.gn AS grant_no,\n> C.gt AS thanks\n> FROM isi.africa_uts A\n> LEFT JOIN isi.funding_org B ON (B.ut = A.ut)\n> LEFT JOIN isi.funding_text C ON (A.ut = C.ut)\n> WHERE (C.ut IS NOT NULL\n> OR B.ut IS NOT NULL)\n> GROUP BY A.ut,\n> GO,\n> gn,\n> gt) TO '/tmp/africafunding.csv' WITH csv quote '\"' DELIMITER\n> ',';\n>\n>\n> As I said, the process of 'explain analyze' of the problematic query\n> contributed to the 173GB\n> temporary files and did not finish in about 16 hours.\n\nThe joins are different on both versions, and the most likely culprit\nis the join against D. It's probably wrong, and the first query is\nbuilding a cartesian product.\n\nWithout more information about the schema it's difficult to be sure though.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 10:02:38 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query - lots of temporary files."
},
{
"msg_contents": "On 10 June 2015 at 15:02, Claudio Freire <[email protected]> wrote:\n\n>\n> The joins are different on both versions, and the most likely culprit\n> is the join against D. It's probably wrong, and the first query is\n> building a cartesian product.\n>\n> Without more information about the schema it's difficult to be sure though.\n>\n\nThanks for your reply. I will experiment futher with different joins.\n\nHere is the schema of the involved tables:\n\nnkb=# \\d isi.funding_text\n Table \"isi.funding_text\"\n Column | Type |\nModifiers\n--------+-----------------------+---------------------------------------------------------------\n id | integer | not null default\nnextval('isi.funding_text_id_seq'::regclass)\n ut | character varying(15) |\n gt | citext |\nIndexes:\n \"funding_text_pkey\" PRIMARY KEY, btree (id)\n \"funding_text_ut_idx\" btree (ut)\nForeign-key constraints:\n \"funding_text_ut_fkey\" FOREIGN KEY (ut) REFERENCES isi.ritem(ut)\n\nnkb=# \\d isi.funding_org\n Table \"isi.funding_org\"\n Column | Type |\nModifiers\n--------+-----------------------+--------------------------------------------------------------\n id | integer | not null default\nnextval('isi.funding_org_id_seq'::regclass)\n ut | character varying(15) |\n go | citext |\n gn | character varying |\nIndexes:\n \"funding_org_pkey\" PRIMARY KEY, btree (id)\n \"funding_org_ut_idx\" btree (ut)\nForeign-key constraints:\n \"funding_org_ut_fkey\" FOREIGN KEY (ut) REFERENCES isi.ritem(ut)\n\n\n Table \"isi.africa_uts\"\n Column | Type |\nModifiers\n--------+-----------------------+-------------------------------------------------------------\n ut | character varying(15) |\n id | integer | not null default\nnextval('isi.africa_uts_id_seq'::regclass)\nIndexes:\n \"africa_uts_pkey\" PRIMARY KEY, btree (id)\n \"africa_ut_idx\" btree (ut)\nForeign-key constraints:\n \"africa_uts_ut_fkey\" FOREIGN KEY (ut) REFERENCES isi.ritem(ut)\n\n\n Table \"isi.rauthor\"\n Column | Type 
|\nModifiers\n---------+------------------------+----------------------------------------------------------\n id      | integer                | not null default\nnextval('isi.rauthor_id_seq'::regclass)\n rart_id | character varying(15)  |\n au      | character varying(75)  |\n ro      | character varying(30)  |\n ln      | character varying(200) |\n af      | character varying(200) |\n ras     | character varying(4)   |\n ad      | integer                |\n aa      | text                   |\n em      | character varying(250) |\n ag      | character varying(75)  |\n tsv     | tsvector               |\nIndexes:\n    \"rauthor_pkey\" PRIMARY KEY, btree (id) CLUSTER\n    \"rauthor_ad_idx\" btree (ad)\n    \"rauthor_au_idx\" btree (au)\n    \"rauthor_lower_idx\" btree (lower(au::text))\n    \"rauthor_lower_lower1_idx\" btree (lower(ln::text), lower(af::text))\n    \"rauthor_rart_id_idx\" btree (rart_id)\n    \"rauthor_tsv_idx\" gin (tsv)\nReferenced by:\n    TABLE \"level1.person\" CONSTRAINT \"person_auth_id_fkey\" FOREIGN KEY\n(auth_id) REFERENCES isi.rauthor(id) ON DELETE CASCADE\nTriggers:\n    tsvectorupdate_for_rauthor BEFORE INSERT OR UPDATE ON isi.rauthor FOR\nEACH ROW EXECUTE PROCEDURE isi.update_rauthor_tsv()\n\nRegards\nJohann\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)",
"msg_date": "Wed, 10 Jun 2015 15:42:03 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query - lots of temporary files."
},
{
"msg_contents": "\n\nOn 06/10/15 15:42, Johann Spies wrote:\n> On 10 June 2015 at 15:02, Claudio Freire <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n>\n> The joins are different on both versions, and the most likely culprit\n> is the join against D. It's probably wrong, and the first query is\n> building a cartesian product.\n>\n> Without more information about the schema it's difficult to be sure\n> though.\n>\n>\n> Thanks for your reply. I will experiment futher with different joins.\n\nI don't know what you mean by \"experimenting with joins\" - that should \nbe determined by the schema.\n\nThe problematic piece of the explain plan is this:\n\n -> Merge Join (cost=4384310.92..21202716.78 rows=6664163593\n width=390)\"\n Output: a.ut, c.gt, b.go, b.gn, d.au\"\n Merge Cond: ((c.ut)::text = (d.rart_id)::text)\"\n\nThat is, the planner expects ~6.7 billion rows, each ~390B wide. That's \n~2.5TB of data that needs to be stored to disk (so that the sort can \nprocess it).\n\nThe way the schema is designed might be one of the issues - ISTM the \n'ut' column is somehow universal, mixing values referencing different \ncolumns in multiple tables. Not only that's utterly misleading for the \nplanner (and may easily cause issues with huge intermediate results), \nbut it also makes formulating the queries very difficult. And of course, \nthe casting between text and int is not very good either.\n\nFix the schema to follow relational best practices - separate the values \ninto multiple columns, and most of this will go away.\n\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 16:50:50 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query - lots of temporary files."
},
{
"msg_contents": "On 10 June 2015 at 16:50, Tomas Vondra <[email protected]> wrote:\n\n>\n>\n> The problematic piece of the explain plan is this:\n>\n> -> Merge Join (cost=4384310.92..21202716.78 rows=6664163593\n> width=390)\"\n> Output: a.ut, c.gt, b.go, b.gn, d.au\"\n> Merge Cond: ((c.ut)::text = (d.rart_id)::text)\"\n>\n> That is, the planner expects ~6.7 billion rows, each ~390B wide. That's\n> ~2.5TB of data that needs to be stored to disk (so that the sort can\n> process it).\n>\n> The way the schema is designed might be one of the issues - ISTM the 'ut'\n> column is somehow universal, mixing values referencing different columns in\n> multiple tables. Not only that's utterly misleading for the planner (and\n> may easily cause issues with huge intermediate results), but it also makes\n> formulating the queries very difficult. And of course, the casting between\n> text and int is not very good either.\n>\n> Fix the schema to follow relational best practices - separate the values\n> into multiple columns, and most of this will go away.\n>\n\nThanks for your reply Tomas.\n\nI do not understand what the problem with the 'ut' column is. It is a\nunique identifier in the first table(africa_uts) and is used in the other\ntables to establish joins and does have the same type definition in all the\ntables. Is the problem in the similar name. The data refers in all the\n'ut' columns of the different tables to the same data. I do not casting of\nintegers into text in this case. I don't know why the planner is doing\nit. The field 'rart_id' in isi.rauthor is just another name for 'ut' in\nthe other tables and have the same datatype.\n\nI do not understand your remark: \"separate the values into multiple\ncolumns\". I cannot see which values can be separated into different columns\nin the schema. Do you mean in the query? Why?\n\n\nJohann\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. 
(Psalm 63:3)\n\nOn 10 June 2015 at 16:50, Tomas Vondra <[email protected]> wrote:\n\nThe problematic piece of the explain plan is this:\n\n -> Merge Join (cost=4384310.92..21202716.78 rows=6664163593\n width=390)\"\n Output: a.ut, c.gt, b.go, b.gn, d.au\"\n Merge Cond: ((c.ut)::text = (d.rart_id)::text)\"\n\nThat is, the planner expects ~6.7 billion rows, each ~390B wide. That's ~2.5TB of data that needs to be stored to disk (so that the sort can process it).\n\nThe way the schema is designed might be one of the issues - ISTM the 'ut' column is somehow universal, mixing values referencing different columns in multiple tables. Not only that's utterly misleading for the planner (and may easily cause issues with huge intermediate results), but it also makes formulating the queries very difficult. And of course, the casting between text and int is not very good either.\n\nFix the schema to follow relational best practices - separate the values into multiple columns, and most of this will go away.Thanks for your reply Tomas.I do not understand what the problem with the 'ut' column is. It is a unique identifier in the first table(africa_uts) and is used in the other tables to establish joins and does have the same type definition in all the tables. Is the problem in the similar name. The data refers in all the 'ut' columns of the different tables to the same data. I do not casting of integers into text in this case. I don't know why the planner is doing it. The field 'rart_id' in isi.rauthor is just another name for 'ut' in the other tables and have the same datatype.I do not understand your remark: \"separate the values into multiple columns\". I cannot see which values can be separated into different columns in the schema. Do you mean in the query? Why?Johann-- Because experiencing your loyal love is better than life itself, my lips will praise you. (Psalm 63:3)",
"msg_date": "Fri, 12 Jun 2015 17:04:48 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query - lots of temporary files."
}
] |
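The arithmetic behind Tomas's warning in the thread above can be checked directly: a sort has to materialize the join's intermediate result, whose size is roughly row count times row width. A minimal Python sketch, using only the `rows` and `width` figures quoted from the EXPLAIN output:

```python
# Back-of-envelope check of the planner numbers discussed above:
# a Merge Join estimated at ~6.7 billion rows of ~390 bytes each.
# Both figures are copied from the EXPLAIN output quoted in the thread.

def intermediate_size_bytes(rows: int, width: int) -> int:
    """Size of a materialized intermediate result: rows x row width."""
    return rows * width

rows = 6_664_163_593   # planner's row estimate for the Merge Join
width = 390            # estimated row width in bytes

size_tb = intermediate_size_bytes(rows, width) / 1e12  # decimal terabytes
print(f"estimated intermediate result: {size_tb:.1f} TB")  # ~2.6 TB
```

That ~2.6 TB (decimal) is the same ballpark as the "~2.5TB of data that needs to be stored to disk" quoted above, which is why fixing the join estimate matters far more here than tuning sort memory.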
[
{
"msg_contents": "Hi everyone --\n\nI had an issue the other day where a relatively simple query went from\ntaking about 1 minute to execute to taking 19 hours. It seems that the\nplanner chooses to use a materialize sometimes [1] and not other times\n[2]. I think the issue is that the row count estimate for the result\nof the condition \"type_id = 23 and ref.attributes ? 'reference'\" is\nabout 10k rows, but the actual result is 4624280. It seems the\nestimate varies slightly over time, and if it drops low enough then\nthe planner decides to materialize the result of the bitmap heap scan\nand the query takes forever.\n\nAs an exercise, I tried removing the clause \"ref.attributes ?\n'reference'\" and the estimates are very accurate [3].\n\nIt seems to me that improving the row estimates would be prudent, but\nI can't figure out how postgres figures out the estimate for the\nhstore ? operator. I suspect it is making a wild guess, since it has\nno data on the contents of the hstore in its estimates.\n\n[1] Query plan with materialize:\n# explain (analyze, buffers) declare \"foo_cursor\" cursor for SELECT\n ref.case_id, array_agg(ref.attributes -> 'reference') FROM\ncomponent ref JOIN document c ON c.id = ref.case_id WHERE ref.type_id\n= 23 AND ref.attributes ? 
'reference' AND NOT 0 = ANY(c.types) GROUP\nBY ref.case_id;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=390241.39..16973874.43 rows=10 width=35)\n(actual time=81253.653..69928443.721 rows=90969 loops=1)\n Buffers: shared hit=54539 read=1551654, temp read=11959529274 written=126377\n -> Nested Loop (cost=390241.39..16973835.54 rows=5169 width=35)\n(actual time=16157.261..69925063.268 rows=2488142 loops=1)\n Join Filter: (ref.case_id = c.id)\n Rows Removed by Join Filter: 437611625378\n Buffers: shared hit=54539 read=1551654, temp read=11959529274\nwritten=126377\n -> Index Scan using document_pkey on document c\n(cost=0.42..314999.89 rows=99659 width=4) (actual\ntime=0.016..59255.527 rows=94634 loops=1)\n Filter: (0 <> ALL (types))\n Rows Removed by Filter: 70829\n Buffers: shared hit=54539 read=113549\n -> Materialize (cost=390240.97..3189944.33 rows=9010\nwidth=35) (actual time=0.088..450.898 rows=4624280 loops=94634)\n Buffers: shared read=1438105, temp read=11959529274\nwritten=126377\n -> Bitmap Heap Scan on component ref\n(cost=390240.97..3189899.28 rows=9010 width=35) (actual\ntime=8107.625..79508.136 rows=4624280 loops=1)\n Recheck Cond: (type_id = 23)\n Rows Removed by Index Recheck: 49237707\n Filter: (attributes ? 
'reference'::text)\n Rows Removed by Filter: 4496624\n Buffers: shared read=1438105\n -> Bitmap Index Scan on component_type_id\n(cost=0.00..390238.72 rows=9009887 width=0) (actual\ntime=8105.963..8105.963 rows=9666659 loops=1)\n Index Cond: (type_id = 23)\n Buffers: shared read=156948\n Total runtime: 69928552.870 ms\n\n[2] Query plan with simple bitmap heap scan:\n# explain (analyze, buffers) declare \"foo_cursor\" cursor for SELECT\n ref.case_id, array_agg(ref.attributes -> 'reference') FROM\ncomponent ref JOIN document c ON c.id = ref.case_id WHERE ref.type_id\n= 23 AND ref.attributes ? 'reference' AND NOT 0 = ANY(c.types) GROUP\nBY ref.case_id;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3724614.33..3724614.46 rows=10 width=34) (actual\ntime=248900.627..249030.530 rows=90969 loops=1)\n Buffers: shared hit=16962106 read=5551197 written=5806\n -> Nested Loop (cost=488092.46..3724570.36 rows=5863 width=34)\n(actual time=30839.635..246416.327 rows=2488142 loops=1)\n Buffers: shared hit=16962106 read=5551197 written=5806\n -> Bitmap Heap Scan on component ref\n(cost=488088.03..3638070.85 rows=10220 width=34) (actual\ntime=30833.215..196239.109 rows=4624280 loops=1)\n Recheck Cond: (type_id = 23)\n Rows Removed by Index Recheck: 57730489\n Filter: (attributes ? 
'reference'::text)\n Rows Removed by Filter: 4496624\n Buffers: shared hit=6922 read=1901840 written=2252\n -> Bitmap Index Scan on component_type_id\n(cost=0.00..488085.48 rows=10220388 width=0) (actual\ntime=30811.185..30811.185 rows=13292968 loops=1)\n Index Cond: (type_id = 23)\n Buffers: shared hit=6922 read=162906 written=1529\n -> Bitmap Heap Scan on document c (cost=4.43..8.45 rows=1\nwidth=4) (actual time=0.010..0.010 rows=1 loops=4624280)\n Recheck Cond: (id = ref.case_id)\n Filter: (0 <> ALL (types))\n Rows Removed by Filter: 0\n Buffers: shared hit=16955184 read=3649357 written=3554\n -> Bitmap Index Scan on document_pkey\n(cost=0.00..4.43 rows=1 width=0) (actual time=0.004..0.004 rows=1\nloops=4624280)\n Index Cond: (id = ref.case_id)\n Buffers: shared hit=14090230 read=1890031 written=1819\n Total runtime: 249051.265 ms\n\n[3] Query plan with hstore clause removed:\n# explain (analyze, buffers) declare \"foo_cursor\" cursor for SELECT\n ref.case_id, array_agg(ref.attributes -> 'reference') FROM\ncomponent ref JOIN document c ON c.id = ref.case_id WHERE ref.type_id\n= 23 AND NOT 0 = ANY(c.types) GROUP BY ref.case_id;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=5922838.29..12804427.63 rows=9817 width=34)\n(actual time=168896.503..181202.278 rows=93580 loops=1)\n Buffers: shared hit=13847 read=2104804 written=26, temp read=902336\nwritten=902336\n -> Merge Join (cost=5922838.29..12760329.44 rows=5863397\nwidth=34) (actual time=168896.459..180103.335 rows=5115136 loops=1)\n Merge Cond: (c.id = ref.case_id)\n Buffers: shared hit=13847 read=2104804 written=26, temp\nread=902336 written=902336\n -> Index Scan using document_pkey on document c\n(cost=0.43..6696889.20 rows=2128590 width=4) (actual\ntime=0.006..7684.681 rows=94634 loops=1)\n Filter: (0 <> ALL (types))\n Rows Removed by 
Filter: 70829\n Buffers: shared hit=13847 read=196042\n -> Materialize (cost=5922836.37..5973938.31 rows=10220388\nwidth=34) (actual time=168896.449..171403.773 rows=9120904 loops=1)\n Buffers: shared read=1908762 written=26, temp\nread=902336 written=902336\n -> Sort (cost=5922836.37..5948387.34 rows=10220388\nwidth=34) (actual time=168896.447..170586.341 rows=9120904 loops=1)\n Sort Key: ref.case_id\n Sort Method: external merge Disk: 1392648kB\n Buffers: shared read=1908762 written=26, temp\nread=902336 written=902336\n -> Bitmap Heap Scan on component ref\n(cost=490640.57..3615072.42 rows=10220388 width=34) (actual\ntime=21652.779..148333.012 rows=9120904 loops=1)\n Recheck Cond: (type_id = 23)\n Rows Removed by Index Recheck: 57730489\n Buffers: shared read=1908762 written=26\n -> Bitmap Index Scan on component_type_id\n(cost=0.00..488085.48 rows=10220388 width=0) (actual\ntime=21649.716..21649.716 rows=13292968 loops=1)\n Index Cond: (type_id = 23)\n Buffers: shared read=169828\n Total runtime: 181378.101 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 10:32:18 -0700",
"msg_from": "Patrick Krecker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Row estimates off by two orders of magnitude with hstore"
},
{
"msg_contents": "On Wed, Jun 10, 2015 at 12:32 PM, Patrick Krecker <[email protected]> wrote:\n> Hi everyone --\n>\n> I had an issue the other day where a relatively simple query went from\n> taking about 1 minute to execute to taking 19 hours. It seems that the\n> planner chooses to use a materialize sometimes [1] and not other times\n> [2]. I think the issue is that the row count estimate for the result\n> of the condition \"type_id = 23 and ref.attributes ? 'reference'\" is\n> about 10k rows, but the actual result is 4624280. It seems the\n> estimate varies slightly over time, and if it drops low enough then\n> the planner decides to materialize the result of the bitmap heap scan\n> and the query takes forever.\n>\n> As an exercise, I tried removing the clause \"ref.attributes ?\n> 'reference'\" and the estimates are very accurate [3].\n\nThis is a fundamental issue with using 'database in a box' datatypes\nlike hstore and jsonb. They are opaque to the statistics gathering\nsystem and so are unable to give reasonable estimates beyond broad\nassumptions. Speaking generally, the workarounds are:\n\n*) disable particular plan choices for this query\n(materialize/nestloop are common culprits)\n\n*) create btree indexes around specific extraction clauses\n\n*) refactor some of the query into a set returning function with a\ncustom ROWS clause\n\n*) try an alternate indexing strategy such as jsonb/jsquery\n\n*) move out of hstore and into a more standard relational structure\n\nnone of the above may be ideal in your particular case.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 13:32:17 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row estimates off by two orders of magnitude with hstore"
},
{
"msg_contents": "On 06/10/2015 11:32 AM, Merlin Moncure wrote:\n> This is a fundamental issue with using 'database in a box' datatypes\n> like hstore and jsonb. They are opaque to the statistics gathering\n> system and so are unable to give reasonable estimates beyond broad\n> assumptions. Speaking generally, the workarounds are too:\n> \n> *) disable particular plan choices for this query\n> (materialize/nestloop are common culprits)\n> \n> *) create btree indexes around specific extraction clauses\n> \n> *) refactor some of the query into set returning function with a\n> custom ROWS clause\n> \n> *) try alternate indexing strategy such as jsonb/jsquery\n> \n> *) move out of hstore and into more standard relational strucure\n\nYou forgot:\n\n*) Fund a PostgreSQL developer to add selectivity estimation and stats\nto hstore.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 12:40:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row estimates off by two orders of magnitude with hstore"
},
{
"msg_contents": "On Wed, Jun 10, 2015 at 2:40 PM, Josh Berkus <[email protected]> wrote:\n> On 06/10/2015 11:32 AM, Merlin Moncure wrote:\n>> This is a fundamental issue with using 'database in a box' datatypes\n>> like hstore and jsonb. They are opaque to the statistics gathering\n>> system and so are unable to give reasonable estimates beyond broad\n>> assumptions. Speaking generally, the workarounds are too:\n>>\n>> *) disable particular plan choices for this query\n>> (materialize/nestloop are common culprits)\n>>\n>> *) create btree indexes around specific extraction clauses\n>>\n>> *) refactor some of the query into set returning function with a\n>> custom ROWS clause\n>>\n>> *) try alternate indexing strategy such as jsonb/jsquery\n>>\n>> *) move out of hstore and into more standard relational strucure\n>\n> You forgot:\n>\n> *) Fund a PostgreSQL developer to add selectivity estimation and stats\n> to hstore.\n\nWell, I don't know. That's really complex to the point of making me\nwonder if it's worth doing even given infinite time and resources. If\nit was my money, I'd be researching a clean way to inject estimate\nreturning expressions into the query that the planner could utilize.\nNot 'hints' which are really about managing the output of the planner,\njust what feeds in. Also lots of various solutions of alcohol to\nlubricate the attendant -hackers discussions.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 15:01:43 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row estimates off by two orders of magnitude with hstore"
},
{
"msg_contents": "OK. Well, fortunately for us, we have a lot of possible solutions this\nproblem, and it sounds like actually getting statistics for attributes\n? 'reference' is not realistic. I just wanted to make sure it wasn't\nsome configuration error on our part.\n\nCan anyone explain where exactly the estimate for that clause comes from?\n\nI tried adding an index and I don't think it improved the estimation,\nthe planner still thinks there will be 9k rows as a result of type_id\n= 23 and attributes ? 'reference'. [1]. It might make the pathological\nplan less likely though. It's not clear to me that it reduces the risk\nof a pathological plan to zero.\n\nI also tried wrapping it in a subquery [2]. The estimate is, of\ncourse, still awful, but it doesn't matter anymore because it can't\npick a plan that leverages its low estimate. Its only choice is a\nsimple filter on the results.\n\n[1]\n# CREATE INDEX foobarbaz ON component((attributes -> 'reference'))\nWHERE ( attributes ? 'reference' );\n\nCREATE INDEX\n\njudicata=# explain (analyze, buffers) declare \"foo_cursor\" cursor for\nSELECT ref.case_id, array_agg(ref.attributes -> 'reference')\nFROM component ref JOIN document c ON c.id = ref.case_id WHERE\nref.type_id = 23 AND ref.attributes ? 
'reference' AND NOT 0 =\nANY(c.types) GROUP BY ref.case_id;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=419667.86..419667.99 rows=10 width=34) (actual\ntime=97074.773..97197.487 rows=90969 loops=1)\n Buffers: shared hit=16954389 read=4533956 dirtied=2963 written=4759\n -> Nested Loop (cost=5472.44..419628.76 rows=5213 width=34)\n(actual time=537.202..94710.844 rows=2488142 loops=1)\n Buffers: shared hit=16954389 read=4533956 dirtied=2963 written=4759\n -> Bitmap Heap Scan on component ref\n(cost=5468.01..342716.88 rows=9087 width=34) (actual\ntime=534.862..49617.945 rows=4624280 loops=1)\n Recheck Cond: (attributes ? 'reference'::text)\n Rows Removed by Index Recheck: 28739170\n Filter: (type_id = 23)\n Rows Removed by Filter: 165268\n Buffers: shared hit=25 read=921758 dirtied=2963 written=906\n -> Bitmap Index Scan on foobarbaz (cost=0.00..5465.74\nrows=98636 width=0) (actual time=532.215..532.215 rows=4789548\nloops=1)\n Buffers: shared read=59300 written=57\n -> Bitmap Heap Scan on document c (cost=4.43..8.45 rows=1\nwidth=4) (actual time=0.009..0.009 rows=1 loops=4624280)\n Recheck Cond: (id = ref.case_id)\n Filter: (0 <> ALL (types))\n Rows Removed by Filter: 0\n Buffers: shared hit=16954364 read=3612198 written=3853\n -> Bitmap Index Scan on document_pkey\n(cost=0.00..4.43 rows=1 width=0) (actual time=0.003..0.003 rows=1\nloops=4624280)\n Index Cond: (id = ref.case_id)\n Buffers: shared hit=14082540 read=1859742 written=1974\n Total runtime: 97217.718 ms\n\n[2]\n# explain (analyze, buffers) declare \"foo_cursor\" cursor for SELECT *\nFROM (SELECT ref.case_id as case_id, array_agg(ref.attributes\n-> 'reference') as reference FROM component ref JOIN document c ON\nc.id = ref.case_id WHERE ref.type_id = 23 AND NOT 0 = ANY(c.types)\nGROUP BY ref.case_id) as t WHERE reference IS NOT NULL;\n\n QUERY 
PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=5636347.52..12524155.45 rows=9817 width=34)\n(actual time=165466.502..195035.433 rows=93580 loops=1)\n Filter: (array_agg((ref.attributes -> 'reference'::text)) IS NOT NULL)\n Buffers: shared hit=13884 read=2085572 written=2952, temp\nread=902337 written=902337\n -> Merge Join (cost=5636347.52..12458841.11 rows=5213367\nwidth=34) (actual time=165383.814..193813.490 rows=5115136 loops=1)\n Merge Cond: (c.id = ref.case_id)\n Buffers: shared hit=13884 read=2085572 written=2952, temp\nread=902337 written=902337\n -> Index Scan using document_pkey on document c\n(cost=0.43..6696889.20 rows=2128590 width=4) (actual\ntime=0.009..24720.726 rows=94634 loops=1)\n Filter: (0 <> ALL (types))\n Rows Removed by Filter: 70829\n Buffers: shared hit=13852 read=195821\n -> Materialize (cost=5636345.76..5681782.42 rows=9087332\nwidth=34) (actual time=165383.798..168027.149 rows=9120904 loops=1)\n Buffers: shared hit=32 read=1889751 written=2952, temp\nread=902337 written=902337\n -> Sort (cost=5636345.76..5659064.09 rows=9087332\nwidth=34) (actual time=165383.793..167173.325 rows=9120904 loops=1)\n Sort Key: ref.case_id\n Sort Method: external merge Disk: 1392648kB\n Buffers: shared hit=32 read=1889751 written=2952,\ntemp read=902337 written=902337\n -> Bitmap Heap Scan on component ref\n(cost=481859.39..3592128.04 rows=9087332 width=34) (actual\ntime=20950.899..145515.599 rows=9120904 loops=1)\n Recheck Cond: (type_id = 23)\n Rows Removed by Index Recheck: 57286889\n Buffers: shared hit=32 read=1889751 written=2952\n -> Bitmap Index Scan on component_type_id\n(cost=0.00..479587.56 rows=9087332 width=0) (actual\ntime=20947.739..20947.739 rows=12143019 loops=1)\n Index Cond: (type_id = 23)\n Buffers: shared read=164918 written=2816\n Total runtime: 195213.232 ms\n\nOn Wed, Jun 
10, 2015 at 1:01 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Jun 10, 2015 at 2:40 PM, Josh Berkus <[email protected]> wrote:\n>> On 06/10/2015 11:32 AM, Merlin Moncure wrote:\n>>> This is a fundamental issue with using 'database in a box' datatypes\n>>> like hstore and jsonb. They are opaque to the statistics gathering\n>>> system and so are unable to give reasonable estimates beyond broad\n>>> assumptions. Speaking generally, the workarounds are too:\n>>>\n>>> *) disable particular plan choices for this query\n>>> (materialize/nestloop are common culprits)\n>>>\n>>> *) create btree indexes around specific extraction clauses\n>>>\n>>> *) refactor some of the query into set returning function with a\n>>> custom ROWS clause\n>>>\n>>> *) try alternate indexing strategy such as jsonb/jsquery\n>>>\n>>> *) move out of hstore and into more standard relational strucure\n>>\n>> You forgot:\n>>\n>> *) Fund a PostgreSQL developer to add selectivity estimation and stats\n>> to hstore.\n>\n> Well, I don't know. That's really complex to the point of making me\n> wonder if it's worth doing even given infinite time and resources. If\n> it was my money, I'd be researching a clean way to inject estimate\n> returning expressions into the query that the planner could utilize.\n> Not 'hints' which are really about managing the output of the planner,\n> just what feeds in. Also lots of various solutions of alcohol to\n> lubricate the attendant -hackers discussions.\n>\n> merlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 13:55:39 -0700",
"msg_from": "Patrick Krecker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row estimates off by two orders of magnitude with hstore"
},
{
"msg_contents": "On Wed, Jun 10, 2015 at 3:55 PM, Patrick Krecker <[email protected]> wrote:\n> OK. Well, fortunately for us, we have a lot of possible solutions this\n> problem, and it sounds like actually getting statistics for attributes\n> ? 'reference' is not realistic. I just wanted to make sure it wasn't\n> some configuration error on our part.\n>\n> Can anyone explain where exactly the estimate for that clause comes from?\n>\n> I tried adding an index and I don't think it improved the estimation,\n> the planner still thinks there will be 9k rows as a result of type_id\n> = 23 and attributes ? 'reference'. [1]. It might make the pathological\n> plan less likely though. It's not clear to me that it reduces the risk\n> of a pathological plan to zero.\n\nno, but done in conjunction with disabling nestloop and\nmaterialize query plans (say, via SET LOCAL) it will\nprobably be fast and future proof.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 16:08:36 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row estimates off by two orders of magnitude with hstore"
},
{
"msg_contents": "On Wed, Jun 10, 2015 at 2:08 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Jun 10, 2015 at 3:55 PM, Patrick Krecker <[email protected]> wrote:\n>> OK. Well, fortunately for us, we have a lot of possible solutions this\n>> problem, and it sounds like actually getting statistics for attributes\n>> ? 'reference' is not realistic. I just wanted to make sure it wasn't\n>> some configuration error on our part.\n>>\n>> Can anyone explain where exactly the estimate for that clause comes from?\n>>\n>> I tried adding an index and I don't think it improved the estimation,\n>> the planner still thinks there will be 9k rows as a result of type_id\n>> = 23 and attributes ? 'reference'. [1]. It might make the pathological\n>> plan less likely though. It's not clear to me that it reduces the risk\n>> of a pathological plan to zero.\n>\n> no, but done in conjunction with disabling managing out nestloops and\n> materliaze query plans, nestloops (say, via SET LOCAL) it will\n> probably be fast and future proof..\n>\n> merlin\n\nWouldn't wrapping it in an optimization fence (e.g. SELECT * FROM\n(...) AS t WHERE t.attributes ? 'reference') have the same effect as\ndisabling materialize, but allow the planner to optimize the inner\nquery however it wants?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 14:37:00 -0700",
"msg_from": "Patrick Krecker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row estimates off by two orders of magnitude with hstore"
},
{
"msg_contents": "On Wed, Jun 10, 2015 at 4:37 PM, Patrick Krecker <[email protected]> wrote:\n> On Wed, Jun 10, 2015 at 2:08 PM, Merlin Moncure <[email protected]> wrote:\n>> On Wed, Jun 10, 2015 at 3:55 PM, Patrick Krecker <[email protected]> wrote:\n>>> OK. Well, fortunately for us, we have a lot of possible solutions this\n>>> problem, and it sounds like actually getting statistics for attributes\n>>> ? 'reference' is not realistic. I just wanted to make sure it wasn't\n>>> some configuration error on our part.\n>>>\n>>> Can anyone explain where exactly the estimate for that clause comes from?\n>>>\n>>> I tried adding an index and I don't think it improved the estimation,\n>>> the planner still thinks there will be 9k rows as a result of type_id\n>>> = 23 and attributes ? 'reference'. [1]. It might make the pathological\n>>> plan less likely though. It's not clear to me that it reduces the risk\n>>> of a pathological plan to zero.\n>>\n>> no, but done in conjunction with disabling managing out nestloops and\n>> materliaze query plans, nestloops (say, via SET LOCAL) it will\n>> probably be fast and future proof..\n>>\n>> merlin\n>\n> Wouldn't wrapping it in an optimization fence (e.g. SELECT * FROM\n> (...) AS t WHERE t.attributes ? 'reference') have the same effect as\n> disabling materialize, but allow the planner to optimize the inner\n> query however it wants?\n\nyes, but\nselect * from (query) q;\n\nis not an optimization fence. the server is smarter than you and I and\nwill immediately flatten that back out :-). however,\n\nselect * from (query ... OFFSET 0) q;\nand the more portable\nwith data as (query) select ... from data;\n\ncan fix up the estimates. they are both materialization fences\nessentially though.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jun 2015 17:08:10 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row estimates off by two orders of magnitude with hstore"
}
] |
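The 19-hour "materialize" plan in the thread above comes down to rescan arithmetic: a nested loop re-reads the materialized inner side once per outer row, so the join filter is evaluated outer_rows × inner_rows times. A small Python sketch using the actual row counts reported by the first EXPLAIN ANALYZE (plan [1]):

```python
# Rescan arithmetic for the pathological nested-loop-over-materialize
# plan quoted above.  All three inputs are actual row counts from the
# EXPLAIN ANALYZE output in the first message of the thread.

outer_rows = 94_634      # document rows surviving "0 <> ALL (types)"
inner_rows = 4_624_280   # materialized component rows (estimate was ~9k!)
matched = 2_488_142      # rows the join actually produced

comparisons = outer_rows * inner_rows   # join-filter evaluations
removed = comparisons - matched         # rows discarded by the filter

print(f"{comparisons:,} filter evaluations, {removed:,} rows removed")
```

`removed` works out to 437,611,625,378, exactly the "Rows Removed by Join Filter" figure in the plan: the ~500x underestimate of the inner side is what turned a one-minute query into ~437 billion comparisons.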
[
{
"msg_contents": "Hi,\n\nI have a query that takes ridiculously long to complete (over 500ms) but \nif I disable nested loop it does it really fast (24.5ms)\n\nHere are links for\n* first request (everything enabled): http://explain.depesz.com/s/Q1M\n* second request (nested loop disabled): http://explain.depesz.com/s/9ZY\n\nI have also noticed, that setting\n\nset join_collapse_limit = 1;\n\nproduces similar results as when nested loops are disabled.\n\nAutovacuumm is running, and I did manually performed both: analyze and \nvacuumm analyze. No effect.\n\nI tried increasing statistics for columns (slot, path_id, key) to 5000 \nfor table data. No effect.\n\nI tried increasing statistics for columns (id, parent, key) to 5000 for \ntable path. No effect.\n\nI can see, that postgres is doing wrong estimation on request count, but \nI can't figure it out why.\n\nTable path is used to represent tree-like structure.\n\n== QUERY ==\n\nSELECT p1.value as request_type, p2.value as app_id, p3.value as app_ip, \np3.id as id, data.*, server.name\nFROM data\nINNER JOIN path p3 ON data.path_id = p3.id\nINNER JOIN server on data.server_id = server.id\nINNER JOIN path p2 on p2.id = p3.parent\nINNER JOIN path p1 on p1.id = p2.parent\nWHERE data.slot between '2015-02-18 00:00:00' and '2015-02-19 00:00:00'\n AND p1.key = 'request_type' AND p2.key = 'app_id' AND p3.key = 'app_ip'\n;\n\n== TABLES ==\n Table \"public.path\"\n Column | Type | Modifiers | \nStorage | Description\n--------+-----------------------+---------------------------------------------------+----------+-------------\n id | integer | not null default \nnextval('path_id_seq'::regclass) | plain |\n parent | integer | | \nplain |\n key | character varying(25) | not \nnull | extended |\n value | character varying(50) | not \nnull | extended |\nIndexes:\n \"path_pkey\" PRIMARY KEY, btree (id)\n \"path_unique\" UNIQUE CONSTRAINT, btree (parent, key, value)\nForeign-key constraints:\n \"path.fg.parent->path(id)\" FOREIGN KEY 
(parent) REFERENCES path(id)\nReferenced by:\n TABLE \"data\" CONSTRAINT \"data_fkey_path\" FOREIGN KEY (path_id) \nREFERENCES path(id)\n TABLE \"path\" CONSTRAINT \"path.fg.parent->path(id)\" FOREIGN KEY \n(parent) REFERENCES path(id)\nHas OIDs: no\n\n Table \"public.data\"\n Column | Type | Modifiers | Storage | \nDescription\n-----------+--------------------------------+-----------+----------+-------------\n slot | timestamp(0) without time zone | not null | plain |\n server_id | integer | not null | plain |\n path_id | integer | not null | plain |\n key | character varying(50) | not null | extended |\n value | real | not null | plain |\nIndexes:\n \"data_pkey\" PRIMARY KEY, btree (slot, server_id, path_id, key)\nForeign-key constraints:\n \"data_fkey_path\" FOREIGN KEY (path_id) REFERENCES path(id)\nHas OIDs: no\n\nsvilic=> select count(*) from path;\n count\n-------\n 603\n\nsvilic=> select count(*) from path p1 inner join path p2 on p1.id = \np2.parent inner join path p3 on p2.id = p3.parent where p1.parent is null;\n count\n-------\n 463\n\nsvilic=> select count(*) from server;\n count\n-------\n 37\n\nsvilic=> select count(*) from data;\n count\n----------\n 23495552\n\n\nsvilic=> select version();\nversion\n-------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.17 on x86_64-unknown-linux-gnu, compiled by gcc \n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\n== SERVER CONFIGURATION ==\n\nshared_buffers = 512MB\nwork_mem = 8MB (I have tried changing it to 32, 128 and 512, no effect)\nmaintenance_work_mem = 64MB\ncheckpoint_segments = 100\nrandom_page_cost = 4.0\neffective_cache_size = 3072MB\n\n== HARDWARE CONFIGURATION ==\n\ncpu: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz (4 cores)\nmem: 8GB\nsystem is using regular disks, (no raid and no ssd)\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Jun 2015 02:18:29 +0200",
"msg_from": "Sasa Vilic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query: Postgres chooses nested loop over hash join, whereby\n hash join is much faster, wrong number of rows estimated"
},
{
"msg_contents": "On Thu, Jun 11, 2015 at 7:18 PM, Sasa Vilic <[email protected]> wrote:\n> Hi,\n>\n> I have a query that takes ridiculously long to complete (over 500ms) but if\n> I disable nested loop it does it really fast (24.5ms)\n>\n> Here are links for\n> * first request (everything enabled): http://explain.depesz.com/s/Q1M\n> * second request (nested loop disabled): http://explain.depesz.com/s/9ZY\n>\n> I have also noticed, that setting\n>\n> set join_collapse_limit = 1;\n>\n> produces similar results as when nested loops are disabled.\n>\n> Autovacuumm is running, and I did manually performed both: analyze and\n> vacuumm analyze. No effect.\n>\n> I tried increasing statistics for columns (slot, path_id, key) to 5000 for\n> table data. No effect.\n>\n> I tried increasing statistics for columns (id, parent, key) to 5000 for\n> table path. No effect.\n>\n> I can see, that postgres is doing wrong estimation on request count, but I\n> can't figure it out why.\n>\n> Table path is used to represent tree-like structure.\n>\n> == QUERY ==\n>\n> SELECT p1.value as request_type, p2.value as app_id, p3.value as app_ip,\n> p3.id as id, data.*, server.name\n> FROM data\n> INNER JOIN path p3 ON data.path_id = p3.id\n> INNER JOIN server on data.server_id = server.id\n> INNER JOIN path p2 on p2.id = p3.parent\n> INNER JOIN path p1 on p1.id = p2.parent\n> WHERE data.slot between '2015-02-18 00:00:00' and '2015-02-19 00:00:00'\n> AND p1.key = 'request_type' AND p2.key = 'app_id' AND p3.key = 'app_ip'\n> ;\n>\n> == TABLES ==\n> Table \"public.path\"\n> Column | Type | Modifiers | Storage |\n> Description\n> --------+-----------------------+---------------------------------------------------+----------+-------------\n> id | integer | not null default\n> nextval('path_id_seq'::regclass) | plain |\n> parent | integer | |\n> plain |\n> key | character varying(25) | not null\n> | extended |\n> value | character varying(50) | not null\n> | extended |\n> Indexes:\n> \"path_pkey\" 
PRIMARY KEY, btree (id)\n> \"path_unique\" UNIQUE CONSTRAINT, btree (parent, key, value)\n> Foreign-key constraints:\n> \"path.fg.parent->path(id)\" FOREIGN KEY (parent) REFERENCES path(id)\n> Referenced by:\n> TABLE \"data\" CONSTRAINT \"data_fkey_path\" FOREIGN KEY (path_id)\n> REFERENCES path(id)\n> TABLE \"path\" CONSTRAINT \"path.fg.parent->path(id)\" FOREIGN KEY (parent)\n> REFERENCES path(id)\n> Has OIDs: no\n>\n> Table \"public.data\"\n> Column | Type | Modifiers | Storage |\n> Description\n> -----------+--------------------------------+-----------+----------+-------------\n> slot | timestamp(0) without time zone | not null | plain |\n> server_id | integer | not null | plain |\n> path_id | integer | not null | plain |\n> key | character varying(50) | not null | extended |\n> value | real | not null | plain |\n> Indexes:\n> \"data_pkey\" PRIMARY KEY, btree (slot, server_id, path_id, key)\n> Foreign-key constraints:\n> \"data_fkey_path\" FOREIGN KEY (path_id) REFERENCES path(id)\n> Has OIDs: no\n>\n> svilic=> select count(*) from path;\n> count\n> -------\n> 603\n>\n> svilic=> select count(*) from path p1 inner join path p2 on p1.id =\n> p2.parent inner join path p3 on p2.id = p3.parent where p1.parent is null;\n> count\n> -------\n> 463\n>\n> svilic=> select count(*) from server;\n> count\n> -------\n> 37\n>\n> svilic=> select count(*) from data;\n> count\n> ----------\n> 23495552\n>\n>\n> svilic=> select version();\n> version\n> -------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.1.17 on x86_64-unknown-linux-gnu, compiled by gcc\n> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>\n> == SERVER CONFIGURATION ==\n>\n> shared_buffers = 512MB\n> work_mem = 8MB (I have tried changing it to 32, 128 and 512, no effect)\n> maintenance_work_mem = 64MB\n> checkpoint_segments = 100\n> random_page_cost = 4.0\n> effective_cache_size = 3072MB\n>\n> == HARDWARE CONFIGURATION ==\n>\n> cpu: Intel(R) 
Core(TM) i3-2100 CPU @ 3.10GHz (4 cores)\n> mem: 8GB\n> system is using regular disks, (no raid and no ssd)\n\nhuh. the query looks pretty clean (except for possible overuse of\nsurrogate keys which tend to exacerbate planning issues in certain\ncases).\n\nLet's try cranking statistics on data.path_id, first to 1000 and then\nto 10000 and see how it affects the plan. The database is clearly\nmisestimating row counts on that join.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Jun 2015 09:25:24 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: Postgres chooses nested loop over hash\n join, whereby hash join is much faster, wrong number of rows estimated"
}
] |
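Merlin's suggestion above (cranking the statistics target on data.path_id and re-analyzing) can be sketched concretely; the table and column names come from the thread, and the target values are the ones he proposes:

```sql
-- Raise the per-column statistics target so ANALYZE samples more rows
-- and builds a larger MCV list/histogram for the misestimated join column.
ALTER TABLE data ALTER COLUMN path_id SET STATISTICS 1000;
ANALYZE data;
-- If the row estimates in EXPLAIN are still far off, try the maximum:
ALTER TABLE data ALTER COLUMN path_id SET STATISTICS 10000;
ANALYZE data;
```

After each ANALYZE, re-run EXPLAIN ANALYZE on the query and compare estimated vs. actual row counts on the path joins to see whether the planner stops choosing the nested loop.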
[
{
"msg_contents": "Guys we see spike in pg bouncer during the peak hours and that was slowing down the application. We did bump up the connection limit and it is helpful but now we again notice little spike in connection. And one thing that I notice that is different is jump in sv_used value when I run command show pools during problem times\n\n\nCan anyone please explain what value of sv_used means when i run show pools;\n\n\n\nRegards\nPrabhjot\n\n\n\n\n\n\n\n\n\n\nGuys we see spike in pg bouncer during the peak hours and that was slowing down the application. We did bump up the connection limit and it is helpful but now we again notice little spike in connection. And one thing that I notice that\n is different is jump in sv_used value when I run command show pools during problem times\n\n \n \nCan anyone please explain what value of sv_used means when i run show pools;\n \n \n\nRegards\nPrabhjot",
"msg_date": "Fri, 12 Jun 2015 17:57:22 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg bouncer issue what does sv_used column means"
},
{
"msg_contents": "\n> From: \"Sheena, Prabhjot\" <[email protected]>\n>To: \"[email protected]\" <[email protected]>; \"[email protected]\" <[email protected]> \n>Sent: Friday, 12 June 2015, 18:57\n>Subject: [GENERAL] pg bouncer issue what does sv_used column means\n> \n>\n>\n>Guys we see spike in pg bouncer during the peak hours and that was slowing down the application. We did bump up the connection limit and it is helpful but now we again notice little spike in connection. And one thing that I notice that is different is jump in sv_used value when I run command show pools during problem times \n> \n> \n>Can anyone please explain what value of sv_used means when i run show pools;\n> \n> \n>\n\n\nSee the manual:\n\n\n http://pgbouncer.projects.pgfoundry.org/doc/usage.html\n\nI believe in this instance \"used\" is interpreted as idle server connections that are in the process of being (periodically) health checked before being made available to clients again. A stab in the dark would be to check what query you're using in server_check_query.\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 12 Jun 2015 19:01:29 +0000 (UTC)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg bouncer issue what does sv_used column means"
},
{
"msg_contents": "Unsubscribe\n\nOn Fri, Jun 12, 2015 at 8:57 PM, Sheena, Prabhjot <\[email protected]> wrote:\n\n> Guys we see spike in pg bouncer during the peak hours and that was\n> slowing down the application. We did bump up the connection limit and it is\n> helpful but now we again notice little spike in connection. And one thing\n> that I notice that is different is jump in sv_used value when I run command\n> show pools during problem times\n>\n>\n>\n>\n>\n> *Can anyone please explain what value of sv_used means when i run show\n> pools;*\n>\n>\n>\n>\n>\n>\n> Regards\n>\n> *Prabhjot *\n>\n>\n>\n\nUnsubscribeOn Fri, Jun 12, 2015 at 8:57 PM, Sheena, Prabhjot <[email protected]> wrote:\n\n\nGuys we see spike in pg bouncer during the peak hours and that was slowing down the application. We did bump up the connection limit and it is helpful but now we again notice little spike in connection. And one thing that I notice that\n is different is jump in sv_used value when I run command show pools during problem times\n\n \n \nCan anyone please explain what value of sv_used means when i run show pools;\n \n \n\nRegards\nPrabhjot",
"msg_date": "Fri, 12 Jun 2015 23:26:36 +0300",
"msg_from": "Xenofon Papadopoulos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg bouncer issue what does sv_used column means"
}
] |
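For reference, the counters discussed above come from pgbouncer's admin console; a quick way to inspect them (the port is illustrative — use whatever pgbouncer listens on):

```sql
-- Connect with: psql -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;    -- sv_active = server connections in use,
               -- sv_idle   = idle and immediately reusable,
               -- sv_used   = idle longer than server_check_delay, so
               --             server_check_query must run on them before
               --             they are handed to a client again
SHOW CONFIG;   -- confirm server_check_query / server_check_delay values
```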
[
{
"msg_contents": "Last night I was doing some tuning on a database. The longest query I was\nrunning was taking around 160 seconds. I didn't see much change in the\nrunning time for that query, even after restarting PG.\n\nToday, with roughly the same system load (possibly even a bit heavier\nload), that query is running about 40 seconds.\n\nAre there tuning parameters in postgresql.conf that don't take effect right\naway, even after a restart of PG? The only thing I can come up that's\nhappened since last night was that we ran the nightly vacuum analyze on\nthat database, but I did not change the statistics target.\n\nThe parameters I was working with were:\n\neffective_cache_size\nshared_buffers\ntemp_buffers\nwork_mem\nmaintenance_work_mem\n\nLooking at the free command, I see a lot more memory being used for\nbuffer/cache today. (Centos 7.)\n--\nMike Nolan\[email protected]\n\nLast night I was doing some tuning on a database. The longest query I was running was taking around 160 seconds. I didn't see much change in the running time for that query, even after restarting PG.Today, with roughly the same system load (possibly even a bit heavier load), that query is running about 40 seconds. Are there tuning parameters in postgresql.conf that don't take effect right away, even after a restart of PG? The only thing I can come up that's happened since last night was that we ran the nightly vacuum analyze on that database, but I did not change the statistics target. The parameters I was working with were:effective_cache_sizeshared_bufferstemp_bufferswork_memmaintenance_work_memLooking at the free command, I see a lot more memory being used for buffer/cache today. (Centos 7.)--Mike [email protected]",
"msg_date": "Fri, 12 Jun 2015 16:37:00 -0400",
"msg_from": "Michael Nolan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Are there tuning parameters that don't take effect immediately?"
},
{
"msg_contents": "On Fri, Jun 12, 2015 at 4:37 PM, Michael Nolan <[email protected]> wrote:\n\n> The only thing I can come up that's happened since last night was that we\n> ran the nightly vacuum analyze on that database, but I did not change the\n> statistics target.\n>\n\n\nThe answer to your question is no, parameter changes at worst would take\neffect after a reboot - though most are used on the very next query that\nruns.\n\nThe vacuum would indeed likely account for the gains - there being\nsignificantly fewer dead/invisible rows to have to scan over and discard\nwhile retrieving the live rows that fulfill your query.\n\nDavid J.\n\nOn Fri, Jun 12, 2015 at 4:37 PM, Michael Nolan <[email protected]> wrote:The only thing I can come up that's happened since last night was that we ran the nightly vacuum analyze on that database, but I did not change the statistics target. The answer to your question is no, parameter changes at worst would take effect after a reboot - though most are used on the very next query that runs.The vacuum would indeed likely account for the gains - there being significantly fewer dead/invisible rows to have to scan over and discard while retrieving the live rows that fulfill your query.David J.",
"msg_date": "Fri, 12 Jun 2015 16:52:33 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are there tuning parameters that don't take effect immediately?"
},
{
"msg_contents": "\nOn 06/12/2015 01:37 PM, Michael Nolan wrote:\n> Last night I was doing some tuning on a database The longest query I\n> was running was taking around 160 seconds. I didn't see much change in\n> the running time for that query, even after restarting PG.\n>\n> Today, with roughly the same system load (possibly even a bit heavier\n> load), that query is running about 40 seconds.\n\nSounds like some of the relations are cached versus not.\n\nJD\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Jun 2015 14:00:28 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are there tuning parameters that don't take effect\n immediately?"
},
{
"msg_contents": "On Fri, Jun 12, 2015 at 4:52 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Fri, Jun 12, 2015 at 4:37 PM, Michael Nolan <[email protected]> wrote:\n>\n>> The only thing I can come up that's happened since last night was that we\n>> ran the nightly vacuum analyze on that database, but I did not change the\n>> statistics target.\n>>\n>\n>\n> The answer to your question is no, parameters changes are worse would\n> take effect after a reboot - though most are used on the very next query\n> that runs.\n>\n> The vacuum would indeed likely account for the gains - there being\n> significantly fewer dead/invisible rows to have to scan over and discard\n> while retrieving the live rows that fulfill your query.\n>\n> David J.\n>\n>\nI wouldn't have said there was much activity in those tables since the\nprevious day's vacuum, maybe a couple hundred rows changed or added in a\ntable that has nearly 900,000 rows, and the other tables involved probably\neven less than that. There may be one table with more activity, perhaps\n20,000 row updates and maybe a few dozen new rows in a table that has\n400,000 rows. Maybe I need to manually analyze that table more often?\n\nVacuum analyze verbose generate way too much output, is there a way to get\nsome more straight forward numbers from an analyze?\n\nI'm definitely not complaining about the improvement, I'm just trying to\nget a handle on what really caused it and whether I can improve it even\nfurther.\n--\nMike Nolan\n\nOn Fri, Jun 12, 2015 at 4:52 PM, David G. Johnston <[email protected]> wrote:On Fri, Jun 12, 2015 at 4:37 PM, Michael Nolan <[email protected]> wrote:The only thing I can come up that's happened since last night was that we ran the nightly vacuum analyze on that database, but I did not change the statistics target. 
The answer to your question is no, parameters changes are worse would take effect after a reboot - though most are used on the very next query that runs.The vacuum would indeed likely account for the gains - there being significantly fewer dead/invisible rows to have to scan over and discard while retrieving the live rows that fulfill your query.David J.\nI wouldn't have said there was much activity in those tables since the previous day's vacuum, maybe a couple hundred rows changed or added in a table that has nearly 900,000 rows, and the other tables involved probably even less than that. There may be one table with more activity, perhaps 20,000 row updates and maybe a few dozen new rows in a table that has 400,000 rows. Maybe I need to manually analyze that table more often? Vacuum analyze verbose generate way too much output, is there a way to get some more straight forward numbers from an analyze?I'm definitely not complaining about the improvement, I'm just trying to get a handle on what really caused it and whether I can improve it even further. --Mike Nolan",
"msg_date": "Fri, 12 Jun 2015 17:13:49 -0400",
"msg_from": "Michael Nolan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are there tuning parameters that don't take effect immediately?"
}
] |
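On Michael's question about getting straightforward numbers without wading through VACUUM ANALYZE VERBOSE output: the statistics collector already tracks per-table dead-row counts and vacuum/analyze timestamps, so a sketch like this answers "does that busy table need more frequent attention?" directly:

```sql
-- Dead vs. live rows and the last (auto)vacuum/analyze per table:
SELECT relname, n_live_tup, n_dead_tup,
       last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

-- A targeted ANALYZE of just the heavily updated table is cheap and
-- can be scheduled more often than the nightly run (name hypothetical):
ANALYZE busy_table;
```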
[
{
"msg_contents": "Here is some more information\n\npool_mode | transaction\n\nWe have transactional pooling and our application is set up in such a way that we have one query per transaction. We have set default pool size to 100.\n\nThis is output . As you guys can see active connection are 100 and 224 are waiting. We are planning to move default pool size to 250. Please suggest if you guys think otherwise\n\npgbouncer=# show pools;\ndatabase | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait\n-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------\npgbouncer | pgbouncer | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0\nsite | feature | 418 | 0 | 20 | 17 | 0 | 0 | 0 | 0\nsite | service | 621 | 224 | 100 | 0 | 0 | 0 | 0 | 0\nsite | zabbix | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n\nPrabhjot Singh\nDatabase Administrator\n\nCLASSMATES\n1501 4th Ave., Suite 400\nSeattle, WA 98101\n206.301.4937 o\n206.301.5701 f\n\nFrom: Sheena, Prabhjot\nSent: Friday, June 12, 2015 10:57 AM\nTo: '[email protected]'; '[email protected]'\nSubject: pg bouncer issue what does sv_used column means\n\nGuys we see spike in pg bouncer during the peak hours and that was slowing down the application. We did bump up the connection limit and it is helpful but now we again notice little spike in connection. And one thing that I notice that is different is jump in sv_used value when I run command show pools during problem times\n\n\nCan anyone please explain what value of sv_used means when i run show pools;\n\n\n\nRegards\nPrabhjot\n\n\n\n\n\n\n\n\n\n\nHere is some more information\n \npool_mode | transaction\n \nWe have transactional pooling and our application is set up in such a way that we have one query per transaction. We have set default pool size to 100.\n\n \nThis is output . As you guys can see active connection are 100 and 224 are waiting. We are planning to move default pool size to 250. 
Please suggest if you guys think otherwise\n \npgbouncer=# show pools;\ndatabase | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait\n-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------\npgbouncer | pgbouncer | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0\nsite | feature | 418 | 0 | 20 | 17 | 0 | 0 | 0 | 0\nsite | service | 621 | 224 | 100 | 0 | 0 | 0 | 0 | 0\nsite | zabbix | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n \n\nPrabhjot Singh\nDatabase Administrator\n \nCLASSMATES\n1501 4th Ave., Suite 400\nSeattle, WA 98101\n206.301.4937 o\n206.301.5701 f\n\n \n\n\nFrom: Sheena, Prabhjot \nSent: Friday, June 12, 2015 10:57 AM\nTo: '[email protected]'; '[email protected]'\nSubject: pg bouncer issue what does sv_used column means\n\n\n \nGuys we see spike in pg bouncer during the peak hours and that was slowing down the application. We did bump up the connection limit and it is helpful but now we again notice little spike in connection. And one thing that I notice that\n is different is jump in sv_used value when I run command show pools during problem times\n\n \n \nCan anyone please explain what value of sv_used means when i run show pools;\n \n \n\nRegards\nPrabhjot",
"msg_date": "Fri, 12 Jun 2015 21:37:36 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg bouncer issue what does sv_used column means"
},
{
"msg_contents": "\nPlease do not cross-post on the PostgreSQL lists. Pick the most \nappropriate list to post to and just post there.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Jun 2015 17:49:48 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg bouncer issue what does sv_used column means"
},
{
"msg_contents": "On Fri, Jun 12, 2015 at 09:37:36PM +0000, Sheena, Prabhjot wrote:\n> Here is some more information\n> \n> pool_mode | transaction\n> \n> We have transactional pooling and our application is set up in such a way that we have one query per transaction. We have set default pool size to 100.\n> \n> This is output . As you guys can see active connection are 100 and 224 are waiting. We are planning to move default pool size to 250. Please suggest if you guys think otherwise\n> \n> pgbouncer=# show pools;\n> database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait\n> -----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------\n> pgbouncer | pgbouncer | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n> site | feature | 418 | 0 | 20 | 17 | 0 | 0 | 0 | 0\n> site | service | 621 | 224 | 100 | 0 | 0 | 0 | 0 | 0\n> site | zabbix | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n> \n> Prabhjot Singh\n> Database Administrator\n> \n> CLASSMATES\n> 1501 4th Ave., Suite 400\n> Seattle, WA 98101\n> 206.301.4937 o\n> 206.301.5701 f\n> \n> From: Sheena, Prabhjot\n> Sent: Friday, June 12, 2015 10:57 AM\n> To: '[email protected]'; '[email protected]'\n> Subject: pg bouncer issue what does sv_used column means\n> \n> Guys we see spike in pg bouncer during the peak hours and that was slowing down the application. We did bump up the connection limit and it is helpful but now we again notice little spike in connection. And one thing that I notice that is different is jump in sv_used value when I run command show pools during problem times\n> \n> \n> Can anyone please explain what value of sv_used means when i run show pools;\n> \n> \n> \n> Regards\n> Prabhjot\n> \n\nHi Parbhjot,\n\nThe spike in pgbouncer during peak hours just indicates that you are busier then. How\nmany sv_active do you have in non-peak hours? What kind of system is this on? 
I suspect\nthat your hardware cannot actually handle 100 simultaneous processes at once and if you\nincrease that to 250 processes there is a good likelihood that your system response\nwill get even worse. One to two times the number of CPUs is typical for peak performance\nthroughput. Are you using a 50-core system? What do the I/O stats look like? You may be\nI/O limited.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Jun 2015 16:57:03 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg bouncer issue what does sv_used column means"
}
] |
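Ken's sizing rule of thumb (a per-database pool near one to two times the database server's core count, rather than raising it to 250) would look roughly like this in pgbouncer.ini; the core count is an assumption for illustration, not a value stated in the thread:

```ini
[pgbouncer]
pool_mode = transaction
default_pool_size = 32       ; ~2x cores, assuming a 16-core DB host
reserve_pool_size = 8        ; small burst headroom beyond the pool
reserve_pool_timeout = 3     ; seconds a client queues before the
                             ; reserve pool is tapped
```

With a saturated backend, queueing in pgbouncer (cl_waiting) is usually preferable to piling more active backends onto the database, so check I/O and CPU on the DB host before growing the pool.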
[
{
"msg_contents": "Hi, I am using postgresql 9.2.10 on centos 6.2, 64 bit version. The server\nhas 512 GB mem.\n\nThe jobs are mainly OLAP like. So I need larger work_mem and shared\nbuffers. From the source code, there is a constant MaxAllocSize==1GB. So, I\nwonder whether work_mem and shared buffers can exceed 2GB in the 64 bit\nLinux server?\n\nThanks and regards,\nKaijiang\n\nHi, I am using postgresql 9.2.10 on centos 6.2, 64 bit version. The server has 512 GB mem.The jobs are mainly OLAP like. So I need larger work_mem and shared buffers. From the source code, there is a constant MaxAllocSize==1GB. So, I wonder whether work_mem and shared buffers can exceed 2GB in the 64 bit Linux server?Thanks and regards,Kaijiang",
"msg_date": "Sun, 14 Jun 2015 01:27:53 +0800",
"msg_from": "Kaijiang Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Do work_mem and shared buffers have 1g or 2g limit on 64 bit linux?"
},
{
"msg_contents": "\nOn 06/13/2015 10:27 AM, Kaijiang Chen wrote:\n> Hi, I am using postgresql 9.2.10 on centos 6.2, 64 bit version. The\n> server has 512 GB mem.\n>\n> The jobs are mainly OLAP like. So I need larger work_mem and shared\n> buffers. From the source code, there is a constant MaxAllocSize==1GB.\n> So, I wonder whether work_mem and shared buffers can exceed 2GB in the\n> 64 bit Linux server?\n\nShared Buffers is not limited.\n\nWork_mem IIRC can go past 2GB but has never been proven to be effective \nafter that.\n\nIt does depend on the version you are running.\n\nJD\n\n\n>\n> Thanks and regards,\n> Kaijiang\n>\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Jun 2015 10:43:04 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do work_mem and shared buffers have 1g or 2g limit\n on 64 bit linux?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 06/13/2015 10:43 AM, Joshua D. Drake wrote:\n> \n> On 06/13/2015 10:27 AM, Kaijiang Chen wrote:\n>> Hi, I am using postgresql 9.2.10 on centos 6.2, 64 bit version.\n>> The server has 512 GB mem.\n>> \n>> The jobs are mainly OLAP like. So I need larger work_mem and\n>> shared buffers. From the source code, there is a constant\n>> MaxAllocSize==1GB. So, I wonder whether work_mem and shared\n>> buffers can exceed 2GB in the 64 bit Linux server?\n\n> Work_mem IIRC can go past 2GB but has never been proven to be\n> effective after that.\n> \n> It does depend on the version you are running.\n\nStarting with 9.4 work_mem and maintenance_work_mem can be usefully\nset to > 2 GB.\n\nI've done testing with index creation, for example, and you can set\nmaintenance_work_mem high enough (obviously depending on how much RAM\nyou have and how big the sort memory footprint is) to get the entire\nsort to happen in memory without spilling to disk. 
In some of those\ncases I saw time required to create indexes drop by a factor of 3 or\nmore...YMMV.\n\nI have not tested with large work_mem to encourage hash aggregate\nplans, but I suspect there is a lot to be gained there as well.\n\nHTH,\n\nJoe\n\n\n- -- \nJoe Conway\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.22 (GNU/Linux)\n\niQIcBAEBAgAGBQJVfHITAAoJEDfy90M199hlGvcP/ijyCsXnWZAeZSUAW4qb20YJ\nAHKn0Gl8D9mH9cfPfJeCO+60dcWINzUE6l7qOWWN8JtT6pgbRPGvQsCkx9xRzq+V\naXv/d/r5wW4g06krcootliQJ1TWnLbPBCQiqmI27HSvnEgDKmJ3kOdDji1FMrcdm\ntuBdNxppoSx0sIFMJ6Xe/brt9O8wG/a81E0lAnsyh2nncaaXba96ldIhUbKvU0ie\n7In88Rn1UYZDXnoQEtZLmF6ArdTN5dQZkyEZvNKR0CHrPVddVYXP/gMWm/XwnOu6\nk3Rg/evCY2yCyxveuQXU5AZhDFXB/VLoOQoZ5MhLxnoLCNDJrqJzymE1shsgIIji\ni8PfXkKU92/N2kxfDBGwO0LdBpjZzzgg8zMHBsk8FIpXiJvVQKtAfCxYpYkSaL8y\nL0g4Qi16s2/fFZcn1ORH23BaBlcmS1cnRWWyx/amyqPHX0v4XZvp3/kSj2jCSw+E\nV7HD8qLut4rEAxwA5AGCy+9iugZp8DKQUUNiXOYbuysAdjceAa9LzPE0BbB4kuFC\nOfOOjRstr97RyDKwRHjfGs2EnJSENGGcPdGz2HYgup0d4DlIctKww8xeSo55Khp/\nHhBjtk7rpnqqEmEeA8+N8w5Z60x4mK900Anr1xhX2x4ETTIG2g9mYkEEZL/OZRUC\nlihTXLyUhvd57/v7li5p\n=s0U8\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Jun 2015 11:10:27 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do work_mem and shared buffers have 1g or 2g limit\n on 64 bit linux?"
},
{
"msg_contents": "I've checked the source codes in postgresql 9.2.4. In function\nstatic bool\ngrow_memtuples(Tuplesortstate *state)\n\nthe codes:\n\t/*\n\t * On a 64-bit machine, allowedMem could be high enough to get us into\n\t * trouble with MaxAllocSize, too.\n\t */\n\tif ((Size) (state->memtupsize * 2) >= MaxAllocSize / sizeof(SortTuple))\n\t\treturn false;\n\nNote that MaxAllocSize == 1GB - 1\nthat means, at least for sorting, it uses at most 1GB work_mem! And\nsetting larger work_mem has no use at all...\n\nIn 9.4, they have a MemoryContextAllocHuge, which allows to allocate\nmemory with any 64-bit size. So, it improves the performance.\n\n\n\nOn 6/14/15, Joe Conway <[email protected]> wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On 06/13/2015 10:43 AM, Joshua D. Drake wrote:\n>>\n>> On 06/13/2015 10:27 AM, Kaijiang Chen wrote:\n>>> Hi, I am using postgresql 9.2.10 on centos 6.2, 64 bit version.\n>>> The server has 512 GB mem.\n>>>\n>>> The jobs are mainly OLAP like. So I need larger work_mem and\n>>> shared buffers. From the source code, there is a constant\n>>> MaxAllocSize==1GB. So, I wonder whether work_mem and shared\n>>> buffers can exceed 2GB in the 64 bit Linux server?\n>\n>> Work_mem IIRC can go past 2GB but has never been proven to be\n>> effective after that.\n>>\n>> It does depend on the version you are running.\n>\n> Starting with 9.4 work_mem and maintenance_work_mem can be usefully\n> set to > 2 GB.\n>\n> I've done testing with index creation, for example, and you can set\n> maintenance_work_mem high enough (obviously depending on how much RAM\n> you have and how big the sort memory footprint is) to get the entire\n> sort to happen in memory without spilling to disk. 
In some of those\n> cases I saw time required to create indexes drop by a factor of 3 or\n> more...YMMV.\n>\n> I have not tested with large work_mem to encourage hash aggregate\n> plans, but I suspect there is a lot to be gained there as well.\n>\n> HTH,\n>\n> Joe\n>\n>\n> - --\n> Joe Conway\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v2.0.22 (GNU/Linux)\n>\n> iQIcBAEBAgAGBQJVfHITAAoJEDfy90M199hlGvcP/ijyCsXnWZAeZSUAW4qb20YJ\n> AHKn0Gl8D9mH9cfPfJeCO+60dcWINzUE6l7qOWWN8JtT6pgbRPGvQsCkx9xRzq+V\n> aXv/d/r5wW4g06krcootliQJ1TWnLbPBCQiqmI27HSvnEgDKmJ3kOdDji1FMrcdm\n> tuBdNxppoSx0sIFMJ6Xe/brt9O8wG/a81E0lAnsyh2nncaaXba96ldIhUbKvU0ie\n> 7In88Rn1UYZDXnoQEtZLmF6ArdTN5dQZkyEZvNKR0CHrPVddVYXP/gMWm/XwnOu6\n> k3Rg/evCY2yCyxveuQXU5AZhDFXB/VLoOQoZ5MhLxnoLCNDJrqJzymE1shsgIIji\n> i8PfXkKU92/N2kxfDBGwO0LdBpjZzzgg8zMHBsk8FIpXiJvVQKtAfCxYpYkSaL8y\n> L0g4Qi16s2/fFZcn1ORH23BaBlcmS1cnRWWyx/amyqPHX0v4XZvp3/kSj2jCSw+E\n> V7HD8qLut4rEAxwA5AGCy+9iugZp8DKQUUNiXOYbuysAdjceAa9LzPE0BbB4kuFC\n> OfOOjRstr97RyDKwRHjfGs2EnJSENGGcPdGz2HYgup0d4DlIctKww8xeSo55Khp/\n> HhBjtk7rpnqqEmEeA8+N8w5Z60x4mK900Anr1xhX2x4ETTIG2g9mYkEEZL/OZRUC\n> lihTXLyUhvd57/v7li5p\n> =s0U8\n> -----END PGP SIGNATURE-----\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Jun 2015 11:44:42 +0800",
"msg_from": "Kaijiang Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do work_mem and shared buffers have 1g or 2g limit on\n 64 bit linux?"
},
{
"msg_contents": "\n\nOn 06/15/15 05:44, Kaijiang Chen wrote:\n> I've checked the source codes in postgresql 9.2.4. In function\n> static bool\n> grow_memtuples(Tuplesortstate *state)\n>\n> the codes:\n> \t/*\n> \t * On a 64-bit machine, allowedMem could be high enough to get us into\n> \t * trouble with MaxAllocSize, too.\n> \t */\n> \tif ((Size) (state->memtupsize * 2) >= MaxAllocSize / sizeof(SortTuple))\n> \t\treturn false;\n>\n> Note that MaxAllocSize == 1GB - 1\n> that means, at least for sorting, it uses at most 1GB work_mem! And\n> setting larger work_mem has no use at all...\n\nThat's not true. This only limits the size of 'memtuples' array, which \nonly stores pointer to the actual tuple, and some additional data. The \ntuple itself is not counted against MaxAllocSize directly. The SortTuple \nstructure has ~24B which means you can track 33M tuples in that array, \nand the tuples may take a lot more space.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Jun 2015 16:57:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do work_mem and shared buffers have 1g or 2g limit\n on 64 bit linux?"
}
] |
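Tomas's correction can be checked with quick arithmetic: MaxAllocSize (1 GB - 1) caps only the memtuples pointer array, and at roughly 24-32 bytes per SortTuple entry that array still tracks tens of millions of tuples:

```sql
SELECT 1073741823 / 24 AS entries_at_24_bytes,  -- ~44.7 million
       1073741823 / 32 AS entries_at_32_bytes;  -- ~33.5 million
-- The 32-byte case matches the "33M tuples" figure above; the tuples
-- those entries point to are not counted against the 1 GB limit, so
-- work_mem above 1 GB is still used for sorts even before 9.4's
-- MemoryContextAllocHuge.
```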
[
{
"msg_contents": "Guys\n I have an issue going on with PGBOUNCER which is slowing down the site\n\nPGBOUNCER VERSION: pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16 cpu) 98GB RAM\nDATABASE VERION: postgresql 9.3\n\nWhen the total client connections to pgbouncer are close to 1000, site application works fine but when the total client connections crosses 1150 site application starts showing slowness.\n\nHere is an example of output\n\npostgres@symds-pg:~ $ netstat -atnp | grep 5432 | wc\n(Not all processes could be identified, non-owned process info\nwill not be shown, you would have to be root to see it all.)\n 960 6720 104640\n\n\nAs you can see total connections are like 960 right now my site application is working fine. When connections crosses 1150 and even though I see lot of available connections coz my default_pool_size is set high to 250 but still the application gets slow. Database performance on the other end is great with no slow running queries or anything. So the only place I can think the issue is at PGBOUNCER end.\n\npgbouncer=# show config;\n key | value | changeable\n---------------------------+----------------------------------+------------\njob_name | pgbouncer | no\nconffile | /etc/pgbouncer/pgbouncer.ini | yes\nlogfile | /var/log/pgbouncer.log | yes\npidfile | /var/run/pgbouncer/pgbouncer.pid | no\nlisten_addr | * | no\nlisten_port | 5432 | no\nlisten_backlog | 128 | no\nunix_socket_dir | /tmp | no\nunix_socket_mode | 511 | no\nunix_socket_group | | no\nauth_type | md5 | yes\nauth_file | /etc/pgbouncer/userlist.txt | yes\npool_mode | transaction | yes\nmax_client_conn | 3000 | yes\ndefault_pool_size | 250 | yes\nmin_pool_size | 0 | yes\nreserve_pool_size | 0 | yes\nreserve_pool_timeout | 5 | yes\nsyslog | 0 | yes\nsyslog_facility | daemon | yes\nsyslog_ident | pgbouncer | yes\nuser | | no\nautodb_idle_timeout | 3600 | yes\nserver_reset_query | | yes\nserver_check_query | select 1 | yes\nserver_check_delay | 30 | yes\nquery_timeout | 0 | 
yes\nquery_wait_timeout | 0 | yes\nclient_idle_timeout | 0 | yes\nclient_login_timeout | 60 | yes\nidle_transaction_timeout | 0 | yes\nserver_lifetime | 3600 | yes\nserver_idle_timeout | 600 | yes\nserver_connect_timeout | 15 | yes\nserver_login_retry | 15 | yes\nserver_round_robin | 0 | yes\nsuspend_timeout | 10 | yes\nignore_startup_parameters | extra_float_digits | yes\ndisable_pqexec | 0 | no\ndns_max_ttl | 15 | yes\ndns_zone_check_period | 0 | yes\nmax_packet_size | 2147483647 | yes\npkt_buf | 2048 | no\nsbuf_loopcnt | 5 | yes\ntcp_defer_accept | 1 | yes\ntcp_socket_buffer | 0 | yes\ntcp_keepalive | 1 | yes\ntcp_keepcnt | 0 | yes\ntcp_keepidle | 0 | yes\ntcp_keepintvl | 0 | yes\nverbose | 0 | yes\nadmin_users | postgres | yes\nstats_users | stats, postgres | yes\nstats_period | 60 | yes\nlog_connections | 1 | yes\nlog_disconnections | 1 | yes\nlog_pooler_errors | 1 | yes\n\n\nThanks\nPrabhjot",
"msg_date": "Thu, 18 Jun 2015 17:09:10 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
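{
  "msg_contents": "[Editor's note] Before bumping anything, it is worth verifying the descriptor pressure the replies below suspect. A minimal sketch (Linux `/proc` only; the `pgrep` lookup in the comment is an assumption, and the sketch uses its own shell PID so it runs anywhere):\n\n```shell\n# Sketch: is a process close to its open-files ceiling? (Linux /proc only.)\n# For pgbouncer you would use something like pid=$(pgrep -o pgbouncer);\n# here we inspect our own shell so the example is self-contained.\npid=$$\nnfds=$(ls \"/proc/$pid/fd\" | wc -l)                               # fds currently open\nlimit=$(awk '/^Max open files/ {print $4}' \"/proc/$pid/limits\")  # soft limit\necho \"open=$nfds soft_limit=$limit\"\nif [ \"$nfds\" -ge $((limit * 9 / 10)) ]; then\n    echo \"WARNING: within 10% of the soft open-files limit\"\nfi\n```\n\nNote that an interactive `ulimit -a` can differ from the limits the daemon actually inherited at start, which is why reading `/proc/<pid>/limits` (as suggested later in this thread) is more trustworthy.",
  "msg_from_op": false
},

```shell
# Sketch: is a process close to its open-files ceiling? (Linux /proc only.)
# For pgbouncer you would use something like pid=$(pgrep -o pgbouncer);
# here we inspect our own shell so the example is self-contained.
pid=$$
nfds=$(ls "/proc/$pid/fd" | wc -l)                               # fds currently open
limit=$(awk '/^Max open files/ {print $4}' "/proc/$pid/limits")  # soft limit
echo "open=$nfds soft_limit=$limit"
if [ "$nfds" -ge $((limit * 9 / 10)) ]; then
    echo "WARNING: within 10% of the soft open-files limit"
fi
```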
{
"msg_contents": "On Thu, Jun 18, 2015 at 05:09:10PM +0000, Sheena, Prabhjot wrote:\n> Guys\n> I have an issue going on with PGBOUNCER which is slowing down the site\n> \n> PGBOUNCER VERSION: pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16 cpu) 98GB RAM\n> DATABASE VERION: postgresql 9.3\n> \n> When the total client connections to pgbouncer are close to 1000, site application works fine but when the total client connections crosses 1150 site application starts showing slowness.\n> \n> Here is an example of output\n> \n> postgres@symds-pg:~ $ netstat -atnp | grep 5432 | wc\n> (Not all processes could be identified, non-owned process info\n> will not be shown, you would have to be root to see it all.)\n> 960 6720 104640\n> \n> \n> As you can see total connections are like 960 right now my site application is working fine. When connections crosses 1150 and even though I see lot of available connections coz my default_pool_size is set high to 250 but still the application gets slow. Database performance on the other end is great with no slow running queries or anything. So the only place I can think the issue is at PGBOUNCER end.\n> \n\nHi Prabhjot,\n\nThis is classic behavior when you have a 1024 file limit. When you are below that\nnumber, it works fine. Above that number, you must wait for a connection to close\nand exit before you can connect, which will cause a delay. See what ulimit has to\nsay?\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 18 Jun 2015 12:16:23 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
{
"msg_contents": "My guess is you are hitting an open file ulimit. Add ulimit -n 50000 to\nthe start of whatever you use to start pgbouncer (init script, etc..)\n\nOn Thu, Jun 18, 2015 at 1:10 PM Sheena, Prabhjot <\[email protected]> wrote:\n\n> Guys\n>\n> I have an issue going on with PGBOUNCER which is slowing down the\n> site\n>\n>\n>\n> PGBOUNCER VERSION: pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16\n> cpu) 98GB RAM\n>\n> DATABASE VERION: postgresql 9.3\n>\n>\n>\n> When the total client connections to pgbouncer are close to 1000, site\n> application works fine but when the total client connections crosses 1150\n> site application starts showing slowness.\n>\n>\n>\n> Here is an example of output\n>\n>\n>\n> postgres@symds-pg:~ $ netstat -atnp | grep 5432 | wc\n>\n> (Not all processes could be identified, non-owned process info\n>\n> will not be shown, you would have to be root to see it all.)\n>\n> *960* 6720 104640\n>\n>\n>\n>\n>\n> As you can see total connections are like 960 right now my site\n> application is working fine. When connections crosses 1150 and even though\n> I see lot of available connections coz my default_pool_size is set high to\n> 250 but still the application gets slow. Database performance on the\n> other end is great with no slow running queries or anything. 
So the only\n> place I can think the issue is at PGBOUNCER end.\n>\n>\n>\n> pgbouncer=# show config;\n>\n> key | value | changeable\n>\n> ---------------------------+----------------------------------+------------\n>\n> job_name | pgbouncer | no\n>\n> conffile | /etc/pgbouncer/pgbouncer.ini | yes\n>\n> logfile | /var/log/pgbouncer.log | yes\n>\n> pidfile | /var/run/pgbouncer/pgbouncer.pid | no\n>\n> listen_addr | * | no\n>\n> listen_port | 5432 | no\n>\n> listen_backlog | 128 | no\n>\n> unix_socket_dir | /tmp | no\n>\n> unix_socket_mode | 511 | no\n>\n> unix_socket_group | | no\n>\n> auth_type | md5 | yes\n>\n> auth_file | /etc/pgbouncer/userlist.txt | yes\n>\n> pool_mode | transaction | yes\n>\n> max_client_conn | 3000 | yes\n>\n> default_pool_size | 250 | yes\n>\n> min_pool_size | 0 | yes\n>\n> reserve_pool_size | 0 | yes\n>\n> reserve_pool_timeout | 5 | yes\n>\n> syslog | 0 | yes\n>\n> syslog_facility | daemon | yes\n>\n> syslog_ident | pgbouncer | yes\n>\n> user | | no\n>\n> autodb_idle_timeout | 3600 | yes\n>\n> server_reset_query | | yes\n>\n> server_check_query | select 1 | yes\n>\n> server_check_delay | 30 | yes\n>\n> query_timeout | 0 | yes\n>\n> query_wait_timeout | 0 | yes\n>\n> client_idle_timeout | 0 | yes\n>\n> client_login_timeout | 60 | yes\n>\n> idle_transaction_timeout | 0 | yes\n>\n> server_lifetime | 3600 | yes\n>\n> server_idle_timeout | 600 | yes\n>\n> server_connect_timeout | 15 | yes\n>\n> server_login_retry | 15 | yes\n>\n> server_round_robin | 0 | yes\n>\n> suspend_timeout | 10 | yes\n>\n> ignore_startup_parameters | extra_float_digits | yes\n>\n> disable_pqexec | 0 | no\n>\n> dns_max_ttl | 15 | yes\n>\n> dns_zone_check_period | 0 | yes\n>\n> max_packet_size | 2147483647 | yes\n>\n> pkt_buf | 2048 | no\n>\n> sbuf_loopcnt | 5 | yes\n>\n> tcp_defer_accept | 1 | yes\n>\n> tcp_socket_buffer | 0 | yes\n>\n> tcp_keepalive | 1 | yes\n>\n> tcp_keepcnt | 0 | yes\n>\n> tcp_keepidle | 0 | yes\n>\n> tcp_keepintvl | 0 | yes\n>\n> verbose | 0 | 
yes\n>\n> admin_users | postgres | yes\n>\n> stats_users | stats, postgres | yes\n>\n> stats_period | 60 | yes\n>\n> log_connections | 1 | yes\n>\n> log_disconnections | 1 | yes\n>\n> log_pooler_errors | 1 | yes\n>\n>\n>\n>\n>\n> Thanks\n>\n> Prabhjot\n>\n>",
"msg_date": "Thu, 18 Jun 2015 17:17:36 +0000",
"msg_from": "Will Platnick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
{
"msg_contents": "Here is the output of OS limits\n\npostgres@symds-pg:~ $ ulimit -a\n\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 790527\nmax locked memory (kbytes, -l) 32\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 4096\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 10240\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 16384\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n\n\nThanks\nPrabhjot\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] \nSent: Thursday, June 18, 2015 10:16 AM\nTo: Sheena, Prabhjot\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)\n\nOn Thu, Jun 18, 2015 at 05:09:10PM +0000, Sheena, Prabhjot wrote:\n> Guys\n> I have an issue going on with PGBOUNCER which is slowing down \n> the site\n> \n> PGBOUNCER VERSION: pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16 cpu) 98GB RAM\n> DATABASE VERION: postgresql 9.3\n> \n> When the total client connections to pgbouncer are close to 1000, site application works fine but when the total client connections crosses 1150 site application starts showing slowness.\n> \n> Here is an example of output\n> \n> postgres@symds-pg:~ $ netstat -atnp | grep 5432 | wc (Not all \n> processes could be identified, non-owned process info will not be \n> shown, you would have to be root to see it all.)\n> 960 6720 104640\n> \n> \n> As you can see total connections are like 960 right now my site application is working fine. When connections crosses 1150 and even though I see lot of available connections coz my default_pool_size is set high to 250 but still the application gets slow. Database performance on the other end is great with no slow running queries or anything. 
So the only place I can think the issue is at PGBOUNCER end.\n> \n\nHi Prabhjot,\n\nThis is classic behavior when you have a 1024 file limit. When you are below that number, it work fine. Above that number, you must wait for a connection to close and exit before you can connect which will cause a delay. See what ulimit has to say?\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Jun 2015 17:41:01 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
{
"msg_contents": "On Thu, Jun 18, 2015 at 05:41:01PM +0000, Sheena, Prabhjot wrote:\n> Here is the output of OS limits\n> \n> postgres@symds-pg:~ $ ulimit -a\n> \n> core file size (blocks, -c) 0\n> data seg size (kbytes, -d) unlimited\n> scheduling priority (-e) 0\n> file size (blocks, -f) unlimited\n> pending signals (-i) 790527\n> max locked memory (kbytes, -l) 32\n> max memory size (kbytes, -m) unlimited\n> open files (-n) 4096\n> pipe size (512 bytes, -p) 8\n> POSIX message queues (bytes, -q) 819200\n> real-time priority (-r) 0\n> stack size (kbytes, -s) 10240\n> cpu time (seconds, -t) unlimited\n> max user processes (-u) 16384\n> virtual memory (kbytes, -v) unlimited\n> file locks (-x) unlimited\n> \n> \n> Thanks\n> Prabhjot\n> \n\nI would bump your open files as was suggested in your pgbouncer start\nscript.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 18 Jun 2015 13:10:23 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
{
"msg_contents": "Hi Ken/ Will\n \n I have checked the ulimit value and we are nowhere hitting the max 4096 that we have currently set. Is there any other explanation why we should be thinking of bumping it to like ulimit -n 50000 ( Add ulimit -n 50000 to the start of whatever you use to start pgbouncer (init script, etc..)) even though we are not reaching 4096 max value\n\nRegards\nPrabhjot Singh\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] \nSent: Thursday, June 18, 2015 11:10 AM\nTo: Sheena, Prabhjot\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)\n\nOn Thu, Jun 18, 2015 at 05:41:01PM +0000, Sheena, Prabhjot wrote:\n> Here is the output of OS limits\n> \n> postgres@symds-pg:~ $ ulimit -a\n> \n> core file size (blocks, -c) 0\n> data seg size (kbytes, -d) unlimited\n> scheduling priority (-e) 0\n> file size (blocks, -f) unlimited\n> pending signals (-i) 790527\n> max locked memory (kbytes, -l) 32\n> max memory size (kbytes, -m) unlimited\n> open files (-n) 4096\n> pipe size (512 bytes, -p) 8\n> POSIX message queues (bytes, -q) 819200\n> real-time priority (-r) 0\n> stack size (kbytes, -s) 10240\n> cpu time (seconds, -t) unlimited\n> max user processes (-u) 16384\n> virtual memory (kbytes, -v) unlimited\n> file locks (-x) unlimited\n> \n> \n> Thanks\n> Prabhjot\n> \n\nI would bump your open files as was suggested in your pgbouncer start script.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Jun 2015 19:19:13 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
{
"msg_contents": "\"Sheena, Prabhjot\" <[email protected]> writes:\n\n> Hi Ken/ Will\n> \n> I have checked the ulimit value and we are nowhere hitting the max\n> 4096 that we have currently set. Is there any other explanation why\n> we should be thinking of bumping it to like ulimit -n 50000 ( Add\n> ulimit -n 50000 to the start of whatever you use to start pgbouncer\n> (init script, etc..)) even though we are not reaching 4096 max value\n\nIf I can assume you're running on linux, best you get limits readout\nfrom...\n\n/proc/$PID-of-bouncer-process/limits\n\nBest not to trust that run time env of interactive shell is same as\nwhere bouncer launched from.\n\nFWIW\n\n\n> Regards\n> Prabhjot Singh\n>\n>\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] \n> Sent: Thursday, June 18, 2015 11:10 AM\n> To: Sheena, Prabhjot\n> Cc: [email protected]; [email protected]\n> Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)\n>\n> On Thu, Jun 18, 2015 at 05:41:01PM +0000, Sheena, Prabhjot wrote:\n>> Here is the output of OS limits\n>> \n>> postgres@symds-pg:~ $ ulimit -a\n>> \n>> core file size (blocks, -c) 0\n>> data seg size (kbytes, -d) unlimited\n>> scheduling priority (-e) 0\n>> file size (blocks, -f) unlimited\n>> pending signals (-i) 790527\n>> max locked memory (kbytes, -l) 32\n>> max memory size (kbytes, -m) unlimited\n>> open files (-n) 4096\n>> pipe size (512 bytes, -p) 8\n>> POSIX message queues (bytes, -q) 819200\n>> real-time priority (-r) 0\n>> stack size (kbytes, -s) 10240\n>> cpu time (seconds, -t) unlimited\n>> max user processes (-u) 16384\n>> virtual memory (kbytes, -v) unlimited\n>> file locks (-x) unlimited\n>> \n>> \n>> Thanks\n>> Prabhjot\n>> \n>\n> I would bump your open files as was suggested in your pgbouncer start script.\n>\n> Regards,\n> Ken\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-general mailing list ([email 
protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 18 Jun 2015 14:46:36 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Sheena, Prabhjot\nSent: Thursday, June 18, 2015 3:19 PM\nTo: [email protected]; Will Platnick\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)\n\nHi Ken/ Will\n \n I have checked the ulimit value and we are nowhere hitting the max 4096 that we have currently set. Is there any other explanation why we should be thinking of bumping it to like ulimit -n 50000 ( Add ulimit -n 50000 to the start of whatever you use to start pgbouncer (init script, etc..)) even though we are not reaching 4096 max value\n\nRegards\nPrabhjot Singh\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] \nSent: Thursday, June 18, 2015 11:10 AM\nTo: Sheena, Prabhjot\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)\n\nOn Thu, Jun 18, 2015 at 05:41:01PM +0000, Sheena, Prabhjot wrote:\n> Here is the output of OS limits\n> \n> postgres@symds-pg:~ $ ulimit -a\n> \n> core file size (blocks, -c) 0\n> data seg size (kbytes, -d) unlimited\n> scheduling priority (-e) 0\n> file size (blocks, -f) unlimited\n> pending signals (-i) 790527\n> max locked memory (kbytes, -l) 32\n> max memory size (kbytes, -m) unlimited\n> open files (-n) 4096\n> pipe size (512 bytes, -p) 8\n> POSIX message queues (bytes, -q) 819200\n> real-time priority (-r) 0\n> stack size (kbytes, -s) 10240\n> cpu time (seconds, -t) unlimited\n> max user processes (-u) 16384\n> virtual memory (kbytes, -v) unlimited\n> file locks (-x) unlimited\n> \n> \n> Thanks\n> Prabhjot\n> \n\nI would bump your open files as was suggested in your pgbouncer start script.\n\nRegards,\nKen\n\n---\n\nWhy are you so sure that it is PgBouncer causing slowness?\n\nYou, said, bouncer pool_size is set to 250. 
How many cores do you have on your db server?\n\nAlso, why are you running bouncer on a separate machine? It is very \"light-weight\", so running it on the db server wouldn't require much additional resource, but will eliminate some network traffic that you have with the current configuration.\n\nRegards,\nIgor Neyman\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Jun 2015 19:59:25 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
{
"msg_contents": "On Thu, Jun 18, 2015 at 07:19:13PM +0000, Sheena, Prabhjot wrote:\n> Hi Ken/ Will\n> \n> I have checked the ulimit value and we are nowhere hitting the max 4096 that we have currently set. Is there any other explanation why we should be thinking of bumping it to like ulimit -n 50000 ( Add ulimit -n 50000 to the start of whatever you use to start pgbouncer (init script, etc..)) even though we are not reaching 4096 max value\n> \n> Regards\n> Prabhjot Singh\n> \n\nHi,\n\nTry attaching to the pgbouncer with strace and see if you are getting any particular\nerrors. Do you have a /etc/security/limits.d directory? And if so, what is in it?\nWe found a nice default ulimit of 1024 for all non-root users. :(\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Jun 2015 15:02:03 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
},
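{
  "msg_contents": "[Editor's note] If the daemon's limit does turn out to be the 4096 ceiling, the persistent fix on an EL6-style host is usually a pam_limits drop-in rather than an interactive `ulimit`. A hedged sketch, assuming pgbouncer runs as the `postgres` user (the file name and the 50000 value are illustrative, echoing Will's earlier suggestion):\n\n```\n# /etc/security/limits.d/90-pgbouncer.conf   (hypothetical file name)\npostgres  soft  nofile  50000\npostgres  hard  nofile  50000\n```\n\nA daemon started by an init path that skips PAM will not pick these up, so the `ulimit -n 50000` in the init script before launching pgbouncer is still a sensible belt-and-braces addition; the process must be restarted for either change to take effect.",
  "msg_from_op": false
},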
{
"msg_contents": "Here is the output of pid\n\npostgres@symds-pg:~ $ cat /proc/15610/limits\nLimit Soft Limit Hard Limit Units\nMax cpu time unlimited unlimited seconds\nMax file size unlimited unlimited bytes\nMax data size unlimited unlimited bytes\nMax stack size 10485760 unlimited bytes\nMax core file size 0 0 bytes\nMax resident set unlimited unlimited bytes\nMax processes 16384 16384 processes\nMax open files 4096 4096 files\nMax locked memory 32768 32768 bytes\nMax address space unlimited unlimited bytes\nMax file locks unlimited unlimited locks\nMax pending signals 790527 790527 signals\nMax msgqueue size 819200 819200 bytes\nMax nice priority 0 0\nMax realtime priority 0 0\n\n\nThanks\nPrabhjot Singh\n\n-----Original Message-----\nFrom: Jerry Sievers [mailto:[email protected]] \nSent: Thursday, June 18, 2015 12:47 PM\nTo: Sheena, Prabhjot\nCc: [email protected]; Will Platnick; [email protected]; [email protected]\nSubject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)\n\n\"Sheena, Prabhjot\" <[email protected]> writes:\n\n> Hi Ken/ Will\n> \n> I have checked the ulimit value and we are nowhere hitting the max\n> 4096 that we have currently set. 
Is there any other explanation why\n> we should be thinking of bumping it to like ulimit -n 50000 ( Add\n> ulimit -n 50000 to the start of whatever you use to start pgbouncer\n> (init script, etc..)) even though we are not reaching 4096 max value\n\nIf I can assume you're running on linux, best you get limits readout from...\n\n/proc/$PID-of-bouncer-process/limits\n\nBest not to trust that run time env of interactive shell is same as where bouncer launched from.\n\nFWIW\n\n\n> Regards\n> Prabhjot Singh\n>\n>\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Thursday, June 18, 2015 11:10 AM\n> To: Sheena, Prabhjot\n> Cc: [email protected]; [email protected]\n> Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the \n> site)\n>\n> On Thu, Jun 18, 2015 at 05:41:01PM +0000, Sheena, Prabhjot wrote:\n>> Here is the output of OS limits\n>> \n>> postgres@symds-pg:~ $ ulimit -a\n>> \n>> core file size (blocks, -c) 0\n>> data seg size (kbytes, -d) unlimited\n>> scheduling priority (-e) 0\n>> file size (blocks, -f) unlimited\n>> pending signals (-i) 790527\n>> max locked memory (kbytes, -l) 32\n>> max memory size (kbytes, -m) unlimited\n>> open files (-n) 4096\n>> pipe size (512 bytes, -p) 8\n>> POSIX message queues (bytes, -q) 819200\n>> real-time priority (-r) 0\n>> stack size (kbytes, -s) 10240\n>> cpu time (seconds, -t) unlimited\n>> max user processes (-u) 16384\n>> virtual memory (kbytes, -v) unlimited\n>> file locks (-x) unlimited\n>> \n>> \n>> Thanks\n>> Prabhjot\n>> \n>\n> I would bump your open files as was suggested in your pgbouncer start script.\n>\n> Regards,\n> Ken\n\n--\nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 18 Jun 2015 20:26:25 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)"
}
],
[
{
"msg_contents": "Hi,\n\nI have a table with an irregular distribution based on a foreign key, as you\ncan see at the end of the e-mail.\n\nSometimes, in simple joins with other tables with the same id_camada\n(but not the table that owns the foreign key), the planner chooses a seq scan\ninstead of using the index on id_camada.\nIf I do the join also using the table that owns the foreign key, then the\nindex is used.\n\nIn the first case, queries with a seq scan take about 30 seconds and with the\nindex take about 40 ms.\n\nWhen I increase the statistics of the column id_camada to 900, then\neverything works using the index in both cases.\nMy question is: is there a way to discover the best statistics number for\nthis column, or is it a process of trial and error?\n\nid_camada;count(*)\n123;10056782\n83;311471\n42;11316\n367;5564\n163;3362\n257;2100\n89;1725\n452;1092\n157;904\n84;883\n233;853\n271;638\n272;620\n269;548\n270;485\n455;437\n255;427\n32;371\n39;320\n31;309\n411;291\n91;260\n240;251\n162;250\n444;247\n165;227\n36;215\n236;193\n54;185\n53;175\n76;170\n412;153\n159;140\n160;139\n105;130\n59;117\n60;117\n267;115\n238;112\n279;111\n465;111\n5;107\n74;103\n243;98\n35;96\n68;82\n400;78\n391;75\n49;74\n124;68\n73;66\n260;64\n66;62\n168;60\n172;56\n4;54\n44;54\n384;53\n237;53\n390;52\n234;52\n387;51\n378;51\n148;50\n64;50\n379;47\n56;46\n52;46\n377;46\n443;46\n253;45\n97;45\n280;43\n77;43\n2;40\n376;39\n45;38\n235;36\n231;36\n413;36\n241;36\n232;34\n388;32\n101;32\n249;32\n99;32\n100;32\n69;32\n125;31\n166;30\n65;29\n433;29\n149;28\n96;27\n71;27\n98;26\n67;26\n386;25\n50;24\n21;24\n122;24\n47;24\n291;22\n287;22\n404;22\n70;22\n48;21\n63;21\n153;18\n13;18\n46;18\n262;18\n43;17\n72;17\n161;17\n344;15\n29;15\n439;14\n104;14\n119;13\n456;12\n434;12\n55;10\n3;10\n345;10\n286;10\n15;10\n141;9\n169;9\n258;9\n18;9\n158;9\n14;8\n94;8\n463;8\n218;8\n92;8\n170;8\n58;7\n17;7\n19;7\n6;7\n414;7\n10;7\n7;7\n22;7\n90;6\n430;6\n27;6\n195;6\n16;6\n223;6\n11;6\n242;6\n9;6\n26;5\n57;5\n82;5\n
451;5\n61;5\n8;5\n445;5\n140;5\n431;5\n197;5\n20;5\n362;5\n24;5\n385;4\n23;4\n25;4\n62;4\n134;4\n150;4\n215;4\n217;4\n219;4\n220;4\n222;4\n224;4\n244;4\n284;4\n318;4\n389;4\n415;4\n449;4\n461;4\n93;3\n209;3\n136;3\n299;3\n188;3\n319;3\n264;3\n95;3\n337;3\n1;3\n221;3\n310;3\n143;2\n320;2\n321;2\n322;2\n324;2\n210;2\n302;2\n438;2\n303;2\n239;2\n330;2\n196;2\n447;2\n332;2\n333;2\n334;2\n307;2\n308;2\n309;2\n340;2\n341;2\n171;2\n190;2\n313;2\n193;2\n154;2\n294;2\n295;2\n250;2\n144;2\n311;1\n312;1\n314;1\n315;1\n316;1\n317;1\n51;1\n323;1\n325;1\n326;1\n327;1\n328;1\n329;1\n331;1\n335;1\n336;1\n338;1\n339;1\n342;1\n343;1\n186;1\n185;1\n354;1\n355;1\n356;1\n357;1\n359;1\n360;1\n361;1\n184;1\n363;1\n364;1\n366;1\n183;1\n369;1\n370;1\n182;1\n181;1\n180;1\n179;1\n380;1\n381;1\n382;1\n383;1\n178;1\n177;1\n176;1\n174;1\n30;1\n173;1\n392;1\n393;1\n155;1\n405;1\n407;1\n409;1\n151;1\n145;1\n12;1\n425;1\n138;1\n135;1\n103;1\n435;1\n437;1\n102;1\n440;1\n441;1\n442;1\n80;1\n448;1\n28;1\n226;1\n227;1\n228;1\n230;1\n225;1\n214;1\n216;1\n213;1\n212;1\n211;1\n208;1\n207;1\n206;1\n78;1\n245;1\n205;1\n204;1\n254;1\n203;1\n202;1\n201;1\n200;1\n199;1\n265;1\n198;1\n268;1\n194;1\n192;1\n273;1\n274;1\n275;1\n278;1\n191;1\n282;1\n75;1\n285;1\n189;1\n288;1\n289;1\n290;1\n187;1\n293;1\n296;1\n297;1\n300;1\n304;1\n305;1\n306;1",
"msg_date": "Thu, 18 Jun 2015 14:52:32 -0300",
"msg_from": "Irineu Ruiz <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to calculate statistics for one column"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Irineu Ruiz\r\nSent: Thursday, June 18, 2015 1:53 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] How to calculate statistics for one column\r\n\r\nHi,\r\n\r\nI have a table with irregular distribution based in a foreign key, like you can see in the end of the e-mail.\r\n\r\nSometimes, in simples joins with another tables with the same id_camada (but not the table owner of the foreign key, the planner chooses a seq scan instead of use the index with id_camada.\r\nIf I do the join using also de table owner of the foreign key, then the index is used.\r\n\r\nIn the first case, querys with seq scan tahe about 30 seconds and with the index take about 40 ms.\r\n\r\nWhen I increase the statistics of the column id_camada to 900, then everything works using the index in both cases.\r\nMy doubt is: there is a way to discovery the best statistics number for this column or is a process of trial and error?\r\n\r\nid_camada;count(*)\r\n123;10056782\r\n83;311471\r\n42;11316\r\n367;5564\r\n163;3362\r\n257;2100\r\n89;1725\r\n452;1092\r\n157;904\r\n84;883\r\n233;853\r\n271;638\r\n272;620\r\n269;548\r\n270;485\r\n455;437\r\n255;427\r\n32;371\r\n39;320\r\n31;309\r\n411;291\r\n91;260\r\n240;251\r\n162;250\r\n444;247\r\n165;227\r\n36;215\r\n236;193\r\n54;185\r\n53;175\r\n76;170\r\n412;153\r\n159;140\r\n160;139\r\n105;130\r\n59;117\r\n60;117\r\n267;115\r\n238;112\r\n279;111\r\n465;111\r\n5;107\r\n74;103\r\n243;98\r\n35;96\r\n68;82\r\n400;78\r\n391;75\r\n49;74\r\n124;68\r\n73;66\r\n260;64\r\n66;62\r\n168;60\r\n172;56\r\n4;54\r\n44;54\r\n384;53\r\n237;53\r\n390;52\r\n234;52\r\n387;51\r\n378;51\r\n148;50\r\n64;50\r\n379;47\r\n56;46\r\n52;46\r\n377;46\r\n443;46\r\n253;45\r\n97;45\r\n280;43\r\n77;43\r\n2;40\r\n376;39\r\n45;38\r\n235;36\r\n231;36\r\n413;36\r\n241;36\r\n232;34\r\n388;32\r\n101;32\r\n249;32\r\n99;32\r\n100;32\r\n69;32\r\n125;31\r\n166;30\r\n65;29\r\n433;29\r\n149;28\r\n96;27\r\n71;27\r\n98;26\r\n67
;26\r\n386;25\r\n50;24\r\n21;24\r\n122;24\r\n47;24\r\n291;22\r\n287;22\r\n404;22\r\n70;22\r\n48;21\r\n63;21\r\n153;18\r\n13;18\r\n46;18\r\n262;18\r\n43;17\r\n72;17\r\n161;17\r\n344;15\r\n29;15\r\n439;14\r\n104;14\r\n119;13\r\n456;12\r\n434;12\r\n55;10\r\n3;10\r\n345;10\r\n286;10\r\n15;10\r\n141;9\r\n169;9\r\n258;9\r\n18;9\r\n158;9\r\n14;8\r\n94;8\r\n463;8\r\n218;8\r\n92;8\r\n170;8\r\n58;7\r\n17;7\r\n19;7\r\n6;7\r\n414;7\r\n10;7\r\n7;7\r\n22;7\r\n90;6\r\n430;6\r\n27;6\r\n195;6\r\n16;6\r\n223;6\r\n11;6\r\n242;6\r\n9;6\r\n26;5\r\n57;5\r\n82;5\r\n451;5\r\n61;5\r\n8;5\r\n445;5\r\n140;5\r\n431;5\r\n197;5\r\n20;5\r\n362;5\r\n24;5\r\n385;4\r\n23;4\r\n25;4\r\n62;4\r\n134;4\r\n150;4\r\n215;4\r\n217;4\r\n219;4\r\n220;4\r\n222;4\r\n224;4\r\n244;4\r\n284;4\r\n318;4\r\n389;4\r\n415;4\r\n449;4\r\n461;4\r\n93;3\r\n209;3\r\n136;3\r\n299;3\r\n188;3\r\n319;3\r\n264;3\r\n95;3\r\n337;3\r\n1;3\r\n221;3\r\n310;3\r\n143;2\r\n320;2\r\n321;2\r\n322;2\r\n324;2\r\n210;2\r\n302;2\r\n438;2\r\n303;2\r\n239;2\r\n330;2\r\n196;2\r\n447;2\r\n332;2\r\n333;2\r\n334;2\r\n307;2\r\n308;2\r\n309;2\r\n340;2\r\n341;2\r\n171;2\r\n190;2\r\n313;2\r\n193;2\r\n154;2\r\n294;2\r\n295;2\r\n250;2\r\n144;2\r\n311;1\r\n312;1\r\n314;1\r\n315;1\r\n316;1\r\n317;1\r\n51;1\r\n323;1\r\n325;1\r\n326;1\r\n327;1\r\n328;1\r\n329;1\r\n331;1\r\n335;1\r\n336;1\r\n338;1\r\n339;1\r\n342;1\r\n343;1\r\n186;1\r\n185;1\r\n354;1\r\n355;1\r\n356;1\r\n357;1\r\n359;1\r\n360;1\r\n361;1\r\n184;1\r\n363;1\r\n364;1\r\n366;1\r\n183;1\r\n369;1\r\n370;1\r\n182;1\r\n181;1\r\n180;1\r\n179;1\r\n380;1\r\n381;1\r\n382;1\r\n383;1\r\n178;1\r\n177;1\r\n176;1\r\n174;1\r\n30;1\r\n173;1\r\n392;1\r\n393;1\r\n155;1\r\n405;1\r\n407;1\r\n409;1\r\n151;1\r\n145;1\r\n12;1\r\n425;1\r\n138;1\r\n135;1\r\n103;1\r\n435;1\r\n437;1\r\n102;1\r\n440;1\r\n441;1\r\n442;1\r\n80;1\r\n448;1\r\n28;1\r\n226;1\r\n227;1\r\n228;1\r\n230;1\r\n225;1\r\n214;1\r\n216;1\r\n213;1\r\n212;1\r\n211;1\r\n208;1\r\n207;1\r\n206;1\r\n78;1\r\n245;1\r\n205;1\r\n204;1\r\n254;1\r\n203;1\r\n202;1\r\n20
1;1\r\n200;1\r\n199;1\r\n265;1\r\n198;1\r\n268;1\r\n194;1\r\n192;1\r\n273;1\r\n274;1\r\n275;1\r\n278;1\r\n191;1\r\n282;1\r\n75;1\r\n285;1\r\n189;1\r\n288;1\r\n289;1\r\n290;1\r\n187;1\r\n293;1\r\n296;1\r\n297;1\r\n300;1\r\n304;1\r\n305;1\r\n306;1\r\n\r\n--\r\n\r\nSo what’s the result of:\r\n\r\nSELECT COUNT(DISTINCT id_camada) FROM …\r\n\r\nDoes it change significantly over time?\r\n\r\nRegards,\r\nIgor Neyman",
"msg_date": "Thu, 18 Jun 2015 18:16:07 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to calculate statistics for one column"
},
{
"msg_contents": "SELECT COUNT(DISTINCT id_camada) FROM … equals\n349\n\nAnd it doesn't change significantly over time.\n\n[]'s\n\n2015-06-18 15:16 GMT-03:00 Igor Neyman <[email protected]>:\n\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Irineu Ruiz\n> *Sent:* Thursday, June 18, 2015 1:53 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] How to calculate statistics for one column\n>\n>\n>\n> Hi,\n>\n>\n>\n> I have a table with irregular distribution based in a foreign key, like\n> you can see in the end of the e-mail.\n>\n>\n>\n> Sometimes, in simples joins with another tables with the same id_camada\n> (but not the table owner of the foreign key, the planner chooses a seq scan\n> instead of use the index with id_camada.\n>\n> If I do the join using also de table owner of the foreign key, then the\n> index is used.\n>\n>\n>\n> In the first case, querys with seq scan tahe about 30 seconds and with the\n> index take about 40 ms.\n>\n>\n>\n> When I increase the statistics of the column id_camada to 900, then\n> everything works using the index in both cases.\n>\n> My doubt is: there is a way to discovery the best statistics number for\n> this column or is a process of trial and error?\n>\n>\n>\n> id_camada;count(*)\n>\n> 123;10056782\n>\n> 83;311471\n>\n> 42;11316\n>\n> 367;5564\n>\n> 163;3362\n>\n> 257;2100\n>\n> 89;1725\n>\n> 452;1092\n>\n> 157;904\n>\n> 84;883\n>\n> 233;853\n>\n> 271;638\n>\n> 272;620\n>\n> 269;548\n>\n> 270;485\n>\n> 455;437\n>\n> 255;427\n>\n> 32;371\n>\n> 39;320\n>\n> 31;309\n>\n> 411;291\n>\n> 91;260\n>\n> 240;251\n>\n> 162;250\n>\n> 444;247\n>\n> 165;227\n>\n> 36;215\n>\n> 236;193\n>\n> 54;185\n>\n> 53;175\n>\n> 76;170\n>\n> 412;153\n>\n> 159;140\n>\n> 160;139\n>\n> 105;130\n>\n> 59;117\n>\n> 60;117\n>\n> 267;115\n>\n> 238;112\n>\n> 279;111\n>\n> 465;111\n>\n> 5;107\n>\n> 74;103\n>\n> 243;98\n>\n> 35;96\n>\n> 68;82\n>\n> 400;78\n>\n> 391;75\n>\n> 49;74\n>\n> 124;68\n>\n> 73;66\n>\n> 260;64\n>\n> 
66;62\n>\n> 168;60\n>\n> 172;56\n>\n> 4;54\n>\n> 44;54\n>\n> 384;53\n>\n> 237;53\n>\n> 390;52\n>\n> 234;52\n>\n> 387;51\n>\n> 378;51\n>\n> 148;50\n>\n> 64;50\n>\n> 379;47\n>\n> 56;46\n>\n> 52;46\n>\n> 377;46\n>\n> 443;46\n>\n> 253;45\n>\n> 97;45\n>\n> 280;43\n>\n> 77;43\n>\n> 2;40\n>\n> 376;39\n>\n> 45;38\n>\n> 235;36\n>\n> 231;36\n>\n> 413;36\n>\n> 241;36\n>\n> 232;34\n>\n> 388;32\n>\n> 101;32\n>\n> 249;32\n>\n> 99;32\n>\n> 100;32\n>\n> 69;32\n>\n> 125;31\n>\n> 166;30\n>\n> 65;29\n>\n> 433;29\n>\n> 149;28\n>\n> 96;27\n>\n> 71;27\n>\n> 98;26\n>\n> 67;26\n>\n> 386;25\n>\n> 50;24\n>\n> 21;24\n>\n> 122;24\n>\n> 47;24\n>\n> 291;22\n>\n> 287;22\n>\n> 404;22\n>\n> 70;22\n>\n> 48;21\n>\n> 63;21\n>\n> 153;18\n>\n> 13;18\n>\n> 46;18\n>\n> 262;18\n>\n> 43;17\n>\n> 72;17\n>\n> 161;17\n>\n> 344;15\n>\n> 29;15\n>\n> 439;14\n>\n> 104;14\n>\n> 119;13\n>\n> 456;12\n>\n> 434;12\n>\n> 55;10\n>\n> 3;10\n>\n> 345;10\n>\n> 286;10\n>\n> 15;10\n>\n> 141;9\n>\n> 169;9\n>\n> 258;9\n>\n> 18;9\n>\n> 158;9\n>\n> 14;8\n>\n> 94;8\n>\n> 463;8\n>\n> 218;8\n>\n> 92;8\n>\n> 170;8\n>\n> 58;7\n>\n> 17;7\n>\n> 19;7\n>\n> 6;7\n>\n> 414;7\n>\n> 10;7\n>\n> 7;7\n>\n> 22;7\n>\n> 90;6\n>\n> 430;6\n>\n> 27;6\n>\n> 195;6\n>\n> 16;6\n>\n> 223;6\n>\n> 11;6\n>\n> 242;6\n>\n> 9;6\n>\n> 26;5\n>\n> 57;5\n>\n> 82;5\n>\n> 451;5\n>\n> 61;5\n>\n> 8;5\n>\n> 445;5\n>\n> 140;5\n>\n> 431;5\n>\n> 197;5\n>\n> 20;5\n>\n> 362;5\n>\n> 24;5\n>\n> 385;4\n>\n> 23;4\n>\n> 25;4\n>\n> 62;4\n>\n> 134;4\n>\n> 150;4\n>\n> 215;4\n>\n> 217;4\n>\n> 219;4\n>\n> 220;4\n>\n> 222;4\n>\n> 224;4\n>\n> 244;4\n>\n> 284;4\n>\n> 318;4\n>\n> 389;4\n>\n> 415;4\n>\n> 449;4\n>\n> 461;4\n>\n> 93;3\n>\n> 209;3\n>\n> 136;3\n>\n> 299;3\n>\n> 188;3\n>\n> 319;3\n>\n> 264;3\n>\n> 95;3\n>\n> 337;3\n>\n> 1;3\n>\n> 221;3\n>\n> 310;3\n>\n> 143;2\n>\n> 320;2\n>\n> 321;2\n>\n> 322;2\n>\n> 324;2\n>\n> 210;2\n>\n> 302;2\n>\n> 438;2\n>\n> 303;2\n>\n> 239;2\n>\n> 330;2\n>\n> 196;2\n>\n> 447;2\n>\n> 332;2\n>\n> 333;2\n>\n> 334;2\n>\n> 307;2\n>\n> 308;2\n>\n> 309;2\n>\n> 
340;2\n>\n> 341;2\n>\n> 171;2\n>\n> 190;2\n>\n> 313;2\n>\n> 193;2\n>\n> 154;2\n>\n> 294;2\n>\n> 295;2\n>\n> 250;2\n>\n> 144;2\n>\n> 311;1\n>\n> 312;1\n>\n> 314;1\n>\n> 315;1\n>\n> 316;1\n>\n> 317;1\n>\n> 51;1\n>\n> 323;1\n>\n> 325;1\n>\n> 326;1\n>\n> 327;1\n>\n> 328;1\n>\n> 329;1\n>\n> 331;1\n>\n> 335;1\n>\n> 336;1\n>\n> 338;1\n>\n> 339;1\n>\n> 342;1\n>\n> 343;1\n>\n> 186;1\n>\n> 185;1\n>\n> 354;1\n>\n> 355;1\n>\n> 356;1\n>\n> 357;1\n>\n> 359;1\n>\n> 360;1\n>\n> 361;1\n>\n> 184;1\n>\n> 363;1\n>\n> 364;1\n>\n> 366;1\n>\n> 183;1\n>\n> 369;1\n>\n> 370;1\n>\n> 182;1\n>\n> 181;1\n>\n> 180;1\n>\n> 179;1\n>\n> 380;1\n>\n> 381;1\n>\n> 382;1\n>\n> 383;1\n>\n> 178;1\n>\n> 177;1\n>\n> 176;1\n>\n> 174;1\n>\n> 30;1\n>\n> 173;1\n>\n> 392;1\n>\n> 393;1\n>\n> 155;1\n>\n> 405;1\n>\n> 407;1\n>\n> 409;1\n>\n> 151;1\n>\n> 145;1\n>\n> 12;1\n>\n> 425;1\n>\n> 138;1\n>\n> 135;1\n>\n> 103;1\n>\n> 435;1\n>\n> 437;1\n>\n> 102;1\n>\n> 440;1\n>\n> 441;1\n>\n> 442;1\n>\n> 80;1\n>\n> 448;1\n>\n> 28;1\n>\n> 226;1\n>\n> 227;1\n>\n> 228;1\n>\n> 230;1\n>\n> 225;1\n>\n> 214;1\n>\n> 216;1\n>\n> 213;1\n>\n> 212;1\n>\n> 211;1\n>\n> 208;1\n>\n> 207;1\n>\n> 206;1\n>\n> 78;1\n>\n> 245;1\n>\n> 205;1\n>\n> 204;1\n>\n> 254;1\n>\n> 203;1\n>\n> 202;1\n>\n> 201;1\n>\n> 200;1\n>\n> 199;1\n>\n> 265;1\n>\n> 198;1\n>\n> 268;1\n>\n> 194;1\n>\n> 192;1\n>\n> 273;1\n>\n> 274;1\n>\n> 275;1\n>\n> 278;1\n>\n> 191;1\n>\n> 282;1\n>\n> 75;1\n>\n> 285;1\n>\n> 189;1\n>\n> 288;1\n>\n> 289;1\n>\n> 290;1\n>\n> 187;1\n>\n> 293;1\n>\n> 296;1\n>\n> 297;1\n>\n> 300;1\n>\n> 304;1\n>\n> 305;1\n>\n> 306;1\n>\n>\n>\n> --\n>\n>\n>\n> So what’s the result of:\n>\n>\n>\n> SELECT COUNT(DISTINCT id_camada) FROM …\n>\n>\n>\n> Does it change significantly over time?\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n>\n>\n\n\n\n-- \n\n[image: E-mail]\n\n*Irineu Ruiz*\n\nRua Helena, 275 - 12º Andar - Vila Olímpia\n\n04552-050 - São Paulo - SP – (011) 2667-0708\n\nwww.rassystem.com.br – *[email protected] <[email protected]>*",
"msg_date": "Thu, 18 Jun 2015 15:18:24 -0300",
"msg_from": "Irineu Ruiz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to calculate statistics for one column"
},
{
"msg_contents": "From: Irineu Ruiz [mailto:[email protected]]\r\nSent: Thursday, June 18, 2015 2:18 PM\r\nTo: Igor Neyman\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] How to calculate statistics for one column\r\n\r\nSELECT COUNT(DISTINCT id_camada) FROM … equals\r\n349\r\n\r\nAnd it doesn't change significantly over time.\r\n\r\n[]'s\r\n\r\n2015-06-18 15:16 GMT-03:00 Igor Neyman <[email protected]<mailto:[email protected]>>:\r\n\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]<mailto:[email protected]>] On Behalf Of Irineu Ruiz\r\nSent: Thursday, June 18, 2015 1:53 PM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: [PERFORM] How to calculate statistics for one column\r\n\r\nHi,\r\n\r\nI have a table with irregular distribution based in a foreign key, like you can see in the end of the e-mail.\r\n\r\nSometimes, in simples joins with another tables with the same id_camada (but not the table owner of the foreign key, the planner chooses a seq scan instead of use the index with id_camada.\r\nIf I do the join using also de table owner of the foreign key, then the index is used.\r\n\r\nIn the first case, querys with seq scan tahe about 30 seconds and with the index take about 40 ms.\r\n\r\nWhen I increase the statistics of the column id_camada to 900, then everything works using the index in both cases.\r\nMy doubt is: there is a way to discovery the best statistics number for this column or is a process of trial and 
error?\r\n\r\nid_camada;count(*)\r\n123;10056782\r\n83;311471\r\n42;11316\r\n367;5564\r\n163;3362\r\n257;2100\r\n89;1725\r\n452;1092\r\n157;904\r\n84;883\r\n233;853\r\n271;638\r\n272;620\r\n269;548\r\n270;485\r\n455;437\r\n255;427\r\n32;371\r\n39;320\r\n31;309\r\n411;291\r\n91;260\r\n240;251\r\n162;250\r\n444;247\r\n165;227\r\n36;215\r\n236;193\r\n54;185\r\n53;175\r\n76;170\r\n412;153\r\n159;140\r\n160;139\r\n105;130\r\n59;117\r\n60;117\r\n267;115\r\n238;112\r\n279;111\r\n465;111\r\n5;107\r\n74;103\r\n243;98\r\n35;96\r\n68;82\r\n400;78\r\n391;75\r\n49;74\r\n124;68\r\n73;66\r\n260;64\r\n66;62\r\n168;60\r\n172;56\r\n4;54\r\n44;54\r\n384;53\r\n237;53\r\n390;52\r\n234;52\r\n387;51\r\n378;51\r\n148;50\r\n64;50\r\n379;47\r\n56;46\r\n52;46\r\n377;46\r\n443;46\r\n253;45\r\n97;45\r\n280;43\r\n77;43\r\n2;40\r\n376;39\r\n45;38\r\n235;36\r\n231;36\r\n413;36\r\n241;36\r\n232;34\r\n388;32\r\n101;32\r\n249;32\r\n99;32\r\n100;32\r\n69;32\r\n125;31\r\n166;30\r\n65;29\r\n433;29\r\n149;28\r\n96;27\r\n71;27\r\n98;26\r\n67;26\r\n386;25\r\n50;24\r\n21;24\r\n122;24\r\n47;24\r\n291;22\r\n287;22\r\n404;22\r\n70;22\r\n48;21\r\n63;21\r\n153;18\r\n13;18\r\n46;18\r\n262;18\r\n43;17\r\n72;17\r\n161;17\r\n344;15\r\n29;15\r\n439;14\r\n104;14\r\n119;13\r\n456;12\r\n434;12\r\n55;10\r\n3;10\r\n345;10\r\n286;10\r\n15;10\r\n141;9\r\n169;9\r\n258;9\r\n18;9\r\n158;9\r\n14;8\r\n94;8\r\n463;8\r\n218;8\r\n92;8\r\n170;8\r\n58;7\r\n17;7\r\n19;7\r\n6;7\r\n414;7\r\n10;7\r\n7;7\r\n22;7\r\n90;6\r\n430;6\r\n27;6\r\n195;6\r\n16;6\r\n223;6\r\n11;6\r\n242;6\r\n9;6\r\n26;5\r\n57;5\r\n82;5\r\n451;5\r\n61;5\r\n8;5\r\n445;5\r\n140;5\r\n431;5\r\n197;5\r\n20;5\r\n362;5\r\n24;5\r\n385;4\r\n23;4\r\n25;4\r\n62;4\r\n134;4\r\n150;4\r\n215;4\r\n217;4\r\n219;4\r\n220;4\r\n222;4\r\n224;4\r\n244;4\r\n284;4\r\n318;4\r\n389;4\r\n415;4\r\n449;4\r\n461;4\r\n93;3\r\n209;3\r\n136;3\r\n299;3\r\n188;3\r\n319;3\r\n264;3\r\n95;3\r\n337;3\r\n1;3\r\n221;3\r\n310;3\r\n143;2\r\n320;2\r\n321;2\r\n322;2\r\n324;2\r\n210;2\r\n302;2\r\n438;2\r\n303;2
\r\n239;2\r\n330;2\r\n196;2\r\n447;2\r\n332;2\r\n333;2\r\n334;2\r\n307;2\r\n308;2\r\n309;2\r\n340;2\r\n341;2\r\n171;2\r\n190;2\r\n313;2\r\n193;2\r\n154;2\r\n294;2\r\n295;2\r\n250;2\r\n144;2\r\n311;1\r\n312;1\r\n314;1\r\n315;1\r\n316;1\r\n317;1\r\n51;1\r\n323;1\r\n325;1\r\n326;1\r\n327;1\r\n328;1\r\n329;1\r\n331;1\r\n335;1\r\n336;1\r\n338;1\r\n339;1\r\n342;1\r\n343;1\r\n186;1\r\n185;1\r\n354;1\r\n355;1\r\n356;1\r\n357;1\r\n359;1\r\n360;1\r\n361;1\r\n184;1\r\n363;1\r\n364;1\r\n366;1\r\n183;1\r\n369;1\r\n370;1\r\n182;1\r\n181;1\r\n180;1\r\n179;1\r\n380;1\r\n381;1\r\n382;1\r\n383;1\r\n178;1\r\n177;1\r\n176;1\r\n174;1\r\n30;1\r\n173;1\r\n392;1\r\n393;1\r\n155;1\r\n405;1\r\n407;1\r\n409;1\r\n151;1\r\n145;1\r\n12;1\r\n425;1\r\n138;1\r\n135;1\r\n103;1\r\n435;1\r\n437;1\r\n102;1\r\n440;1\r\n441;1\r\n442;1\r\n80;1\r\n448;1\r\n28;1\r\n226;1\r\n227;1\r\n228;1\r\n230;1\r\n225;1\r\n214;1\r\n216;1\r\n213;1\r\n212;1\r\n211;1\r\n208;1\r\n207;1\r\n206;1\r\n78;1\r\n245;1\r\n205;1\r\n204;1\r\n254;1\r\n203;1\r\n202;1\r\n201;1\r\n200;1\r\n199;1\r\n265;1\r\n198;1\r\n268;1\r\n194;1\r\n192;1\r\n273;1\r\n274;1\r\n275;1\r\n278;1\r\n191;1\r\n282;1\r\n75;1\r\n285;1\r\n189;1\r\n288;1\r\n289;1\r\n290;1\r\n187;1\r\n293;1\r\n296;1\r\n297;1\r\n300;1\r\n304;1\r\n305;1\r\n306;1\r\n\r\n--\r\n\r\nSo what’s the result of:\r\n\r\nSELECT COUNT(DISTINCT id_camada) FROM …\r\n\r\nDoes it change significantly over time?\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\r\n\r\nThen, I’d think that’s approximately your statistics target.\r\n\r\nRegards,\r\nIgor Neyman",
"msg_date": "Thu, 18 Jun 2015 18:24:40 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to calculate statistics for one column"
},
{
"msg_contents": "I din't understood.\n\nIn this case, my statistics target should be approximately 349?\nI already try this range but didn't work.\n\nIt's only work when I put 900 in my statistics.\n\nThere is some kind of formula to calculate a good statistics for a column\nlike this?\n\n\n\n2015-06-18 15:24 GMT-03:00 Igor Neyman <[email protected]>:\n\n>\n>\n>\n>\n> *From:* Irineu Ruiz [mailto:[email protected]]\n> *Sent:* Thursday, June 18, 2015 2:18 PM\n> *To:* Igor Neyman\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] How to calculate statistics for one column\n>\n>\n>\n> SELECT COUNT(DISTINCT id_camada) FROM … equals\n>\n> 349\n>\n>\n>\n> And it doesn't change significantly over time.\n>\n>\n>\n> []'s\n>\n>\n>\n> 2015-06-18 15:16 GMT-03:00 Igor Neyman <[email protected]>:\n>\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Irineu Ruiz\n> *Sent:* Thursday, June 18, 2015 1:53 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] How to calculate statistics for one column\n>\n>\n>\n> Hi,\n>\n>\n>\n> I have a table with irregular distribution based in a foreign key, like\n> you can see in the end of the e-mail.\n>\n>\n>\n> Sometimes, in simples joins with another tables with the same id_camada\n> (but not the table owner of the foreign key, the planner chooses a seq scan\n> instead of use the index with id_camada.\n>\n> If I do the join using also de table owner of the foreign key, then the\n> index is used.\n>\n>\n>\n> In the first case, querys with seq scan tahe about 30 seconds and with the\n> index take about 40 ms.\n>\n>\n>\n> When I increase the statistics of the column id_camada to 900, then\n> everything works using the index in both cases.\n>\n> My doubt is: there is a way to discovery the best statistics number for\n> this column or is a process of trial and error?\n>\n>\n>\n> id_camada;count(*)\n>\n> 123;10056782\n>\n> 83;311471\n>\n> 42;11316\n>\n> 367;5564\n>\n> 163;3362\n>\n> 257;2100\n>\n> 89;1725\n>\n> 
452;1092\n>\n> 157;904\n>\n> 84;883\n>\n> 233;853\n>\n> 271;638\n>\n> 272;620\n>\n> 269;548\n>\n> 270;485\n>\n> 455;437\n>\n> 255;427\n>\n> 32;371\n>\n> 39;320\n>\n> 31;309\n>\n> 411;291\n>\n> 91;260\n>\n> 240;251\n>\n> 162;250\n>\n> 444;247\n>\n> 165;227\n>\n> 36;215\n>\n> 236;193\n>\n> 54;185\n>\n> 53;175\n>\n> 76;170\n>\n> 412;153\n>\n> 159;140\n>\n> 160;139\n>\n> 105;130\n>\n> 59;117\n>\n> 60;117\n>\n> 267;115\n>\n> 238;112\n>\n> 279;111\n>\n> 465;111\n>\n> 5;107\n>\n> 74;103\n>\n> 243;98\n>\n> 35;96\n>\n> 68;82\n>\n> 400;78\n>\n> 391;75\n>\n> 49;74\n>\n> 124;68\n>\n> 73;66\n>\n> 260;64\n>\n> 66;62\n>\n> 168;60\n>\n> 172;56\n>\n> 4;54\n>\n> 44;54\n>\n> 384;53\n>\n> 237;53\n>\n> 390;52\n>\n> 234;52\n>\n> 387;51\n>\n> 378;51\n>\n> 148;50\n>\n> 64;50\n>\n> 379;47\n>\n> 56;46\n>\n> 52;46\n>\n> 377;46\n>\n> 443;46\n>\n> 253;45\n>\n> 97;45\n>\n> 280;43\n>\n> 77;43\n>\n> 2;40\n>\n> 376;39\n>\n> 45;38\n>\n> 235;36\n>\n> 231;36\n>\n> 413;36\n>\n> 241;36\n>\n> 232;34\n>\n> 388;32\n>\n> 101;32\n>\n> 249;32\n>\n> 99;32\n>\n> 100;32\n>\n> 69;32\n>\n> 125;31\n>\n> 166;30\n>\n> 65;29\n>\n> 433;29\n>\n> 149;28\n>\n> 96;27\n>\n> 71;27\n>\n> 98;26\n>\n> 67;26\n>\n> 386;25\n>\n> 50;24\n>\n> 21;24\n>\n> 122;24\n>\n> 47;24\n>\n> 291;22\n>\n> 287;22\n>\n> 404;22\n>\n> 70;22\n>\n> 48;21\n>\n> 63;21\n>\n> 153;18\n>\n> 13;18\n>\n> 46;18\n>\n> 262;18\n>\n> 43;17\n>\n> 72;17\n>\n> 161;17\n>\n> 344;15\n>\n> 29;15\n>\n> 439;14\n>\n> 104;14\n>\n> 119;13\n>\n> 456;12\n>\n> 434;12\n>\n> 55;10\n>\n> 3;10\n>\n> 345;10\n>\n> 286;10\n>\n> 15;10\n>\n> 141;9\n>\n> 169;9\n>\n> 258;9\n>\n> 18;9\n>\n> 158;9\n>\n> 14;8\n>\n> 94;8\n>\n> 463;8\n>\n> 218;8\n>\n> 92;8\n>\n> 170;8\n>\n> 58;7\n>\n> 17;7\n>\n> 19;7\n>\n> 6;7\n>\n> 414;7\n>\n> 10;7\n>\n> 7;7\n>\n> 22;7\n>\n> 90;6\n>\n> 430;6\n>\n> 27;6\n>\n> 195;6\n>\n> 16;6\n>\n> 223;6\n>\n> 11;6\n>\n> 242;6\n>\n> 9;6\n>\n> 26;5\n>\n> 57;5\n>\n> 82;5\n>\n> 451;5\n>\n> 61;5\n>\n> 8;5\n>\n> 445;5\n>\n> 140;5\n>\n> 431;5\n>\n> 197;5\n>\n> 20;5\n>\n> 362;5\n>\n> 
24;5\n>\n> 385;4\n>\n> 23;4\n>\n> 25;4\n>\n> 62;4\n>\n> 134;4\n>\n> 150;4\n>\n> 215;4\n>\n> 217;4\n>\n> 219;4\n>\n> 220;4\n>\n> 222;4\n>\n> 224;4\n>\n> 244;4\n>\n> 284;4\n>\n> 318;4\n>\n> 389;4\n>\n> 415;4\n>\n> 449;4\n>\n> 461;4\n>\n> 93;3\n>\n> 209;3\n>\n> 136;3\n>\n> 299;3\n>\n> 188;3\n>\n> 319;3\n>\n> 264;3\n>\n> 95;3\n>\n> 337;3\n>\n> 1;3\n>\n> 221;3\n>\n> 310;3\n>\n> 143;2\n>\n> 320;2\n>\n> 321;2\n>\n> 322;2\n>\n> 324;2\n>\n> 210;2\n>\n> 302;2\n>\n> 438;2\n>\n> 303;2\n>\n> 239;2\n>\n> 330;2\n>\n> 196;2\n>\n> 447;2\n>\n> 332;2\n>\n> 333;2\n>\n> 334;2\n>\n> 307;2\n>\n> 308;2\n>\n> 309;2\n>\n> 340;2\n>\n> 341;2\n>\n> 171;2\n>\n> 190;2\n>\n> 313;2\n>\n> 193;2\n>\n> 154;2\n>\n> 294;2\n>\n> 295;2\n>\n> 250;2\n>\n> 144;2\n>\n> 311;1\n>\n> 312;1\n>\n> 314;1\n>\n> 315;1\n>\n> 316;1\n>\n> 317;1\n>\n> 51;1\n>\n> 323;1\n>\n> 325;1\n>\n> 326;1\n>\n> 327;1\n>\n> 328;1\n>\n> 329;1\n>\n> 331;1\n>\n> 335;1\n>\n> 336;1\n>\n> 338;1\n>\n> 339;1\n>\n> 342;1\n>\n> 343;1\n>\n> 186;1\n>\n> 185;1\n>\n> 354;1\n>\n> 355;1\n>\n> 356;1\n>\n> 357;1\n>\n> 359;1\n>\n> 360;1\n>\n> 361;1\n>\n> 184;1\n>\n> 363;1\n>\n> 364;1\n>\n> 366;1\n>\n> 183;1\n>\n> 369;1\n>\n> 370;1\n>\n> 182;1\n>\n> 181;1\n>\n> 180;1\n>\n> 179;1\n>\n> 380;1\n>\n> 381;1\n>\n> 382;1\n>\n> 383;1\n>\n> 178;1\n>\n> 177;1\n>\n> 176;1\n>\n> 174;1\n>\n> 30;1\n>\n> 173;1\n>\n> 392;1\n>\n> 393;1\n>\n> 155;1\n>\n> 405;1\n>\n> 407;1\n>\n> 409;1\n>\n> 151;1\n>\n> 145;1\n>\n> 12;1\n>\n> 425;1\n>\n> 138;1\n>\n> 135;1\n>\n> 103;1\n>\n> 435;1\n>\n> 437;1\n>\n> 102;1\n>\n> 440;1\n>\n> 441;1\n>\n> 442;1\n>\n> 80;1\n>\n> 448;1\n>\n> 28;1\n>\n> 226;1\n>\n> 227;1\n>\n> 228;1\n>\n> 230;1\n>\n> 225;1\n>\n> 214;1\n>\n> 216;1\n>\n> 213;1\n>\n> 212;1\n>\n> 211;1\n>\n> 208;1\n>\n> 207;1\n>\n> 206;1\n>\n> 78;1\n>\n> 245;1\n>\n> 205;1\n>\n> 204;1\n>\n> 254;1\n>\n> 203;1\n>\n> 202;1\n>\n> 201;1\n>\n> 200;1\n>\n> 199;1\n>\n> 265;1\n>\n> 198;1\n>\n> 268;1\n>\n> 194;1\n>\n> 192;1\n>\n> 273;1\n>\n> 274;1\n>\n> 275;1\n>\n> 278;1\n>\n> 191;1\n>\n> 
282;1\n>\n> 75;1\n>\n> 285;1\n>\n> 189;1\n>\n> 288;1\n>\n> 289;1\n>\n> 290;1\n>\n> 187;1\n>\n> 293;1\n>\n> 296;1\n>\n> 297;1\n>\n> 300;1\n>\n> 304;1\n>\n> 305;1\n>\n> 306;1\n>\n>\n>\n> --\n>\n>\n>\n> So what’s the result of:\n>\n>\n>\n> SELECT COUNT(DISTINCT id_camada) FROM …\n>\n>\n>\n> Does it change significantly over time?\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n>\n>\n>\n>\n> Then, I’d think that’s approximately your statistics target.\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n\n\n\n-- \n\n[image: E-mail]\n\n*Irineu Ruiz*\n\nRua Helena, 275 - 12º Andar - Vila Olímpia\n\n04552-050 - São Paulo - SP – (011) 2667-0708\n\nwww.rassystem.com.br – *[email protected] <[email protected]>*\n\n",
"msg_date": "Thu, 18 Jun 2015 16:09:54 -0300",
"msg_from": "Irineu Ruiz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to calculate statistics for one column"
},
{
"msg_contents": "From: Irineu Ruiz [mailto:[email protected]]\r\nSent: Thursday, June 18, 2015 3:10 PM\r\nTo: Igor Neyman\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] How to calculate statistics for one column\r\n\r\nI din't understood.\r\n\r\nIn this case, my statistics target should be approximately 349?\r\nI already try this range but didn't work.\r\n\r\nIt's only work when I put 900 in my statistics.\r\n\r\nThere is some kind of formula to calculate a good statistics for a column like this?\r\n\r\n\r\n\r\n2015-06-18 15:24 GMT-03:00 Igor Neyman <[email protected]<mailto:[email protected]>>:\r\n\r\n\r\nFrom: Irineu Ruiz [mailto:[email protected]<mailto:[email protected]>]\r\nSent: Thursday, June 18, 2015 2:18 PM\r\nTo: Igor Neyman\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] How to calculate statistics for one column\r\n\r\nSELECT COUNT(DISTINCT id_camada) FROM … equals\r\n349\r\n\r\nAnd it doesn't change significantly over time.\r\n\r\n[]'s\r\n\r\n2015-06-18 15:16 GMT-03:00 Igor Neyman <[email protected]<mailto:[email protected]>>:\r\n\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]<mailto:[email protected]>] On Behalf Of Irineu Ruiz\r\nSent: Thursday, June 18, 2015 1:53 PM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: [PERFORM] How to calculate statistics for one column\r\n\r\nHi,\r\n\r\nI have a table with irregular distribution based in a foreign key, like you can see in the end of the e-mail.\r\n\r\nSometimes, in simples joins with another tables with the same id_camada (but not the table owner of the foreign key, the planner chooses a seq scan instead of use the index with id_camada.\r\nIf I do the join using also de table owner of the foreign key, then the index is used.\r\n\r\nIn the first case, querys with seq scan tahe about 30 seconds and with the index take about 40 ms.\r\n\r\nWhen I increase the statistics of the column id_camada to 900, then everything works 
using the index in both cases.\r\nMy doubt is: there is a way to discovery the best statistics number for this column or is a process of trial and error?\r\n\r\nid_camada;count(*)\r\n123;10056782\r\n83;311471\r\n42;11316\r\n367;5564\r\n163;3362\r\n257;2100\r\n89;1725\r\n452;1092\r\n157;904\r\n84;883\r\n233;853\r\n271;638\r\n272;620\r\n269;548\r\n270;485\r\n455;437\r\n255;427\r\n32;371\r\n39;320\r\n31;309\r\n411;291\r\n91;260\r\n240;251\r\n162;250\r\n444;247\r\n165;227\r\n36;215\r\n236;193\r\n54;185\r\n53;175\r\n76;170\r\n412;153\r\n159;140\r\n160;139\r\n105;130\r\n59;117\r\n60;117\r\n267;115\r\n238;112\r\n279;111\r\n465;111\r\n5;107\r\n74;103\r\n243;98\r\n35;96\r\n68;82\r\n400;78\r\n391;75\r\n49;74\r\n124;68\r\n73;66\r\n260;64\r\n66;62\r\n168;60\r\n172;56\r\n4;54\r\n44;54\r\n384;53\r\n237;53\r\n390;52\r\n234;52\r\n387;51\r\n378;51\r\n148;50\r\n64;50\r\n379;47\r\n56;46\r\n52;46\r\n377;46\r\n443;46\r\n253;45\r\n97;45\r\n280;43\r\n77;43\r\n2;40\r\n376;39\r\n45;38\r\n235;36\r\n231;36\r\n413;36\r\n241;36\r\n232;34\r\n388;32\r\n101;32\r\n249;32\r\n99;32\r\n100;32\r\n69;32\r\n125;31\r\n166;30\r\n65;29\r\n433;29\r\n149;28\r\n96;27\r\n71;27\r\n98;26\r\n67;26\r\n386;25\r\n50;24\r\n21;24\r\n122;24\r\n47;24\r\n291;22\r\n287;22\r\n404;22\r\n70;22\r\n48;21\r\n63;21\r\n153;18\r\n13;18\r\n46;18\r\n262;18\r\n43;17\r\n72;17\r\n161;17\r\n344;15\r\n29;15\r\n439;14\r\n104;14\r\n119;13\r\n456;12\r\n434;12\r\n55;10\r\n3;10\r\n345;10\r\n286;10\r\n15;10\r\n141;9\r\n169;9\r\n258;9\r\n18;9\r\n158;9\r\n14;8\r\n94;8\r\n463;8\r\n218;8\r\n92;8\r\n170;8\r\n58;7\r\n17;7\r\n19;7\r\n6;7\r\n414;7\r\n10;7\r\n7;7\r\n22;7\r\n90;6\r\n430;6\r\n27;6\r\n195;6\r\n16;6\r\n223;6\r\n11;6\r\n242;6\r\n9;6\r\n26;5\r\n57;5\r\n82;5\r\n451;5\r\n61;5\r\n8;5\r\n445;5\r\n140;5\r\n431;5\r\n197;5\r\n20;5\r\n362;5\r\n24;5\r\n385;4\r\n23;4\r\n25;4\r\n62;4\r\n134;4\r\n150;4\r\n215;4\r\n217;4\r\n219;4\r\n220;4\r\n222;4\r\n224;4\r\n244;4\r\n284;4\r\n318;4\r\n389;4\r\n415;4\r\n449;4\r\n461;4\r\n93;3\r\n209;3\r\n136;3\r\n299;3\r\
n188;3\r\n319;3\r\n264;3\r\n95;3\r\n337;3\r\n1;3\r\n221;3\r\n310;3\r\n143;2\r\n320;2\r\n321;2\r\n322;2\r\n324;2\r\n210;2\r\n302;2\r\n438;2\r\n303;2\r\n239;2\r\n330;2\r\n196;2\r\n447;2\r\n332;2\r\n333;2\r\n334;2\r\n307;2\r\n308;2\r\n309;2\r\n340;2\r\n341;2\r\n171;2\r\n190;2\r\n313;2\r\n193;2\r\n154;2\r\n294;2\r\n295;2\r\n250;2\r\n144;2\r\n311;1\r\n312;1\r\n314;1\r\n315;1\r\n316;1\r\n317;1\r\n51;1\r\n323;1\r\n325;1\r\n326;1\r\n327;1\r\n328;1\r\n329;1\r\n331;1\r\n335;1\r\n336;1\r\n338;1\r\n339;1\r\n342;1\r\n343;1\r\n186;1\r\n185;1\r\n354;1\r\n355;1\r\n356;1\r\n357;1\r\n359;1\r\n360;1\r\n361;1\r\n184;1\r\n363;1\r\n364;1\r\n366;1\r\n183;1\r\n369;1\r\n370;1\r\n182;1\r\n181;1\r\n180;1\r\n179;1\r\n380;1\r\n381;1\r\n382;1\r\n383;1\r\n178;1\r\n177;1\r\n176;1\r\n174;1\r\n30;1\r\n173;1\r\n392;1\r\n393;1\r\n155;1\r\n405;1\r\n407;1\r\n409;1\r\n151;1\r\n145;1\r\n12;1\r\n425;1\r\n138;1\r\n135;1\r\n103;1\r\n435;1\r\n437;1\r\n102;1\r\n440;1\r\n441;1\r\n442;1\r\n80;1\r\n448;1\r\n28;1\r\n226;1\r\n227;1\r\n228;1\r\n230;1\r\n225;1\r\n214;1\r\n216;1\r\n213;1\r\n212;1\r\n211;1\r\n208;1\r\n207;1\r\n206;1\r\n78;1\r\n245;1\r\n205;1\r\n204;1\r\n254;1\r\n203;1\r\n202;1\r\n201;1\r\n200;1\r\n199;1\r\n265;1\r\n198;1\r\n268;1\r\n194;1\r\n192;1\r\n273;1\r\n274;1\r\n275;1\r\n278;1\r\n191;1\r\n282;1\r\n75;1\r\n285;1\r\n189;1\r\n288;1\r\n289;1\r\n290;1\r\n187;1\r\n293;1\r\n296;1\r\n297;1\r\n300;1\r\n304;1\r\n305;1\r\n306;1\r\n\r\n--\r\n\r\nSo what’s the result of:\r\n\r\nSELECT COUNT(DISTINCT id_camada) FROM …\r\n\r\nDoes it change significantly over time?\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\r\n\r\nThen, I’d think that’s approximately your statistics target.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\nWell, check if information in pg_stats for your table is correct:\r\n\r\nhttp://www.postgresql.org/docs/9.4/static/view-pg-stats.html\r\n\r\nRegards,\r\nIgor Neyman\r\n",
"msg_date": "Thu, 18 Jun 2015 19:52:51 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to calculate statistics for one column"
}
] |
[
{
"msg_contents": "We are trying to improve performance by avoiding the temp file creation.\n\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8068.125071\", size 58988604\nSTATEMENT: SELECT iiid.installed_item__id, item_detail.id, item_detail.model_id, item_detail.type\nFROM installed_item__item_detail AS iiid\nINNER JOIN item_detail ON iiid.item_detail__id = item_detail.id\nINNER JOIN item ON (item.installed_item__id = iiid.installed_item__id )\nINNER JOIN model ON (item.id = model.item__id AND model.id = $1)\n\nOur hypothesis is that the temp file creation is caused by the high row count of the\ninstalled_item__item_detail table.\n\ninstalled_item__item_detail: 72916824 rows (27 GB)\nitem_detail: 59212436 rows (40 GB)\n\nThe other two tables, item and model, are temporary tables created during this particular process. Unfortunately, I don't have those table sizes.\n\nWhat are the causes of temp file creation? In general, temp files are created when the sort merge data will not fit in work_mem. What can I do to reduce the amount of data that is being merged? Is the simple fact that the tables have millions of rows going to cause a merge sort?\n\nI noticed that this query selects from installed_item__item_detail instead of from item_detail which seems like it would also work. Would this change make a positive difference?\n\ninstalled_item__item_detail is a simple join table. The installed_item__id side cannot be reduced. 
Would reducing the number of item_detail rows using additional joins benefit?\n\nWhat additional information can I gather in order have a better understanding of how to improve this query?\n\n(Unfortunately we do not have (easy) access to this particular database in order to experiment.)\n\n ...Duane\n\nBackground information:\n\n=> select version();\n version\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit\n\n$uname -a\nLinux host.name.com 2.6.32-358.6.2.el6.x86_64 #1 SMP Thu May 16 20:59:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n\n=> select name, current_setting(name), source from pg_settings where source not in ('default', 'override');\n name | current_setting | source\n------------------------------+--------------------+----------------------\n application_name | psql | client\n checkpoint_completion_target | 0.9 | configuration file\n checkpoint_segments | 128 | configuration file\n client_encoding | UTF8 | client\n DateStyle | ISO, MDY | configuration file\n default_statistics_target | 100 | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n effective_cache_size | 512MB | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n log_autovacuum_min_duration | 1s | configuration file\n log_destination | stderr,syslog | configuration file\n log_line_prefix | [%m]: | configuration file\n log_min_duration_statement | 5min | configuration file\n log_min_error_statement | notice | configuration file\n log_rotation_age | 1d | configuration file\n log_rotation_size | 0 | configuration file\n log_temp_files | 1MB | configuration file\n log_timezone | US/Pacific | environment variable\n 
log_truncate_on_rotation | on | configuration file\n logging_collector | on | configuration file\n maintenance_work_mem | 384MB | configuration file\n max_connections | 100 | configuration file\n max_stack_depth | 2MB | environment variable\n port | 5432 | command line\n shared_buffers | 256MB | configuration file\n syslog_facility | local0 | configuration file\n TimeZone | US/Pacific | environment variable\n wal_buffers | 1MB | configuration file\n work_mem | 128MB | configuration file\n(32 rows)\n",
"msg_date": "Thu, 18 Jun 2015 12:38:55 -0700",
"msg_from": "Duane Murphy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Techniques to Avoid Temp Files"
},
{
    "msg_contents": "Duane Murphy wrote:\r\n> We are trying to improve performance by avoiding the temp file creation.\r\n> \r\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8068.125071\", size 58988604\r\n> STATEMENT: SELECT iiid.installed_item__id, item_detail.id, item_detail.model_id, item_detail.type\r\n> FROM installed_item__item_detail AS iiid\r\n> INNER JOIN item_detail ON iiid.item_detail__id = item_detail.id\r\n> INNER JOIN item ON (item.installed_item__id = iiid.installed_item__id )\r\n> INNER JOIN model ON (item.id = model.item__id AND model.id = $1)\r\n\r\n> What are the causes of temp file creation?\r\n\r\nOperations like hash and sort that need more space than work_mem promises.\r\n\r\n> What additional information can I gather in order have a better understanding of how to improve this\r\n> query?\r\n\r\nIt would be really useful to see the result of \"EXPLAIN (ANALYZE, BUFFERS) SELECT ...\"\r\nfor your query.\r\n\r\nBut essentially the answer to avoid temporary files is always \"increase work_mem\".\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 08:17:13 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Techniques to Avoid Temp Files"
},
{
"msg_contents": "On Thu, Jun 18, 2015 at 12:38 PM, Duane Murphy <[email protected]>\nwrote:\n\n> We are trying to improve performance by avoiding the temp file creation.\n>\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8068.125071\", size\n> 58988604\n> STATEMENT: SELECT iiid.installed_item__id, item_detail.id,\n> item_detail.model_id, item_detail.type\n> FROM installed_item__item_detail AS iiid\n> INNER JOIN item_detail ON iiid.item_detail__id = item_detail.id\n> INNER JOIN item ON (item.installed_item__id = iiid.installed_item__id )\n> INNER JOIN model ON (item.id = model.item__id AND model.id = $1)\n>\n> Our hypothesis is that the temp file creation is caused by the high row\n> count of the\n> installed_item__item_detail table.\n>\n> installed_item__item_detail: 72916824 rows (27 GB)\n> item_detail: 59212436 rows (40 GB)\n>\n> The other two tables, item and model, are temporary tables created during\n> this particular process. Unfortunately, I don't have those table sizes.\n>\n\nThose temporary tables aren't providing any output to the query, so their\nonly role must be to restrict the rows returned by the permanent tables.\nIf they restrict that by a lot, then it could do a nested loop over the\ntemp tables, doing indexed queries against the permanent tables assuming\nyou have the right indexes.\n\nTemporary tables do not get analyzed automatically, so you should probably\nrun ANALYZE on them explicitly before this big query.\n\n\n>\n> What additional information can I gather in order have a better\n> understanding of how to improve this query?\n>\n\nWhat indexes do the tables have? 
What is the output of EXPLAIN, or better\nyet EXPLAIN (ANALYZE,BUFFERS), for the query?\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 19 Jun 2015 11:26:15 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Techniques to Avoid Temp Files"
}
] |
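A practical footnote to the thread above (an editorial sketch, not part of the original exchange): with `log_temp_files` enabled, every spill produces a `LOG: temporary file: path "...", size NNN` line like the one Duane quoted, so a small script can total spill volume per backend. The regex below matches the log lines shown in this thread; adjust it if your `log_line_prefix` differs.

```python
import re
from collections import defaultdict

# Matches lines like:
#   LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp8068.125071", size 58988604
TEMP_RE = re.compile(r'temporary file: path "(?P<path>[^"]+)", size (?P<size>\d+)')

def total_temp_bytes(log_lines):
    """Sum temp-file sizes per backend PID (parsed from the file name)."""
    totals = defaultdict(int)
    for line in log_lines:
        m = TEMP_RE.search(line)
        if not m:
            continue
        # Temp file names look like pgsql_tmpPID.N, so the PID tags the backend.
        name = m.group("path").rsplit("/", 1)[-1]
        pid = name[len("pgsql_tmp"):].split(".", 1)[0]
        totals[pid] += int(m.group("size"))
    return dict(totals)
```

Feeding it the log line from the thread attributes the 58 MB spill to backend 8068; summing over a day of logs shows which statements to target first.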
[
{
    "msg_contents": "Hi Folks,\n\nThis is my first time posting here, so hopefully I manage to convey all \nthe information needed.\nWe have a simple query that just started giving us problems in \nproduction when the number of rows gets too large (>100k).\nThe issue seems to be that the planner wants to sort the rows using a \nsequential scan, rather than the index provided specifically for this \nquery. This isn't a problem with low numbers of rows, but eventually the \nquery outgrows work_mem and uses the disk, slowing down the query \ngreatly. I know the common answer is to increase work_mem... but since \nthis table's growth is unpredictable, that isn't a viable strategy.\nI've tried increasing shared_buffers and effective_cache_size, but that \ndoesn't appear to affect the plan chosen here. Setting \nrandom_page_cost=1.0 works, but I'm hoping for a more general solution \nthat doesn't require setting that locally each time I run the query. I \nguess my real question is whether or not there is any way to get the \nplanner to take into account the fact that it's going to need to do an \n'external merge', and that it is going to take a LONG time?\n\nTable and Index Schemas:\nCREATE TABLE events\n(\n id serial NOT NULL,\n name character varying(64),\n eventspy_id integer NOT NULL,\n camera_id integer NOT NULL,\n start_time timestamp without time zone NOT NULL,\n millisecond smallint NOT NULL,\n uid smallint NOT NULL,\n update_time timestamp without time zone NOT NULL DEFAULT now(),\n length integer NOT NULL,\n objects text NOT NULL,\n priority smallint NOT NULL,\n type character varying(45) NOT NULL DEFAULT 'alarm'::character varying,\n status event_status NOT NULL DEFAULT 'new'::event_status,\n confidence smallint NOT NULL DEFAULT 100::smallint,\n CONSTRAINT events_pkey PRIMARY KEY (id)\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX events_confidnce\n ON events\n USING btree\n (confidence);\n\nCREATE INDEX events_summary\n ON events\n USING btree\n (name COLLATE 
pg_catalog.\"default\", eventspy_id, camera_id, type \nCOLLATE pg_catalog.\"default\", status);\n\nQuery:\nSELECT name, type, eventspy_id, camera_id, status, COUNT(id), \nMAX(update_time), MIN(start_time), MAX(start_time) FROM events WHERE \nconfidence>=0 GROUP BY name, eventspy_id, camera_id, type, status;\n\nExplain Analyze outputs (links as requested):\nDefault plan: http://explain.depesz.com/s/ib3k\nForced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP\n\nSoftware/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM\nAll pgsql settings are at their defaults.\n\nThanks for any help you can provide,\n-Ian Pushee\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 10:34:01 -0400",
"msg_from": "Ian Pushee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query (planner insisting on using 'external merge' sort type)"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ian Pushee\r\nSent: Friday, June 19, 2015 10:34 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Slow query (planner insisting on using 'external merge' sort type)\r\n\r\nHi Folks,\r\n\r\nThis is my first time posting here, so hopefully I manage to convey all the information needed.\r\nWe have a simple query that just started giving us problems in production when the number of rows gets too large (>100k).\r\nThe issue seems to be that the planner wants to sort the rows using a sequential scan, rather than the index provided specifically for this query. This isn't a problem with low numbers of rows, but eventually the query outgrows work_mem and uses the disk, slowing does the query greatly. I know the common answer is to increase work_mem... but since this tables growth is unpredictable, that isn't a viable strategy.\r\nI've tried increasing shared_buffers and effective_cache_size, but that doesn't appear to effect the plan chosen here. Setting\r\nrandom_page_cost=1.0 works, but I'm hoping for a more general solution that doesn't require setting that locally each time I run the query. 
I guess my real question is wether or not there is any way to get the planner to take into account the fact that it's going to need to do an 'external merge', and that it is going to take a LONG time?\r\n\r\nTable and Index Schemas:\r\nCREATE TABLE events\r\n(\r\n id serial NOT NULL,\r\n name character varying(64),\r\n eventspy_id integer NOT NULL,\r\n camera_id integer NOT NULL,\r\n start_time timestamp without time zone NOT NULL,\r\n millisecond smallint NOT NULL,\r\n uid smallint NOT NULL,\r\n update_time timestamp without time zone NOT NULL DEFAULT now(),\r\n length integer NOT NULL,\r\n objects text NOT NULL,\r\n priority smallint NOT NULL,\r\n type character varying(45) NOT NULL DEFAULT 'alarm'::character varying,\r\n status event_status NOT NULL DEFAULT 'new'::event_status,\r\n confidence smallint NOT NULL DEFAULT 100::smallint,\r\n CONSTRAINT events_pkey PRIMARY KEY (id)\r\n)\r\nWITH (\r\n OIDS=FALSE\r\n);\r\n\r\nCREATE INDEX events_confidnce\r\n ON events\r\n USING btree\r\n (confidence);\r\n\r\nCREATE INDEX events_summary\r\n ON events\r\n USING btree\r\n (name COLLATE pg_catalog.\"default\", eventspy_id, camera_id, type COLLATE pg_catalog.\"default\", status);\r\n\r\nQuery:\r\nSELECT name, type, eventspy_id, camera_id, status, COUNT(id), MAX(update_time), MIN(start_time), MAX(start_time) FROM events WHERE \r\nconfidence>=0 GROUP BY name, eventspy_id, camera_id, type, status;\r\n\r\nExplain Analyze outputs (links as requested):\r\nDefault plan: http://explain.depesz.com/s/ib3k\r\nForced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP\r\n\r\nSoftware/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM\r\nAll pgsql settings are at their defaults.\r\n\r\nThanks for any help you can provide,\r\n-Ian Pushee\r\n\r\n---\r\n\r\nProbably events_confidnce index is not very selective, that's why optimizer prefers seq scan.\r\nI'd try to create an index on (name, eventspy_id, camera_id, type, status).\r\n\r\nAlso, the recent 9.2 is 9.2.13, you should 
upgrade.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 14:46:42 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
},
{
"msg_contents": "\n> Explain Analyze outputs (links as requested):\n> Default plan: http://explain.depesz.com/s/ib3k\n> Forced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP\n> \n> Software/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM\n> All pgsql settings are at their defaults.\n\nincrease work_mem. per session via set work_mem = 'xxxMB'; or in\npostgresql.conf, reload.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 16:47:09 +0200 (CEST)",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
},
{
"msg_contents": "\n\nOn 6/19/2015 10:46 AM, Igor Neyman wrote:\n>\n> Probably events_confidnce index is not very selective, that's why optimizer prefers seq scan.\n> I'd try to create an index on (name, eventspy_id, camera_id, type, status).\n>\n> Also, the recent 9.2 is 9.2.13, you should upgrade.\n>\n> Regards,\n> Igor Neyman\n\nHi Igor,\n\nI already have an index for (name, eventspy_id, camera_id, type, \nstatus)... that is the index being used (apparently silently) when I set \nrandom_page_cost=1.0.\n\nThanks,\n-Ian\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 10:53:43 -0400",
"msg_from": "Ian Pushee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
},
{
"msg_contents": "\n\nOn 6/19/2015 10:47 AM, Andreas Kretschmer wrote:\n>> Explain Analyze outputs (links as requested):\n>> Default plan: http://explain.depesz.com/s/ib3k\n>> Forced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP\n>>\n>> Software/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM\n>> All pgsql settings are at their defaults.\n> increase work_mem. per session via set work_mem = 'xxxMB'; or in\n> postgresql.conf, reload.\n>\n>\n\nHi Andreas,\n\nThe number of rows in the events table isn't constrained, so \nunfortunately it isn't feasible to set work_mem high enough to allow an \nin-memory sort. Forcing the planner to use the index works to produce a \nfast query, so I'm wondering if there is a more general way to getting \nthe planner to take into account that work_mem isn't big enough to fit \nthe query which will result in a MUCH more costly external merge.\n\nThanks,\n-Ian\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 10:57:53 -0400",
"msg_from": "Ian Pushee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ian Pushee\r\nSent: Friday, June 19, 2015 10:54 AM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Slow query (planner insisting on using 'external merge' sort type)\r\n\r\n\r\n\r\nOn 6/19/2015 10:46 AM, Igor Neyman wrote:\r\n>\r\n> Probably events_confidnce index is not very selective, that's why optimizer prefers seq scan.\r\n> I'd try to create an index on (name, eventspy_id, camera_id, type, status).\r\n>\r\n> Also, the recent 9.2 is 9.2.13, you should upgrade.\r\n>\r\n> Regards,\r\n> Igor Neyman\r\n\r\nHi Igor,\r\n\r\nI already have an index for (name, eventspy_id, camera_id, type, status)... that is the index being used (apparently silently) when I set random_page_cost=1.0.\r\n\r\nThanks,\r\n-Ian\r\n\r\n\r\n--\r\n\r\nWell, having 8GB Ram on the machine you probably should not be using default config parameters.\r\nDepending on what else is this machine is being used for, and depending on queries you are running, you should definitely modify Postgres config.\r\nIf this machine is designated database server, I'd start with the following parameters modified from default values:\r\n\r\nshared_buffers = 1024MB\r\ntemp_buffers = 8MB\r\nwork_mem = 64MB\t\t\t\t\r\neffective_cache_size = 1024MB\r\nrandom_page_cost = 2.5\r\ncpu_tuple_cost = 0.03\r\ncpu_index_tuple_cost = 0.05\r\n\r\nand see how it goes.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 15:06:37 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Igor Neyman\r\nSent: Friday, June 19, 2015 11:07 AM\r\nTo: Ian Pushee; [email protected]\r\nSubject: Re: [PERFORM] Slow query (planner insisting on using 'external merge' sort type)\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ian Pushee\r\nSent: Friday, June 19, 2015 10:54 AM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Slow query (planner insisting on using 'external merge' sort type)\r\n\r\n\r\n\r\nOn 6/19/2015 10:46 AM, Igor Neyman wrote:\r\n>\r\n> Probably events_confidnce index is not very selective, that's why optimizer prefers seq scan.\r\n> I'd try to create an index on (name, eventspy_id, camera_id, type, status).\r\n>\r\n> Also, the recent 9.2 is 9.2.13, you should upgrade.\r\n>\r\n> Regards,\r\n> Igor Neyman\r\n\r\nHi Igor,\r\n\r\nI already have an index for (name, eventspy_id, camera_id, type, status)... 
that is the index being used (apparently silently) when I set random_page_cost=1.0.\r\n\r\nThanks,\r\n-Ian\r\n\r\n\r\n--\r\n\r\nWell, having 8GB Ram on the machine you probably should not be using default config parameters.\r\nDepending on what else is this machine is being used for, and depending on queries you are running, you should definitely modify Postgres config.\r\nIf this machine is designated database server, I'd start with the following parameters modified from default values:\r\n\r\nshared_buffers = 1024MB\r\ntemp_buffers = 8MB\r\nwork_mem = 64MB\t\t\t\t\r\neffective_cache_size = 1024MB\r\nrandom_page_cost = 2.5\r\ncpu_tuple_cost = 0.03\r\ncpu_index_tuple_cost = 0.05\r\n\r\nand see how it goes.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n---\r\n\r\nOops, should be at least:\r\n\r\neffective_cache_size = 5120MB\r\n\r\non dedicated server.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jun 2015 15:18:25 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
},
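Igor's suggested values above track common rules of thumb rather than anything query-specific: shared_buffers near a quarter of RAM (often capped lower on Windows) and effective_cache_size around 50-75% of RAM (his corrected 5120MB is about 62% of 8GB). As a hedged sketch of that arithmetic only — these are starting points, not official recommendations — it looks like:

```python
def suggest_settings(ram_mb):
    """Rough starting-point tuning values for a dedicated PostgreSQL server.

    Rules of thumb only, not official defaults: shared_buffers near 25% of RAM,
    effective_cache_size near 50-75%. work_mem is per sort/hash node, per
    backend, so it must stay small relative to RAM / max_connections.
    """
    return {
        "shared_buffers_mb": ram_mb // 4,
        "effective_cache_size_mb": ram_mb * 5 // 8,  # ~62%, between 50% and 75%
        "work_mem_mb": 64,
    }
```

For the 8GB machine in this thread, `suggest_settings(8192)` yields effective_cache_size = 5120MB, matching Igor's correction.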
{
"msg_contents": "On 6/19/15 9:57 AM, Ian Pushee wrote:\n>\n>\n> On 6/19/2015 10:47 AM, Andreas Kretschmer wrote:\n>>> Explain Analyze outputs (links as requested):\n>>> Default plan: http://explain.depesz.com/s/ib3k\n>>> Forced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP\n>>>\n>>> Software/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM\n>>> All pgsql settings are at their defaults.\n>> increase work_mem. per session via set work_mem = 'xxxMB'; or in\n>> postgresql.conf, reload.\n>>\n>>\n>\n> Hi Andreas,\n>\n> The number of rows in the events table isn't constrained, so\n> unfortunately it isn't feasible to set work_mem high enough to allow an\n> in-memory sort. Forcing the planner to use the index works to produce a\n> fast query, so I'm wondering if there is a more general way to getting\n> the planner to take into account that work_mem isn't big enough to fit\n> the query which will result in a MUCH more costly external merge.\n\nWhat Andreas is saying is the reason the sort is so expensive is because \nit spilled to disk. If you don't have enough memory to do the sort \nin-memory, then you probably don't have enough memory to buffer the \ntable either, which means the index scan is going to be a LOT more \nexpensive than a sort.\n\nThat said, the better your IO system is the lower you need to set \nrandom_page_cost. With a good raid setup 2.0 is a good starting point, \nand I've run as low as 1.1. I've never run a system on all SSD, but I've \nheard others recommend setting it as low as 1.0 on an all SSD setup.\n\nIt's also worth noting that there's some consensus that the optimizer is \ngenerally too eager to switch from an index scan to a seqscan.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 Jun 2015 16:05:13 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
},
{
"msg_contents": "On 24/06/15 09:05, Jim Nasby wrote:\n> On 6/19/15 9:57 AM, Ian Pushee wrote:\n>>\n>>\n>> On 6/19/2015 10:47 AM, Andreas Kretschmer wrote:\n>>>> Explain Analyze outputs (links as requested):\n>>>> Default plan: http://explain.depesz.com/s/ib3k\n>>>> Forced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP\n>>>>\n>>>> Software/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM\n>>>> All pgsql settings are at their defaults.\n>>> increase work_mem. per session via set work_mem = 'xxxMB'; or in\n>>> postgresql.conf, reload.\n>>>\n>>>\n>>\n>> Hi Andreas,\n>>\n>> The number of rows in the events table isn't constrained, so\n>> unfortunately it isn't feasible to set work_mem high enough to allow an\n>> in-memory sort. Forcing the planner to use the index works to produce a\n>> fast query, so I'm wondering if there is a more general way to getting\n>> the planner to take into account that work_mem isn't big enough to fit\n>> the query which will result in a MUCH more costly external merge.\n> \n> What Andreas is saying is the reason the sort is so expensive is because\n> it spilled to disk. If you don't have enough memory to do the sort\n> in-memory, then you probably don't have enough memory to buffer the\n> table either, which means the index scan is going to be a LOT more\n> expensive than a sort.\n> \n> That said, the better your IO system is the lower you need to set\n> random_page_cost. With a good raid setup 2.0 is a good starting point,\n> and I've run as low as 1.1. I've never run a system on all SSD, but I've\n> heard others recommend setting it as low as 1.0 on an all SSD setup.\n> \n> It's also worth noting that there's some consensus that the optimizer is\n> generally too eager to switch from an index scan to a seqscan.\n\n\nMind you, this eagerness could be caused by the OP having\neffective_cache_size set to the default. 
This should be changed (set to\na few GB...)!\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 Jun 2015 14:01:57 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query (planner insisting on using 'external merge' sort\n type)"
}
] |
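For readers following the depesz links above: EXPLAIN ANALYZE reports how each sort was actually performed, showing `Sort Method: external merge  Disk: NNNkB` for a spilled sort versus `quicksort  Memory: NNNkB` for an in-memory one. A small checker like the sketch below (an editorial addition, not from the thread) can flag plans that spilled:

```python
import re

# Matches EXPLAIN ANALYZE lines such as:
#   Sort Method: external merge  Disk: 65344kB
#   Sort Method: quicksort  Memory: 25kB
SORT_RE = re.compile(r"Sort Method:\s*(?P<method>[\w ]+?)\s+(?:Disk|Memory):\s*(?P<kb>\d+)kB")

def spilled_sorts(explain_text):
    """Return (method, kB) for every sort node that spilled to disk."""
    out = []
    for m in SORT_RE.finditer(explain_text):
        if "external" in m.group("method"):
            out.append((m.group("method").strip(), int(m.group("kb"))))
    return out
```

Running this against saved plan text makes it easy to spot when a query crossed the work_mem threshold between runs.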
[
{
    "msg_contents": "Guys can anyone please explain or point me to a link where i can understand this output for pgbouncer. What does each column of this table mean?\n\n\npgbouncer=# show mem;\n\n name | size | used | free | memtotal\n--------------+------+------+------+----------\n user_cache | 184 | 12 | 77 | 16376\n db_cache | 160 | 2 | 100 | 16320\n pool_cache | 408 | 4 | 46 | 20400\n server_cache | 360 | 121 | 279 | 144000\n client_cache | 360 | 1309 | 291 | 576000\n iobuf_cache | 2064 | 3 | 797 | 1651200\n\n\nThanks\nPrabhjot\n",
"msg_date": "Wed, 24 Jun 2015 18:09:02 +0000",
"msg_from": "\"Sheena, Prabhjot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbouncer issue"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\n> Guys can anyone please explain or point me to a link where i \n> can understand this output for pgbouncer. What does each column of this table mean?\n> \n> pgbouncer=# show mem;\n\n(Please do not post to more than one mailing list at a time). It does appear to be \nundocumented. Your best bet is to ask on the pgbouncer mailing list:\n\nhttp://lists.pgfoundry.org/mailman/listinfo/pgbouncer-general\n\nYou may want to raise this as an official doc issue as well:\n\nhttps://github.com/pgbouncer/pgbouncer/issues\n\nThat said, if it's undocumented and nobody else has complained, it's \nprobably not too important as far as day to day pgbouncer use. :)\n\n- -- \nGreg Sabino Mullane [email protected]\nEnd Point Corporation http://www.endpoint.com/\nPGP Key: 0x14964AC8 201507051040\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAlWZQwsACgkQvJuQZxSWSsjLBwCfao3lKsN5IZvwKISTkb9FabBO\n/6kAoLHOSKplGOM+K0L5JkxL5ZX+bygH\n=R7lS\n-----END PGP SIGNATURE-----\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 5 Jul 2015 14:47:20 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbouncer issue"
}
] |
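Although the columns went undocumented in the thread, the numbers in the original post are internally consistent with one reading (an inference from the data shown, not official pgbouncer documentation): `size` is bytes per slot in that cache, `used` and `free` are allocated slots, and `memtotal = size * (used + free)` — i.e. each line describes a fixed-size slab. A quick check against every row quoted above:

```python
# Rows copied from the "show mem" output quoted in the thread:
# (name, size, used, free, memtotal)
rows = [
    ("user_cache",   184,   12,  77,   16376),
    ("db_cache",     160,    2, 100,   16320),
    ("pool_cache",   408,    4,  46,   20400),
    ("server_cache", 360,  121, 279,  144000),
    ("client_cache", 360, 1309, 291,  576000),
    ("iobuf_cache", 2064,    3, 797, 1651200),
]

# memtotal = size * (used + free) holds for every row, supporting the
# slab-allocator reading of the output.
for name, size, used, free, memtotal in rows:
    assert size * (used + free) == memtotal, name
```

For example, user_cache: 184 bytes per slot times (12 + 77) slots is exactly 16376.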
[
{
"msg_contents": "All:\n\nDoes anyone have python code which parses pgbench -r output for\nstatistical analysis? I dug through pgbench-tools, but its reading of\npgbench results isn't easily separable from the rest.\n\nI'll write a lib, but if someone already has one, it would save me some\ntime in developing a new public benchmark.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 27 Jun 2015 12:29:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does anyone have python code which digests pgbench -r output?"
}
] |
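For the record, a minimal parser for the question above is not much code. The sketch below assumes the 9.x-era `pgbench -r` report format, where a `statement latencies in milliseconds:` header is followed by one line per statement with the average latency in the first column; verify the layout against your pgbench version before relying on it.

```python
def parse_pgbench_r(text):
    """Parse per-statement latencies from `pgbench -r` output.

    Returns a list of (latency_ms, statement) pairs. Assumes each report line
    starts with a float latency followed by the statement text.
    """
    results = []
    in_report = False
    for line in text.splitlines():
        if "statement latencies in milliseconds" in line:
            in_report = True
            continue
        if not in_report:
            continue
        parts = line.strip().split(None, 1)
        if len(parts) == 2:
            try:
                results.append((float(parts[0]), parts[1]))
            except ValueError:
                in_report = False  # past the end of the report block
    return results
```

From there, statistical analysis (means, percentiles across runs) is straightforward with the standard `statistics` module.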
[
{
    "msg_contents": "Hello all,\nThis is my very first message to the Postgresql community, and I really hope\nyou can help me solve the trouble I'm facing.\n\nI've an 80 core server (multithread) with close to 500GB RAM.\n\nMy configuration is:\nMaxConn: 1500 (was 850)\nShared buffers: 188Gb\nwork_mem: 110Mb (was 220Mb)\nmaintenance_work_mem: 256Mb\neffective_cache_size: 340Gb\n\nThe database is running under postgresql 9.3.9 on an Ubuntu Server 14.04 LTS\n(build 3.13.0-55-generic)\n\nFor the past two days, I've been experiencing that, randomly, the connections\nrise up till they reach max connections, and the load average of the server\ngoes around 300~400, making every command issued on the server take\nforever. When this happens, ram is relatively low (70Gb used), cores\nactivity is lower than usual and sometimes swap happens (I've swappiness\nconfigured to 10%)\n\nI've been trying to find the cause of this server underperformance, even\nlogging all queries in debug mode, but nothing strange found so far.\n\nI really don't know what my next step to isolate the problem should be,\nand that's why I write to you guys. Have you ever seen this behaviour\nbefore?\nCould you kindly help me by suggesting any steps to follow?\n\nThanks and best regards!\nE.v.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Sudden-connection-and-load-average-spikes-with-postgresql-9-3-tp5855895.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Jun 2015 07:52:16 -0700 (MST)",
"msg_from": "eudald_v <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sudden connection and load average spikes with postgresql 9.3"
},
{
"msg_contents": "eudald_v <[email protected]> writes:\n> This is my very first message to the Postgresql community, and I really hope\n> you can help me solve the trouble I'm facing.\n\n> I've an 80 core server (multithread) with close to 500GB RAM.\n\n> My configuration is:\n> MaxConn: 1500 (was 850)\n> Shared buffers: 188Gb\n> work_mem: 110Mb (was 220Mb)\n> maintenance_work_mem: 256Mb\n> effective_cache_size: 340Gb\n\n> The database is running under postgresql 9.3.9 on an Ubuntu Server 14.04 LTS\n> (build 3.13.0-55-generic)\n\n> Two days from now, I've been experiencing that, randomly, the connections\n> rise up till they reach max connections, and the load average of the server\n> goes arround 300~400, making every command issued on the server take\n> forever. When this happens, ram is relatively low (70Gb used), cores\n> activity is lower than usual and sometimes swap happens (I've swappiness\n> configured to 10%)\n\nYou haven't described why you would suddenly be getting more connections,\nbut if that's just an expected byproduct of transactions taking too long\nto finish, then a plausible theory is that something is acquiring a strong\nlock on some heavily-used table, causing other transactions to back up\nbehind it. The pg_stat_activity and pg_locks views may help identify\nthe culprit.\n\nIn any case, a good mitigation plan would be to back off your\nmax_connections setting, and instead use a connection pooler to manage\nlarge numbers of clients. The usual rule of thumb is that you don't want\nnumber of backends much more than the number of cores. 1500 active\nsessions is WAY more than your hardware can support, and even approaching\nthat will exacerbate whatever the root performance problem is.\n\nI'm also a bit worried by your focus on CPU capacity without any mention\nof the disk subsystem. It may well be that the actual bottleneck is disk\nthroughput, and that you can't even sustain 80 sessions unless they're\nread-mostly. 
It would be a good idea to watch the i/o stats the next\ntime you see one of these high-load episodes.\n\nIt might also be a good idea to decrease the shared_buffers setting and\nrely more on OS-level disk caching. Some have found that very large\nshared buffer pools tend to increase contention without much payback.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Jun 2015 11:37:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden connection and load average spikes with postgresql 9.3"
},
{
"msg_contents": "Dear Tom,\nThanks for your fast response.\n\nFirst of all, yes, queries seem to take more time to process and they are\nqueued up (you can even see inserts with status waiting on top/htop).\n\nI didn't know about that connection tip, and I will absolutely find a moment\nto add a connection pooler to reduce the amount of active/idle connections shown at\nthe database.\n\nI'm monitoring the pg_stat_activity table and when connections rise there's\nno query being processed that could block a table and cause the other\nqueries to stack up waiting. I'm checking the pg_locks view too, with results yet to be\ndetermined.\n\nAt least for the moment, it has spiked up once again (it's happening with\nmore frequency). I was able to catch this log from I/O. These are 24 300Gb\nSAS disks on Raid 10 with hot spare.\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 42,00 136,00 96,00 136 96\navg-cpu: %user %nice %system %iowait %steal %idle\n 14,82 0,00 52,75 0,61 0,00 31,82\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 598,00 1764,00 15188,00 1764 15188\navg-cpu: %user %nice %system %iowait %steal %idle\n 17,79 0,00 82,19 0,01 0,00 0,01\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 230,00 28,00 6172,00 28 6172\navg-cpu: %user %nice %system %iowait %steal %idle\n 26,79 0,00 72,49 0,12 0,00 0,60\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 200,00 96,00 8364,00 96 8364\navg-cpu: %user %nice %system %iowait %steal %idle\n 17,88 0,00 81,95 0,01 0,00 0,15\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 87,00 48,00 4164,00 48 4164\navg-cpu: %user %nice %system %iowait %steal %idle\n 26,09 0,00 66,86 1,69 0,00 5,36\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 563,00 1932,00 9004,00 1932 9004\navg-cpu: %user %nice %system %iowait %steal %idle\n 19,30 0,00 80,38 0,10 0,00 0,21\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 159,00 348,00 5292,00 348 5292\navg-cpu: %user %nice %system %iowait %steal %idle\n 28,15 0,00 68,03 0,19 0,00 3,63\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 250,00 196,00 7388,00 196 7388\navg-cpu: %user %nice %system %iowait %steal %idle\n 11,03 0,00 38,68 0,14 0,00 50,16\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 294,00 1508,00 2916,00 1508 2916\n\nHope it helps. If you feel we should reduce shared_buffers, what would be a\nnice value?\n\nThanks!\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Sudden-connection-and-load-average-spikes-with-postgresql-9-3-tp5855895p5855900.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Jun 2015 09:08:56 -0700 (MST)",
"msg_from": "eudald_v <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden connection and load average spikes with postgresql 9.3"
},
{
"msg_contents": "On 06/30/2015 07:52 AM, eudald_v wrote:\n> For two days now, I've been experiencing that, randomly, the connections\n> rise up till they reach max connections, and the load average of the server\n> goes around 300~400, making every command issued on the server take\n> forever. When this happens, ram is relatively low (70Gb used), cores\n> activity is lower than usual and sometimes swap happens (I've swappiness\n> configured to 10%)\n\nAs Tom said, the most likely reason for this is application behavior and\nblocking locks. Try some of these queries on our scripts page:\n\nhttps://github.com/pgexperts/pgx_scripts/tree/master/locks\n\nHowever, I have seen some other things which cause these kinds of stalls:\n\n* runaway connection generation by the application, due to either a\nprogramming bug or an irresponsible web crawler (see\nhttps://www.pgexperts.com/blog/quinn_weaver/)\n\n* issues evicting blocks from shared_buffers: what is your\nshared_buffers set to? How large is your database?\n\n* Checkpoint stalls: what FS are you on? What are your transaction log\nsettings for PostgreSQL?\n\n* Issues with the software/hardware stack around your storage, causing\ntotal IO stalls periodically. What does IO throughput look like\nbefore/during/after the stalls?\n\nThe last was the cause the last time I dealt with a situation like\nyours; it turned out the issue was bad RAID card firmware where the card\nwould lock up whenever the write-through buffer got too much pressure.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 30 Jun 2015 15:56:48 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden connection and load average spikes with postgresql\n 9.3"
},
{
"msg_contents": "Dear Josh,\nI'm sorry I didn't write before, but we have been very busy with this issue\nand, you know, when something goes wrong, the apocalypse comes with it.\n\nI've been working on everything you suggested.\n\nI used your tables and script and I can give you a sample of it on\nlocked_query_start\n 2015-07-02 14:49:45.972129+02 | 15314 | | 4001 | \n| \"TABLE_Z\" | tuple | ExclusiveLock | 24018:24 | relation \n| ShareUpdateExclusiveLock | | | YYY.YYY.YYY.YYY/32\n| 2015-07-02 14:49:26.635599+02 | 2015-07-02 14:49:26.635599+02 | 2015-07-02\n14:49:26.635601+02 | INSERT INTO \"TABLE_X\" (\"field1\", \"field2\", \"field3\",\n\"field4\", \"field5\", \"field6\", \"field7\") VALUES (22359509, 92, 5, 88713,\n'XXX.XXX.XXX.XXX', 199, 10) | | | 2015-07-02\n14:11:45.368709+02 | 2015-07-02 14:11:45.368709+02 | active |\n2015-07-02 14:11:45.36871+02 | autovacuum: VACUUM ANALYZE public.TABLE_Z\n\n 2015-07-02 14:49:45.972129+02 | 15857 | | 4001 | \n| \"TABLE_Z\" | tuple | ExclusiveLock | 24018:24 | relation \n| ShareUpdateExclusiveLock | | | YYY.YYY.YYY.YYY/32\n| 2015-07-02 14:49:22.79166+02 | 2015-07-02 14:49:22.79166+02 | 2015-07-02\n14:49:22.791665+02 | INSERT INTO \"TABLE_X\" (\"field1\", \"field2\", \"field3\",\n\"field4\", \"field5\", \"field6\", \"field7\") VALUES (14515978, 92, 5, 88713,\n'XXX.XXX.XXX.XXX', 199, 10) | | | 2015-07-02\n14:11:45.368709+02 | 2015-07-02 14:11:45.368709+02 | active |\n2015-07-02 14:11:45.36871+02 | autovacuum: VACUUM ANALYZE public.TABLE_Z\n\n2015-07-02 14:49:45.972129+02 | 15314 | | 14712 | \n| \"TABLE_Z\" | tuple | ExclusiveLock | 24018:24 | relation \n| AccessShareLock | | |\n1YYY.YYY.YYY.YYY/32 | 2015-07-02 14:49:26.635599+02 | 2015-07-02\n14:49:26.635599+02 | 2015-07-02 14:49:26.635601+02 | INSERT INTO \"TABLE_X\"\n(\"field1\", \"field2\", \"field3\", \"field4\", \"field5\", \"field6\", \"field\") VALUES\n(22359509, 92, 5, 88713, 'XXX.XXX.XXX.XXX', 199, 10) | |\n185.10.253.72/32 | 2015-07-02 14:48:48.841375+02 | 
2015-07-02\n14:48:48.841375+02 | active | 2015-07-02 14:48:48.841384+02 | INSERT\nINTO \"TABLE_Y\" (\"email_id\", \"sendout_id\", \"feed_id\", \"isp_id\") VALUES\n(46015879, 75471, 419, 0)\n\nAll that was recorded during a spike. From this log I have to point out\nsomething:\nTables TABLE_X and TABLE_Y both have a TRIGGER that does an INSERT into\nTABLE_Z.\nAs you can see, TABLE_Z was being VACUUM ANALYZED. I wonder if TRIGGERS and\nVACUUM work well together, just to check another perspective.\n\nWe also have carefully looked at our scripts and we have performed some code\noptimizations (like closing db connections earlier), but the spikes continue\nto happen.\n\nFS is ext4 and I don't know how I can check the transaction log\nconfiguration.\n\nThis is how IO looks like before and after any problem happens:\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 357,00 7468,00 8840,00 7468 8840\navg-cpu: %user %nice %system %iowait %steal %idle\n 5,02 0,00 2,44 0,06 0,00 92,47\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 471,00 7032,00 13760,00 7032 13760\navg-cpu: %user %nice %system %iowait %steal %idle\n 5,14 0,00 2,92 0,03 0,00 91,92\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 376,00 7192,00 8048,00 7192 8048\navg-cpu: %user %nice %system %iowait %steal %idle\n 4,77 0,00 2,57 0,03 0,00 92,63\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 304,00 7280,00 8252,00 7280 8252\n\nAnd this is how it looks when the spike happens:\nhttp://pastebin.com/2hAYuDZ5\n\nHope it helps in determining what's happening.\n\nThanks for all your efforts and collaboration!\nEudald\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Sudden-connection-and-load-average-spikes-with-postgresql-9-3-tp5855895p5856298.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 2 Jul 2015 08:41:20 -0700 (MST)",
"msg_from": "eudald_v <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden connection and load average spikes with postgresql 9.3"
},
{
"msg_contents": "On 07/02/2015 08:41 AM, eudald_v wrote:\n> All that was recorded during a spike. From this log I have to point\n> something:\n> Tables TABLE_X and TABLE_Y have both a TRIGGER that does an INSERT to\n> TABLE_Z\n> As you can see, TABLE_Z was being VACUUM ANALYZED. I wonder if TRIGGERS and\n> VACUUM work well together, just to check another perspective.\n\nWell, it's not triggers in particular, but vacuum does create some\ncontention and possible sluggishness. Questions:\n\n* what kind of writes do the triggers do?\n* can they conflict between sessions? that is, are different writes on X\nand/or Y possibly overwriting the same rows on Z?\n* is that autovacuum a regular autovacuum, or is it \"to prevent wraparound\"?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 02 Jul 2015 12:31:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden connection and load average spikes with postgresql\n 9.3"
},
{
"msg_contents": "On 07/02/2015 08:41 AM, eudald_v wrote:\n> And this is how it looks like when the spike happens:\n> http://pastebin.com/2hAYuDZ5\n\nHmm, those incredibly high system % indicate that there's something\nwrong with your system. If you're not using software RAID or ZFS, you\nshould never see that.\n\nI think you have a driver, kernel, Linux memory management, or IO stack\nissue.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 02 Jul 2015 12:33:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden connection and load average spikes with postgresql\n 9.3"
},
{
"msg_contents": "Josh Berkus <[email protected]> wrote:\n> On 07/02/2015 08:41 AM, eudald_v wrote:\n\n>> And this is how it looks like when the spike happens:\n>> [system CPU staying over 90%]\n\n> I think you have a driver, kernel, Linux memory management, or IO\n> stack issue.\n\nIn my experience this is usually caused by failure to disable\ntransparent huge page support. The larger shared_buffers is\nconfigured, the bigger the problem.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 2 Jul 2015 21:13:47 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden connection and load average spikes with\n postgresql 9.3"
},
{
"msg_contents": "On Tue, Jun 30, 2015 at 8:52 AM, eudald_v <[email protected]> wrote:\n> Hello all,\n> This is my very first message to the Postgresql community, and I really hope\n> you can help me solve the trouble I'm facing.\n>\n> I've an 80 core server (multithread) with close to 500GB RAM.\n>\n> My configuration is:\n> MaxConn: 1500 (was 850)\n\nDrop this to 80 to 160, and use pgbouncer for pooling. pgpool is\nnice, but it's a lot harder to configure. pgbouncer takes literally a\ncouple of minutes and you're up and running.\n\n> Shared buffers: 188Gb\n\nI have yet to see shared_buffers this big be a big help. I'd drop it\ndown to 1 to 16G or so but that's just me.\n\n> work_mem: 110Mb (was 220Mb)\n\n110MB * 1500 connections * 1, 2, or 3 sorts per query == disaster. Drop\nthis down as well. If you have ONE query that needs a lot, create a\nuser for that query, alter that user for bigger work_mem and then use\nit only for that big query.\n\n> maintenance_work_mem: 256Mb\n> effective_cache_size: 340Gb\n>\n> The database is running under postgresql 9.3.9 on an Ubuntu Server 14.04 LTS\n> (build 3.13.0-55-generic)\n\nGood kernel.\n\nOK so you've got 80 cores. Have you checked zone_reclaim_mode? Under\nno circumstances should that ever be turned on on a database server.\n\nIf \"sysctl -a|grep zone\" returns:\nvm.zone_reclaim_mode = 1\n\nthen use /etc/sysctl.conf to set it to 0. It's a silent killer and it\nwill make your machine run REALLY slow for periods of 10 to 30 minutes\nfor no apparent reason.\n\nBut first and foremost GET A POOLER in the loop NOW. Not having one is\nmaking your machine extremely vulnerable to overload, and making it\nimpossible for you to manage it.\n",
"msg_date": "Thu, 2 Jul 2015 17:04:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden connection and load average spikes with\n postgresql 9.3"
},
{
"msg_contents": "Hello guys!\n\nI finally got rid of it.\nIt looks like, in the end, it was all due to transparent hugepages.\n\nI disabled them and the cpu spikes disappeared. I am sorry because it's something\nI usually disable on postgresql servers, but I forgot to do so on this one\nand never thought about it.\n\nThanks a lot for all your helpful messages!\n\nEudald\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Sudden-connection-and-load-average-spikes-with-postgresql-9-3-tp5855895p5856914.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 7 Jul 2015 04:29:23 -0700 (MST)",
"msg_from": "eudald_v <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden connection and load average spikes with postgresql 9.3"
},
{
"msg_contents": "Note that if you still have the settings you showed in your original\npost you're just moving the goal posts a few feet further back. Any\nheavy load can still trigger this kind of behaviour.\n\nOn Tue, Jul 7, 2015 at 5:29 AM, eudald_v <[email protected]> wrote:\n> Hello guys!\n>\n> I finally got rid of it.\n> It looks that at the end it was all due to transparent_hugepages values.\n>\n> I disabled them and cpu spikes disappeared. I am sorry cause it's something\n> I usually disable on postgresql servers, but I forgot to do so on this one\n> and never thought about it.\n>\n> Thanks a lot for all your helpful messages!\n>\n> Eudald\n>\n>\n>\n> --\n> View this message in context: http://postgresql.nabble.com/Sudden-connection-and-load-average-spikes-with-postgresql-9-3-tp5855895p5856914.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Tue, 7 Jul 2015 11:17:37 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden connection and load average spikes with\n postgresql 9.3"
}
] |
[
{
"msg_contents": "We're buying a new server in the near future to replace an aging system.\nI'd appreciate advice on the best SSD devices and RAID controller cards\navailable today.\n\nThe database is about 750 GB. This is a \"warehouse\" server. We load\nsupplier catalogs throughout a typical work week, then on the weekend\n(after Q/A), integrate the new supplier catalogs into our customer-visible\n\"store\", which is then copied to a production server where customers see\nit. So the load is mostly data loading, and essentially no OLTP. Typically\nthere are fewer than a dozen connections to Postgres.\n\nLinux 2.6.32\nPostgres 9.3\nHardware:\n 2 x INTEL WESTMERE 4C XEON 2.40GHZ\n 12GB DDR3 ECC 1333MHz\n 3WARE 9650SE-12ML with BBU\n 12 x 1TB Hitachi 7200RPM SATA disks\nRAID 1 (2 disks)\n Linux partition\n Swap partition\n pg_xlog partition\nRAID 10 (8 disks)\n Postgres database partition\n\nWe get 5000-7000 TPS from pgbench on this system.\n\nThe new system will have at least as many CPUs, and probably a lot more\nmemory (196 GB). The database hasn't reached 1TB yet, but we'd like room to\ngrow, so we'd like a 2TB file system for Postgres. We'll start with the\nlatest versions of Linux and Postgres.\n\nIntel's products have always received good reports in this forum. Is that\nstill the best recommendation? Or are there good alternatives that are\nprice competitive?\n\nWhat about a RAID controller? Are RAID controllers even available for\nPCI-Express SSD drives, or do we have to stick with SATA if we need a\nbattery-backed RAID controller? Or is software RAID sufficient for SSD\ndrives?\n\nAre spinning disks still a good choice for the pg_xlog partition and OS? Is\nthere any reason to get spinning disks at all, or is it better/simpler to\njust put everything on SSD drives?\n\nThanks in advance for your advice!\n\nCraig",
"msg_date": "Wed, 1 Jul 2015 16:06:57 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On Thursday 02 July 2015 at 01:06:57, Craig James <[email protected] \n<mailto:[email protected]>> wrote:\nWe're buying a new server in the near future to replace an aging system. I'd \nappreciate advice on the best SSD devices and RAID controller cards available \ntoday. \nThe database is about 750 GB. This is a \"warehouse\" server. We load supplier \ncatalogs throughout a typical work week, then on the weekend (after Q/A), \nintegrate the new supplier catalogs into our customer-visible \"store\", which is \nthen copied to a production server where customers see it. So the load is \nmostly data loading, and essentially no OLTP. Typically there are fewer than a \ndozen connections to Postgres. \nLinux 2.6.32\nPostgres 9.3\nHardware:\n 2 x INTEL WESTMERE 4C XEON 2.40GHZ\n 12GB DDR3 ECC 1333MHz\n 3WARE 9650SE-12ML with BBU\n 12 x 1TB Hitachi 7200RPM SATA disks\nRAID 1 (2 disks)\n Linux partition\n Swap partition\n pg_xlog partition\nRAID 10 (8 disks)\n Postgres database partition\n \nWe get 5000-7000 TPS from pgbench on this system.\n \nThe new system will have at least as many CPUs, and probably a lot more memory \n(196 GB). The database hasn't reached 1TB yet, but we'd like room to grow, so \nwe'd like a 2TB file system for Postgres. We'll start with the latest versions \nof Linux and Postgres.\n \nIntel's products have always received good reports in this forum. Is that \nstill the best recommendation? Or are there good alternatives that are price \ncompetitive?\n \nWhat about a RAID controller? Are RAID controllers even available for \nPCI-Express SSD drives, or do we have to stick with SATA if we need a \nbattery-backed RAID controller? Or is software RAID sufficient for SSD drives?\n \nAre spinning disks still a good choice for the pg_xlog partition and OS? 
Is \nthere any reason to get spinning disks at all, or is it better/simpler to just \nput everything on SSD drives?\n \nThanks in advance for your advice!\n\n\n\n\n \n \nDepends on your SSD drives, but today's enterprise-grade SSD disks can handle \npg_xlog just fine. So I'd go full SSD, unless you have many BLOBs in \npg_largeobject; then move those to a separate tablespace with \n\"archive-grade\" disks (spinning disks).\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>",
"msg_date": "Thu, 2 Jul 2015 01:56:19 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On Wed, Jul 1, 2015 at 5:06 PM, Craig James <[email protected]> wrote:\n> We're buying a new server in the near future to replace an aging system. I'd\n> appreciate advice on the best SSD devices and RAID controller cards\n> available today.\n>\n> The database is about 750 GB. This is a \"warehouse\" server. We load supplier\n> catalogs throughout a typical work week, then on the weekend (after Q/A),\n> integrate the new supplier catalogs into our customer-visible \"store\", which\n> is then copied to a production server where customers see it. So the load is\n> mostly data loading, and essentially no OLTP. Typically there are fewer than\n> a dozen connections to Postgres.\n>\n> Linux 2.6.32\n\nUpgrade to an OS with a later kernel, 3.11 at the lowest. 2.6.32 is\nbroken from an IO perspective. It writes 2 to 4x more data than needed\nfor normal operation.\n\n> Postgres 9.3\n> Hardware:\n> 2 x INTEL WESTMERE 4C XEON 2.40GHZ\n> 12GB DDR3 ECC 1333MHz\n> 3WARE 9650SE-12ML with BBU\n> 12 x 1TB Hitachi 7200RPM SATA disks\n> RAID 1 (2 disks)\n> Linux partition\n> Swap partition\n> pg_xlog partition\n> RAID 10 (8 disks)\n> Postgres database partition\n>\n> We get 5000-7000 TPS from pgbench on this system.\n>\n> The new system will have at least as many CPUs, and probably a lot more\n> memory (196 GB). The database hasn't reached 1TB yet, but we'd like room to\n> grow, so we'd like a 2TB file system for Postgres. We'll start with the\n> latest versions of Linux and Postgres.\n\nOnce your db is bigger than memory, the size of the memory isn't as\nimportant as the speed of the IO. Being able to read and write huge\nswathes of data becomes more important than memory size at that point.\nBeing able to read 100MB/s versus being able to read 1,000MB/s is the\ndifference between 10 minute queries and 10 hour queries on a\nreporting box. For sequential throughput, i.e. 
loading and retrieving\nwith only one or two clients connected, you can throw more and more\nspinners at it. If you're gonna have enough clients connected to make\nthe array go from sequential to random access, then you want to try\nand put SSDs in there if possible, but the cost / Gig is much higher\nthan spinners.\n\nZFS can use SSDs as cache, as can some newer RAID controllers, which\nrepresents a compromise between the two.\n\nIf you go with spinners, with or without ssd cache, throw as many at\nthe problem as you can. And run them in RAID-10 if you possibly can.\nRAID-5 or 6 are much slower, especially on spinners.\n\n> What about a RAID controller? Are RAID controllers even available for\n> PCI-Express SSD drives, or do we have to stick with SATA if we need a\n> battery-backed RAID controller? Or is software RAID sufficient for SSD\n> drives?\n\nNot that I know of. PCI-E drives act as their own drive. You could\nsoftware RAID them I guess. Or do you mean are there PCI-E controllers\nfor SATA SSD drives? Plenty of those.\n\nMany modern controllers don't use battery backed cache, they've gone\nto flash memory, which requires no battery to survive powerdown. I\nlike LSI, 3Ware and Areca RAID HBAs.\n\n\n> Are spinning disks still a good choice for the pg_xlog partition and OS? Is\n> there any reason to get spinning disks at all, or is it better/simpler to\n> just put everything on SSD drives?\n\nSpinning drives are fine for xlog and OS. If you're logging to the\nsame drive set as pg_xlog is using, you will hit the wall faster.\n\nSSDs are great, until you need more space. I'd rather have an 8TB xlog\npartition of spinners when setting up replication and xlog archiving\nthan a 500GB xlog partition. 8TB sounds like a lot until you need to\nhold on to a week's worth of xlog files on a busy server.\n",
"msg_date": "Wed, 1 Jul 2015 17:57:37 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "What about a RAID controller? Are RAID controllers even available for PCI-Express SSD drives, or do we have to stick with SATA if we need a battery-backed RAID controller? Or is software RAID sufficient for SSD drives?\r\n\r\nQuite a few of the benefits of using a hardware RAID controller are irrelevant when using modern SSDs. The great random write performance of the drives means the cache on the controller is less useful and the drives you’re considering (Intel’s enterprise grade) will have full power protection for inflight data.\r\n\r\nIn my own testing (CentOS 7/Postgres 9.4/128GB RAM/ 8x SSDs RAID5/10/0 with mdadm vs hw controllers) I’ve found that the RAID controller is actually limiting performance compared to just using software RAID. In worst-case workloads I’m able to saturate the controller with 2 SATA drives.\r\n\r\nAnother advantage in using mdadm is that it’ll properly pass TRIM to the drive. You’ll need to test whether “discard” in your fstab will have a negative impact on performance but being able to run “fstrim” occasionally will definitely help performance in the long run.\r\n\r\nIf you want another drive to consider you should look at the Micron M500DC. Full power protection for inflight data, same NAND as Intel uses in their drives, good mixed workload performance. (I’m obviously a little biased, though ;-)\r\n\r\nWes Vaske | Senior Storage Solutions Engineer\r\nMicron Technology\r\n101 West Louis Henna Blvd, Suite 210 | Austin, TX 78728\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Andreas Joseph Krogh\r\nSent: Wednesday, July 01, 2015 6:56 PM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] New server: SSD/RAID recommendations?\r\n\r\nPå torsdag 02. juli 2015 kl. 01:06:57, skrev Craig James <[email protected]<mailto:[email protected]>>:\r\nWe're buying a new server in the near future to replace an aging system. 
I'd appreciate advice on the best SSD devices and RAID controller cards available today.\r\n\r\nThe database is about 750 GB. This is a \"warehouse\" server. We load supplier catalogs throughout a typical work week, then on the weekend (after Q/A), integrate the new supplier catalogs into our customer-visible \"store\", which is then copied to a production server where customers see it. So the load is mostly data loading, and essentially no OLTP. Typically there are fewer than a dozen connections to Postgres.\r\n\r\nLinux 2.6.32\r\nPostgres 9.3\r\nHardware:\r\n 2 x INTEL WESTMERE 4C XEON 2.40GHZ\r\n 12GB DDR3 ECC 1333MHz\r\n 3WARE 9650SE-12ML with BBU\r\n 12 x 1TB Hitachi 7200RPM SATA disks\r\nRAID 1 (2 disks)\r\n Linux partition\r\n Swap partition\r\n pg_xlog partition\r\nRAID 10 (8 disks)\r\n Postgres database partition\r\n\r\nWe get 5000-7000 TPS from pgbench on this system.\r\n\r\nThe new system will have at least as many CPUs, and probably a lot more memory (196 GB). The database hasn't reached 1TB yet, but we'd like room to grow, so we'd like a 2TB file system for Postgres. We'll start with the latest versions of Linux and Postgres.\r\n\r\nIntel's products have always received good reports in this forum. Is that still the best recommendation? Or are there good alternatives that are price competitive?\r\n\r\nWhat about a RAID controller? Are RAID controllers even available for PCI-Express SSD drives, or do we have to stick with SATA if we need a battery-backed RAID controller? Or is software RAID sufficient for SSD drives?\r\n\r\nAre spinning disks still a good choice for the pg_xlog partition and OS? Is there any reason to get spinning disks at all, or is it better/simpler to just put everything on SSD drives?\r\n\r\nThanks in advance for your advice!\r\n\r\n\r\nDepends on you SSD-drives, but today's enterprise-grade SSD disks can handle pg_xlog just fine. 
So I'd go full SSD, unless you have many BLOBs in pg_largeobject, then move that to a separate tablespace with \"archive-grade\"-disks (spinning disks).\r\n\r\n--\r\nAndreas Joseph Krogh\r\nCTO / Partner - Visena AS\r\nMobile: +47 909 56 963\r\[email protected]<mailto:[email protected]>\r\nwww.visena.com<https://www.visena.com>",
"msg_date": "Thu, 2 Jul 2015 14:01:32 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On Wed, Jul 1, 2015 at 6:06 PM, Craig James <[email protected]> wrote:\n> We're buying a new server in the near future to replace an aging system. I'd\n> appreciate advice on the best SSD devices and RAID controller cards\n> available today.\n>\n> The database is about 750 GB. This is a \"warehouse\" server. We load supplier\n> catalogs throughout a typical work week, then on the weekend (after Q/A),\n> integrate the new supplier catalogs into our customer-visible \"store\", which\n> is then copied to a production server where customers see it. So the load is\n> mostly data loading, and essentially no OLTP. Typically there are fewer than\n> a dozen connections to Postgres.\n>\n> Linux 2.6.32\n> Postgres 9.3\n> Hardware:\n> 2 x INTEL WESTMERE 4C XEON 2.40GHZ\n> 12GB DDR3 ECC 1333MHz\n> 3WARE 9650SE-12ML with BBU\n> 12 x 1TB Hitachi 7200RPM SATA disks\n> RAID 1 (2 disks)\n> Linux partition\n> Swap partition\n> pg_xlog partition\n> RAID 10 (8 disks)\n> Postgres database partition\n>\n> We get 5000-7000 TPS from pgbench on this system.\n>\n> The new system will have at least as many CPUs, and probably a lot more\n> memory (196 GB). The database hasn't reached 1TB yet, but we'd like room to\n> grow, so we'd like a 2TB file system for Postgres. We'll start with the\n> latest versions of Linux and Postgres.\n>\n> Intel's products have always received good reports in this forum. Is that\n> still the best recommendation? Or are there good alternatives that are price\n> competitive?\n\nIn my opinion, the intel S3500 still has incredible value. Sub 1$/gb\nand extremely fast. Heavily used both on production systems I manage\nand my personal workstation. This report:\nhttp://lkcl.net/reports/ssd_analysis.html told me everything I needed\nto know about the drive. If you are sustaining extremely high rates\nof writing data though particularly of the random kind, you need to\nfactor in drive lifespan and may want to consider the S3700 or one of\nit's competitors. 
Both drives have been refreshed into the 3510 and\n3710 models but they are brand new and not highly reviewed so tread\ncarefully. On my crapbox workstation I get about 5k random writes on\nlarge scale factor from a single device.\n\nI definitely support software raid and not picking up a fancy raid\ncontroller as long as you know your way around mdadm. Oh, and be\nsure to crank effective_io_concurrency:\nhttp://www.postgresql.org/message-id/CAHyXU0wgpE2E3B+rmZ959tJT_adPFfPvHNqeA9K9mkJRAT9HXw@mail.gmail.com\n\nmerlin\n",
"msg_date": "Thu, 2 Jul 2015 09:25:12 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
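Merlin's two recommendations above (software RAID with mdadm instead of a hardware controller, plus cranking effective_io_concurrency) can be sketched as the following ops template. This is a hedged sketch, not anything from the thread: the device names, array layout, and concurrency value are illustrative assumptions, the mdadm/mkfs steps are destructive and need root, and `ALTER SYSTEM` requires PostgreSQL 9.4 or later.

```
# Hypothetical devices -- replace with your actual SSDs before running.
# Software RAID10 across four drives with mdadm:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Filesystem and mount for the Postgres data directory:
mkfs.xfs /dev/md0
mount -o noatime /dev/md0 /var/lib/pgsql

# Let bitmap heap scans prefetch many pages concurrently; SSDs tolerate
# far higher values than the spinning-disk default of 1 (tune per workload):
psql -c "ALTER SYSTEM SET effective_io_concurrency = 256;"
psql -c "SELECT pg_reload_conf();"
```

Treat this as a starting template; the right array layout and concurrency setting depend on the drives and workload.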
{
"msg_contents": "On Thu, Jul 2, 2015 at 7:01 AM, Wes Vaske (wvaske) <[email protected]>\nwrote:\n\n> What about a RAID controller? Are RAID controllers even available for\n> PCI-Express SSD drives, or do we have to stick with SATA if we need a\n> battery-backed RAID controller? Or is software RAID sufficient for SSD\n> drives?\n>\n>\n>\n> Quite a few of the benefits of using a hardware RAID controller are\n> irrelevant when using modern SSDs. The great random write performance of\n> the drives means the cache on the controller is less useful and the drives\n> you’re considering (Intel’s enterprise grade) will have full power\n> protection for inflight data.\n>\n>\n>\n> In my own testing (CentOS 7/Postgres 9.4/128GB RAM/ 8x SSDs RAID5/10/0\n> with mdadm vs hw controllers) I’ve found that the RAID controller is\n> actually limiting performance compared to just using software RAID. In\n> worst-case workloads I’m able to saturate the controller with 2 SATA drives.\n>\n>\n>\n> Another advantage in using mdadm is that it’ll properly pass TRIM to the\n> drive. You’ll need to test whether “discard” in your fstab will have a\n> negative impact on performance but being able to run “fstrim” occasionally\n> will definitely help performance in the long run.\n>\n>\n>\n> If you want another drive to consider you should look at the Micron\n> M500DC. Full power protection for inflight data, same NAND as Intel uses in\n> their drives, good mixed workload performance. (I’m obviously a little\n> biased, though ;-)\n>\n\nThanks Wes. That's good advice. I've always liked mdadm and how well RAID\nis supported by Linux, and mostly used a controller for the cache and BBU.\n\nI'll definitely check out your product. 
Can you point me to any benchmarks,\nboth on performance and lifetime?\n\nCraig\n\n\n>\n> *Wes Vaske *| Senior Storage Solutions Engineer\n>\n> Micron Technology\n>\n> 101 West Louis Henna Blvd, Suite 210 | Austin, TX 78728\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Andreas Joseph\n> Krogh\n> *Sent:* Wednesday, July 01, 2015 6:56 PM\n> *To:* [email protected]\n> *Subject:* Re: [PERFORM] New server: SSD/RAID recommendations?\n>\n>\n>\n> På torsdag 02. juli 2015 kl. 01:06:57, skrev Craig James <\n> [email protected]>:\n>\n> We're buying a new server in the near future to replace an aging system.\n> I'd appreciate advice on the best SSD devices and RAID controller cards\n> available today.\n>\n>\n>\n> The database is about 750 GB. This is a \"warehouse\" server. We load\n> supplier catalogs throughout a typical work week, then on the weekend\n> (after Q/A), integrate the new supplier catalogs into our customer-visible\n> \"store\", which is then copied to a production server where customers see\n> it. So the load is mostly data loading, and essentially no OLTP. Typically\n> there are fewer than a dozen connections to Postgres.\n>\n>\n>\n> Linux 2.6.32\n>\n> Postgres 9.3\n>\n> Hardware:\n>\n> 2 x INTEL WESTMERE 4C XEON 2.40GHZ\n>\n> 12GB DDR3 ECC 1333MHz\n>\n> 3WARE 9650SE-12ML with BBU\n>\n> 12 x 1TB Hitachi 7200RPM SATA disks\n>\n> RAID 1 (2 disks)\n>\n> Linux partition\n>\n> Swap partition\n>\n> pg_xlog partition\n>\n> RAID 10 (8 disks)\n>\n> Postgres database partition\n>\n>\n>\n> We get 5000-7000 TPS from pgbench on this system.\n>\n>\n>\n> The new system will have at least as many CPUs, and probably a lot more\n> memory (196 GB). The database hasn't reached 1TB yet, but we'd like room to\n> grow, so we'd like a 2TB file system for Postgres. We'll start with the\n> latest versions of Linux and Postgres.\n>\n>\n>\n> Intel's products have always received good reports in this forum. Is that\n> still the best recommendation? 
Or are there good alternatives that are\n> price competitive?\n>\n>\n>\n> What about a RAID controller? Are RAID controllers even available for\n> PCI-Express SSD drives, or do we have to stick with SATA if we need a\n> battery-backed RAID controller? Or is software RAID sufficient for SSD\n> drives?\n>\n>\n>\n> Are spinning disks still a good choice for the pg_xlog partition and OS?\n> Is there any reason to get spinning disks at all, or is it better/simpler\n> to just put everything on SSD drives?\n>\n>\n>\n> Thanks in advance for your advice!\n>\n>\n>\n>\n>\n> Depends on you SSD-drives, but today's enterprise-grade SSD disks can\n> handle pg_xlog just fine. So I'd go full SSD, unless you have many BLOBs in\n> pg_largeobject, then move that to a separate tablespace with\n> \"archive-grade\"-disks (spinning disks).\n>\n>\n>\n> --\n>\n> *Andreas Joseph Krogh*\n>\n> CTO / Partner - Visena AS\n>\n> Mobile: +47 909 56 963\n>\n> [email protected]\n>\n> www.visena.com\n>\n> <https://www.visena.com>\n>\n>\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------",
"msg_date": "Thu, 2 Jul 2015 10:20:00 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
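Wes's TRIM advice above comes down to a choice between continuous discard and periodic fstrim. A minimal sketch with hypothetical device and mountpoint names; as he suggests, benchmark the discard option before adopting it:

```
# Option 1: continuous TRIM via the "discard" mount option in /etc/fstab
# (can add write latency on some drives -- test first):
#   /dev/md0  /var/lib/pgsql  xfs  noatime,discard  0 0

# Option 2: periodic TRIM instead -- run fstrim on a schedule,
# e.g. from a weekly cron job:
fstrim --verbose /var/lib/pgsql
```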
{
"msg_contents": "On Wed, Jul 1, 2015 at 4:56 PM, Andreas Joseph Krogh <[email protected]>\nwrote:\n\n> På torsdag 02. juli 2015 kl. 01:06:57, skrev Craig James <\n> [email protected]>:\n>\n> We're buying a new server in the near future to replace an aging system.\n> I'd appreciate advice on the best SSD devices and RAID controller cards\n> available today.\n>\n> The database is about 750 GB. This is a \"warehouse\" server. We load\n> supplier catalogs throughout a typical work week, then on the weekend\n> (after Q/A), integrate the new supplier catalogs into our customer-visible\n> \"store\", which is then copied to a production server where customers see\n> it. So the load is mostly data loading, and essentially no OLTP. Typically\n> there are fewer than a dozen connections to Postgres.\n>\n> Linux 2.6.32\n> Postgres 9.3\n> Hardware:\n> 2 x INTEL WESTMERE 4C XEON 2.40GHZ\n> 12GB DDR3 ECC 1333MHz\n> 3WARE 9650SE-12ML with BBU\n> 12 x 1TB Hitachi 7200RPM SATA disks\n> RAID 1 (2 disks)\n> Linux partition\n> Swap partition\n> pg_xlog partition\n> RAID 10 (8 disks)\n> Postgres database partition\n>\n> We get 5000-7000 TPS from pgbench on this system.\n>\n> The new system will have at least as many CPUs, and probably a lot more\n> memory (196 GB). The database hasn't reached 1TB yet, but we'd like room to\n> grow, so we'd like a 2TB file system for Postgres. We'll start with the\n> latest versions of Linux and Postgres.\n>\n> Intel's products have always received good reports in this forum. Is that\n> still the best recommendation? Or are there good alternatives that are\n> price competitive?\n>\n> What about a RAID controller? Are RAID controllers even available for\n> PCI-Express SSD drives, or do we have to stick with SATA if we need a\n> battery-backed RAID controller? 
Or is software RAID sufficient for SSD\n> drives?\n>\n> Are spinning disks still a good choice for the pg_xlog partition and OS?\n> Is there any reason to get spinning disks at all, or is it better/simpler\n> to just put everything on SSD drives?\n>\n> Thanks in advance for your advice!\n>\n>\n>\n> Depends on you SSD-drives, but today's enterprise-grade SSD disks can\n> handle pg_xlog just fine. So I'd go full SSD, unless you have many BLOBs in\n> pg_largeobject, then move that to a separate tablespace with\n> \"archive-grade\"-disks (spinning disks).\n>\n\nNo blobs in our database, so that sounds like good advice. It simplifies\nthe hardware a lot if we can go with just SSDs.\n\nCraig\n\n\n>\n> --\n> *Andreas Joseph Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> <https://www.visena.com>\n>\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------",
"msg_date": "Thu, 2 Jul 2015 10:22:12 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On Wed, Jul 1, 2015 at 4:57 PM, Scott Marlowe <[email protected]>\nwrote:\n\n> On Wed, Jul 1, 2015 at 5:06 PM, Craig James <[email protected]> wrote:\n> > We're buying a new server in the near future to replace an aging system.\n> I'd\n> > appreciate advice on the best SSD devices and RAID controller cards\n> > available today.\n> > ....\n> SSDs are great, until you need more space. I'd rather have an 8TB xlog\n> partition of spinners when setting up replication and xlog archiving\n> than a 500GB xlog partition. 8TB sounds like a lot until you need to\n> hold on to a week's worth of xlog files on a busy server.\n>\n\nGood point. I'll talk to our guy who does all the barman stuff about this.\n\nCraig\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------",
"msg_date": "Thu, 2 Jul 2015 10:44:14 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
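Scott's warning above about holding a week's worth of xlog can be sized with back-of-envelope arithmetic. A sketch; the 5 MB/s sustained WAL rate is a hypothetical figure, not a measurement from any system in this thread:

```shell
# Approximate GB of archived WAL accumulated at a sustained write rate:
#   GB = MB_per_sec * 86400 * days / 1024
wal_retention_gb() {
  awk -v rate="$1" -v days="$2" 'BEGIN { printf "%.3f\n", rate * 86400 * days / 1024 }'
}

# A busy server writing 5 MB/s of WAL, keeping one week of archives:
wal_retention_gb 5 7   # 2953.125 GB -- roughly 3 TB for a single week
```

At that (hypothetical) rate, an 8 TB spinning-disk archive partition holds under three weeks of WAL, which supports the point that 8 TB is less generous than it sounds.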
{
"msg_contents": "Storage Review has a pretty good process and reviewed the M500DC when it released last year. http://www.storagereview.com/micron_m500dc_enterprise_ssd_review\r\n\r\nThe only database-specific info we have available are for Cassandra and MSSQL:\r\nhttp://www.micron.com/~/media/documents/products/technical-marketing-brief/cassandra_and_m500dc_enterprise_ssd_tech_brief.pdf\r\nhttp://www.micron.com/~/media/documents/products/technical-marketing-brief/sql_server_2014_and_m500dc_raid_configuration_tech_brief.pdf\r\n\r\n(some of that info might be relevant)\r\n\r\nIn terms of endurance, the M500DC is rated to 2 Drive Writes Per Day (DWPD) for 5-years. For comparison:\r\nMicron M500DC (20nm) – 2 DWPD\r\nIntel S3500 (20nm) – 0.3 DWPD\r\nIntel S3510 (16nm) – 0.3 DWPD\r\nIntel S3710 (20nm) – 10 DWPD\r\n\r\nThey’re all great drives, the question is how write-intensive is the workload.\r\n\r\nWes Vaske | Senior Storage Solutions Engineer\r\nMicron Technology\r\n101 West Louis Henna Blvd, Suite 210 | Austin, TX 78728\r\nMobile: 515-451-7742\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Craig James\r\nSent: Thursday, July 02, 2015 12:20 PM\r\nTo: Wes Vaske (wvaske)\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] New server: SSD/RAID recommendations?\r\n\r\nOn Thu, Jul 2, 2015 at 7:01 AM, Wes Vaske (wvaske) <[email protected]<mailto:[email protected]>> wrote:\r\nWhat about a RAID controller? Are RAID controllers even available for PCI-Express SSD drives, or do we have to stick with SATA if we need a battery-backed RAID controller? Or is software RAID sufficient for SSD drives?\r\n\r\nQuite a few of the benefits of using a hardware RAID controller are irrelevant when using modern SSDs. 
The great random write performance of the drives means the cache on the controller is less useful and the drives you’re considering (Intel’s enterprise grade) will have full power protection for inflight data.\r\n\r\nIn my own testing (CentOS 7/Postgres 9.4/128GB RAM/ 8x SSDs RAID5/10/0 with mdadm vs hw controllers) I’ve found that the RAID controller is actually limiting performance compared to just using software RAID. In worst-case workloads I’m able to saturate the controller with 2 SATA drives.\r\n\r\nAnother advantage in using mdadm is that it’ll properly pass TRIM to the drive. You’ll need to test whether “discard” in your fstab will have a negative impact on performance but being able to run “fstrim” occasionally will definitely help performance in the long run.\r\n\r\nIf you want another drive to consider you should look at the Micron M500DC. Full power protection for inflight data, same NAND as Intel uses in their drives, good mixed workload performance. (I’m obviously a little biased, though ;-)\r\n\r\nThanks Wes. That's good advice. I've always liked mdadm and how well RAID is supported by Linux, and mostly used a controller for the cache and BBU.\r\n\r\nI'll definitely check out your product. Can you point me to any benchmarks, both on performance and lifetime?\r\n\r\nCraig\r\n\r\n\r\nWes Vaske | Senior Storage Solutions Engineer\r\nMicron Technology\r\n101 West Louis Henna Blvd, Suite 210 | Austin, TX 78728\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]<mailto:[email protected]>] On Behalf Of Andreas Joseph Krogh\r\nSent: Wednesday, July 01, 2015 6:56 PM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] New server: SSD/RAID recommendations?\r\n\r\nPå torsdag 02. juli 2015 kl. 01:06:57, skrev Craig James <[email protected]<mailto:[email protected]>>:\r\nWe're buying a new server in the near future to replace an aging system. 
I'd appreciate advice on the best SSD devices and RAID controller cards available today.\r\n\r\nThe database is about 750 GB. This is a \"warehouse\" server. We load supplier catalogs throughout a typical work week, then on the weekend (after Q/A), integrate the new supplier catalogs into our customer-visible \"store\", which is then copied to a production server where customers see it. So the load is mostly data loading, and essentially no OLTP. Typically there are fewer than a dozen connections to Postgres.\r\n\r\nLinux 2.6.32\r\nPostgres 9.3\r\nHardware:\r\n 2 x INTEL WESTMERE 4C XEON 2.40GHZ\r\n 12GB DDR3 ECC 1333MHz\r\n 3WARE 9650SE-12ML with BBU\r\n 12 x 1TB Hitachi 7200RPM SATA disks\r\nRAID 1 (2 disks)\r\n Linux partition\r\n Swap partition\r\n pg_xlog partition\r\nRAID 10 (8 disks)\r\n Postgres database partition\r\n\r\nWe get 5000-7000 TPS from pgbench on this system.\r\n\r\nThe new system will have at least as many CPUs, and probably a lot more memory (196 GB). The database hasn't reached 1TB yet, but we'd like room to grow, so we'd like a 2TB file system for Postgres. We'll start with the latest versions of Linux and Postgres.\r\n\r\nIntel's products have always received good reports in this forum. Is that still the best recommendation? Or are there good alternatives that are price competitive?\r\n\r\nWhat about a RAID controller? Are RAID controllers even available for PCI-Express SSD drives, or do we have to stick with SATA if we need a battery-backed RAID controller? Or is software RAID sufficient for SSD drives?\r\n\r\nAre spinning disks still a good choice for the pg_xlog partition and OS? Is there any reason to get spinning disks at all, or is it better/simpler to just put everything on SSD drives?\r\n\r\nThanks in advance for your advice!\r\n\r\n\r\nDepends on you SSD-drives, but today's enterprise-grade SSD disks can handle pg_xlog just fine. 
So I'd go full SSD, unless you have many BLOBs in pg_largeobject, then move that to a separate tablespace with \"archive-grade\"-disks (spinning disks).\r\n\r\n--\r\nAndreas Joseph Krogh\r\nCTO / Partner - Visena AS\r\nMobile: +47 909 56 963\r\[email protected]<mailto:[email protected]>\r\nwww.visena.com<https://www.visena.com>\r\n[cid:[email protected]]<https://www.visena.com/>\r\n\r\n\r\n\r\n\r\n--\r\n---------------------------------\r\nCraig A. James\r\nChief Technology Officer\r\neMolecules, Inc.\r\n---------------------------------",
"msg_date": "Thu, 2 Jul 2015 18:00:21 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
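The DWPD ratings Wes lists above convert to total endurance (terabytes written over the 5-year warranty) with simple arithmetic: TBW = DWPD x capacity in TB x 365 x 5. A sketch; the 480 GB capacity is an example figure, not something stated in the thread:

```shell
# Terabytes written over a 5-year warranty implied by a DWPD rating:
endurance_tbw() {
  awk -v dwpd="$1" -v gb="$2" 'BEGIN { printf "%.1f\n", dwpd * gb/1000 * 365 * 5 }'
}

# Example 480 GB drives at the ratings quoted above:
endurance_tbw 0.3 480   # S3500/S3510-class: 262.8 TBW
endurance_tbw 2   480   # M500DC-class:      1752.0 TBW
endurance_tbw 10  480   # S3710-class:       8760.0 TBW
```

Dividing such a figure by your measured daily write volume gives a rough expected lifespan, which is the "how write-intensive is the workload" question in concrete terms.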
{
"msg_contents": "On 07/02/2015 07:01 AM, Wes Vaske (wvaske) wrote:\n>\n> What about a RAID controller? Are RAID controllers even available for \n> PCI-Express SSD drives, or do we have to stick with SATA if we need a \n> battery-backed RAID controller? Or is software RAID sufficient for SSD \n> drives?\n>\n> Quite a few of the benefits of using a hardware RAID controller are \n> irrelevant when using modern SSDs. The great random write performance \n> of the drives means the cache on the controller is less useful and the \n> drives you’re considering (Intel’s enterprise grade) will have full \n> power protection for inflight data.\n>\n\nFor what it's worth, in my most recent iteration I decided to go with \nthe Intel Enterprise NVMe drives and no RAID. My reasoning was thus:\n\n1. Modern SSDs are so fast that even if you had an infinitely fast RAID \ncard you would still be severely constrained by the limits of SAS/SATA. \nTo get the full speed advantages you have to connect directly into the bus.\n\n2. We don't typically have redundant electronic components in our \nservers. Sure, we have dual power supplies and dual NICs (though \ngenerally to handle external failures) and ECC-RAM but no hot-backup CPU \nor redundant RAM banks and...no backup RAID card. Intel Enterprise SSD \nalready have power-fail protection so I don't need a RAID card to give \nme BBU. Given the MTBF of good enterprise SSD I'm left to wonder if \nplacing a RAID card in front merely adds a new point of failure and \nscheduled-downtime-inducing hands-on maintenance (I'm looking at you, \nRAID backup battery).\n\n3. I'm streaming to an entire redundant server and doing regular backups \nanyway so I'm covered for availability and recovery should the SSD (or \nanything else in the server) fail.\n\nBTW, here's an article worth reading: \nhttps://blog.algolia.com/when-solid-state-drives-are-not-that-solid/\n\nCheers,\nSteve",
"msg_date": "Mon, 06 Jul 2015 09:56:18 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\nOn 07/06/2015 09:56 AM, Steve Crawford wrote:\n> On 07/02/2015 07:01 AM, Wes Vaske (wvaske) wrote:\n\n> For what it's worth, in my most recent iteration I decided to go with\n> the Intel Enterprise NVMe drives and no RAID. My reasoning was thus:\n>\n> 1. Modern SSDs are so fast that even if you had an infinitely fast RAID\n> card you would still be severely constrained by the limits of SAS/SATA.\n> To get the full speed advantages you have to connect directly into the bus.\n\nCorrect. What we have done in the past is use smaller drives with RAID \n10. This isn't for the performance but for the longevity of the drive. \nWe obviously could do this with Software RAID or Hardware RAID.\n\n>\n> 2. We don't typically have redundant electronic components in our\n> servers. Sure, we have dual power supplies and dual NICs (though\n> generally to handle external failures) and ECC-RAM but no hot-backup CPU\n> or redundant RAM banks and...no backup RAID card. Intel Enterprise SSD\n> already have power-fail protection so I don't need a RAID card to give\n> me BBU. Given the MTBF of good enterprise SSD I'm left to wonder if\n> placing a RAID card in front merely adds a new point of failure and\n> scheduled-downtime-inducing hands-on maintenance (I'm looking at you,\n> RAID backup battery).\n\nThat's an interesting question. It definitely adds yet another \ncomponent. I can't believe how often we need to \"hotfix\" a raid controller.\n\nJD\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 06 Jul 2015 10:20:29 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\nCompletely agree with Steve.\n\n1. Intel NVMe looks like the best bet if you have modern enough hardware for NVMe. Otherwise e.g. S3700 mentioned elsewhere.\n\n2. RAID controllers. \n\nWe have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various machines. \nThis might give people an idea about where the risk lies in the path from disk to CPU. \n\nWe've had 2 RAID card failures in the last 12 months that nuked the array with days of downtime, and 2 problems with batteries suddenly becoming useless or suddenly reporting wildly varying temperatures/overheating. There may have been other RAID problems I don't know about. \n\nOur IT dept were replacing Seagate HDDs last year at a rate of 2-3 per week (I guess they have 100-200 disks?). We also have about 25-30 Hitachi/HGST HDDs.\n\nSo by my estimates:\n30% annual problem rate with RAID controllers\n30-50% failure rate with Seagate HDDs (backblaze saw similar results)\n0% failure rate with HGST HDDs. \n0% failure in our SSDs. (to be fair, our one samsung SSD apparently has a bug in TRIM under linux, which I'll need to investigate to see if we have been affected by). \n\nalso, RAID controllers aren't free - not just the money but also the management of them (ever tried writing a complex install script that interacts with MegaCLI? It can be done but it's not much fun.). Just take a look at the MegaCLI manual and ask yourself... is this even worth it (if you have a good MTBF on an enterprise SSD).\n\nRAID was meant to be about ensuring availability of data. I have trouble believing that these days....\n\nGraeme Bell\n\n\nOn 06 Jul 2015, at 18:56, Steve Crawford <[email protected]> wrote:\n\n> \n> 2. We don't typically have redundant electronic components in our servers. Sure, we have dual power supplies and dual NICs (though generally to handle external failures) and ECC-RAM but no hot-backup CPU or redundant RAM banks and...no\nbackup RAID card. 
Intel Enterprise SSD already have power-fail protection so I don't need a RAID card to give me BBU. Given the MTBF of good enterprise SSD I'm left to wonder if placing a RAID card in front merely adds a new point of failure and scheduled-downtime-inducing hands-on maintenance (I'm looking at you, RAID backup battery).\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 10:22:00 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "Thanks for the Info.\n\nSo if RAID controllers are not an option, what one should use to build\nbig databases? LVM with xfs? BtrFs? Zfs?\n\nTigran.\n\n----- Original Message -----\n> From: \"Graeme B. Bell\" <[email protected]>\n> To: \"Steve Crawford\" <[email protected]>\n> Cc: \"Wes Vaske (wvaske)\" <[email protected]>, \"pgsql-performance\" <[email protected]>\n> Sent: Tuesday, July 7, 2015 12:22:00 PM\n> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n\n> Completely agree with Steve.\n> \n> 1. Intel NVMe looks like the best bet if you have modern enough hardware for\n> NVMe. Otherwise e.g. S3700 mentioned elsewhere.\n> \n> 2. RAID controllers.\n> \n> We have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various machines.\n> This might give people idea about where the risk lies in the path from disk to\n> CPU.\n> \n> We've had 2 RAID card failures in the last 12 months that nuked the array with\n> days of downtime, and 2 problems with batteries suddenly becoming useless or\n> suddenly reporting wildly varying temperatures/overheating. There may have been\n> other RAID problems I don't know about.\n> \n> Our IT dept were replacing Seagate HDDs last year at a rate of 2-3 per week (I\n> guess they have 100-200 disks?). We also have about 25-30 Hitachi/HGST HDDs.\n> \n> So by my estimates:\n> 30% annual problem rate with RAID controllers\n> 30-50% failure rate with Seagate HDDs (backblaze saw similar results)\n> 0% failure rate with HGST HDDs.\n> 0% failure in our SSDs. (to be fair, our one samsung SSD apparently has a bug\n> in TRIM under linux, which I'll need to investigate to see if we have been\n> affected by).\n> \n> also, RAID controllers aren't free - not just the money but also the management\n> of them (ever tried writing a complex install script that interacts work with\n> MegaCLI? It can be done but it's not much fun.). Just take a look at the\n> MegaCLI manual and ask yourself... 
is this even worth it (if you have a good\n> MTBF on an enterprise SSD).\n> \n> RAID was meant to be about ensuring availability of data. I have trouble\n> believing that these days....\n> \n> Graeme Bell\n> \n> \n> On 06 Jul 2015, at 18:56, Steve Crawford <[email protected]> wrote:\n> \n>> \n>> 2. We don't typically have redundant electronic components in our servers. Sure,\n>> we have dual power supplies and dual NICs (though generally to handle external\n>> failures) and ECC-RAM but no hot-backup CPU or redundant RAM banks and...no\n>> backup RAID card. Intel Enterprise SSD already have power-fail protection so I\n>> don't need a RAID card to give me BBU. Given the MTBF of good enterprise SSD\n>> I'm left to wonder if placing a RAID card in front merely adds a new point of\n>> failure and scheduled-downtime-inducing hands-on maintenance (I'm looking at\n>> you, RAID backup battery).\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 12:28:18 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\nI am unsure about the performance side but, ZFS is generally very attractive to me. \n\nKey advantages:\n\n1) Checksumming and automatic fixing-of-broken-things on every file (not just postgres pages, but your scripts, O/S, program files). \n2) Built-in lightweight compression (doesn't help with TOAST tables, in fact may slow them down, but helpful for other things). This may actually be a net negative for pg so maybe turn it off. \n3) ZRAID mirroring or ZRAID5/6. If you have trouble persuading someone that it's safe to replace a RAID array with a single drive... you can use a couple of NVMe SSDs with ZFS mirror or zraid, and get the same availability you'd get from a RAID controller. Slightly better, arguably, since they claim to have fixed the raid write-hole problem. \n4) filesystem snapshotting\n\nDespite the costs of checksumming etc., I suspect ZRAID running on a fast CPU with multiple NVMe drives will outperform quite a lot of the alternatives, with great data integrity guarantees. \n\nHaven't built one yet. Hope to, later this year. Steve, I would love to know more about how you're getting on with your NVMe disk in postgres!\n\nGraeme. \n\nOn 07 Jul 2015, at 12:28, Mkrtchyan, Tigran <[email protected]> wrote:\n\n> Thanks for the Info.\n> \n> So if RAID controllers are not an option, what one should use to build\n> big databases? LVM with xfs? BtrFs? Zfs?\n> \n> Tigran.\n> \n> ----- Original Message -----\n>> From: \"Graeme B. Bell\" <[email protected]>\n>> To: \"Steve Crawford\" <[email protected]>\n>> Cc: \"Wes Vaske (wvaske)\" <[email protected]>, \"pgsql-performance\" <[email protected]>\n>> Sent: Tuesday, July 7, 2015 12:22:00 PM\n>> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n> \n>> Completely agree with Steve.\n>> \n>> 1. Intel NVMe looks like the best bet if you have modern enough hardware for\n>> NVMe. Otherwise e.g. S3700 mentioned elsewhere.\n>> \n>> 2. RAID controllers.\n>> \n>> We have e.g. 
10-12 of these here and e.g. 25-30 SSDs, among various machines.\n>> This might give people idea about where the risk lies in the path from disk to\n>> CPU.\n>> \n>> We've had 2 RAID card failures in the last 12 months that nuked the array with\n>> days of downtime, and 2 problems with batteries suddenly becoming useless or\n>> suddenly reporting wildly varying temperatures/overheating. There may have been\n>> other RAID problems I don't know about.\n>> \n>> Our IT dept were replacing Seagate HDDs last year at a rate of 2-3 per week (I\n>> guess they have 100-200 disks?). We also have about 25-30 Hitachi/HGST HDDs.\n>> \n>> So by my estimates:\n>> 30% annual problem rate with RAID controllers\n>> 30-50% failure rate with Seagate HDDs (backblaze saw similar results)\n>> 0% failure rate with HGST HDDs.\n>> 0% failure in our SSDs. (to be fair, our one samsung SSD apparently has a bug\n>> in TRIM under linux, which I'll need to investigate to see if we have been\n>> affected by).\n>> \n>> also, RAID controllers aren't free - not just the money but also the management\n>> of them (ever tried writing a complex install script that interacts work with\n>> MegaCLI? It can be done but it's not much fun.). Just take a look at the\n>> MegaCLI manual and ask yourself... is this even worth it (if you have a good\n>> MTBF on an enterprise SSD).\n>> \n>> RAID was meant to be about ensuring availability of data. I have trouble\n>> believing that these days....\n>> \n>> Graeme Bell\n>> \n>> \n>> On 06 Jul 2015, at 18:56, Steve Crawford <[email protected]> wrote:\n>> \n>>> \n>>> 2. We don't typically have redundant electronic components in our servers. Sure,\n>>> we have dual power supplies and dual NICs (though generally to handle external\n>>> failures) and ECC-RAM but no hot-backup CPU or redundant RAM banks and...no\n>>> backup RAID card. Intel Enterprise SSD already have power-fail protection so I\n>>> don't need a RAID card to give me BBU. 
Given the MTBF of good enterprise SSD\n>>> I'm left to wonder if placing a RAID card in front merely adds a new point of\n>>> failure and scheduled-downtime-inducing hands-on maintenance (I'm looking at\n>>> you, RAID backup battery).\n>> \n>> \n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 10:38:10 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
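The back-of-envelope failure rates quoted in the message above can be reproduced with simple arithmetic. A sketch, using the approximate "e.g." counts from the message (so the output is an illustration of the estimate, not a measurement):

```python
def annual_rate(incidents, population, months=12):
    """Naive annualized incident fraction from observed counts."""
    return incidents / population * (12 / months)

# ~2 array-nuking card failures + ~2 battery faults among ~12 RAID controllers, 12 months
print(f"RAID controllers: ~{annual_rate(2 + 2, 12):.0%}/year")

# 0 failures among the ~27 HGST disks (and likewise the ~27 SSDs) mentioned
print(f"HGST HDDs:        ~{annual_rate(0, 27):.0%}/year")
```

With such small populations the confidence intervals are wide, which is why the message hedges with "by my estimates".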
{
"msg_contents": "----- Original Message -----\n> From: \"Graeme B. Bell\" <[email protected]>\n> To: \"Mkrtchyan, Tigran\" <[email protected]>\n> Cc: \"Graeme B. Bell\" <[email protected]>, \"Steve Crawford\" <[email protected]>, \"Wes Vaske (wvaske)\"\n> <[email protected]>, \"pgsql-performance\" <[email protected]>\n> Sent: Tuesday, July 7, 2015 12:38:10 PM\n> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n\n> I am unsure about the performance side but, ZFS is generally very attractive to\n> me.\n> \n> Key advantages:\n> \n> 1) Checksumming and automatic fixing-of-broken-things on every file (not just\n> postgres pages, but your scripts, O/S, program files).\n> 2) Built-in lightweight compression (doesn't help with TOAST tables, in fact\n> may slow them down, but helpful for other things). This may actually be a net\n> negative for pg so maybe turn it off.\n> 3) ZRAID mirroring or ZRAID5/6. If you have trouble persuading someone that it's\n> safe to replace a RAID array with a single drive... you can use a couple of\n> NVMe SSDs with ZFS mirror or zraid, and get the same availability you'd get\n> from a RAID controller. Slightly better, arguably, since they claim to have\n> fixed the raid write-hole problem.\n> 4) filesystem snapshotting\n> \n> Despite the costs of checksumming etc., I suspect ZRAID running on a fast CPU\n> with multiple NVMe drives will outperform quite a lot of the alternatives, with\n> great data integrity guarantees.\n\n\nWe are planing to have a test setup as well. For now I have single NVMe SSD on my\ntest system:\n\n# lspci | grep NVM\n85:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)\n\n# mount | grep nvm\n/dev/nvme0n1p1 on /var/lib/pgsql/9.5 type ext4 (rw,noatime,nodiratime,data=ordered)\n\n\nand quite happy with it. We have write heavy workload on it to see when it will\nbreak. Postgres Performs very well. 
About x2.5 faster than with regular disks\nwith a single client and almost linear with multiple clients (picture attached.\nOn Y number of high level op/s our application does, X number of clients). The\nsetup has been in use for the last 3 months. Looks promising but for production we need\nto have disk size twice as big as on the test system. Until today, I was\nplanning to use a RAID10 with a HW controller...\n\nRelated to ZFS. We use ZFSonlinux and behaviour is not as good as with solaris.\nLet's re-phrase it: performance is unpredictable. We run RAIDZ2 with 30x3TB disks.\n\nTigran.\n\n> \n> Haven't built one yet. Hope to, later this year. Steve, I would love to know\n> more about how you're getting on with your NVMe disk in postgres!\n> \n> Graeme.\n> \n> On 07 Jul 2015, at 12:28, Mkrtchyan, Tigran <[email protected]> wrote:\n> \n>> Thanks for the Info.\n>> \n>> So if RAID controllers are not an option, what one should use to build\n>> big databases? LVM with xfs? BtrFs? Zfs?\n>> \n>> Tigran.\n>> \n>> ----- Original Message -----\n>>> From: \"Graeme B. Bell\" <[email protected]>\n>>> To: \"Steve Crawford\" <[email protected]>\n>>> Cc: \"Wes Vaske (wvaske)\" <[email protected]>, \"pgsql-performance\"\n>>> <[email protected]>\n>>> Sent: Tuesday, July 7, 2015 12:22:00 PM\n>>> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n>> \n>>> Completely agree with Steve.\n>>> \n>>> 1. Intel NVMe looks like the best bet if you have modern enough hardware for\n>>> NVMe. Otherwise e.g. S3700 mentioned elsewhere.\n>>> \n>>> 2. RAID controllers.\n>>> \n>>> We have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various machines.\n>>> This might give people idea about where the risk lies in the path from disk to\n>>> CPU.\n>>> \n>>> We've had 2 RAID card failures in the last 12 months that nuked the array with\n>>> days of downtime, and 2 problems with batteries suddenly becoming useless or\n>>> suddenly reporting wildly varying temperatures/overheating. 
There may have been\n>>> other RAID problems I don't know about.\n>>> \n>>> Our IT dept were replacing Seagate HDDs last year at a rate of 2-3 per week (I\n>>> guess they have 100-200 disks?). We also have about 25-30 Hitachi/HGST HDDs.\n>>> \n>>> So by my estimates:\n>>> 30% annual problem rate with RAID controllers\n>>> 30-50% failure rate with Seagate HDDs (backblaze saw similar results)\n>>> 0% failure rate with HGST HDDs.\n>>> 0% failure in our SSDs. (to be fair, our one samsung SSD apparently has a bug\n>>> in TRIM under linux, which I'll need to investigate to see if we have been\n>>> affected by).\n>>> \n>>> also, RAID controllers aren't free - not just the money but also the management\n>>> of them (ever tried writing a complex install script that interacts work with\n>>> MegaCLI? It can be done but it's not much fun.). Just take a look at the\n>>> MegaCLI manual and ask yourself... is this even worth it (if you have a good\n>>> MTBF on an enterprise SSD).\n>>> \n>>> RAID was meant to be about ensuring availability of data. I have trouble\n>>> believing that these days....\n>>> \n>>> Graeme Bell\n>>> \n>>> \n>>> On 06 Jul 2015, at 18:56, Steve Crawford <[email protected]> wrote:\n>>> \n>>>> \n>>>> 2. We don't typically have redundant electronic components in our servers. Sure,\n>>>> we have dual power supplies and dual NICs (though generally to handle external\n>>>> failures) and ECC-RAM but no hot-backup CPU or redundant RAM banks and...no\n>>>> backup RAID card. Intel Enterprise SSD already have power-fail protection so I\n>>>> don't need a RAID card to give me BBU. 
Given the MTBF of good enterprise SSD\n>>>> I'm left to wonder if placing a RAID card in front merely adds a new point of\n>>>> failure and scheduled-downtime-inducing hands-on maintenance (I'm looking at\n>>>> you, RAID backup battery).\n>>> \n>>> \n>>> \n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 7 Jul 2015 12:56:53 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On 7/7/2015 05:56, Mkrtchyan, Tigran wrote:\n>\n> ----- Original Message -----\n>> From: \"Graeme B. Bell\" <[email protected]>\n>> To: \"Mkrtchyan, Tigran\" <[email protected]>\n>> Cc: \"Graeme B. Bell\" <[email protected]>, \"Steve Crawford\" <[email protected]>, \"Wes Vaske (wvaske)\"\n>> <[email protected]>, \"pgsql-performance\" <[email protected]>\n>> Sent: Tuesday, July 7, 2015 12:38:10 PM\n>> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n>> I am unsure about the performance side but, ZFS is generally very attractive to\n>> me.\n>>\n>> Key advantages:\n>>\n>> 1) Checksumming and automatic fixing-of-broken-things on every file (not just\n>> postgres pages, but your scripts, O/S, program files).\n>> 2) Built-in lightweight compression (doesn't help with TOAST tables, in fact\n>> may slow them down, but helpful for other things). This may actually be a net\n>> negative for pg so maybe turn it off.\n>> 3) ZRAID mirroring or ZRAID5/6. If you have trouble persuading someone that it's\n>> safe to replace a RAID array with a single drive... you can use a couple of\n>> NVMe SSDs with ZFS mirror or zraid, and get the same availability you'd get\n>> from a RAID controller. 
Slightly better, arguably, since they claim to have\n>> fixed the raid write-hole problem.\n>> 4) filesystem snapshotting\n>>\n>> Despite the costs of checksumming etc., I suspect ZRAID running on a fast CPU\n>> with multiple NVMe drives will outperform quite a lot of the alternatives, with\n>> great data integrity guarantees.\nLz4 compression and standard 128kb block size has shown to be materially\nfaster here than using 8kb blocks and no compression, both with rotating\ndisks and SSDs.\n\nThis is workload dependent in my experience but in the applications we\nput Postgres to there is a very material improvement in throughput using\ncompression and the larger blocksize, which is counter-intuitive and\nalso opposite the \"conventional wisdom.\"\n\nFor best throughput we use mirrored vdev sets.\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/",
"msg_date": "Tue, 07 Jul 2015 06:28:24 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
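Karl's counter-intuitive result (128 kB records plus compression beating 8 kB records) has a simple mechanism: a larger record gives the compressor more redundancy to exploit, so fewer bytes hit the disk per record written. ZFS's lz4 is not in the Python standard library, so this sketch substitutes zlib, and the payload is synthetic; it only illustrates the block-size effect, not ZFS itself:

```python
import zlib

# Synthetic, fairly redundant "table-like" payload (~256 kB)
row = b"42|2015-07-07|sensor_17|21.5|OK\n"
payload = row * 8192

def bytes_on_disk(data, recordsize):
    """Compress each filesystem record independently and total the result."""
    return sum(
        len(zlib.compress(data[i:i + recordsize]))
        for i in range(0, len(data), recordsize)
    )

small = bytes_on_disk(payload, 8 * 1024)     # 8 kB records, the postgres page size
large = bytes_on_disk(payload, 128 * 1024)   # ZFS default 128 kB recordsize
print(f"8k records: {small} B, 128k records: {large} B, raw: {len(payload)} B")
```

On real ZFS the equivalent tuning is per-dataset (`zfs set recordsize=...` and `zfs set compression=lz4`); as the thread notes, the win is workload dependent, and high-churn row updates can tip the balance the other way.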
{
"msg_contents": "Hi Karl,\n\nGreat post, thanks. \n\nThough I don't think it's against conventional wisdom to aggregate writes into larger blocks rather than rely on 4k performance on ssds :-) \n\n128kb blocks + compression certainly makes sense. But it might make less sense I suppose if you had some incredibly high rate of churn in your rows. \nBut for the work we do here, we could use 16MB blocks for all the difference it would make. (Tip to others: don't do that. 128kb block performance is already enough out the IO bus to most ssds)\n\nDo you have your WAL log on a compressed zfs fs? \n\nGraeme Bell\n\n\nOn 07 Jul 2015, at 13:28, Karl Denninger <[email protected]> wrote:\n\n> Lz4 compression and standard 128kb block size has shown to be materially faster here than using 8kb blocks and no compression, both with rotating disks and SSDs.\n> \n> This is workload dependent in my experience but in the applications we put Postgres to there is a very material improvement in throughput using compression and the larger blocksize, which is counter-intuitive and also opposite the \"conventional wisdom.\"\n> \n> For best throughput we use mirrored vdev sets.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 11:52:07 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\n1. Does the sammy nvme have *complete* power loss protection though, for all fsync'd data?\nI am very badly burned by my experiences with Crucial SSDs and their 'power loss protection' which doesn't actually ensure all fsync'd data gets into flash.\nIt certainly looks pretty with all those capacitors on top in the photos, but we need some plug pull tests to be sure. \n\n2. Apologies for the typo in the previous post, raidz5 should have been raidz1. \n\n3. Also, something to think about when you start having single disk solutions (or non-ZFS raid, for that matter).\n\nSSDs are so unlike HDDs. \n\nThe samsung nvme has a UBER (uncorrectable bit error rate) measured at 1 in 10^17. That's one bit gone bad in 12500 TB, a good number. Chances are the drives fails before you hit a bit error, and if not, ZFS would catch it.\n\nWhereas current HDDS are at the 1 in 10^14 level. That means an error every 12TB, by the specs. That means, every time you fill your cheap 6-8TB seagate drive, it likely corrupted some of your data *even if it performed according to the spec*. (That's also why RAID5 isn't viable for rebuilding large arrays, incidentally).\n\nGraeme Bell\n\n\nOn 07 Jul 2015, at 12:56, Mkrtchyan, Tigran <[email protected]> wrote:\n\n> \n> \n> ----- Original Message -----\n>> From: \"Graeme B. Bell\" <[email protected]>\n>> To: \"Mkrtchyan, Tigran\" <[email protected]>\n>> Cc: \"Graeme B. 
Bell\" <[email protected]>, \"Steve Crawford\" <[email protected]>, \"Wes Vaske (wvaske)\"\n>> <[email protected]>, \"pgsql-performance\" <[email protected]>\n>> Sent: Tuesday, July 7, 2015 12:38:10 PM\n>> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n> \n>> I am unsure about the performance side but, ZFS is generally very attractive to\n>> me.\n>> \n>> Key advantages:\n>> \n>> 1) Checksumming and automatic fixing-of-broken-things on every file (not just\n>> postgres pages, but your scripts, O/S, program files).\n>> 2) Built-in lightweight compression (doesn't help with TOAST tables, in fact\n>> may slow them down, but helpful for other things). This may actually be a net\n>> negative for pg so maybe turn it off.\n>> 3) ZRAID mirroring or ZRAID5/6. If you have trouble persuading someone that it's\n>> safe to replace a RAID array with a single drive... you can use a couple of\n>> NVMe SSDs with ZFS mirror or zraid, and get the same availability you'd get\n>> from a RAID controller. Slightly better, arguably, since they claim to have\n>> fixed the raid write-hole problem.\n>> 4) filesystem snapshotting\n>> \n>> Despite the costs of checksumming etc., I suspect ZRAID running on a fast CPU\n>> with multiple NVMe drives will outperform quite a lot of the alternatives, with\n>> great data integrity guarantees.\n> \n> \n> We are planing to have a test setup as well. For now I have single NVMe SSD on my\n> test system:\n> \n> # lspci | grep NVM\n> 85:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)\n> \n> # mount | grep nvm\n> /dev/nvme0n1p1 on /var/lib/pgsql/9.5 type ext4 (rw,noatime,nodiratime,data=ordered)\n> \n> \n> and quite happy with it. We have write heavy workload on it to see when it will\n> break. Postgres Performs very well. 
About x2.5 faster than with regular disks\n> with a single client and almost linear with multiple clients (picture attached.\n> On Y number of high level op/s our application does, X number of clients). The\n> setup is used last 3 months. Looks promising but for production we need to\n> to have disk size twice as big as on the test system. Until today, I was\n> planning to use a RAID10 with a HW controller...\n> \n> Related to ZFS. We use ZFSonlinux and behaviour is not as good as with solaris.\n> Let's re-phrase it: performance is unpredictable. We run READZ2 with 30x3TB disks.\n> \n> Tigran.\n> \n>> \n>> Haven't built one yet. Hope to, later this year. Steve, I would love to know\n>> more about how you're getting on with your NVMe disk in postgres!\n>> \n>> Graeme.\n>> \n>> On 07 Jul 2015, at 12:28, Mkrtchyan, Tigran <[email protected]> wrote:\n>> \n>>> Thanks for the Info.\n>>> \n>>> So if RAID controllers are not an option, what one should use to build\n>>> big databases? LVM with xfs? BtrFs? Zfs?\n>>> \n>>> Tigran.\n>>> \n>>> ----- Original Message -----\n>>>> From: \"Graeme B. Bell\" <[email protected]>\n>>>> To: \"Steve Crawford\" <[email protected]>\n>>>> Cc: \"Wes Vaske (wvaske)\" <[email protected]>, \"pgsql-performance\"\n>>>> <[email protected]>\n>>>> Sent: Tuesday, July 7, 2015 12:22:00 PM\n>>>> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n>>> \n>>>> Completely agree with Steve.\n>>>> \n>>>> 1. Intel NVMe looks like the best bet if you have modern enough hardware for\n>>>> NVMe. Otherwise e.g. S3700 mentioned elsewhere.\n>>>> \n>>>> 2. RAID controllers.\n>>>> \n>>>> We have e.g. 10-12 of these here and e.g. 
25-30 SSDs, among various machines.\n>>>> This might give people idea about where the risk lies in the path from disk to\n>>>> CPU.\n>>>> \n>>>> We've had 2 RAID card failures in the last 12 months that nuked the array with\n>>>> days of downtime, and 2 problems with batteries suddenly becoming useless or\n>>>> suddenly reporting wildly varying temperatures/overheating. There may have been\n>>>> other RAID problems I don't know about.\n>>>> \n>>>> Our IT dept were replacing Seagate HDDs last year at a rate of 2-3 per week (I\n>>>> guess they have 100-200 disks?). We also have about 25-30 Hitachi/HGST HDDs.\n>>>> \n>>>> So by my estimates:\n>>>> 30% annual problem rate with RAID controllers\n>>>> 30-50% failure rate with Seagate HDDs (backblaze saw similar results)\n>>>> 0% failure rate with HGST HDDs.\n>>>> 0% failure in our SSDs. (to be fair, our one samsung SSD apparently has a bug\n>>>> in TRIM under linux, which I'll need to investigate to see if we have been\n>>>> affected by).\n>>>> \n>>>> also, RAID controllers aren't free - not just the money but also the management\n>>>> of them (ever tried writing a complex install script that interacts work with\n>>>> MegaCLI? It can be done but it's not much fun.). Just take a look at the\n>>>> MegaCLI manual and ask yourself... is this even worth it (if you have a good\n>>>> MTBF on an enterprise SSD).\n>>>> \n>>>> RAID was meant to be about ensuring availability of data. I have trouble\n>>>> believing that these days....\n>>>> \n>>>> Graeme Bell\n>>>> \n>>>> \n>>>> On 06 Jul 2015, at 18:56, Steve Crawford <[email protected]> wrote:\n>>>> \n>>>>> \n>>>>> 2. We don't typically have redundant electronic components in our servers. Sure,\n>>>>> we have dual power supplies and dual NICs (though generally to handle external\n>>>>> failures) and ECC-RAM but no hot-backup CPU or redundant RAM banks and...no\n>>>>> backup RAID card. 
Intel Enterprise SSD already have power-fail protection so I\n>>>>> don't need a RAID card to give me BBU. Given the MTBF of good enterprise SSD\n>>>>> I'm left to wonder if placing a RAID card in front merely adds a new point of\n>>>>> failure and scheduled-downtime-inducing hands-on maintenance (I'm looking at\n>>>>> you, RAID backup battery).\n>>>> \n>>>> \n>>>> \n>>>> --\n>>>> Sent via pgsql-performance mailing list ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n>> \n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> <pg-with-ssd.png>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 12:04:21 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
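Graeme's UBER arithmetic can be checked directly. A sketch (the Poisson model is the standard simplification for "probability of at least one error"; the drive numbers are the spec-sheet figures from the message):

```python
import math

def tb_per_bit_error(uber):
    """Mean data transferred per uncorrectable bit error, in TB."""
    return (1 / uber) / 8 / 1e12   # bits -> bytes -> TB

def p_at_least_one_error(tb, uber):
    """Probability of >= 1 uncorrectable error over `tb` terabytes (Poisson model)."""
    return 1 - math.exp(-tb * 1e12 * 8 * uber)

print(f"HDD,  UBER 1e-14: one error per ~{tb_per_bit_error(1e-14):.1f} TB")
print(f"NVMe, UBER 1e-17: one error per ~{tb_per_bit_error(1e-17):.0f} TB")
print(f"Filling a 6 TB HDD once: P(error) ~ {p_at_least_one_error(6, 1e-14):.0%}")
```

So one pass over a 6 TB consumer drive carries a roughly one-in-three chance of a spec-compliant uncorrectable error, which is the basis of the RAID5-rebuild argument.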
{
"msg_contents": "On 7/7/2015 06:52, Graeme B. Bell wrote:\n> Hi Karl,\n>\n> Great post, thanks. \n>\n> Though I don't think it's against conventional wisdom to aggregate writes into larger blocks rather than rely on 4k performance on ssds :-) \n>\n> 128kb blocks + compression certainly makes sense. But it might make less sense I suppose if you had some incredibly high rate of churn in your rows. \n> But for the work we do here, we could use 16MB blocks for all the difference it would make. (Tip to others: don't do that. 128kb block performance is already enough out the IO bus to most ssds)\n>\n> Do you have your WAL log on a compressed zfs fs? \n>\n> Graeme Bell\nYes.\n\nData goes on one mirrored set of vdevs, pg_xlog goes on a second,\nseparate pool. WAL goes on a third pool on RaidZ2. WAL typically goes\non rotating storage since I use it (and a basebackup) as disaster\nrecovery (and in hot spare apps the source for the syncing hot standbys)\nand that's nearly a big-block-write-only data stream. Rotating media is\nfine for that in most applications. I take a new basebackup on\nreasonable intervals and rotate the WAL logs to keep that from growing\nwithout boundary.\n\nI use LSI host adapters for the drives themselves (no hardware RAID);\nI'm currently running on FreeBSD 10.1. Be aware that ZFS on FreeBSD has\nsome fairly nasty issues that I developed (and publish) a patch for;\nwithout it some workloads can result in very undesirable behavior where\nworking set gets paged out in favor of ZFS ARC; if that happens your\nperformance will go straight into the toilet.\n\nBack before FreeBSD 9 when ZFS was simply not stable enough for me I\nused ARECA hardware RAID adapters and rotating media with BBUs and large\ncache memory installed on them with UFS filesystems. 
Hardware adapters\nare, however, a net lose in a ZFS environment even when they nominally\nwork well (and they frequently interact very badly with ZFS during\ncertain operations making them just flat-out unsuitable.) All-in I far\nprefer ZFS on a host adapter to UFS on a RAID adapter both from a data\nintegrity and performance standpoint.\n\nMy SSD drives of choice are all Intel; for lower-end requirements the\n730s work very well; the S3500 is next and if your write volume is high\nenough the S3700 has much greater endurance (but at a correspondingly\nhigher price.) All three are properly power-fail protected. All three\nare much, much faster than rotating storage. If you can saturate the\nSATA channels and need still more I/O throughput NVMe drives are the\nnext quantum up in performance; I'm not there with our application at\nthe present time.\n\nIncidentally while there are people who have questioned the 730 series\npower loss protection I've tested it with plug-pulls and in addition it\nwatchdogs its internal power loss capacitors -- from the smartctl -a\ndisplay of one of them on an in-service machine here:\n\n175 Power_Loss_Cap_Test 0x0033 100 100 010 Pre-fail \nAlways - 643 (4 6868)\n\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/",
"msg_date": "Tue, 07 Jul 2015 07:39:01 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
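Karl's capacitor-watchdog check can be scripted for monitoring. A sketch that pulls SMART attribute 175 out of `smartctl -A`-style output; the sample line is the one from his message, and the pass/fail rule (normalized value above threshold) is the generic SMART convention, not Intel-specific documentation:

```python
import re

SAMPLE = ("175 Power_Loss_Cap_Test 0x0033 100 100 010 "
          "Pre-fail Always - 643 (4 6868)")

def cap_test_ok(smart_output):
    """True if attribute 175's normalized value is above its failure threshold."""
    for line in smart_output.splitlines():
        m = re.match(r"\s*175\s+Power_Loss_Cap_Test\s+\S+\s+(\d+)\s+(\d+)\s+(\d+)", line)
        if m:
            value, worst, thresh = map(int, m.groups())
            return value > thresh
    raise ValueError("attribute 175 not reported")

print(cap_test_ok(SAMPLE))  # normalized value 100 vs threshold 010
```

In production you would feed it the output of `smartctl -A /dev/sdX` via subprocess and alert when it flips to False, i.e. when the drive itself reports its power-loss capacitors degrading.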
{
"msg_contents": "\nThanks, this is very useful to know about the 730. When you say 'tested it with plug-pulls', you were using diskchecker.pl, right?\n\nGraeme.\n\nOn 07 Jul 2015, at 14:39, Karl Denninger <[email protected]> wrote:\n\n> \n> Incidentally while there are people who have questioned the 730 series power loss protection I've tested it with plug-pulls and in addition it watchdogs its internal power loss capacitors -- from the smartctl -a display of one of them on an in-service machine here:\n> \n> 175 Power_Loss_Cap_Test 0x0033 100 100 010 Pre-fail Always - 643 (4 6868)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 13:08:51 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
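The core idea behind diskchecker.pl-style plug-pull testing is small: write checksummed, numbered records, fsync after each one, and after the power cut verify that every record acknowledged before the cut is intact. A minimal single-machine sketch of the write/verify halves (the real tool additionally coordinates over the network so the acknowledgement log survives the crash):

```python
import os
import struct
import zlib

REC = struct.Struct("<II")  # (sequence number, crc32 of that number)

def write_records(path, count):
    """Write numbered, checksummed records, fsyncing each before it counts as acknowledged."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for seq in range(count):
            os.write(fd, REC.pack(seq, zlib.crc32(struct.pack("<I", seq))))
            os.fsync(fd)  # a drive with honest power-loss protection must keep this
    finally:
        os.close(fd)

def verify(path):
    """Return how many leading records survived intact (compare against the writer's count)."""
    good = 0
    with open(path, "rb") as f:
        while (chunk := f.read(REC.size)) and len(chunk) == REC.size:
            seq, crc = REC.unpack(chunk)
            if seq != good or crc != zlib.crc32(struct.pack("<I", seq)):
                break
            good += 1
    return good

write_records("diskcheck_demo.dat", 50)
print(verify("diskcheck_demo.dat"), "records intact")
```

On a drive that lies about fsync, `verify` after the plug pull reports fewer intact records than were acknowledged before the power cut.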
{
"msg_contents": "On Thu, Jul 2, 2015 at 1:00 PM, Wes Vaske (wvaske) <[email protected]>\nwrote:\n\n> Storage Review has a pretty good process and reviewed the M500DC when it\n> released last year.\n> http://www.storagereview.com/micron_m500dc_enterprise_ssd_review\n>\n>\n>\n> The only database-specific info we have available are for Cassandra and\n> MSSQL:\n>\n>\n> http://www.micron.com/~/media/documents/products/technical-marketing-brief/cassandra_and_m500dc_enterprise_ssd_tech_brief.pdf\n>\n>\n> http://www.micron.com/~/media/documents/products/technical-marketing-brief/sql_server_2014_and_m500dc_raid_configuration_tech_brief.pdf\n>\n>\n>\n> (some of that info might be relevant)\n>\n>\n>\n> In terms of endurance, the M500DC is rated to 2 Drive Writes Per Day\n> (DWPD) for 5-years. For comparison:\n>\n> Micron M500DC (20nm) – 2 DWPD\n>\n> Intel S3500 (20nm) – 0.3 DWPD\n>\n> Intel S3510 (16nm) – 0.3 DWPD\n>\n> Intel S3710 (20nm) – 10 DWPD\n>\n>\n>\n> They’re all great drives, the question is how write-intensive is the\n> workload.\n>\n>\n>\nIntel added a new product, the 3610, that is rated for 3 DWPD. Pricing\nlooks to be around 1.20$/GB.\n\nmerlin",
"msg_date": "Tue, 7 Jul 2015 08:12:57 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
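DWPD converts to total rated endurance as capacity × DWPD × warranty days. A quick comparison sketch using the ratings above (the 400 GB capacity is an assumption for illustration; pick your actual drive size):

```python
def endurance_pb(capacity_gb, dwpd, years=5):
    """Total rated writes over the warranty period, in petabytes."""
    return capacity_gb * dwpd * 365 * years / 1e6

for name, dwpd in [("Intel S3500", 0.3), ("Micron M500DC", 2),
                   ("Intel S3610", 3), ("Intel S3710", 10)]:
    print(f"{name:14s} {endurance_pb(400, dwpd):6.2f} PB over 5 years")
```

Dividing your measured daily WAL + data write volume into the PB figure gives a rough expected lifetime, which is the practical way to answer "how write-intensive is the workload".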
{
"msg_contents": "\nAs I have warned elsewhere,\n\nThe M500/M550 from $SOME_COMPANY is NOT SUITABLE for postgres unless you have a RAID controller with BBU to protect yourself.\nThe M500/M550 are NOT plug-pull safe despite the 'power loss protection' claimed on the packaging. Not all fsync'd data is preserved in the event of a power loss, which completely undermines postgres's sanity. \n\nI would be extremely skeptical about the M500DC given the name and manufacturer. \n\nI went to quite a lot of trouble to provide $SOME_COMPANYs engineers with the full details of this fault after extensive testing (we have e.g. 20-25 of these disks) on multiple machines and controllers, at their request. Result: they stopped replying to me, and soon after I saw their PR reps talking about how 'power loss protection isn't about protecting all data during a power loss'. \n\nThe only safe way to use an M500/M550 with postgres is:\n\na) disable the disk cache, which will cripple performance to about 3-5% of normal.\nb) use a battery backed or cap-backed RAID controller, which will generally hurt performance, by limiting you to the peak performance of the flash on the raid controller. \n\nIf you are buying such a drive, I strongly recommend buying only one and doing extensive plug pull testing before commiting to several. \nFor myself, my time is valuable enough that it will be cheaper to buy intel in future. \n\nGraeme.\n\nOn 07 Jul 2015, at 15:12, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Jul 2, 2015 at 1:00 PM, Wes Vaske (wvaske) <[email protected]> wrote:\n> Storage Review has a pretty good process and reviewed the M500DC when it released last year. 
http://www.storagereview.com/micron_m500dc_enterprise_ssd_review\n> \n> \n> \n> The only database-specific info we have available are for Cassandra and MSSQL:\n> \n> http://www.micron.com/~/media/documents/products/technical-marketing-brief/cassandra_and_m500dc_enterprise_ssd_tech_brief.pdf\n> \n> http://www.micron.com/~/media/documents/products/technical-marketing-brief/sql_server_2014_and_m500dc_raid_configuration_tech_brief.pdf\n> \n> \n> \n> (some of that info might be relevant)\n> \n> \n> \n> In terms of endurance, the M500DC is rated to 2 Drive Writes Per Day (DWPD) for 5-years. For comparison:\n> \n> Micron M500DC (20nm) – 2 DWPD\n> \n> Intel S3500 (20nm) – 0.3 DWPD\n> \n> Intel S3510 (16nm) – 0.3 DWPD\n> \n> Intel S3710 (20nm) – 10 DWPD\n> \n> \n> \n> They’re all great drives, the question is how write-intensive is the workload.\n> \n> \n> \n> \n> Intel added a new product, the 3610, that is rated for 3 DWPD. Pricing looks to be around 1.20$/GB.\n> \n> merlin \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 13:25:34 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "The M500/M550/M600 are consumer class drives that don't have power protection for all inflight data.* (like the Samsung 8x0 series and the Intel 3x0 & 5x0 series).\n\nThe M500DC has full power protection for inflight data and is an enterprise-class drive (like the Samsung 845DC or Intel S3500 & S3700 series).\n\nSo any drive without the capacitors to protect inflight data will suffer from data loss if you're using disk write cache and you pull the power.\n\n*Big addendum:\nThere are two issues on powerloss that will mess with Postgres. Data Loss and Data Corruption. The micron consumer drives will have power loss protection against Data Corruption and the enterprise drive will have power loss protection against BOTH.\n\nhttps://www.micron.com/~/media/documents/products/white-paper/wp_ssd_power_loss_protection.pdf \n\nThe Data Corruption problem is only an issue in non-SLC NAND but it's industry wide. And even though some drives will protect against that, the protection of inflight data that's been fsync'd is more important and should disqualify *any* consumer drives from *any* company from consideration for use with Postgres.\n\nWes Vaske | Senior Storage Solutions Engineer\nMicron Technology \n\n-----Original Message-----\nFrom: Graeme B. Bell [mailto:[email protected]] \nSent: Tuesday, July 07, 2015 8:26 AM\nTo: Merlin Moncure\nCc: Wes Vaske (wvaske); Craig James; [email protected]\nSubject: Re: [PERFORM] New server: SSD/RAID recommendations?\n\n\nAs I have warned elsewhere,\n\nThe M500/M550 from $SOME_COMPANY is NOT SUITABLE for postgres unless you have a RAID controller with BBU to protect yourself.\nThe M500/M550 are NOT plug-pull safe despite the 'power loss protection' claimed on the packaging. Not all fsync'd data is preserved in the event of a power loss, which completely undermines postgres's sanity. \n\nI would be extremely skeptical about the M500DC given the name and manufacturer. 
\n\nI went to quite a lot of trouble to provide $SOME_COMPANYs engineers with the full details of this fault after extensive testing (we have e.g. 20-25 of these disks) on multiple machines and controllers, at their request. Result: they stopped replying to me, and soon after I saw their PR reps talking about how 'power loss protection isn't about protecting all data during a power loss'. \n\nThe only safe way to use an M500/M550 with postgres is:\n\na) disable the disk cache, which will cripple performance to about 3-5% of normal.\nb) use a battery backed or cap-backed RAID controller, which will generally hurt performance, by limiting you to the peak performance of the flash on the raid controller. \n\nIf you are buying such a drive, I strongly recommend buying only one and doing extensive plug pull testing before commiting to several. \nFor myself, my time is valuable enough that it will be cheaper to buy intel in future. \n\nGraeme.\n\nOn 07 Jul 2015, at 15:12, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Jul 2, 2015 at 1:00 PM, Wes Vaske (wvaske) <[email protected]> wrote:\n> Storage Review has a pretty good process and reviewed the M500DC when it released last year. http://www.storagereview.com/micron_m500dc_enterprise_ssd_review\n> \n> \n> \n> The only database-specific info we have available are for Cassandra and MSSQL:\n> \n> http://www.micron.com/~/media/documents/products/technical-marketing-brief/cassandra_and_m500dc_enterprise_ssd_tech_brief.pdf\n> \n> http://www.micron.com/~/media/documents/products/technical-marketing-brief/sql_server_2014_and_m500dc_raid_configuration_tech_brief.pdf\n> \n> \n> \n> (some of that info might be relevant)\n> \n> \n> \n> In terms of endurance, the M500DC is rated to 2 Drive Writes Per Day (DWPD) for 5-years. 
For comparison:\n> \n> Micron M500DC (20nm) - 2 DWPD\n> \n> Intel S3500 (20nm) - 0.3 DWPD\n> \n> Intel S3510 (16nm) - 0.3 DWPD\n> \n> Intel S3710 (20nm) - 10 DWPD\n> \n> \n> \n> They're all great drives, the question is how write-intensive is the workload.\n> \n> \n> \n> \n> Intel added a new product, the 3610, that is rated for 3 DWPD. Pricing looks to be around 1.20$/GB.\n> \n> merlin \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 14:15:58 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
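The DWPD ratings compared in the message above translate into total lifetime writes as capacity × DWPD × days in the warranty period. A quick sketch (the DWPD figures come from the thread; the drive capacities below are assumed examples, not quotes from it):

```python
# Convert DWPD (Drive Writes Per Day) endurance ratings into total
# lifetime writes over the rated warranty period. DWPD figures are
# from the comparison above; the capacities are assumed examples.

def lifetime_tbw(capacity_gb, dwpd, years=5):
    """Terabytes that may be written over the rated warranty period."""
    return capacity_gb * dwpd * 365 * years / 1000.0

for name, capacity_gb, dwpd in [
    ("Micron M500DC", 480, 2),    # 2 DWPD
    ("Intel S3500", 480, 0.3),    # 0.3 DWPD
    ("Intel S3710", 400, 10),     # 10 DWPD
]:
    print(f"{name}: ~{lifetime_tbw(capacity_gb, dwpd):.0f} TB over 5 years")
```

At 2 DWPD a 480 GB drive is rated for roughly 1.75 PB of writes over five years, which is the figure to compare against the expected WAL plus heap write volume of the workload.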
{
"msg_contents": "On 07/07/2015 05:15 PM, Wes Vaske (wvaske) wrote:\n> The M500/M550/M600 are consumer class drives that don't have power\n> protection for all inflight data.* (like the Samsung 8x0 series and\n> the Intel 3x0 & 5x0 series).\n>\n> The M500DC has full power protection for inflight data and is an\n> enterprise-class drive (like the Samsung 845DC or Intel S3500 & S3700\n> series).\n>\n> So any drive without the capacitors to protect inflight data will\n> suffer from data loss if you're using disk write cache and you pull\n> the power.\n\nWow, I would be pretty angry if I installed a SSD in my desktop, and it \nloses a file that I saved just before pulling the power plug.\n\n> *Big addendum: There are two issues on powerloss that will mess with\n> Postgres. Data Loss and Data Corruption. The micron consumer drives\n> will have power loss protection against Data Corruption and the\n> enterprise drive will have power loss protection against BOTH.\n>\n> https://www.micron.com/~/media/documents/products/white-paper/wp_ssd_power_loss_protection.pdf\n>\n> The Data Corruption problem is only an issue in non-SLC NAND but\n> it's industry wide. And even though some drives will protect against\n> that, the protection of inflight data that's been fsync'd is more\n> important and should disqualify *any* consumer drives from *any*\n> company from consideration for use with Postgres.\n\nSo it lies about fsync()... The next question is, does it nevertheless \nenforce the correct ordering of persisting fsync'd data? If you write to \nfile A and fsync it, then write to another file B and fsync it too, is \nit guaranteed that if B is persisted, A is as well? Because if it isn't, \nyou can end up with filesystem (or database) corruption anyway.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Jul 2015 17:59:49 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
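Heikki's ordering question can be stated concretely. Below is a minimal sketch of the write pattern databases rely on: data fsync'd to file A must be durable no later than a subsequently fsync'd file B. A drive that acknowledges fsync while data sits in volatile cache silently breaks this invariant; the code can only demonstrate the pattern, since no user-space program can detect a lying drive without an actual power cut:

```python
import os

def write_durably(path, data):
    """Write data and fsync it: when this returns, POSIX says the bytes
    are on stable storage. A drive with an unprotected volatile write
    cache may acknowledge the fsync while data is still only in cache."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)  # must not return until the data is persistent
    finally:
        os.close(fd)

# The ordering invariant: A is fsync'd before B, so any state the disk
# can be in after a crash that contains B's contents must contain A's too.
write_durably("/tmp/file_a", b"first, fsync'd earlier")
write_durably("/tmp/file_b", b"second, fsync'd later")
```

This is exactly the WAL-before-data discipline: Postgres fsyncs the WAL (file A) before it considers a checkpoint of the heap (file B) safe, so a device that reorders or drops acknowledged writes can corrupt the database even though the software did everything correctly.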
{
"msg_contents": "\nHi Wes\n\n1. The first interesting thing is that prior to my mentioning this problem to C_____ a year or two back, the power loss protection was advertised everywhere as simply that, without qualifiers about 'not inflight data'. Check out the marketing of the M500 for the first year or so and try to find an example where they say 'but inflight data isn't protected!'. \n\n2. The second (and more important) interesting thing is that this is irrelevant!\n\nFsync'd data is BY DEFINITION not data in flight. \nFsync means \"This data is secure on the disk!\" \nHowever, the drives corrupt it.\n\nPostgres's sanity depends on a reliable fsync. That's why we see posts on the performance list saying 'fsync=no makes your postgres faster but really, don't do it in production\". \nWe are talking about internal DB corruption, not just a crash and a few lost transactions.\n\nThese drives return from fsync while data is still in volatile cache.\nThat's breaking the spec, and it's why they are not OK for postgres by themselves. \n\nThis is not about 'in-flight' data, it's about fsync'd wal log data. \n\nGraeme. \n\n\nOn 07 Jul 2015, at 16:15, Wes Vaske (wvaske) <[email protected]> wrote:\n\n> The M500/M550/M600 are consumer class drives that don't have power protection for all inflight data.* (like the Samsung 8x0 series and the Intel 3x0 & 5x0 series).\n> \n> The M500DC has full power protection for inflight data and is an enterprise-class drive (like the Samsung 845DC or Intel S3500 & S3700 series).\n> \n> So any drive without the capacitors to protect inflight data will suffer from data loss if you're using disk write cache and you pull the power.\n> \n> *Big addendum:\n> There are two issues on powerloss that will mess with Postgres. Data Loss and Data Corruption. 
The micron consumer drives will have power loss protection against Data Corruption and the enterprise drive will have power loss protection against BOTH.\n> \n> https://www.micron.com/~/media/documents/products/white-paper/wp_ssd_power_loss_protection.pdf \n> \n> The Data Corruption problem is only an issue in non-SLC NAND but it's industry wide. And even though some drives will protect against that, the protection of inflight data that's been fsync'd is more important and should disqualify *any* consumer drives from *any* company from consideration for use with Postgres.\n> \n> Wes Vaske | Senior Storage Solutions Engineer\n> Micron Technology \n> \n> -----Original Message-----\n> From: Graeme B. Bell [mailto:[email protected]] \n> Sent: Tuesday, July 07, 2015 8:26 AM\n> To: Merlin Moncure\n> Cc: Wes Vaske (wvaske); Craig James; [email protected]\n> Subject: Re: [PERFORM] New server: SSD/RAID recommendations?\n> \n> \n> As I have warned elsewhere,\n> \n> The M500/M550 from $SOME_COMPANY is NOT SUITABLE for postgres unless you have a RAID controller with BBU to protect yourself.\n> The M500/M550 are NOT plug-pull safe despite the 'power loss protection' claimed on the packaging. Not all fsync'd data is preserved in the event of a power loss, which completely undermines postgres's sanity. \n> \n> I would be extremely skeptical about the M500DC given the name and manufacturer. \n> \n> I went to quite a lot of trouble to provide $SOME_COMPANYs engineers with the full details of this fault after extensive testing (we have e.g. 20-25 of these disks) on multiple machines and controllers, at their request. Result: they stopped replying to me, and soon after I saw their PR reps talking about how 'power loss protection isn't about protecting all data during a power loss'. 
\n> \n> The only safe way to use an M500/M550 with postgres is:\n> \n> a) disable the disk cache, which will cripple performance to about 3-5% of normal.\n> b) use a battery backed or cap-backed RAID controller, which will generally hurt performance, by limiting you to the peak performance of the flash on the raid controller. \n> \n> If you are buying such a drive, I strongly recommend buying only one and doing extensive plug pull testing before commiting to several. \n> For myself, my time is valuable enough that it will be cheaper to buy intel in future. \n> \n> Graeme.\n> \n> On 07 Jul 2015, at 15:12, Merlin Moncure <[email protected]> wrote:\n> \n>> On Thu, Jul 2, 2015 at 1:00 PM, Wes Vaske (wvaske) <[email protected]> wrote:\n>> Storage Review has a pretty good process and reviewed the M500DC when it released last year. http://www.storagereview.com/micron_m500dc_enterprise_ssd_review\n>> \n>> \n>> \n>> The only database-specific info we have available are for Cassandra and MSSQL:\n>> \n>> http://www.micron.com/~/media/documents/products/technical-marketing-brief/cassandra_and_m500dc_enterprise_ssd_tech_brief.pdf\n>> \n>> http://www.micron.com/~/media/documents/products/technical-marketing-brief/sql_server_2014_and_m500dc_raid_configuration_tech_brief.pdf\n>> \n>> \n>> \n>> (some of that info might be relevant)\n>> \n>> \n>> \n>> In terms of endurance, the M500DC is rated to 2 Drive Writes Per Day (DWPD) for 5-years. For comparison:\n>> \n>> Micron M500DC (20nm) - 2 DWPD\n>> \n>> Intel S3500 (20nm) - 0.3 DWPD\n>> \n>> Intel S3510 (16nm) - 0.3 DWPD\n>> \n>> Intel S3710 (20nm) - 10 DWPD\n>> \n>> \n>> \n>> They're all great drives, the question is how write-intensive is the workload.\n>> \n>> \n>> \n>> \n>> Intel added a new product, the 3610, that is rated for 3 DWPD. 
Pricing looks to be around 1.20$/GB.\n>> \n>> merlin \n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 15:53:43 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\nYikes. I would not be able to sleep tonight if it were not for the BBU cache in front of these disks... \n\ndiskchecker.pl consistently reported several examples of corruption post-power-loss (usually 10 - 30 ) on unprotected M500s/M550s, so I think it's pretty much open to debate what types of madness and corruption you'll find if you look close enough.\n\nG\n\n\nOn 07 Jul 2015, at 16:59, Heikki Linnakangas <[email protected]> wrote:\n\n> \n> So it lies about fsync()... The next question is, does it nevertheless enforce the correct ordering of persisting fsync'd data? If you write to file A and fsync it, then write to another file B and fsync it too, is it guaranteed that if B is persisted, A is as well? Because if it isn't, you can end up with filesystem (or database) corruption anyway.\n> \n> - Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 15:58:49 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "Hi.\n\nHow would BBU cache help you if it lies about fsync? I suppose any RAID\ncontroller removes data from BBU cache after it was fsynced by the drive.\nAs I know, there is no other \"magic command\" for drive to tell controller\nthat the data is safe now and can be removed from BBU cache.\n\nВт, 7 лип. 2015 11:59 Graeme B. Bell <[email protected]> пише:\n\n>\n> Yikes. I would not be able to sleep tonight if it were not for the BBU\n> cache in front of these disks...\n>\n> diskchecker.pl consistently reported several examples of corruption\n> post-power-loss (usually 10 - 30 ) on unprotected M500s/M550s, so I think\n> it's pretty much open to debate what types of madness and corruption you'll\n> find if you look close enough.\n>\n> G\n>\n>\n> On 07 Jul 2015, at 16:59, Heikki Linnakangas <[email protected]> wrote:\n>\n> >\n> > So it lies about fsync()... The next question is, does it nevertheless\n> enforce the correct ordering of persisting fsync'd data? If you write to\n> file A and fsync it, then write to another file B and fsync it too, is it\n> guaranteed that if B is persisted, A is as well? Because if it isn't, you\n> can end up with filesystem (or database) corruption anyway.\n> >\n> > - Heikki\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 07 Jul 2015 16:27:43 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "> On 07 Jul 2015, at 16:59, Heikki Linnakangas <[email protected]> wrote:\n>\n>>\n>> So it lies about fsync()... The next question is, does it nevertheless enforce the correct ordering of persisting fsync'd data? If you write to file A and fsync it, then write to another file B and fsync it too, is it guaranteed that if B is persisted, A is as well? Because if it isn't, you can end up with filesystem (or database) corruption anyway.\n\nOn Tue, Jul 7, 2015 at 10:58 AM, Graeme B. Bell <[email protected]> wrote:\n>\n> Yikes. I would not be able to sleep tonight if it were not for the BBU cache in front of these disks...\n>\n> diskchecker.pl consistently reported several examples of corruption post-power-loss (usually 10 - 30 ) on unprotected M500s/M550s, so I think it's pretty much open to debate what types of madness and corruption you'll find if you look close enough.\n\n100% agree with your sentiments. I do believe that there are other\nenterprise SSD vendors that offer reliable parts but not at the price\npoint intel does for the cheaper drives. The consumer grade vendors\nare simply not trustworthy unless proven otherwise (I had my own\nunpleasant experience with OCZ for example). Intel played the same\ngame with their early parts but have since become a model of how to\nship drives to the market.\n\nRAID controllers are completely unnecessary for SSD as they currently\nexist. Software raid is superior in every way; the hardware features\nof raid controllers, BBU, write caching, and write consolidation are\nredundant to what the SSD themselves do (being themselves RAID 0\nbasically). A hypothetical SSD optimized raid controller is possible;\nit could do things like balance wear and optimize writes across\nmultiple physical drives. 
This would require deep participation\nbetween the drive and the controller and FWICT no such thing exists\nexcepting super expensive SANs which I don't recommend anyways.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 11:35:38 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "> \n> RAID controllers are completely unnecessary for SSD as they currently\n> exist.\n\nAgreed. The best solution is not to buy cheap disks and not to buy RAID controllers now, imho.\n\nIn my own situation, I had a tight budget, high performance demand and a newish machine with RAID controller and HDDs in it as a starting point. \nSo it was more a question of 'what can you do with a free raid controller and not much money' back in 2013. And it has worked very well.\nStill, I had hoped for a bit more from the cheaper SSDs though, I'd hoped to use fastpath on the controller and bypass the cache. \n\nThe way NVMe prices are going though, I wouldn't do it again if I was doing it this year. I'd just go direct to nvme and trash the raid controller. These sammy and intel nvmes are basically enterprise hardware at consumer prices. Heck, I'll probably put one in my next gaming PC. \n\nRe: software raid. \n\nI agree, but once you accept that software raid is now pretty much superior to hardware raid, you start looking at ZFS and thinking 'why the heck am I even using software raid?'\n\nG\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 16:46:38 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\r\nThat is a very good question, which I have raised elsewhere on the postgresql lists previously.\r\n\r\nIn practice: I have *never* managed to make diskchecker fail with the BBU enabled in front of the drives and I spent days trying with plug pulls till I reached the point where as a statistical event it just can't be that likely at all. That's not to say it can't ever happen, just that I've taken all reasonable measures that I can to find out on the time and money budget I had available. \r\n\r\nIn theory: It may be the fact the BBU makes the drives run at about half speed, so that the capacitors go a good bit further to empty the cache, after all: without the BBU in the way, the drive manages to save everything but the last fragment of writes. But I also suspect that the controller itself may be replaying the last set of writes from around the time of power loss. \r\n\r\nAnyway I'm 50/50 on those two explanations. Any other thoughts welcome. \r\n\r\nThis raises another interesting question. Does anyone here have a document explaining how their BBU cache works EXACTLY (at cache / sata level) on their server? Because I haven't been able to find any for mine (Dell PERC H710/H710P). Can anyone tell me with godlike authority and precision, what exactly happens inside that BBU post-power failure?\r\n\r\nThere is rather too much magic involved for me to be happy.\r\n\r\nG\r\n\r\nOn 07 Jul 2015, at 18:27, Vitalii Tymchyshyn <[email protected]> wrote:\r\n\r\n> Hi.\r\n> \r\n> How would BBU cache help you if it lies about fsync? I suppose any RAID controller removes data from BBU cache after it was fsynced by the drive. As I know, there is no other \"magic command\" for drive to tell controller that the data is safe now and can be removed from BBU cache.\r\n> \r\n> Вт, 7 лип. 2015 11:59 Graeme B. Bell <[email protected]> пише:\r\n> \r\n> Yikes. 
I would not be able to sleep tonight if it were not for the BBU cache in front of these disks...\r\n> \r\n> diskchecker.pl consistently reported several examples of corruption post-power-loss (usually 10 - 30 ) on unprotected M500s/M550s, so I think it's pretty much open to debate what types of madness and corruption you'll find if you look close enough.\r\n> \r\n> G\r\n> \r\n> \r\n> On 07 Jul 2015, at 16:59, Heikki Linnakangas <[email protected]> wrote:\r\n> \r\n> >\r\n> > So it lies about fsync()... The next question is, does it nevertheless enforce the correct ordering of persisting fsync'd data? If you write to file A and fsync it, then write to another file B and fsync it too, is it guaranteed that if B is persisted, A is as well? Because if it isn't, you can end up with filesystem (or database) corruption anyway.\r\n> >\r\n> > - Heikki\r\n> \r\n> \r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 16:54:57 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "Hi Graeme,\n\nWhy would you think that you don't need RAID for ZFS?\n\nReason I'm asking is because we are moving to ZFS on FreeBSD for our future\nprojects.\n\nRegards,\nWei Shan\n\nOn 8 July 2015 at 00:46, Graeme B. Bell <[email protected]> wrote:\n\n> >\n> > RAID controllers are completely unnecessary for SSD as they currently\n> > exist.\n>\n> Agreed. The best solution is not to buy cheap disks and not to buy RAID\n> controllers now, imho.\n>\n> In my own situation, I had a tight budget, high performance demand and a\n> newish machine with RAID controller and HDDs in it as a starting point.\n> So it was more a question of 'what can you do with a free raid controller\n> and not much money' back in 2013. And it has worked very well.\n> Still, I had hoped for a bit more from the cheaper SSDs though, I'd hoped\n> to use fastpath on the controller and bypass the cache.\n>\n> The way NVMe prices are going though, I wouldn't do it again if I was\n> doing it this year. I'd just go direct to nvme and trash the raid\n> controller. These sammy and intel nvmes are basically enterprise hardware\n> at consumer prices. Heck, I'll probably put one in my next gaming PC.\n>\n> Re: software raid.\n>\n> I agree, but once you accept that software raid is now pretty much\n> superior to hardware raid, you start looking at ZFS and thinking 'why the\n> heck am I even using software raid?'\n>\n> G\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards,\nAng Wei Shan",
"msg_date": "Wed, 8 Jul 2015 00:56:47 +0800",
"msg_from": "Wei Shan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "> \r\n> This raises another interesting question. Does anyone hear have a document explaining how their BBU cache works EXACTLY (at cache / sata level) on their server? Because I haven't been able to find any for mine (Dell PERC H710/H710P). Can anyone tell me with godlike authority and precision, what exactly happens inside that BBU post-power failure?\r\n\r\n\r\n(and if you have that manual - how can you know it's accurate? that the implementation matches the manual and is free of bugs? because my M500s didn't match the packaging and neither did a H710 we bought - Dell had advertised features in some marketing material that were only present on the H710P)\r\n\r\nAnd I see UBER (unrecoverable bit error) rates for SSDs and HDDs, but has anyone ever seen them for the flash-based cache on their raid controller?\r\n\r\nSleep well, friends.\r\n\r\nGraeme. \r\n\r\nOn 07 Jul 2015, at 18:54, Graeme B. Bell <[email protected]> wrote:\r\n\r\n> \r\n> That is a very good question, which I have raised elsewhere on the postgresql lists previously.\r\n> \r\n> In practice: I have *never* managed to make diskchecker fail with the BBU enabled in front of the drives and I spent days trying with plug pulls till I reached the point where as a statistical event it just can't be that likely at all. That's not to say it can't ever happen, just that I've taken all reasonable measures that I can to find out on the time and money budget I had available. \r\n> \r\n> In theory: It may be the fact the BBU makes the drives run at about half speed, so that the capacitors go a good bit further to empty the cache, after all: without the BBU in the way, the drive manages to save everything but the last fragment of writes. But I also suspect that the controller itself maybe replaying the last set of writes from around the time of power loss. \r\n> \r\n> Anyway I'm 50/50 on those two explanations. Any other thoughts welcome. \r\n> \r\n> This raises another interesting question. 
Does anyone hear have a document explaining how their BBU cache works EXACTLY (at cache / sata level) on their server? Because I haven't been able to find any for mine (Dell PERC H710/H710P). Can anyone tell me with godlike authority and precision, what exactly happens inside that BBU post-power failure?\r\n> \r\n> There is rather too much magic involved for me to be happy.\r\n> \r\n> G\r\n> \r\n> On 07 Jul 2015, at 18:27, Vitalii Tymchyshyn <[email protected]> wrote:\r\n> \r\n>> Hi.\r\n>> \r\n>> How would BBU cache help you if it lies about fsync? I suppose any RAID controller removes data from BBU cache after it was fsynced by the drive. As I know, there is no other \"magic command\" for drive to tell controller that the data is safe now and can be removed from BBU cache.\r\n>> \r\n>> Вт, 7 лип. 2015 11:59 Graeme B. Bell <[email protected]> пише:\r\n>> \r\n>> Yikes. I would not be able to sleep tonight if it were not for the BBU cache in front of these disks...\r\n>> \r\n>> diskchecker.pl consistently reported several examples of corruption post-power-loss (usually 10 - 30 ) on unprotected M500s/M550s, so I think it's pretty much open to debate what types of madness and corruption you'll find if you look close enough.\r\n>> \r\n>> G\r\n>> \r\n>> \r\n>> On 07 Jul 2015, at 16:59, Heikki Linnakangas <[email protected]> wrote:\r\n>> \r\n>>> \r\n>>> So it lies about fsync()... The next question is, does it nevertheless enforce the correct ordering of persisting fsync'd data? If you write to file A and fsync it, then write to another file B and fsync it too, is it guaranteed that if B is persisted, A is as well? 
Because if it isn't, you can end up with filesystem (or database) corruption anyway.\r\n>>> \r\n>>> - Heikki\r\n>> \r\n>> \r\n>> \r\n>> --\r\n>> Sent via pgsql-performance mailing list ([email protected])\r\n>> To make changes to your subscription:\r\n>> http://www.postgresql.org/mailpref/pgsql-performance\r\n> \r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 16:58:49 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "After a plug-pull during the create, reboot and here is the verify:\n\nroot@Dbms2:/var/tmp # ./diskchecker.pl -s newfs verify /test/biteme\n verifying: 0.00%\n verifying: 3.81%\n verifying: 10.91%\n verifying: 18.71%\n verifying: 26.46%\n verifying: 33.95%\n verifying: 41.20%\n verifying: 49.48%\n verifying: 57.23%\n verifying: 64.89%\n verifying: 72.54%\n verifying: 80.04%\n verifying: 87.96%\n verifying: 95.15%\n verifying: 100.00%\nTotal errors: 0\n\nda6 at mps0 bus 0 scbus0 target 17 lun 0\nda6: <ATA INTEL SSDSC2BP24 0420> Fixed Direct Access SPC-4 SCSI device\nda6: Serial Number BTJR446401KW240AGN \nda6: 600.000MB/s transfers\nda6: Command Queueing enabled\nda6: 228936MB (468862128 512 byte sectors: 255H 63S/T 29185C)\n\n# smartctl -a /dev/da6\n\n=== START OF INFORMATION SECTION ===\nModel Family: Intel 730 and DC S3500/S3700 Series SSDs\nDevice Model: INTEL SSDSC2BP240G4\nSerial Number: BTJR446401KW240AGN\nLU WWN Device Id: 5 5cd2e4 04b71afc7\nFirmware Version: L2010420\nUser Capacity: 240,057,409,536 bytes [240 GB]\nSector Size: 512 bytes logical/physical\nRotation Rate: Solid State Device\nForm Factor: 2.5 inches\nDevice is: In smartctl database [for details use: -P show]\nATA Version is: ATA8-ACS T13/1699-D revision 4\nSATA Version is: SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)\nLocal Time is: Tue Jul 7 17:01:36 2015 CDT\nSMART support is: Available - device has SMART capability.\nSMART support is: Enabled\n\nNote -- same firmware between all three series of Intel devices...... :-)\n\nYes, I like these SSDs -- they don't lie and they don't lose data on a\npower-pull.\n\n\nOn 7/7/2015 08:08, Graeme B. Bell wrote:\n> Thanks, this is very useful to know about the 730. 
When you say 'tested it with plug-pulls', you were using diskchecker.pl, right?\n>\n> Graeme.\n>\n> On 07 Jul 2015, at 14:39, Karl Denninger <[email protected]> wrote:\n>\n>> Incidentally while there are people who have questioned the 730 series power loss protection I've tested it with plug-pulls and in addition it watchdogs its internal power loss capacitors -- from the smartctl -a display of one of them on an in-service machine here:\n>>\n>> 175 Power_Loss_Cap_Test 0x0033 100 100 010 Pre-fail Always - 643 (4 6868)\n>\n>\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/",
"msg_date": "Tue, 07 Jul 2015 12:05:49 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
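The Power_Loss_Cap_Test watchdog attribute Karl quotes can be checked programmatically. Below is a sketch that scans `smartctl -a` text for SMART attribute 175; the sample line is taken from the message above, and the field positions are assumed from that one output format (real smartctl output varies by drive and version):

```python
def power_loss_cap_status(smartctl_text):
    """Return (value, threshold) for SMART attribute 175
    (Power_Loss_Cap_Test) from `smartctl -a` output, or None if the
    drive does not report it. The capacitor self-test is considered
    failing when the normalized value falls to the threshold."""
    for line in smartctl_text.splitlines():
        fields = line.split()
        if len(fields) >= 6 and fields[0] == "175" \
                and fields[1] == "Power_Loss_Cap_Test":
            # fields: ID, name, flags, VALUE, WORST, THRESH, ...
            return int(fields[3]), int(fields[5])
    return None

# Sample attribute line as quoted earlier in this thread:
sample = ("175 Power_Loss_Cap_Test 0x0033   100   100   010    "
          "Pre-fail  Always       -       643 (4 6868)")
print(power_loss_cap_status(sample))  # → (100, 10): well above threshold
```

A monitoring job could run this against each drive and alert when the normalized value approaches the threshold, i.e. when the drive's own capacitor self-test starts failing.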
{
"msg_contents": "\n> Why would you think that you don't need RAID for ZFS?\n> \n> Reason I'm asking if because we are moving to ZFS on FreeBSD for our future projects.\n\n\nBecause you have zraid. :-)\n\nhttps://blogs.oracle.com/bonwick/entry/raid_z\n\nGeneral points:\n\n1. It's my understanding that ZFS is designed to talk to the hardware directly, and so it would be bad to hide the physical layer from ZFS unless you had to.\nAfter all, I don't think they implemented a raid-like system inside ZFS just for the fun of it. \n\n2. You have zraid built in and easy to manage within ZFS - and well tested compared to NewRaidController (TM) - why add another layer of management to your disk storage?\n\n3. You reintroduce the raid write hole.\n\n4. There might be some argument for hardware raid (existing system) but with software raid (the point I was addressing) it makes little sense at all.\n\n5. If you're on hardware raid and your controller dies, you're screwed in several ways. It's harder to get a new raid controller than a new pc. Your chances of recovery are lower than zfs. IMHO more scary to recover from a failed raid controller, too. \n\n6. Recovery is faster if the disks aren't full. e.g. ZFS recovers what it is there. This might not seem a big deal but chances are it would save you 50% of your downtime in a crisis. \n\nHowever, I think with Linux you might want to use RAID for the boot disk. I don't know if linux can boot from ZFS yet. I would (and am) using Freebsd with zfs.\n\nGraeme.\n\n\nOn 07 Jul 2015, at 18:56, Wei Shan <[email protected]> wrote:\n\n> Hi Graeme,\n> \n> Why would you think that you don't need RAID for ZFS?\n> \n> Reason I'm asking if because we are moving to ZFS on FreeBSD for our future projects.\n> \n> Regards,\n> Wei Shan\n> \n> On 8 July 2015 at 00:46, Graeme B. Bell <[email protected]> wrote:\n> >\n> > RAID controllers are completely unnecessary for SSD as they currently\n> > exist.\n> \n> Agreed. 
The best solution is not to buy cheap disks and not to buy RAID controllers now, imho.\n> \n> In my own situation, I had a tight budget, high performance demand and a newish machine with RAID controller and HDDs in it as a starting point.\n> So it was more a question of 'what can you do with a free raid controller and not much money' back in 2013. And it has worked very well.\n> Still, I had hoped for a bit more from the cheaper SSDs though, I'd hoped to use fastpath on the controller and bypass the cache.\n> \n> The way NVMe prices are going though, I wouldn't do it again if I was doing it this year. I'd just go direct to nvme and trash the raid controller. These sammy and intel nvmes are basically enterprise hardware at consumer prices. Heck, I'll probably put one in my next gaming PC.\n> \n> Re: software raid.\n> \n> I agree, but once you accept that software raid is now pretty much superior to hardware raid, you start looking at ZFS and thinking 'why the heck am I even using software raid?'\n> \n> G\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> -- \n> Regards,\n> Ang Wei Shan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 17:21:52 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On Tue, Jul 7, 2015 at 10:59 AM, Heikki Linnakangas <[email protected]> wrote:\n\n> On 07/07/2015 05:15 PM, Wes Vaske (wvaske) wrote:\n>\n>> The M500/M550/M600 are consumer class drives that don't have power\n>> protection for all inflight data.* (like the Samsung 8x0 series and\n>> the Intel 3x0 & 5x0 series).\n>>\n>> The M500DC has full power protection for inflight data and is an\n>> enterprise-class drive (like the Samsung 845DC or Intel S3500 & S3700\n>> series).\n>>\n>> So any drive without the capacitors to protect inflight data will\n>> suffer from data loss if you're using disk write cache and you pull\n>> the power.\n>>\n>\n> Wow, I would be pretty angry if I installed a SSD in my desktop, and it\n> loses a file that I saved just before pulling the power plug.\n>\n\nThat can (and does) happen with spinning disks, too.\n\n\n>\n> *Big addendum: There are two issues on powerloss that will mess with\n>> Postgres. Data Loss and Data Corruption. The micron consumer drives\n>> will have power loss protection against Data Corruption and the\n>> enterprise drive will have power loss protection against BOTH.\n>>\n>>\n>> https://www.micron.com/~/media/documents/products/white-paper/wp_ssd_power_loss_protection.pdf\n>>\n>> The Data Corruption problem is only an issue in non-SLC NAND but\n>> it's industry wide. And even though some drives will protect against\n>> that, the protection of inflight data that's been fsync'd is more\n>> important and should disqualify *any* consumer drives from *any*\n>> company from consideration for use with Postgres.\n>>\n>\n> So it lies about fsync()... The next question is, does it nevertheless\n> enforce the correct ordering of persisting fsync'd data? If you write to\n> file A and fsync it, then write to another file B and fsync it too, is it\n> guaranteed that if B is persisted, A is as well? 
Because if it isn't, you\n> can end up with filesystem (or database) corruption anyway.\n>\n> - Heikki\n>\n>\nThe sad fact is that MANY drives (ssd as well as spinning) lie about their\nfsync status.\n--\nMike Nolan",
"msg_date": "Tue, 7 Jul 2015 13:28:15 -0400",
"msg_from": "Michael Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\nThe comment on HDDs is true and gave me another thought. \n\nThese new 'shingled' HDDs (the 8TB ones) rely on rewriting all the data on tracks that overlap your data, any time you change the data. Result: disks 8-20x slower during writes, after they fill up. \n\nDo they have power loss protection for the data being rewritten during reshingling? You could have data commited at position X and you accidentally nuke data at position Y.\n\n[I know that using a shingled disk sounds crazy (it sounds crazy to me) but you can bet there are people that just want to max out the disk bays in their server... ]\n\nGraeme. \n\nOn 07 Jul 2015, at 19:28, Michael Nolan <[email protected]> wrote:\n\n> \n> \n> On Tue, Jul 7, 2015 at 10:59 AM, Heikki Linnakangas <[email protected]> wrote:\n> On 07/07/2015 05:15 PM, Wes Vaske (wvaske) wrote:\n> The M500/M550/M600 are consumer class drives that don't have power\n> protection for all inflight data.* (like the Samsung 8x0 series and\n> the Intel 3x0 & 5x0 series).\n> \n> The M500DC has full power protection for inflight data and is an\n> enterprise-class drive (like the Samsung 845DC or Intel S3500 & S3700\n> series).\n> \n> So any drive without the capacitors to protect inflight data will\n> suffer from data loss if you're using disk write cache and you pull\n> the power.\n> \n> Wow, I would be pretty angry if I installed a SSD in my desktop, and it loses a file that I saved just before pulling the power plug.\n> \n> That can (and does) happen with spinning disks, too.\n> \n> \n> *Big addendum: There are two issues on powerloss that will mess with\n> Postgres. Data Loss and Data Corruption. 
The micron consumer drives\n> will have power loss protection against Data Corruption and the\n> enterprise drive will have power loss protection against BOTH.\n> \n> https://www.micron.com/~/media/documents/products/white-paper/wp_ssd_power_loss_protection.pdf\n> \n> The Data Corruption problem is only an issue in non-SLC NAND but\n> it's industry wide. And even though some drives will protect against\n> that, the protection of inflight data that's been fsync'd is more\n> important and should disqualify *any* consumer drives from *any*\n> company from consideration for use with Postgres.\n> \n> So it lies about fsync()... The next question is, does it nevertheless enforce the correct ordering of persisting fsync'd data? If you write to file A and fsync it, then write to another file B and fsync it too, is it guaranteed that if B is persisted, A is as well? Because if it isn't, you can end up with filesystem (or database) corruption anyway.\n> \n> - Heikki\n> \n> \n> The sad fact is that MANY drives (ssd as well as spinning) lie about their fsync status.\n> --\n> Mike Nolan \n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 17:43:24 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On Tue, Jul 7, 2015 at 11:43 AM, Graeme B. Bell <[email protected]> wrote:\n>\n> The comment on HDDs is true and gave me another thought.\n>\n> These new 'shingled' HDDs (the 8TB ones) rely on rewriting all the data on tracks that overlap your data, any time you change the data. Result: disks 8-20x slower during writes, after they fill up.\n>\n> Do they have power loss protection for the data being rewritten during reshingling? You could have data commited at position X and you accidentally nuke data at position Y.\n>\n> [I know that using a shingled disk sounds crazy (it sounds crazy to me) but you can bet there are people that just want to max out the disk bays in their server... ]\n\nLet's just say no online backup companies are using those disks. :)\nBiggest current production spinners being used I know of are 4TB,\nnon-shingled.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 11:47:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\nOn 07 Jul 2015, at 19:47, Scott Marlowe <[email protected]> wrote:\n\n>> [I know that using a shingled disk sounds crazy (it sounds crazy to me) but you can bet there are people that just want to max out the disk bays in their server... ]\n> \n> Let's just say no online backup companies are using those disks. :)\n\nI'm not so sure. Literally the most famous online backup company is (or was planning to): \nhttps://www.backblaze.com/blog/6-tb-hard-drive-face-off/\nBut I think that a massive read-only archive really is the only use for these things. I hope they go out of fashion, soon. \n\nBut I was thinking more of the 'small company postgres server' or 'charitable organisation postgres server'.\nSomeone is going to make this mistake, you can bet. \nProbably not someone on THIS list, of course... \n\n> Biggest current production spinners being used I know of are 4TB,\n> non-shingled.\n\nI think we may have some 6TB WD reds around here. I'll need to look around.\n\nG\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 17:58:04 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "Regarding:\r\n“lie about their fsync status.”\r\n\r\nThis is mostly semantics but it might help google searches on the issue.\r\n\r\nA drive doesn’t support fsync(), that’s a filesystem/kernel process. A drive will do a FLUSH CACHE. Before kernels 2.6.<low numbers> the fsync() call wouldn’t sent any ATA or SCSI command to flush the disk cache. Whereas—AFAICT—modern kernels and file system versions *will* do this. When ‘sync’ is called the filesystem will issue the appropriate command to the disk to flush the write cache.\r\n\r\nFor ATA, this is “FLUSH CACHE” (E7h). To check support for the command use:\r\n[root@postgres ~]# smartctl --identify /dev/sdu | grep \"FLUSH CACHE\"\r\n 83 13 1 FLUSH CACHE EXT supported\r\n 83 12 1 FLUSH CACHE supported\r\n 86 13 1 FLUSH CACHE EXT supported\r\n 86 12 1 FLUSH CACHE supported\r\n\r\nThe 1s in the 3rd column represent SUPPORTED for the feature listed in the last column.\r\n\r\nCheers,\r\nWes Vaske\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Michael Nolan\r\nSent: Tuesday, July 07, 2015 12:28 PM\r\nTo: [email protected]\r\nCc: Wes Vaske (wvaske); Graeme B. 
Bell; [email protected]\r\nSubject: Re: [PERFORM] New server: SSD/RAID recommendations?\r\n\r\n\r\n\r\nOn Tue, Jul 7, 2015 at 10:59 AM, Heikki Linnakangas <[email protected]<mailto:[email protected]>> wrote:\r\nOn 07/07/2015 05:15 PM, Wes Vaske (wvaske) wrote:\r\nThe M500/M550/M600 are consumer class drives that don't have power\r\nprotection for all inflight data.* (like the Samsung 8x0 series and\r\nthe Intel 3x0 & 5x0 series).\r\n\r\nThe M500DC has full power protection for inflight data and is an\r\nenterprise-class drive (like the Samsung 845DC or Intel S3500 & S3700\r\nseries).\r\n\r\nSo any drive without the capacitors to protect inflight data will\r\nsuffer from data loss if you're using disk write cache and you pull\r\nthe power.\r\n\r\nWow, I would be pretty angry if I installed a SSD in my desktop, and it loses a file that I saved just before pulling the power plug.\r\n\r\nThat can (and does) happen with spinning disks, too.\r\n\r\n\r\n*Big addendum: There are two issues on powerloss that will mess with\r\nPostgres. Data Loss and Data Corruption. The micron consumer drives\r\nwill have power loss protection against Data Corruption and the\r\nenterprise drive will have power loss protection against BOTH.\r\n\r\nhttps://www.micron.com/~/media/documents/products/white-paper/wp_ssd_power_loss_protection.pdf\r\n\r\n The Data Corruption problem is only an issue in non-SLC NAND but\r\nit's industry wide. And even though some drives will protect against\r\nthat, the protection of inflight data that's been fsync'd is more\r\nimportant and should disqualify *any* consumer drives from *any*\r\ncompany from consideration for use with Postgres.\r\n\r\nSo it lies about fsync()... The next question is, does it nevertheless enforce the correct ordering of persisting fsync'd data? If you write to file A and fsync it, then write to another file B and fsync it too, is it guaranteed that if B is persisted, A is as well? 
Because if it isn't, you can end up with filesystem (or database) corruption anyway.\r\n\r\n- Heikki\r\n\r\n\r\nThe sad fact is that MANY drives (ssd as well as spinning) lie about their fsync status.\r\n--\r\nMike Nolan",
"msg_date": "Tue, 7 Jul 2015 18:01:17 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On Tue, Jul 7, 2015 at 11:46 AM, Graeme B. Bell <[email protected]> wrote:\n>>\n>> RAID controllers are completely unnecessary for SSD as they currently\n>> exist.\n>\n> Agreed. The best solution is not to buy cheap disks and not to buy RAID controllers now, imho.\n>\n> In my own situation, I had a tight budget, high performance demand and a newish machine with RAID controller and HDDs in it as a starting point.\n> So it was more a question of 'what can you do with a free raid controller and not much money' back in 2013. And it has worked very well.\n> Still, I had hoped for a bit more from the cheaper SSDs though, I'd hoped to use fastpath on the controller and bypass the cache.\n>\n> The way NVMe prices are going though, I wouldn't do it again if I was doing it this year. I'd just go direct to nvme and trash the raid controller. These sammy and intel nvmes are basically enterprise hardware at consumer prices. Heck, I'll probably put one in my next gaming PC.\n>\n> Re: software raid.\n>\n> I agree, but once you accept that software raid is now pretty much superior to hardware raid, you start looking at ZFS and thinking 'why the heck am I even using software raid?'\n\nGood point. At least for me, I've yet to jump on the ZFS bandwagon and\nso don't have an opinion on it.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 13:22:51 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On 07/07/2015 09:01 PM, Wes Vaske (wvaske) wrote:\n> Regarding:\n> “lie about their fsync status.”\n>\n> This is mostly semantics but it might help google searches on the issue.\n>\n> A drive doesn’t support fsync(), that’s a filesystem/kernel process. A drive will do a FLUSH CACHE. Before kernels 2.6.<low numbers> the fsync() call wouldn’t sent any ATA or SCSI command to flush the disk cache. Whereas—AFAICT—modern kernels and file system versions*will* do this. When ‘sync’ is called the filesystem will issue the appropriate command to the disk to flush the write cache.\n>\n> For ATA, this is “FLUSH CACHE” (E7h). To check support for the command use:\n> [root@postgres ~]# smartctl --identify /dev/sdu | grep \"FLUSH CACHE\"\n> 83 13 1 FLUSH CACHE EXT supported\n> 83 12 1 FLUSH CACHE supported\n> 86 13 1 FLUSH CACHE EXT supported\n> 86 12 1 FLUSH CACHE supported\n>\n> The 1s in the 3rd column represent SUPPORTED for the feature listed in the last column.\n\nRight, to be precise, the problem isn't the drive lies about fsync(). It \nlies about FLUSH CACHE instead. Search & replace fsync() with FLUSH \nCACHE, and the same question remains: When the drive breaks its promise \nwrt. FLUSH CACHE, does it nevertheless guarantee that the order the data \nis eventually flushed to disk is consistent with the order in which the \ndata and FLUSH CACHE were sent to the drive? That's an important \ndistinction, because it makes the difference between \"the most recent \ndata the application saved might be lost even though the FLUSH CACHE \ncommand returned\" and \"your filesystem is corrupt\".\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Jul 2015 22:53:21 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "\nCache flushing isn't an atomic operation though. Even if the ordering is right, you are likely to have a partial fsync on the disk when the lights go out - isn't your FS still corrupt?\n\nOn 07 Jul 2015, at 21:53, Heikki Linnakangas <[email protected]> wrote:\n\n> On 07/07/2015 09:01 PM, Wes Vaske (wvaske) wrote:\n> \n> Right, to be precise, the problem isn't the drive lies about fsync(). It lies about FLUSH CACHE instead. Search & replace fsync() with FLUSH CACHE, and the same question remains: When the drive breaks its promise wrt. FLUSH CACHE, does it nevertheless guarantee that the order the data is eventually flushed to disk is consistent with the order in which the data and FLUSH CACHE were sent to the drive? That's an important distinction, because it makes the difference between \"the most recent data the application saved might be lost even though the FLUSH CACHE command returned\" and \"your filesystem is corrupt\".\n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 19:59:54 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
},
{
"msg_contents": "On 07/07/2015 10:59 PM, Graeme B. Bell wrote:\n> Cache flushing isn't an atomic operation though. Even if the ordering\n> is right, you are likely to have a partial fsync on the disk when the\n> lights go out - isn't your FS still corrupt?\n\nIf the filesystem is worth its salt, no. Journaling filesystems for \nexample rely on the journal to work around that problem, and there are \nother mechanisms.\n\nPostgreSQL has exactly the same problem and uses the WAL to solve it.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Jul 2015 23:05:06 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server: SSD/RAID recommendations?"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI've written a new open source tool for easily parallelising SQL scripts in postgres. [obligatory plug: https://github.com/gbb/par_psql ]\n\nUsing it, I'm seeing a problem I've seen in other postgres projects involving parallelisation in the last 12 months.\n\nBasically:\n\n- I have machines here with up to 16 CPUs and 128GB memory, very fast SSDs and controller etc, carefully configured kernel/postgresql.conf for high performance.\n\n- Ordinary queries parallelise nearly perfectly (e.g. SELECT some_stuff ...), e.g. almost up to 16x performance improvement.\n\n- Calls to CPU-intensive user-defined pl/pgsql functions (e.g. SELECT myfunction(some_stuff)) do not parallelise well, even when they are independent or accessing tables in a read-only way. They hit a limit at 2.5x performance improvement relative to single-CPU performance (pg9.4) and 2x performance (pg9.3). This is about 6 times slower than I'm expecting. \n\n- Can't see what would be locking. It seems like it's the pl/pgsql environment itself that is somehow locking or incurring some huge frictional costs. Whether I use independently defined functions, independent source tables, independent output tables, makes no difference whatsoever, so it doesn't feel 'locky'. It also doesn't seem to be WAL/synchronisation related, as the machines I'm using can hit absurdly high pgbench rates, and I'm using unlogged tables.\n\nCurious? Take a quick peek here: https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md\n\nWondering what I'm missing here. Any ideas?\n\nGraeme. \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Jul 2015 16:15:52 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Thu, Jul 2, 2015 at 9:15 AM, Graeme B. Bell <[email protected]> wrote:\n\n> Hi everyone,\n>\n> I've written a new open source tool for easily parallelising SQL scripts\n> in postgres. [obligatory plug: https://github.com/gbb/par_psql ]\n>\n> Using it, I'm seeing a problem I've seen in other postgres projects\n> involving parallelisation in the last 12 months.\n>\n> Basically:\n>\n> - I have machines here with up to 16 CPUs and 128GB memory, very fast SSDs\n> and controller etc, carefully configured kernel/postgresql.conf for high\n> performance.\n>\n> - Ordinary queries parallelise nearly perfectly (e.g. SELECT some_stuff\n> ...), e.g. almost up to 16x performance improvement.\n>\n> - Calls to CPU-intensive user-defined pl/pgsql functions (e.g. SELECT\n> myfunction(some_stuff)) do not parallelise well, even when they are\n> independent or accessing tables in a read-only way. They hit a limit at\n> 2.5x performance improvement relative to single-CPU performance (pg9.4) and\n> 2x performance (pg9.3). This is about 6 times slower than I'm expecting.\n>\n> - Can't see what would be locking. It seems like it's the pl/pgsql\n> environment itself that is somehow locking or incurring some huge\n> frictional costs. Whether I use independently defined functions,\n> independent source tables, independent output tables, makes no difference\n> whatsoever, so it doesn't feel 'locky'. It also doesn't seem to be\n> WAL/synchronisation related, as the machines I'm using can hit absurdly\n> high pgbench rates, and I'm using unlogged tables.\n>\n> Curious? Take a quick peek here:\n> https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md\n>\n> Wondering what I'm missing here. Any ideas?\n>\n\nNo ideas, but I ran into the same thing. I have a set of C/C++ functions\nthat put some chemistry calculations into Postgres as extensions (things\nlike, \"calculate the molecular weight of this molecule\"). 
As SQL functions,\nthe whole thing bogged down, and we never got the scalability we needed. On\nour 8-CPU setup, we couldn't get more than 2 CPUs busy at the same time,\neven with dozens of clients.\n\nWhen I moved these same functions into an Apache fast-CGI HTTP service\n(exact same code, same network overhead), I could easily scale up and use\nthe full 100% of all eight CPUs.\n\nI have no idea why, and never investigated further. The convenience of\nhaving the functions in SQL wasn't that important.\n\nCraig\n\n\n>\n> Graeme.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------",
"msg_date": "Tue, 7 Jul 2015 20:05:57 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "\nOn 07/07/2015 08:05 PM, Craig James wrote:\n>\n>\n> No ideas, but I ran into the same thing. I have a set of C/C++ functions\n> that put some chemistry calculations into Postgres as extensions (things\n> like, \"calculate the molecular weight of this molecule\"). As SQL\n> functions, the whole thing bogged down, and we never got the scalability\n> we needed. On our 8-CPU setup, we couldn't get more than 2 CPUs busy at\n> the same time, even with dozens of clients.\n>\n> When I moved these same functions into an Apache fast-CGI HTTP service\n> (exact same code, same network overhead), I could easily scale up and\n> use the full 100% of all eight CPUs.\n>\n> I have no idea why, and never investigated further. The convenience of\n> having the functions in SQL wasn't that important.\n\nI admit that I haven't read this whole thread but:\n\nUsing Apache Fast-CGI, you are going to fork a process for each instance \nof the function being executed and that in turn will use all CPUs up to \nthe max available resource.\n\nWith PostgreSQL, that isn't going to happen unless you are running (at \nleast) 8 functions across 8 connections.\n\nJD\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Jul 2015 22:31:43 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "> On 07/07/2015 08:05 PM, Craig James wrote:\n>> \n>> \n>> No ideas, but I ran into the same thing. I have a set of C/C++ functions\n>> that put some chemistry calculations into Postgres as extensions (things\n>> like, \"calculate the molecular weight of this molecule\"). As SQL\n>> functions, the whole thing bogged down, and we never got the scalability\n>> we needed. On our 8-CPU setup, we couldn't get more than 2 CPUs busy at\n>> the same time, even with dozens of clients.\n\n\nHi all,\n\nThe sample code / results were put up last night at http://github.com/gbb/ppppt\n\nCraig's problem sounds similar to my own, assuming he means running C indirectly via SQL vs running C more directly.\nLots of parallel connections to postgres but maximum 2 CPUs of scaling (and it gets worse, as you try to run more things).\n\nTom Lane has posted an interesting comment over on the bugs list which identifies a likely source of at least one of the problems, maybe both. \nIt seems to be linked to internal locking inside postgres (which makes sense, given the results - both problems feel 'lock-y').\nAlso, he mentions a workaround for some functions that scales to 8-way apparently. \n\nhttp://www.postgresql.org/message-id/[email protected]\n\nI think it's potentially a big problem for CPU intensive postgres libraries like pgrouting, or perhaps the postgis & postgis raster functions, things like that.\nI don't know how well their functions are marked for e.g. immutability. \nAre there any postgis devs on this list?\n\nGraeme Bell\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 8 Jul 2015 11:06:50 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake <[email protected]>\nwrote:\n\n>\n> On 07/07/2015 08:05 PM, Craig James wrote:\n>\n>>\n>>\n>> No ideas, but I ran into the same thing. I have a set of C/C++ functions\n>> that put some chemistry calculations into Postgres as extensions (things\n>> like, \"calculate the molecular weight of this molecule\"). As SQL\n>> functions, the whole thing bogged down, and we never got the scalability\n>> we needed. On our 8-CPU setup, we couldn't get more than 2 CPUs busy at\n>> the same time, even with dozens of clients.\n>>\n>> When I moved these same functions into an Apache fast-CGI HTTP service\n>> (exact same code, same network overhead), I could easily scale up and\n>> use the full 100% of all eight CPUs.\n>>\n>> I have no idea why, and never investigated further. The convenience of\n>> having the functions in SQL wasn't that important.\n>>\n>\n> I admit that I haven't read this whole thread but:\n>\n> Using Apache Fast-CGI, you are going to fork a process for each instance\n> of the function being executed and that in turn will use all CPUs up to the\n> max available resource.\n>\n> With PostgreSQL, that isn't going to happen unless you are running (at\n> least) 8 functions across 8 connections.\n\n\nWell, right, which is why I mentioned \"even with dozens of clients.\"\nShouldn't that scale to at least all of the CPUs in use if the function is\nCPU intensive (which it is)?\n\nCraig\n\n\n\n>\n> JD\n>\n> --\n> Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\n> PostgreSQL Centered full stack support, consulting and development.\n> Announcing \"I'm offended\" is basically telling the world you can't\n> control your own emotions, so everyone else should do it for you.\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------",
"msg_date": "Wed, 8 Jul 2015 10:48:07 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "\nOn 07/08/2015 10:48 AM, Craig James wrote:\n\n> I admit that I haven't read this whole thread but:\n>\n> Using Apache Fast-CGI, you are going to fork a process for each\n> instance of the function being executed and that in turn will use\n> all CPUs up to the max available resource.\n>\n> With PostgreSQL, that isn't going to happen unless you are running\n> (at least) 8 functions across 8 connections.\n>\n>\n> Well, right, which is why I mentioned \"even with dozens of clients.\"\n> Shouldn't that scale to at least all of the CPUs in use if the function\n> is CPU intensive (which it is)?\n\nIn theory but that isn't PostgreSQL that does that, it will be the \nkernel scheduler. Although (and I am grasping at straws):\n\nI wonder if the execution is taking place outside of the backend proper \nor... are you using a pooler?\n\nJD\n\n\n>\n> Craig\n>\n>\n>\n>\n> JD\n>\n> --\n> Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\n> PostgreSQL Centered full stack support, consulting and development.\n> Announcing \"I'm offended\" is basically telling the world you can't\n> control your own emotions, so everyone else should do it for you.\n>\n>\n>\n>\n> --\n> ---------------------------------\n> Craig A. James\n> Chief Technology Officer\n> eMolecules, Inc.\n> ---------------------------------\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nAnnouncing \"I'm offended\" is basically telling the world you can't\ncontrol your own emotions, so everyone else should do it for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 08 Jul 2015 10:52:08 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Wed, Jul 8, 2015 at 10:52 AM, Joshua D. Drake <[email protected]>\nwrote:\n\n>\n> On 07/08/2015 10:48 AM, Craig James wrote:\n>\n> I admit that I haven't read this whole thread but:\n>>\n>> Using Apache Fast-CGI, you are going to fork a process for each\n>> instance of the function being executed and that in turn will use\n>> all CPUs up to the max available resource.\n>>\n>> With PostgreSQL, that isn't going to happen unless you are running\n>> (at least) 8 functions across 8 connections.\n>>\n>>\n>> Well, right, which is why I mentioned \"even with dozens of clients.\"\n>> Shouldn't that scale to at least all of the CPUs in use if the function\n>> is CPU intensive (which it is)?\n>>\n>\n> In theory but that isn't PostgreSQL that does that, it will be the kernel\n> scheduler. Although (and I am grasping at straws):\n>\n> I wonder if the execution is taking place outside of the backend proper\n> or... are you using a pooler?\n>\n\nNo pooler, and the functions were in an ordinary SQL extension .so library\nand loaded as\n\n CREATE OR REPLACE FUNCTION funcname( ... ) returns ...\n AS 'libxxx.so', 'funcname' LANGUAGE c STRICT IMMUTABLE COST 10000;\n\nCraig\n\n\n> JD\n>\n>\n>\n>> Craig\n>>\n>>\n>>\n>>\n>> JD\n>>\n>> --\n>> Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\n>> PostgreSQL Centered full stack support, consulting and development.\n>> Announcing \"I'm offended\" is basically telling the world you can't\n>> control your own emotions, so everyone else should do it for you.\n>>\n>>\n>>\n>>\n>> --\n>> ---------------------------------\n>> Craig A. James\n>> Chief Technology Officer\n>> eMolecules, Inc.\n>> ---------------------------------\n>>\n>\n>\n> --\n> Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\n> PostgreSQL Centered full stack support, consulting and development.\n> Announcing \"I'm offended\" is basically telling the world you can't\n> control your own emotions, so everyone else should do it for you.\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------",
"msg_date": "Wed, 8 Jul 2015 11:08:56 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
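Since Craig's mail shows the registration of his C functions only in fragment form, here is a hedged, self-contained sketch of the same pattern; `libxxx.so`, `funcname`, and the signature are placeholders taken from the mail, not a real extension:

```sql
-- Sketch only: library name, symbol, and signature are placeholders from the mail.
-- IMMUTABLE declares the result depends only on the arguments (the thread later
-- shows this marking also matters for scaling), and a high COST tells the planner
-- the function is expensive to execute.
CREATE OR REPLACE FUNCTION funcname(mol text)
    RETURNS double precision
    AS 'libxxx.so', 'funcname'
    LANGUAGE c STRICT IMMUTABLE COST 10000;
```

This is the standard two-part `AS 'obj_file', 'link_symbol'` form for C-language functions; running it requires a live PostgreSQL server with the shared library installed.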
{
"msg_contents": "On Wed, Jul 8, 2015 at 12:48 PM, Craig James <[email protected]> wrote:\n> On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake <[email protected]>\n> wrote:\n>>\n>>\n>> On 07/07/2015 08:05 PM, Craig James wrote:\n>>>\n>>>\n>>>\n>>> No ideas, but I ran into the same thing. I have a set of C/C++ functions\n>>> that put some chemistry calculations into Postgres as extensions (things\n>>> like, \"calculate the molecular weight of this molecule\"). As SQL\n>>> functions, the whole thing bogged down, and we never got the scalability\n>>> we needed. On our 8-CPU setup, we couldn't get more than 2 CPUs busy at\n>>> the same time, even with dozens of clients.\n>>>\n>>> When I moved these same functions into an Apache fast-CGI HTTP service\n>>> (exact same code, same network overhead), I could easily scale up and\n>>> use the full 100% of all eight CPUs.\n>>>\n>>> I have no idea why, and never investigated further. The convenience of\n>>> having the functions in SQL wasn't that important.\n>>\n>>\n>> I admit that I haven't read this whole thread but:\n>>\n>> Using Apache Fast-CGI, you are going to fork a process for each instance\n>> of the function being executed and that in turn will use all CPUs up to the\n>> max available resource.\n>>\n>> With PostgreSQL, that isn't going to happen unless you are running (at\n>> least) 8 functions across 8 connections.\n>\n>\n> Well, right, which is why I mentioned \"even with dozens of clients.\"\n> Shouldn't that scale to at least all of the CPUs in use if the function is\n> CPU intensive (which it is)?\n\nonly in the absence of inter-process locking and cache line bouncing.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 8 Jul 2015 13:46:53 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On 2015-07-08 13:46:53 -0500, Merlin Moncure wrote:\n> On Wed, Jul 8, 2015 at 12:48 PM, Craig James <[email protected]> wrote:\n> > On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake <[email protected]>\n> >> Using Apache Fast-CGI, you are going to fork a process for each instance\n> >> of the function being executed and that in turn will use all CPUs up to the\n> >> max available resource.\n> >>\n> >> With PostgreSQL, that isn't going to happen unless you are running (at\n> >> least) 8 functions across 8 connections.\n> >\n> >\n> > Well, right, which is why I mentioned \"even with dozens of clients.\"\n> > Shouldn't that scale to at least all of the CPUs in use if the function is\n> > CPU intensive (which it is)?\n> \n> only in the absence of inter-process locking and cache line bouncing.\n\nAnd additionally memory bandwidth (shared between everything, even in\nthe numa case), cross socket/bus bandwidth (absolutely performance\ncritical in multi-socket configurations), cache capacity (shared between\ncores, and sometimes even sockets!).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 8 Jul 2015 22:27:33 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Wed, Jul 8, 2015 at 1:27 PM, Andres Freund <[email protected]> wrote:\n\n> On 2015-07-08 13:46:53 -0500, Merlin Moncure wrote:\n> > On Wed, Jul 8, 2015 at 12:48 PM, Craig James <[email protected]>\n> wrote:\n> > > On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake <[email protected]\n> >\n> > >> Using Apache Fast-CGI, you are going to fork a process for each\n> instance\n> > >> of the function being executed and that in turn will use all CPUs up\n> to the\n> > >> max available resource.\n> > >>\n> > >> With PostgreSQL, that isn't going to happen unless you are running (at\n> > >> least) 8 functions across 8 connections.\n> > >\n> > >\n> > > Well, right, which is why I mentioned \"even with dozens of clients.\"\n> > > Shouldn't that scale to at least all of the CPUs in use if the\n> function is\n> > > CPU intensive (which it is)?\n> >\n> > only in the absence of inter-process locking and cache line bouncing.\n>\n> And addititionally memory bandwidth (shared between everything, even in\n> the numa case), cross socket/bus bandwidth (absolutely performance\n> critical in multi-socket configurations), cache capacity (shared between\n> cores, and sometimes even sockets!).\n>\n\nFrom my admittedly naive point of view, it's hard to see why any of this\nmatters. I have functions that do purely CPU-intensive mathematical\ncalculations ... you could imagine something like is_prime(N) that\ndetermines if N is a prime number. I have eight clients that connect to\neight backends. Each client issues an SQL command like, \"select\nis_prime(N)\" where N is a simple number.\n\nAre you saying that in order to calculate is_prime(N), all of that stuff\n(inter-process locking, memory bandwidth, bus bandwidth, cache capacity,\netc.) is even relevant? And if so, how is it that Postgres is so different\nfrom an Apache fast-CGI program that runs the exact same is_prime(N)\ncalculation?\n\nJust curious ... as I said, I've already implemented a different solution.\n\nCraig\n",
"msg_date": "Wed, 8 Jul 2015 15:38:24 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On 2015-07-08 15:38:24 -0700, Craig James wrote:\n> From my admittedly naive point of view, it's hard to see why any of this\n> matters. I have functions that do purely CPU-intensive mathematical\n> calculations ... you could imagine something like is_prime(N) that\n> determines if N is a prime number. I have eight clients that connect to\n> eight backends. Each client issues an SQL command like, \"select\n> is_prime(N)\" where N is a simple number.\n\nI mostly replied to Merlin's general point (additionally in the context of\nplpgsql).\n\nBut I have a hard time seeing that postgres would be the bottleneck for an\nis_prime() function (or something with similar characteristics) that's\nwritten in C where the average runtime is more than, say, a couple\nthousand cycles. I'd like to see a profile of that.\n\nAndres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 00:45:18 +0200",
"msg_from": "[email protected] (Andres Freund)",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "[email protected] (Andres Freund) writes:\n> On 2015-07-08 15:38:24 -0700, Craig James wrote:\n>> From my admittedly naive point of view, it's hard to see why any of this\n>> matters. I have functions that do purely CPU-intensive mathematical\n>> calculations ... you could imagine something like is_prime(N) that\n>> determines if N is a prime number. I have eight clients that connect to\n>> eight backends. Each client issues an SQL command like, \"select\n>> is_prime(N)\" where N is a simple number.\n\n> I mostly replied to Merlin's general point (additionally in the context of\n> plpgsql).\n\n> But I have a hard time seing that postgres would be the bottleneck for a\n> is_prime() function (or something with similar characteristics) that's\n> written in C where the average runtime is more than, say, a couple\n> thousand cyles. I'd like to see a profile of that.\n\nBut that was not the case that Graeme was complaining about. He's talking\nabout simple-arithmetic-and-looping written in plpgsql, in a volatile\nfunction that is going to take a new snapshot for every statement, even if\nthat's only \"n := n+1\". So it's going to spend a substantial fraction of\nits runtime banging on the ProcArray, and that doesn't scale. If you\nwrite your is_prime function purely in plpgsql, and don't bother to mark\nit nonvolatile, *it will not scale*. It'll be slow even in single-thread\nterms, but it'll be particularly bad if you're saturating a multicore\nmachine with it.\n\nOne of my Salesforce colleagues has been looking into ways that we could\ndecide to skip the per-statement snapshot acquisition even in volatile\nfunctions, if we could be sure that a particular statement isn't going to\ndo anything that would need a snapshot. 
Now, IMO that doesn't really do\nmuch for properly written plpgsql; but there's an awful lot of bad plpgsql\ncode out there, and it can make a huge difference for that.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 08 Jul 2015 23:38:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
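To make Tom Lane's volatility point concrete, here is an illustrative sketch (not code from the thread) of a pure-plpgsql is_prime; the only scaling-relevant difference between the "bad" and "good" versions he describes is the volatility label on the last line:

```sql
-- Illustrative sketch, assuming 9.x-era plpgsql. Left as the default
-- VOLATILE, every statement in the loop (even d := d + 2) acquires a
-- fresh snapshot and contends on the ProcArray; marked IMMUTABLE, the
-- function uses the calling query's snapshot and can scale across backends.
CREATE OR REPLACE FUNCTION is_prime(n bigint) RETURNS boolean AS $$
DECLARE
    d bigint := 3;
BEGIN
    IF n < 2 THEN RETURN false; END IF;
    IF n % 2 = 0 THEN RETURN n = 2; END IF;
    WHILE d * d <= n LOOP
        IF n % d = 0 THEN RETURN false; END IF;
        d := d + 2;
    END LOOP;
    RETURN true;
END;
$$ LANGUAGE plpgsql IMMUTABLE;  -- omit IMMUTABLE and this silently defaults to VOLATILE
```

Running this requires a PostgreSQL server; the function body itself is a plain trial-division primality test chosen only to mirror Craig's hypothetical.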
{
"msg_contents": "On 08 Jul 2015, at 22:27, Andres Freund <[email protected]> wrote:\n\n> On 2015-07-08 13:46:53 -0500, Merlin Moncure wrote:\n>> On Wed, Jul 8, 2015 at 12:48 PM, Craig James <[email protected]> wrote:\n>>> \n>>> Well, right, which is why I mentioned \"even with dozens of clients.\"\n>>> Shouldn't that scale to at least all of the CPUs in use if the function is\n>>> CPU intensive (which it is)?\n>> \n>> only in the absence of inter-process locking and cache line bouncing.\n> \n> And additionally memory bandwidth (shared between everything, even in\n> the numa case), cross socket/bus bandwidth (absolutely performance\n> critical in multi-socket configurations), cache capacity (shared between\n> cores, and sometimes even sockets!).\n\n1. Note for future readers - it's also worth noting that depending on the operation, and on your hardware, you may have fewer \"CPU cores\" than you think to parallelise upon.\n\n1a. For example, AMD CPUs list the number of integer cores (e.g. 16), but there are actually only half as many cores available for floating point work (8). So if your functions need to use floating point, your scaling will suffer badly on FP functions. \n\nhttps://en.wikipedia.org/wiki/Bulldozer_(microarchitecture)\n \"In terms of hardware complexity and functionality, this \"module\" is equal to a dual-core processor in its integer power, and to a single-core processor in its floating-point power: for each two integer cores, there is one floating-point core.\"\n\n\n1b. Or, if you have hyper-threading enabled on an Intel CPU, you may think you have e.g. 8 cores, but if all the threads are running the same type of operation repeatedly, it won't be possible for the hyper-threading to work well and you'll only get 4 in practice. Maybe less due to overheads. Or, if your work is continually going to main memory for data (e.g. limited by the memory bus), it will run at 4-core speed, because the cores have to share the same memory bus. 
\n\nHyper-threading depends on the 2 logical cores being asked to perform two different types of tasks at once (each having relatively lower demands on memory).\n\n\"When execution resources would not be used by the current task in a processor without hyper-threading, and especially when the processor is stalled, a hyper-threading equipped processor can use those execution resources to execute another scheduled task.\"\nhttps://en.wikipedia.org/wiki/Hyper-threading\nhttps://en.wikipedia.org/wiki/Superscalar\n\n\n2. Keep in mind also when benchmarking that it's normal to see a small drop-off when you hit the maximum number of cores for your system. \nAfter all, the O/S and the benchmark program and anything else you have running will need a core or two.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 08:59:26 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On 2015-07-08 23:38:38 -0400, Tom Lane wrote:\n> [email protected] (Andres Freund) writes:\n> > On 2015-07-08 15:38:24 -0700, Craig James wrote:\n> >> From my admittedly naive point of view, it's hard to see why any of this\n> >> matters. I have functions that do purely CPU-intensive mathematical\n> >> calculations ... you could imagine something like is_prime(N) that\n> >> determines if N is a prime number. I have eight clients that connect to\n> >> eight backends. Each client issues an SQL command like, \"select\n> >> is_prime(N)\" where N is a simple number.\n>\n> > I mostly replied to Merlin's general point (additionally in the context of\n> > plpgsql).\n>\n> > But I have a hard time seeing that postgres would be the bottleneck for an\n> > is_prime() function (or something with similar characteristics) that's\n> > written in C where the average runtime is more than, say, a couple\n> > thousand cycles. I'd like to see a profile of that.\n>\n> But that was not the case that Graeme was complaining about.\n\nNo, Craig was complaining about that case...\n\n> One of my Salesforce colleagues has been looking into ways that we could\n> decide to skip the per-statement snapshot acquisition even in volatile\n> functions, if we could be sure that a particular statement isn't going to\n> do anything that would need a snapshot.\n\nYea, I actually commented about that on IRC as well.\n\nI was thinking about actually continuing to get a snapshot, but mark it\nas 'complete on usage'. I.e. call GetSnapshotData() only when the\nsnapshot is used to decide about visibility. We probably can't do that\nin the toplevel visibility case because it'll probably have noticeable\nsemantic effects, but ISTM it should be doable for the volatile function\nusing spi case.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 11:00:08 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On 09 Jul 2015, at 05:38, Tom Lane <[email protected]> wrote:\n\n> If you\n> write your is_prime function purely in plpgsql, and don't bother to mark\n> it nonvolatile, *it will not scale*. \n\n> much for properly written plpgsql; but there's an awful lot of bad plpgsql\n> code out there, and it can make a huge difference for that.\n\n\nHi Tom, \n\nI object to phrases like 'don't bother to mark it' and 'bad plpgsql' here. That is putting the blame on programmers. Clearly, if there is no end of code out there that isn't right in this regard, there's something very wrong in the project documentation.\n\n1. I have been writing pl/pgsql on and off for a couple of years now and I've read quite a bit of the postgres doumentation, but I don't recall seeing a clear statement telling me I should mark pl/pgsql functions nonvolatile wherever possible or throw all performance and scalability out the window. I'm sure there may be a line hidden somewhere in the docs, but judging from the impact it has in practice, this seems like a very fundamental concept that should be repeatedly and clearly marked in the docs. \n\n2. Furthermore, I have never come across anything in the documentation that made it clear to me that any pl/pgsql function I write will, by default, be taking out locks for every single statement in the code. I've written code in I dunno, maybe 15-20 different languages in my life, and I can't think of another language offhand that does that by default. From the reactions on this thread to this benchmark and the par_psql benchmarks, it doesn't seem that it was even immediately obvious to many postgres enthusiasts and developers.\n\n3. I don't disagree that the benchmark code is objectively 'bad' in the sense that it is missing an important optimisation. \n\nBut I really don't think it helps to frame this as laziness or \"bad\" in any other sense of the word e.g. 
'clumsy'.\n\nLet's look at the postgresql documentation for some examples of 'bad' and lazy code: \n\nhttp://www.postgresql.org/docs/9.3/static/plpgsql-structure.html\nhttp://www.postgresql.org/docs/9.3/static/plpgsql-declarations.html\n\nThere are about 13 functions on that page.\nHow many functions on that page make use of non-volatile or immutable wherever it would be appropriate? \nzero.\n\nor this one: http://www.postgresql.org/docs/9.3/static/plpgsql-control-structures.html\nzero\n\nor this one: http://www.postgresql.org/docs/9.3/static/plpgsql-cursors.html#PLPGSQL-CURSOR-USING\nzero\n\nThe reason 90% of people out there are 'not bothering' and 'writing bad code' is because **99% of the postgresql documentation teaches them to do it that way**. \n\nSo when you talk about other people 'not bothering' to do things - who is really at fault here for what you see as endemic 'bad' or 'lazy' code? Is it the new postgres programmers, or the people that taught them with \"bad\" examples consistently throughout the *entire project documentation*, starting from the very first example? \n\nI think I'm going to raise this as a documentation bug. \n\nGraeme. \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 09:44:24 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Wed, Jul 8, 2015 at 5:38 PM, Craig James <[email protected]> wrote:\n> On Wed, Jul 8, 2015 at 1:27 PM, Andres Freund <[email protected]> wrote:\n>>\n>> On 2015-07-08 13:46:53 -0500, Merlin Moncure wrote:\n>> > On Wed, Jul 8, 2015 at 12:48 PM, Craig James <[email protected]>\n>> > wrote:\n>> > > On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake\n>> > > <[email protected]>\n>> > >> Using Apache Fast-CGI, you are going to fork a process for each\n>> > >> instance\n>> > >> of the function being executed and that in turn will use all CPUs up\n>> > >> to the\n>> > >> max available resource.\n>> > >>\n>> > >> With PostgreSQL, that isn't going to happen unless you are running\n>> > >> (at\n>> > >> least) 8 functions across 8 connections.\n>> > >\n>> > >\n>> > > Well, right, which is why I mentioned \"even with dozens of clients.\"\n>> > > Shouldn't that scale to at least all of the CPUs in use if the\n>> > > function is\n>> > > CPU intensive (which it is)?\n>> >\n>> > only in the absence of inter-process locking and cache line bouncing.\n>>\n>> And addititionally memory bandwidth (shared between everything, even in\n>> the numa case), cross socket/bus bandwidth (absolutely performance\n>> critical in multi-socket configurations), cache capacity (shared between\n>> cores, and sometimes even sockets!).\n>\n>\n> From my admittedly naive point of view, it's hard to see why any of this\n> matters. I have functions that do purely CPU-intensive mathematical\n> calculations ... you could imagine something like is_prime(N) that\n> determines if N is a prime number. I have eight clients that connect to\n> eight backends. Each client issues an SQL command like, \"select is_prime(N)\"\n> where N is a simple number.\n>\n> Are you saying that in order to calculate is_prime(N), all of that stuff\n> (inter-process locking, memory bandwith, bus bandwidth, cache capacity,\n> etc.) is even relevant? 
And if so, how is it that Postgres is so different\n> from an Apache fast-CGI program that runs the exact same is_prime(N)\n> calculation?\n>\n> Just curious ... as I said, I've already implemented a different solution.\n\nIf your is_prime() was written in C and was written so that it did not\nutilize the database API, it should scale up quite nicely. This can\nbe easily proved. On my quad core workstation,\n\npostgres=# select 12345! * 0;\n ?column?\n──────────\n 0\n(1 row)\n\nTime: 10435.554 ms\n\n\n...which is heavily cpu bound, takes about 10 seconds. scaling out to\n4 threads via:\n\ntime ~/pgdev/bin/pgbench -n -t1 -c4 -f <(echo \"select 12345! * 0;\")\n\nyields:\nreal 0m11.317s\nuser 0m0.001s\nsys 0m0.005s\n\n...I'll call that pretty good scaling. The reason why this scales so\ngood is that the numeric code is all operating on local data\nstructures and is not involving backend componentry with it's various\nattached complexity such as having to be checked for being visible to\nthe current transaction.\n\nI submit that toy benchmarks like factoring or pi digits are not\nreally good indicators of language scaling and performance because\njust about all real world code involves data structures, i/o, memory\nallocation, amateur coders, etc. Java tends to approach C in\nbenchmark shootouts but woefully underperforms my expectations\nrelative to C in code that does things that's actually useful (aside:\nif you think I'm knocking java, the situation is even worse with most\nother languages I come across).\n\npl/pgsql is simply not optimized for that style of coding although if\nyou know postgres you can start to tickle the limits of what's\nexpected from the language. 
If that isn't working for you, pl/v8\nstrikes me as the best alternative due to its performance and good\nintegration with postgres data structures (in fact, I'd be arguing for\nit to be moved to core if the v8 dependency wasn't so capricious).\nEither way, I'll advocate any solution that allows you to code inside\nthe database environment as opposed to the client side.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 07:56:47 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Thu, Jul 9, 2015 at 4:44 AM, Graeme B. Bell <[email protected]> wrote:\n> On 09 Jul 2015, at 05:38, Tom Lane <[email protected]> wrote:\n>\n>> If you\n>> write your is_prime function purely in plpgsql, and don't bother to mark\n>> it nonvolatile, *it will not scale*.\n>\n>> much for properly written plpgsql; but there's an awful lot of bad plpgsql\n>> code out there, and it can make a huge difference for that.\n>\n>\n> Hi Tom,\n>\n> I object to phrases like 'don't bother to mark it' and 'bad plpgsql' here. That is putting the blame on programmers. Clearly, if there is no end of code out there that isn't right in this regard, there's something very wrong in the project documentation.\n>\n> 1. I have been writing pl/pgsql on and off for a couple of years now and I've read quite a bit of the postgres doumentation, but I don't recall seeing a clear statement telling me I should mark pl/pgsql functions nonvolatile wherever possible or throw all performance and scalability out the window. I'm sure there may be a line hidden somewhere in the docs, but judging from the impact it has in practice, this seems like a very fundamental concept that should be repeatedly and clearly marked in the docs.\n>\n> 2. Furthermore, I have never come across anything in the documentation that made it clear to me that any pl/pgsql function I write will, by default, be taking out locks for every single statement in the code. I've written code in I dunno, maybe 15-20 different languages in my life, and I can't think of another language offhand that does that by default. From the reactions on this thread to this benchmark and the par_psql benchmarks, it doesn't seem that it was even immediately obvious to many postgres enthusiasts and developers.\n>\n> 3. 
I don't disagree that the benchmark code is objectively 'bad' in the sense that it is missing an important optimisation.\n\nParticularly with regards documentation, a patch improving things is\nmuch more likely to improve the situation than griping. Also,\nconversation on this list gets recorded for posterity and google is\nremarkably good at matching people looking for problems with\nsolutions. So, even in absence of a patch perhaps we've made the\nlives of future head-scratchers a little bit easier with this\ndiscussion.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 08:03:32 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "Graeme B. Bell schrieb am 09.07.2015 um 11:44:\n> I don't recall seeing a clear statement telling me I should mark pl/pgsql\n> functions nonvolatile wherever possible or throw all performance and\n> scalability out the window. \n\nFrom: http://www.postgresql.org/docs/current/static/xfunc-volatility.html\n\n \"For best optimization results, you should label your functions \n with the strictest volatility category that is valid for them.\"\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 09 Jul 2015 15:22:10 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
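[Editorial note: the doc sentence Thomas quotes is terse; as a hedged illustration of what "labeling the strictest valid volatility category" looks like for the is_prime example discussed upthread. The function body below is hypothetical, not taken from the docs or the thread.]

```sql
-- Sketch only: a pure pl/pgsql function labeled IMMUTABLE, the strictest
-- category that is valid for it. Without an explicit label the function
-- defaults to VOLATILE, which blocks constant-folding and adds per-call
-- overhead of the kind discussed in this thread.
CREATE OR REPLACE FUNCTION is_prime(n integer) RETURNS boolean AS $$
DECLARE
    i integer;
BEGIN
    IF n < 2 THEN
        RETURN false;
    END IF;
    FOR i IN 2 .. trunc(sqrt(n))::integer LOOP
        IF n % i = 0 THEN
            RETURN false;
        END IF;
    END LOOP;
    RETURN true;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
```

The rough rule: IMMUTABLE for pure functions of their arguments, STABLE for functions that read but never modify the database, and the VOLATILE default only when neither applies.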
{
"msg_contents": "\nOn 09 Jul 2015, at 15:22, Thomas Kellerer <[email protected]> wrote:\n\n> Graeme B. Bell schrieb am 09.07.2015 um 11:44:\n>> I don't recall seeing a clear statement telling me I should mark pl/pgsql\n>> functions nonvolatile wherever possible or throw all performance and\n>> scalability out the window. \n> \n> From: http://www.postgresql.org/docs/current/static/xfunc-volatility.html\n> \n> \"For best optimization results, you should label your functions \n> with the strictest volatility category that is valid for them.\"\n\n\nHi Thomas,\n\nThank you very much for the link.\n\nHowever, the point I was making wasn't that no sentence exists anywhere. My point was that I've read the docs more than anyone else in my institute and I was completely unaware of this. \n\nIt is also quite vague - if you hand that to a younger programmer in particular, how do they implement it in practice? When is it important to do it? If this one factor silently breaks multiprocessor scaling of pl/pgsql, and multiprocessing is the biggest trend in CPU processing of the last decade (comparing server CPUs of 2005 with 2015), then why is this information not up front and clear?\n\n\nA second point to keep in mind is that optimization and parallelisation/scalability are not always the same thing. \n\nFor example, in one project I took a bunch of looped parallel UPDATEs on a set of 50 tables, and rewrote them so as to run the loop all at once inside a pl/pgsql function. Crudely, I took out the table-level for loop and put it at row-level instead. \n\nI expected they'd execute much faster if UPDATEs were using data still in cache. Also, I would be updating without writing out WAL entries to disk repeatedly. \n\nIt turns out the update per row ran much faster - as expected - when I used one table, but when I ran it in parallel on many tables, the performance was even worse than when I started. 
If you look at the benchmarks, you'll see that performance drops through the floor at 8-16 cores. I think that was when I first noticed this bug/feature.\n\n[If anyone is curious, the way I solved that one in the end was to pre-calculate every possible way the tables might be updated after N loops of updates using Python, and import that as a lookup table into PG. It turns out that although we had 10's of GB of data per table, there were only about 100,000 different types of situation, and only e.g. 80 iterations to consider. Then I ran a single set of UPDATEs with no pl/pgsql. It was something like a 10000x performance improvement.]\n\nGraeme.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 15:04:06 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
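[Editorial note: the lookup-table workaround Graeme describes above can be sketched roughly as follows. All table and column names are hypothetical, since the original schema is not shown in the thread.]

```sql
-- Sketch of the pre-computation trick described above. The outcome of N
-- iterations of the looped UPDATE is computed once per distinct situation
-- (externally, e.g. in Python) and loaded into a lookup table:
CREATE TABLE update_lookup (
    start_state integer NOT NULL,
    iterations  integer NOT NULL,
    end_state   integer NOT NULL,
    PRIMARY KEY (start_state, iterations)
);

-- ...after which the whole looped pl/pgsql workload collapses into one
-- set-based statement, with no per-row procedural code at all:
UPDATE big_table AS b
SET    state = l.end_state
FROM   update_lookup AS l
WHERE  l.start_state = b.state
  AND  l.iterations  = 80;
```

The trade is memory for CPU: because the number of distinct situations and iteration counts is small relative to the data, the lookup table stays tiny even when the data tables run to tens of GB.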
{
"msg_contents": ">> \n>> 3. I don't disagree that the benchmark code is objectively 'bad' in the sense that it is missing an important optimisation.\n> \n> Particularly with regards documentation, a patch improving things is\n> much more likely to improve the situation than griping. Also,\n> conversation on this list gets recorded for posterity and google is\n> remarkably good at matching people looking for problems with\n> solutions. So, even in absence of a patch perhaps we've made the\n> lives of future head-scratchers a little bit easier with this\n> discussion.\n\nI agree that patch>gripe, and about the google aspect. But nonetheless, a well-intentioned gripe is > ignorance of a problem. \n\nAs mentioned earlier, I'm sick just now and will be back in hospital again tomorrow & monday, so a patch may be a little bit much to ask from me here :-) It's a bit much even keeping up with the posts on the thread so far.\n\nI might try to fix the documentation a bit later, though as someone with no experience in marking up volatility on pl/pgsql functions I doubt my efforts would be that great. I also have other OSS project contributions that need some attention first. \n\nRe: the google effect. Are these mailing list archives mirrored anywhere, incidentally? For example, I notice we just lost http://reddit.com/r/amd at the weekend, all the discussion of the last few years on that forum is out of reach. \n\nGraeme Bell\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 15:12:04 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Thu, Jul 9, 2015 at 10:12 AM, Graeme B. Bell <[email protected]> wrote:\n>>>\n>>> 3. I don't disagree that the benchmark code is objectively 'bad' in the sense that it is missing an important optimisation.\n>>\n>> Particularly with regards documentation, a patch improving things is\n>> much more likely to improve the situation than griping. Also,\n>> conversation on this list gets recorded for posterity and google is\n>> remarkably good at matching people looking for problems with\n>> solutions. So, even in absence of a patch perhaps we've made the\n>> lives of future head-scratchers a little bit easier with this\n>> discussion.\n>\n> I agree that patch>gripe, and about the google aspect. But nonetheless, a well-intentioned gripe is > ignorance of a problem.\n>\n> As mentioned earlier, I'm sick just now and will be back in hospital again tomorrow & monday, so a patch may be a little bit much to ask from me here :-) It's a bit much even keeping up with the posts on the thread so far.\n>\n> I might try to fix the documentation a bit later, though as someone with no experience in marking up volatility on pl/pgsql functions I doubt my efforts would be that great. I also have other OSS project contributions that need some attention first.\n>\n> Re: the google effect. Are these mailing list archives mirrored anywhere, incidentally? For example, I notice we just lost http:reddit.com/r/amd at the weekend, all the discussion of the last few years on that forum is out of reach.\n\nThe community maintains it's own mailing list archives in\npostgresql.org. 
Short of an array of tactical nuclear strikes this is\ngoing to be preserved because it represents the history of the project\nand in many ways is as important as the source code itself.\n\nThe archives are also mirrored by a number of high quality providers\nsuch as nabble (which tend to beat our archives in google rankings --\nlikely due to the improved interface).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 10:42:11 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "\nOn 09 Jul 2015, at 17:42, Merlin Moncure <[email protected]> wrote:\n\n> The community maintains it's own mailing list archives in\n> postgresql.org. Short of an array of tactical nuclear strikes this is\n> going to be preserved \n\nGood to know, I've seen a lot of dead software projects throughout my life. \n\nBut still - we will have to pray that Kim Jong Un never decides to become a MySQL contributor... :)\n\nGraeme.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 16:31:29 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code\n parallelise so badly when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": ">>>>> \"GBB\" == Graeme B Bell <[email protected]> writes:\n\nGBB> 1a. For example AMD CPUs list the number of integer cores (e.g. 16),\nGBB> but there is actually only half as many cores available for floating\nGBB> point work (8). So if your functions need to use floating point, your\nGBB> scaling will suffer badly on FP functions.\n\nThat is half as many 256-bit float units; for scalar math and for\n128-bit vector math each core gets a half of the float unit.\n\nOnly for the 256-bit vector math do the schedulers have to compete for\nfloat unit access.\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 0x997A9F17ED7DAEA6\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 11 Jul 2015 19:18:21 -0400",
"msg_from": "James Cloos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does CPU-intensive pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI've written a new open source tool for easily parallelising SQL scripts in postgres. [obligatory plug: https://github.com/gbb/par_psql ]\n\nUsing it, I'm seeing a problem that I've also seen in other postgres projects involving high degrees of parallelisation in the last 12 months.\n\nBasically:\n\n- I have machines here with up to 16 CPU cores and 128GB memory, very fast SSDs and controller etc, carefully configured kernel/postgresql.conf for high performance.\n\n- Ordinary queries parallelise nearly perfectly (e.g. SELECT some_stuff ...), e.g. almost up to 16x performance improvement.\n\n- Non-DB stuff like GDAL, python etc. parallelise nearly perfectly. \n\n- HOWEVER calls to CPU-intensive user-defined pl/pgsql functions (e.g. SELECT myfunction(some_stuff)) do not parallelise well, even when they are independently defined functions, or accessing tables in a read-only way. They hit a limit of 2.5x performance improvement relative to single-CPU performance (pg9.4) and merely 2x performance (pg9.3) regardless of how many CPU cores I throw at them. This is about 6 times slower than I'm expecting. \n\n\nI can't see what would be locking. It seems like it's the pl/pgsql environment itself that is somehow locking or incurring some huge frictional costs. Whether I use independently defined functions, independent source tables, independent output tables, makes no difference whatsoever, so it doesn't feel 'lock-related'. It also doesn't seem to be WAL/synchronisation related, as the machines I'm using can hit absurdly high pgbench rates, and I'm using unlogged tables for output. \n\nTake a quick peek here: https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md\n\nI'm wondering what I'm missing here. Any ideas? \n\nGraeme.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Jul 2015 14:48:43 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hmmm... why does pl/pgsql code parallelise so badly when queries\n parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "\n>\n>Hi everyone,\n>\n>I've written a new open source tool for easily parallelising SQL scripts in postgres. [obligatory plug: https://github.com/gbb/par_psql ]\n>\n>Using it, I'm seeing a problem that I've also seen in other postgres projects involving high degrees of parallelisation in the last 12 months.\n>\n>Basically:\n>\n>- I have machines here with up to 16 CPU cores and 128GB memory, very fast SSDs and controller etc, carefully configured kernel/postgresql.conf for high performance.\n>\n>- Ordinary queries parallelise nearly perfectly (e.g. SELECT some_stuff ...), e.g. almost up to 16x performance improvement.\n>\n>- Non-DB stuff like GDAL, python etc. parallelise nearly perfectly.\n>\n>- HOWEVER calls to CPU-intensive user-defined pl/pgsql functions (e.g. SELECT myfunction(some_stuff)) do not parallelise well, even when they are independently defined functions, or accessing tables in a read-only way. They hit a limit of 2.5x performance improvement relative to single-CPU performance (pg9.4) and merely 2x performance (pg9.3) regardless of how many CPU cores I throw at them. This is about 6 times slower than I'm expecting.\n>\n>\n>I can't see what would be locking. It seems like it's the pl/pgsql environment itself that is somehow locking or incurring some huge frictional costs. Whether I use independently defined functions, independent source tables, independent output tables, makes no difference whatsoever, so it doesn't feel 'lock-related'. It also doesn't seem to be WAL/synchronisation related, as the machines I'm using can hit absurdly high pgbench rates, and I'm using unlogged tables for output.\n>\n>Take a quick peek here: https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md\n>\n>I'm wondering what I'm missing here. 
Any ideas?\n>\n>Graeme.\n>\n\nauto explain might help giving some insight in what's going on: http://www.postgresql.org/docs/9.4/static/auto-explain.html\n\nRegards, \nMarc Mamin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Jul 2015 16:15:44 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly when\n queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Fri, Jul 3, 2015 at 9:48 AM, Graeme B. Bell <[email protected]> wrote:\n> Hi everyone,\n>\n> I've written a new open source tool for easily parallelising SQL scripts in postgres. [obligatory plug: https://github.com/gbb/par_psql ]\n>\n> Using it, I'm seeing a problem that I've also seen in other postgres projects involving high degrees of parallelisation in the last 12 months.\n>\n> Basically:\n>\n> - I have machines here with up to 16 CPU cores and 128GB memory, very fast SSDs and controller etc, carefully configured kernel/postgresql.conf for high performance.\n>\n> - Ordinary queries parallelise nearly perfectly (e.g. SELECT some_stuff ...), e.g. almost up to 16x performance improvement.\n>\n> - Non-DB stuff like GDAL, python etc. parallelise nearly perfectly.\n>\n> - HOWEVER calls to CPU-intensive user-defined pl/pgsql functions (e.g. SELECT myfunction(some_stuff)) do not parallelise well, even when they are independently defined functions, or accessing tables in a read-only way. They hit a limit of 2.5x performance improvement relative to single-CPU performance (pg9.4) and merely 2x performance (pg9.3) regardless of how many CPU cores I throw at them. This is about 6 times slower than I'm expecting.\n>\n> I can't see what would be locking. It seems like it's the pl/pgsql environment itself that is somehow locking or incurring some huge frictional costs. Whether I use independently defined functions, independent source tables, independent output tables, makes no difference whatsoever, so it doesn't feel 'lock-related'. It also doesn't seem to be WAL/synchronisation related, as the machines I'm using can hit absurdly high pgbench rates, and I'm using unlogged tables for output.\n>\n> Take a quick peek here: https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md\n>\n> I'm wondering what I'm missing here. Any ideas?\n\nI'm not necessarily seeing your results. 
via pgbench,\n\nmmoncure@mernix2 11:34 AM ~$ ~/pgdev/bin/pgbench -n -T 60 -f b.sql\ntransaction type: Custom query\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 658833\nlatency average: 0.091 ms\ntps = 10980.538470 (including connections establishing)\ntps = 10980.994547 (excluding connections establishing)\nmmoncure@mernix2 11:35 AM ~$ ~/pgdev/bin/pgbench -n -T 60 -c4 -j4 -f b.sql\ntransaction type: Custom query\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 4\nduration: 60 s\nnumber of transactions actually processed: 2847631\nlatency average: 0.084 ms\ntps = 47460.430447 (including connections establishing)\ntps = 47463.702074 (excluding connections establishing)\n\nb.sql:\nselect f();\n\nf():\ncreate or replace function f() returns int as $$ begin return 1; end;\n$$ language plpgsql;\n\nthe results are pretty volatile even with a 60s run, but I'm clearly\nnot capped at 2.5x parallelization (my box is 4 core). It would help\nif you disclosed the function body you're benchmarking. If the\nproblem is indeed on the sever, the next step I think is to profile\nthe code and look for locking issues.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Jul 2015 11:40:27 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "\nHi Merlin, \n\nLong story short - thanks for the reply, but you're not measuring anything about the parallelism of code running in a pl/pgsql environment here. You're just measuring whether postgres can parallelise entering that environment and get back out. Don't get me wrong - it's great that this scales well because it affects situations where you have lots of calls to trivial functions. \nHowever it's not the problem I'm talking about. I mean 'real' pl'pgsql functions. e.g. things that you might find in postgis or similar. \n\nIf you re-read my previous email or look at par_psql (http://parpsql.com) and look at the benchmarks there you'll maybe see more about what I'm talking about.\n\nTo clear up the issue I build a little test harness around your comment below. \nIf anyone was wondering if it's par_psql itself that causes bad scaling in postgres.\nThe answer is clearly no. :-)\n\nWhat I found this evening is that there are several problems here. I did some testing here using a machine with 16 physical cores and lots of memory/IO. \n\n- Using a table as a source of input rather than a fixed parameter e.g. 'select col1... ' vs. 'select 3'. Please note I am not talking about poor performance, I am talking about poor scaling of performance to multicore. There should be no reason for this when read-locks are being taken on the table, and no reason for this when it is combined with e.g. a bunch of pl/pgsql work in a function. However the impact of this problem is only seen above 8 cores where performance crashes. \n\n- Using pl/pgsql itself intensively (e.g. anything non-trivial) causes horrifically bad scaling above 2 cores on the systems I've tested and performance crashes very hard soon after. This matches what I've seen elsewhere in big projects and in par_psql's tests. \n\nOf course, it could be some wacky postgresql.conf setting (I doubt it here), so I'd be glad if others could give it a try. 
If you're bored, set the time to 5s and run, from testing I can tell you it shouldn't alter the results. \n\nThe repo will be up in around 30 minutes time on http://github.com/gbb/ppppt, and I'm going to submit it as a bug to the pg bugs list. \n\nGraeme. \n\n\nOn 06 Jul 2015, at 18:40, Merlin Moncure <[email protected]> wrote:\n\n> On Fri, Jul 3, 2015 at 9:48 AM, Graeme B. Bell <[email protected]> wrote:\n>> Hi everyone,\n>> \n>> I've written a new open source tool for easily parallelising SQL scripts in postgres. [obligatory plug: https://github.com/gbb/par_psql ]\n>> \n>> Using it, I'm seeing a problem that I've also seen in other postgres projects involving high degrees of parallelisation in the last 12 months.\n>> \n>> Basically:\n>> \n>> - I have machines here with up to 16 CPU cores and 128GB memory, very fast SSDs and controller etc, carefully configured kernel/postgresql.conf for high performance.\n>> \n>> - Ordinary queries parallelise nearly perfectly (e.g. SELECT some_stuff ...), e.g. almost up to 16x performance improvement.\n>> \n>> - Non-DB stuff like GDAL, python etc. parallelise nearly perfectly.\n>> \n>> - HOWEVER calls to CPU-intensive user-defined pl/pgsql functions (e.g. SELECT myfunction(some_stuff)) do not parallelise well, even when they are independently defined functions, or accessing tables in a read-only way. They hit a limit of 2.5x performance improvement relative to single-CPU performance (pg9.4) and merely 2x performance (pg9.3) regardless of how many CPU cores I throw at them. This is about 6 times slower than I'm expecting.\n>> \n>> I can't see what would be locking. It seems like it's the pl/pgsql environment itself that is somehow locking or incurring some huge frictional costs. Whether I use independently defined functions, independent source tables, independent output tables, makes no difference whatsoever, so it doesn't feel 'lock-related'. 
It also doesn't seem to be WAL/synchronisation related, as the machines I'm using can hit absurdly high pgbench rates, and I'm using unlogged tables for output.\n>> \n>> Take a quick peek here: https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md\n>> \n>> I'm wondering what I'm missing here. Any ideas?\n> \n> I'm not necessarily seeing your results. via pgbench,\n> \n> mmoncure@mernix2 11:34 AM ~$ ~/pgdev/bin/pgbench -n -T 60 -f b.sql\n> transaction type: Custom query\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> duration: 60 s\n> number of transactions actually processed: 658833\n> latency average: 0.091 ms\n> tps = 10980.538470 (including connections establishing)\n> tps = 10980.994547 (excluding connections establishing)\n> mmoncure@mernix2 11:35 AM ~$ ~/pgdev/bin/pgbench -n -T 60 -c4 -j4 -f b.sql\n> transaction type: Custom query\n> scaling factor: 1\n> query mode: simple\n> number of clients: 4\n> number of threads: 4\n> duration: 60 s\n> number of transactions actually processed: 2847631\n> latency average: 0.084 ms\n> tps = 47460.430447 (including connections establishing)\n> tps = 47463.702074 (excluding connections establishing)\n> \n> b.sql:\n> select f();\n> \n> f():\n> create or replace function f() returns int as $$ begin return 1; end;\n> $$ language plpgsql;\n> \n> the results are pretty volatile even with a 60s run, but I'm clearly\n> not capped at 2.5x parallelization (my box is 4 core). It would help\n> if you disclosed the function body you're benchmarking. If the\n> problem is indeed on the sever, the next step I think is to profile\n> the code and look for locking issues.\n> \n> merlin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 20:33:34 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On Tue, Jul 7, 2015 at 3:33 PM, Graeme B. Bell <[email protected]> wrote:\n>\n> Hi Merlin,\n>\n> Long story short - thanks for the reply, but you're not measuring anything about the parallelism of code running in a pl/pgsql environment here. You're just measuring whether postgres can parallelise entering that environment and get back out. Don't get me wrong - it's great that this scales well because it affects situations where you have lots of calls to trivial functions.\n> However it's not the problem I'm talking about. I mean 'real' pl'pgsql functions. e.g. things that you might find in postgis or similar.\n\nMaybe so. But it will be a lot easier for me (and others on this)\nlist if you submit a self contained test case that runs via pgbench.\nFrom there it's a simple matter of a perf top and other standard\nlocking diagnostic tests and also rules out any suspicion of 3rd party\nissues. This will also get better feedback on -bugs.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Jul 2015 15:52:28 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On 07 Jul 2015, at 22:52, Merlin Moncure <[email protected]> wrote:\n\n> On Tue, Jul 7, 2015 at 3:33 PM, Graeme B. Bell <[email protected]> wrote:\n>> \n>> Hi Merlin,\n>> \n>> Long story short - thanks for the reply, but you're not measuring anything about the parallelism of code running in a pl/pgsql environment here. You're just measuring whether postgres can parallelise entering that environment and get back out. Don't get me wrong - it's great that this scales well because it affects situations where you have lots of calls to trivial functions.\n>> However it's not the problem I'm talking about. I mean 'real' pl'pgsql functions. e.g. things that you might find in postgis or similar.\n> \n> Maybe so. But it will be a lot easier for me (and others on this)\n> list if you submit a self contained test case that runs via pgbench.\n\n\nHi Merlin, \n\nI'm guessing you are maybe pressed for time at the moment because I already clearly included this on the last email, as well as the links to the alternative benchmarks with the same problem I referred to on both of my last emails which are also trivial to drop into pgbench (cut/paste). \n\ne.g. did you see these parts of my previous email \n\n\"To clear up the issue I build a little test harness around your comment below.\"\n\"http://github.com/gbb/ppppt\"\n\nJust pick any function you like, there are 6 there, and 3 of them demonstrate 2 different problems, all of it is clearly documented. \n\nI haven't used perf with pgbench before, and I can't run any code today. \nIf you're interested in this but short on time, maybe you can glance at the repo above and just add 'perf' at the appropriate point in the rbuild wrapper.\n\nGraeme. \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 8 Jul 2015 11:13:04 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On 2015-07-08 11:13:04 +0000, Graeme B. Bell wrote:\n> I'm guessing you are maybe pressed for time at the moment because I\n> already clearly included this on the last email, as well as the links\n> to the alternative benchmarks with the same problem I referred to on\n> both of my last emails which are also trivial to drop into pgbench\n> (cut/paste).\n\nYou realize that you want something here, not Merlin, right?\n\n> e.g. did you see these parts of my previous email \n> \n> \"To clear up the issue I build a little test harness around your comment below.\"\n> \"http://github.com/gbb/ppppt\"\n\nWell, that requires reviewing the source code of the run script and\nsuch.\n\n\n\nI think we shouldn't discuss this on two threads (-performance, -bugs),\nthat makes it hard to follow. Given Tom's more detailed answer I think\nthe -bugs thread already contains more pertinent information.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 8 Jul 2015 13:20:43 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "\nOn 08 Jul 2015, at 13:20, Andres Freund <[email protected]> wrote:\n\n> On 2015-07-08 11:13:04 +0000, Graeme B. Bell wrote:\n>> I'm guessing you are maybe pressed for time at the moment because I\n>> already clearly included this on the last email, as well as the links\n>> to the alternative benchmarks with the same problem I referred to on\n>> both of my last emails which are also trivial to drop into pgbench\n>> (cut/paste).\n> \n> You realize that you want something here, not Merlin, right?\n\nHi Andreas,\n\nMy email was saying it's not helpful for anyone on the list for him to keep asking me to give him X and me to keep sending it. Do you disagree with that idea?\n\nI tried to phrase my request politely, but perhaps I failed. If you have suggestions for better ways to say \"I already sent it, twice\" more politely in this situation, I'd welcome them off list. \n\nHe asked me to disclose the function body I was testing. I did that, *and* also disclosed the entire approach to the benchmark too in a way that made it trivial for him or others to replicate the situation I'd found. I'm pretty sure you should not be discouraging this kind of thing in bug/performance reports. \n\nI get your point that when you're asking for other people to look at something with you, don't antagonise them. \n\nI didn't intend it as antagonising and Merlin hasn't mailed me anything to say he was antagonised. I'm quite sure he's capable of defending himself or communicating with me himself if he does feel antagonised by something. I hope we can end the discussion of that here?\n\nMerlin, if you were antagonised, sorry, I did not mean to antagonise you. I just wanted to just wanted make it clear that I'd sent you what you asked for, + more, and that I was surprised you hadn't noticed it. 
\n\n>> \"To clear up the issue I build a little test harness around your comment below.\"\n>> \"http://github.com/gbb/ppppt\"\n> \n> Well, that requires reviewing the source code of the run script and\n> such.\n\nNo, of course it doesn't. It appears that you didn't look at the repo or read my previous mail before you wrote this. \n\nI do not wish to antagonise you either, so please go and look at the repo before you write the next reply. \n\n\"http://github.com/gbb/ppppt\nJust pick any function you like, there are 6 there, and 3 of them demonstrate 2 different problems, all of it is clearly documented.\"\n\nWhen you open up the repo, there are the tests\nhttps://github.com/gbb/ppppt/tree/master/tests\n\nYou don't need to review any code from the run script. The functions are there as isolated files and what they are intended to demonstrate is clearly described with text and graphics. I could see your point if I had mailed out some giant script with a bunch of SQL calls embedded in its guts, but that's the opposite of what I did here. \n\nDid you find it difficult to navigate the repo structure (2 folders, a few files)? If so please let me know off-list what was difficult and I will see if I can improve it. \n\n> I think we shouldn't discuss this on two threads (-performance, -bugs),\n> that makes it hard to follow. Given Tom's more detailed answer I think\n> the -bugs thread already contains more pertinent information.\n\nI don't necessarily disagree with this idea, but...\n\nHow many people concerned with performance are following the -bugs list? How much space is there for discussion of this on -bugs? 
Since the only working solutions for this performance problem so far are user-side rather than committer-side, why would you want to restrict that information to a committer-side list?\n\nIt has developed this way because I noticed it as a performance issue first, then decided to report it as a potential bug.\n\nPerhaps it would be useful to keep the discussion separate as the -committer side aspects (how to fix this at the server level) and -user side (what you can do to improve performance right now). I will defer to general opinion on this in my follow-up posts. \n\nGraeme. \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 10:30:35 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": "On 2015-07-09 10:30:35 +0000, Graeme B. Bell wrote:\n> > Well, that requires reviewing the source code of the run script and\n> > such.\n> \n> No, of course it doesn't. It appears that you didn't look at the repo or read my previous mail before you wrote this. \n\nFFS, I *ran* some of the tests and reported on results. With you in CC.\n\nWhat I mean is that I don't just run random code from some random github\nrepository.\n\n> I do not wish to antagonise you either, so please go and look at the\n> repo before you write the next reply.\n\nOver and out.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 12:33:46 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
},
{
"msg_contents": ">> No, of course it doesn't. It appears that you didn't look at the repo or read my previous mail before you wrote this. \n> \n> FFS, I *ran* some of the tests and reported on results. With you in CC.\n\nJust checked back. So you did. I'm sorry, I made the mistake I accused you of. \n\nBut... why then did you say I hadn't provided him with individual functions, when you've seen the repo yourself? I don't understand. You knew they're there.\n\n> What I mean is that I don't just run random code from some random github\n> repository.\n\nSure, but surely that's not an issue when the SQL functions are also seperately provided and clearly labelled in the repo?\n\nDo you feel there is a difference about the trustworthiness of isolated files containing an SQL function presented in a github repo, and SQL functions presented in an email?\n\nI am not sure I can agree with that idea, I think they are both just SQL functions. The difference is that one also offers you a bit more if you want to check/try it.\n\n> I do not wish to antagonise you either, so please go and look at the\n>> repo before you write the next reply.\n> \n> Over and out.\n\nSeems there has been a misunderstanding here and I feel I'm still missing something in what you're saying. Sorry Andres. Let's just forget this. I don't think we disagree especially on this and I am not looking to make an enemy here.\n\nAlso, thanks for running the benchmarks to get some numbers.\n\nGraeme. \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 Jul 2015 15:22:32 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hmmm... why does pl/pgsql code parallelise so badly\n when queries parallelise fine? Anyone else seen this?"
}
] |
[
{
"msg_contents": "\nHi,\n\ntoday I have update my test system to 9.5alpha1.\nMost of the operations are ok, except delete.\nI get ~1000 times slower!\n\n\nchimera=# SELECT\n (total_time / 1000 )::numeric(10,2) as total_secs,\n (total_time/calls)::numeric(10,2) as average_time_ms, calls,\n query\nFROM pg_stat_statements where userid = 16384\nORDER BY 1 DESC\nLIMIT 10;\n total_secs | average_time_ms | calls | query \n \n------------+-----------------+-------+------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------\n 255.88 | 566.11 | 452 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND inlink = ?\n 0.13 | 0.13 | 1006 | insert into t_dirs (iparent, iname, ipnfsid) (select $1 as iparent, $2 as iname, $3 as ipnfsid where not exists (select ? from t_dirs where iparent=$4 and iname=$5))\n 0.11 | 0.02 | 6265 | SELECT isize,inlink,itype,imode,iuid,igid,iatime,ictime,imtime,icrtime,igeneration FROM t_inodes WHERE ipnfsid=$1\n 0.03 | 0.03 | 1002 | INSERT INTO t_inodes VALUES($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)\n 0.02 | 0.02 | 1002 | UPDATE t_inodes SET inlink=inlink +$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4\n 0.02 | 0.03 | 905 | UPDATE t_inodes SET inlink=inlink -$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4\n 0.02 | 0.01 | 2000 | SELECT ilocation,ipriority,ictime,iatime FROM t_locationinfo WHERE itype=$1 AND ipnfsid=$2 AND istate=? 
ORDER BY ipriority DESC\n 0.01 | 0.01 | 906 | SELECT ipnfsid FROM t_dirs WHERE iname=$1 AND iparent=$2\n 0.01 | 0.01 | 453 | DELETE FROM t_dirs WHERE iname=$1 AND iparent=$2\n\n\n\n\nchimera=# \\d t_inodes\n Table \"public.t_inodes\"\n Column | Type | Modifiers \n-------------+--------------------------+------------------------\n ipnfsid | character varying(36) | not null\n itype | integer | not null\n imode | integer | not null\n inlink | integer | not null\n iuid | integer | not null\n igid | integer | not null\n isize | bigint | not null\n iio | integer | not null\n ictime | timestamp with time zone | not null\n iatime | timestamp with time zone | not null\n imtime | timestamp with time zone | not null\n icrtime | timestamp with time zone | not null default now()\n igeneration | bigint | not null default 0\nIndexes:\n \"t_inodes_pkey\" PRIMARY KEY, btree (ipnfsid)\nReferenced by:\n TABLE \"t_access_latency\" CONSTRAINT \"t_access_latency_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_acl\" CONSTRAINT \"t_acl_fkey\" FOREIGN KEY (rs_id) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_dirs\" CONSTRAINT \"t_dirs_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid)\n TABLE \"t_inodes_checksum\" CONSTRAINT \"t_inodes_checksum_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_inodes_data\" CONSTRAINT \"t_inodes_data_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_level_1\" CONSTRAINT \"t_level_1_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_level_2\" CONSTRAINT \"t_level_2_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_level_3\" CONSTRAINT \"t_level_3_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_level_4\" CONSTRAINT \"t_level_4_ipnfsid_fkey\" FOREIGN KEY 
(ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_level_5\" CONSTRAINT \"t_level_5_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_level_6\" CONSTRAINT \"t_level_6_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_level_7\" CONSTRAINT \"t_level_7_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_locationinfo\" CONSTRAINT \"t_locationinfo_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_retention_policy\" CONSTRAINT \"t_retention_policy_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_storageinfo\" CONSTRAINT \"t_storageinfo_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n TABLE \"t_tags\" CONSTRAINT \"t_tags_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES t_inodes(ipnfsid)\nTriggers:\n tgr_locationinfo_trash BEFORE DELETE ON t_inodes FOR EACH ROW EXECUTE PROCEDURE f_locationinfo2trash()\n\n\n\nAny ideas?\n\n Tigran.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 5 Jul 2015 13:10:51 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "9.5alpha1 vs 9.4"
},
{
"msg_contents": "\nAnd this is with 9.4 in the same hardware ( restored from backup)\n\n 0.35 | 0.35 | 1002 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND inlink = ?\n 0.16 | 0.16 | 1006 | insert into t_dirs (iparent, iname, ipnfsid) (select $1 as iparent, $2 as iname, $3 as ipnfsid where not exists (select\n ? from t_dirs where iparent=$4 and iname=$5))\n 0.15 | 0.02 | 8026 | SELECT isize,inlink,itype,imode,iuid,igid,iatime,ictime,imtime,icrtime,igeneration FROM t_inodes WHERE ipnfsid=$1\n 0.06 | 0.06 | 1002 | INSERT INTO t_inodes VALUES($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)\n 0.04 | 0.02 | 2004 | UPDATE t_inodes SET inlink=inlink -$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4\n 0.03 | 0.01 | 2000 | SELECT ilocation,ipriority,ictime,iatime FROM t_locationinfo WHERE itype=$1 AND ipnfsid=$2 AND istate=? ORDER BY ipriori\nty DESC\n 0.02 | 0.02 | 1002 | UPDATE t_inodes SET inlink=inlink +$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4\n 0.02 | 0.01 | 2006 | SELECT ipnfsid FROM t_dirs WHERE iname=$1 AND iparent=$2\n 0.01 | 0.01 | 1006 | DELETE FROM t_dirs WHERE iname=$1 AND iparent=$2\n 0.00 | 0.00 | 2004 | COMMI\n\nTigran.\n\n\n----- Original Message -----\n> From: \"Mkrtchyan, Tigran\" <[email protected]>\n> To: \"pgsql-performance\" <[email protected]>\n> Sent: Sunday, July 5, 2015 1:10:51 PM\n> Subject: [PERFORM] 9.5alpha1 vs 9.4\n\n> Hi,\n> \n> today I have update my test system to 9.5alpha1.\n> Most of the operations are ok, except delete.\n> I get ~1000 times slower!\n> \n> \n> chimera=# SELECT\n> (total_time / 1000 )::numeric(10,2) as total_secs,\n> (total_time/calls)::numeric(10,2) as average_time_ms, calls,\n> query\n> FROM pg_stat_statements where userid = 16384\n> ORDER BY 1 DESC\n> LIMIT 10;\n> total_secs | average_time_ms | calls |\n> query\n> \n> 
------------+-----------------+-------+------------------------------------------------------------------------------------------------------------------------------------------\n> -------------------------------\n> 255.88 | 566.11 | 452 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND\n> inlink = ?\n> 0.13 | 0.13 | 1006 | insert into t_dirs (iparent, iname, ipnfsid)\n> (select $1 as iparent, $2 as iname, $3 as ipnfsid where not exists (select ?\n> from t_dirs where iparent=$4 and iname=$5))\n> 0.11 | 0.02 | 6265 | SELECT\n> isize,inlink,itype,imode,iuid,igid,iatime,ictime,imtime,icrtime,igeneration\n> FROM t_inodes WHERE ipnfsid=$1\n> 0.03 | 0.03 | 1002 | INSERT INTO t_inodes\n> VALUES($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)\n> 0.02 | 0.02 | 1002 | UPDATE t_inodes SET inlink=inlink\n> +$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4\n> 0.02 | 0.03 | 905 | UPDATE t_inodes SET inlink=inlink\n> -$1,imtime=$2,ictime=$3,igeneration=igeneration+? WHERE ipnfsid=$4\n> 0.02 | 0.01 | 2000 | SELECT ilocation,ipriority,ictime,iatime FROM\n> t_locationinfo WHERE itype=$1 AND ipnfsid=$2 AND istate=? 
ORDER BY ipriority\n> DESC\n> 0.01 | 0.01 | 906 | SELECT ipnfsid FROM t_dirs WHERE iname=$1 AND\n> iparent=$2\n> 0.01 | 0.01 | 453 | DELETE FROM t_dirs WHERE iname=$1 AND\n> iparent=$2\n> \n> \n> \n> \n> chimera=# \\d t_inodes\n> Table \"public.t_inodes\"\n> Column | Type | Modifiers\n> -------------+--------------------------+------------------------\n> ipnfsid | character varying(36) | not null\n> itype | integer | not null\n> imode | integer | not null\n> inlink | integer | not null\n> iuid | integer | not null\n> igid | integer | not null\n> isize | bigint | not null\n> iio | integer | not null\n> ictime | timestamp with time zone | not null\n> iatime | timestamp with time zone | not null\n> imtime | timestamp with time zone | not null\n> icrtime | timestamp with time zone | not null default now()\n> igeneration | bigint | not null default 0\n> Indexes:\n> \"t_inodes_pkey\" PRIMARY KEY, btree (ipnfsid)\n> Referenced by:\n> TABLE \"t_access_latency\" CONSTRAINT \"t_access_latency_ipnfsid_fkey\" FOREIGN KEY\n> (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_acl\" CONSTRAINT \"t_acl_fkey\" FOREIGN KEY (rs_id) REFERENCES\n> t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_dirs\" CONSTRAINT \"t_dirs_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES\n> t_inodes(ipnfsid)\n> TABLE \"t_inodes_checksum\" CONSTRAINT \"t_inodes_checksum_ipnfsid_fkey\" FOREIGN\n> KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_inodes_data\" CONSTRAINT \"t_inodes_data_ipnfsid_fkey\" FOREIGN KEY\n> (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_level_1\" CONSTRAINT \"t_level_1_ipnfsid_fkey\" FOREIGN KEY (ipnfsid)\n> REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_level_2\" CONSTRAINT \"t_level_2_ipnfsid_fkey\" FOREIGN KEY (ipnfsid)\n> REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_level_3\" CONSTRAINT \"t_level_3_ipnfsid_fkey\" FOREIGN KEY (ipnfsid)\n> REFERENCES t_inodes(ipnfsid) ON DELETE 
CASCADE\n> TABLE \"t_level_4\" CONSTRAINT \"t_level_4_ipnfsid_fkey\" FOREIGN KEY (ipnfsid)\n> REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_level_5\" CONSTRAINT \"t_level_5_ipnfsid_fkey\" FOREIGN KEY (ipnfsid)\n> REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_level_6\" CONSTRAINT \"t_level_6_ipnfsid_fkey\" FOREIGN KEY (ipnfsid)\n> REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_level_7\" CONSTRAINT \"t_level_7_ipnfsid_fkey\" FOREIGN KEY (ipnfsid)\n> REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_locationinfo\" CONSTRAINT \"t_locationinfo_ipnfsid_fkey\" FOREIGN KEY\n> (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_retention_policy\" CONSTRAINT \"t_retention_policy_ipnfsid_fkey\" FOREIGN\n> KEY (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_storageinfo\" CONSTRAINT \"t_storageinfo_ipnfsid_fkey\" FOREIGN KEY\n> (ipnfsid) REFERENCES t_inodes(ipnfsid) ON DELETE CASCADE\n> TABLE \"t_tags\" CONSTRAINT \"t_tags_ipnfsid_fkey\" FOREIGN KEY (ipnfsid) REFERENCES\n> t_inodes(ipnfsid)\n> Triggers:\n> tgr_locationinfo_trash BEFORE DELETE ON t_inodes FOR EACH ROW EXECUTE PROCEDURE\n> f_locationinfo2trash()\n> \n> \n> \n> Any ideas?\n> \n> Tigran.\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 5 Jul 2015 13:46:08 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 9.5alpha1 vs 9.4"
},
{
"msg_contents": "Hi,\n\nOn 2015-07-05 13:10:51 +0200, Mkrtchyan, Tigran wrote:\n> today I have update my test system to 9.5alpha1.\n> Most of the operations are ok, except delete.\n> I get ~1000 times slower!\n\n> 255.88 | 566.11 | 452 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND inlink = ?\n\nThat certainly should not be the case. Could you show the query plan for\nthis statement in both versions? Any chance that there's a parameter\ntype mismatch for $1?\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 5 Jul 2015 14:54:03 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.5alpha1 vs 9.4"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2015-07-05 13:10:51 +0200, Mkrtchyan, Tigran wrote:\n>> today I have update my test system to 9.5alpha1.\n>> Most of the operations are ok, except delete.\n>> I get ~1000 times slower!\n\n>> 255.88 | 566.11 | 452 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND inlink = ?\n\n> That certainly should not be the case. Could you show the query plan for\n> this statement in both versions?\n\nEXPLAIN ANALYZE, please. I'm wondering about a missing index on some\nforeign-key-involved column. That would show up as excessive time in\nthe relevant trigger ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 05 Jul 2015 10:33:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.5alpha1 vs 9.4"
},
{
"msg_contents": "Thanks for the hin. My bad. The backup db and 9.5 had a different type on\none of the foreign-key constrains char(36) vs varchar(36).\n\nThe schema was screwed couple of days ago, byt performance numbers I checked only\nafter migration to 9.5.\n\n\nSorry for the noise.\n\nTigran.\n\n----- Original Message -----\n> From: \"Tom Lane\" <[email protected]>\n> To: \"Andres Freund\" <[email protected]>\n> Cc: \"Mkrtchyan, Tigran\" <[email protected]>, \"pgsql-performance\" <[email protected]>\n> Sent: Sunday, July 5, 2015 4:33:25 PM\n> Subject: Re: [PERFORM] 9.5alpha1 vs 9.4\n\n> Andres Freund <[email protected]> writes:\n>> On 2015-07-05 13:10:51 +0200, Mkrtchyan, Tigran wrote:\n>>> today I have update my test system to 9.5alpha1.\n>>> Most of the operations are ok, except delete.\n>>> I get ~1000 times slower!\n> \n>>> 255.88 | 566.11 | 452 | DELETE FROM t_inodes WHERE ipnfsid=$1 AND\n>>> inlink = ?\n> \n>> That certainly should not be the case. Could you show the query plan for\n>> this statement in both versions?\n> \n> EXPLAIN ANALYZE, please. I'm wondering about a missing index on some\n> foreign-key-involved column. That would show up as excessive time in\n> the relevant trigger ...\n> \n> \t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 5 Jul 2015 19:16:24 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 9.5alpha1 vs 9.4"
},
{
"msg_contents": "On 07/05/2015 10:16 AM, Mkrtchyan, Tigran wrote:\n> Thanks for the hin. My bad. The backup db and 9.5 had a different type on\n> one of the foreign-key constrains char(36) vs varchar(36).\n> \n> The schema was screwed couple of days ago, byt performance numbers I checked only\n> after migration to 9.5.\n\nThank you for testing!\n\nCan you re-run your tests with the fixed schema? How does it look?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 06 Jul 2015 09:45:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.5alpha1 vs 9.4"
}
] |
[
{
"msg_contents": "\nOn Jul 6, 2015 18:45, Josh Berkus <[email protected]> wrote:\n>\n> On 07/05/2015 10:16 AM, Mkrtchyan, Tigran wrote: \n> > Thanks for the hin. My bad. The backup db and 9.5 had a different type on \n> > one of the foreign-key constrains char(36) vs varchar(36). \n> > \n> > The schema was screwed couple of days ago, byt performance numbers I checked only \n> > after migration to 9.5. \n>\n> Thank you for testing! \n>\n> Can you re-run your tests with the fixed schema? How does it look? \n\nWith fixed schema performance equal to 9.4. I have updated my code to use ON CONFLICT statement. ~5% better compared with INSERT WHERE NOT EXIST. Really cool! Thanks.\n\nTigran.\n>\n> -- \n> Josh Berkus \n> PostgreSQL Experts Inc. \n> http://pgexperts.com \n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected]) \n> To make changes to your subscription: \n> http://www.postgresql.org/mailpref/pgsql-performance \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Jul 2015 23:14:31 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 9.5alpha1 vs 9.4"
}
] |
[
{
"msg_contents": "I had a query that was filtering with a wildcard search of a text field for\n%SUCCESS%. The query took about 5 seconds and was running often so I wanted\nto improve it. I suggested that the engineers include a new boolean column\nfor successful status. They implemented the requested field, but the query\nthat filters on that new column runs very long (i kill it after letting it\nrun for about an hour). Can someone help me understand why that is the\ncase and how to resolve it?\n\nFirst query:\nSELECT *\nFROM \"lead\"\nWHERE ( NOT ( ( \"lead\".\"id\" IN\n ( SELECT U1.\"lead_id\" AS \"lead_id\"\n FROM \"event\" U1\n WHERE U1.\"event_type\" = 'type_1' )\n OR ( \"lead\".\"id\" IN\n ( SELECT U1.\"lead_id\" AS \"lead_id\"\n FROM \"event\" U1\n WHERE U1.\"event_type\" = 'type_2' )\n AND \"lead\".\"id\" IN\n ( SELECT U1.\"lead_id\" AS \"lead_id\"\n FROM \"event\" U1\n WHERE UPPER(U1.\"response\"::text) LIKE\nUPPER('%success%') ) ) ) )\n AND NOT (\"lead\".\"ReferenceNumber\" = '') ) ;\n\nexplain/analyze result:\n\nSeq Scan on lead (cost=130951.81..158059.21 rows=139957 width=369) (actual\ntime=4699.619..4699.869 rows=1 loops=1)\n Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (hashed SubPlan 3))))\n Rows Removed by Filter: 375369\n\n SubPlan 1\n\n -> Seq Scan on event u1 (cost=0.00..42408.62 rows=7748 width=4)\n(actual time=0.005..171.350 rows=7414 loops=1)\n Filter: ((event_type)::text = 'type_1'::text)\n\n Rows Removed by Filter: 1099436\n\n SubPlan 2\n\n -> Seq Scan on event u1_1 (cost=0.00..42408.62 rows=375665 width=4)\n(actual time=0.006..219.092 rows=373298 loops=1)\n Filter: ((event_type)::text = 'type_2'::text)\n\n Rows Removed by Filter: 733552\n\n SubPlan 3\n\n -> Seq Scan on event u1_2 (cost=0.00..45175.75 rows=111 width=4)\n(actual time=0.040..3389.550 rows=712952 loops=1)\n Filter: (upper(response) ~~ '%SUCCESS%'::text)\n\n Rows Removed by Filter: 393898\n\nThe main thing 
that sticks out to me for this plan is the low estimate for\nthe rows it will return on the %SUCCESS% filter.\n\nHere is the second query with explain:\n\nSELECT *\nFROM \"lead\"\nWHERE\n(\n NOT\n (\n (\"lead\".\"id\" IN\n (\n SELECT U1.\"lead_id\" AS \"lead_id\"\n FROM \"event\" U1\n WHERE U1.\"event_type\" = 'type_1'\n )\n OR\n (\"lead\".\"id\" IN\n (\n SELECT U1.\"lead_id\" AS \"lead_id\"\n FROM \"event\" U1\n WHERE U1.\"event_type\" = 'type_2'\n )\n AND \"lead\".\"id\" IN\n (\n SELECT U1.\"lead_id\" AS \"lead_id\"\n FROM \"event\" U1\n WHERE successful\n )\n )\n )\n )\n AND NOT (\"lead\".\"ReferenceNumber\" = '')\n) ;\n\nexplain result:\n Seq Scan on lead (cost=85775.78..9005687281.12 rows=139957 width=369)\n\n Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (SubPlan 3))))\n SubPlan 1\n\n -> Seq Scan on event u1 (cost=0.00..42408.62 rows=7748 width=4)\n\n Filter: ((event_type)::text = 'type_1'::text)\n\n SubPlan 2\n\n -> Seq Scan on event u1_1 (cost=0.00..42408.62 rows=375665 width=4)\n\n Filter: ((event_type)::text = 'type_2'::text)\n\n SubPlan 3\n\n -> Materialize (cost=0.00..46154.43 rows=731185 width=4)\n\n -> Seq Scan on event u1_2 (cost=0.00..39641.50 rows=731185\nwidth=4)\n Filter: successful\n\nHere it does a materialize and estimates rows properly, but as stated this\nquery just hangs and pegs load. There are no locks and it's in an active\nstate the whole time. I am running these queries in a test environment on\na recently exported full schema from production, with a reindex and a\nvacuum/analyze. This is postgres 9.3.6 on rhel6.\n\nWhen I run just the different subquery element:\nSELECT U1.\"lead_id\" AS \"lead_id\"\n FROM \"event\" U1\n\n WHERE successful;\n\nit returns in about 250ms, with the text field %SUCCESS% it runs in about 4\nseconds.
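\nOne rewrite worth trying (a sketch only, reusing the table and column names from the queries above, and assuming \"lead_id\" is never NULL, since IN and EXISTS treat NULLs differently) is to express each membership test as a correlated NOT EXISTS, which the planner can usually turn into anti-joins instead of per-row subplans:\n
\n
```sql\n
-- Sketch: NOT (A OR (B AND C)) rewritten as NOT A AND NOT (B AND C),\n
-- with each membership test expressed as a correlated EXISTS.\n
SELECT l.*\n
FROM \"lead\" l\n
WHERE l.\"ReferenceNumber\" <> ''\n
  AND NOT EXISTS (SELECT 1 FROM \"event\" e\n
                  WHERE e.\"lead_id\" = l.\"id\"\n
                    AND e.\"event_type\" = 'type_1')\n
  AND NOT (EXISTS (SELECT 1 FROM \"event\" e\n
                   WHERE e.\"lead_id\" = l.\"id\"\n
                     AND e.\"event_type\" = 'type_2')\n
           AND EXISTS (SELECT 1 FROM \"event\" e\n
                       WHERE e.\"lead_id\" = l.\"id\"\n
                         AND e.successful));\n
```\n
\n
Whether that helps depends on the planner converting the EXISTS tests into semi/anti-joins; a partial index such as CREATE INDEX ON \"event\" (\"lead_id\") WHERE successful could also be worth testing for the boolean filter.\n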
This seemed like a low-hanging-fruit query improvement, so I'm\nsurprised it's not working; it seems like we are just lucky that the planner\nis estimating that filter incorrectly in the original form.\n\nI'm sure the query just needs to be completely overhauled and am starting\nto pull it apart and work with the engineers to get something more\nefficient set up overall, but I am not sure how to answer the question as\nto why this original attempt at improving the query is not successful.\n\nAny guidance is greatly appreciated, thanks!",
"msg_date": "Tue, 7 Jul 2015 10:40:48 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "wildcard text filter switched to boolean column,\n performance is way worse"
},
{
"msg_contents": "Mike Broers <[email protected]> writes:\n> I had a query that was filtering with a wildcard search of a text field for\n> %SUCCESS%. The query took about 5 seconds and was running often so I wanted\n> to improve it. I suggested that the engineers include a new boolean column\n> for successful status. They implemented the requested field, but the query\n> that filters on that new column runs very long (i kill it after letting it\n> run for about an hour). Can someone help me understand why that is the\n> case and how to resolve it?\n\nIt's hashing the subplan output in the first case and not the second:\n\n> Seq Scan on lead (cost=130951.81..158059.21 rows=139957 width=369) (actual\n> time=4699.619..4699.869 rows=1 loops=1)\n> Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n> ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (hashed SubPlan 3))))\n ^^^^^^^^^^^^^^^^\nvs\n\n> Seq Scan on lead (cost=85775.78..9005687281.12 rows=139957 width=369)\n> Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n> ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (SubPlan 3))))\n ^^^^^^^^^\n\nPresumably, the new more-accurate rows count causes the planner to realize\nthat the hash table will exceed work_mem so it doesn't choose to hash ...\nbut for your situation, you'd rather it did, because what you're getting\ninstead is a Materialize node that spills to disk (again, because the data\ninvolved exceeds work_mem) and that's a killer for this query. You should\nbe able to get back the old behavior if you raise work_mem enough.\n\nAnother idea you might think about is changing the OR'd IN conditions\nto a single IN over a UNION ALL of the subselects. 
I'm not really sure if\nthat would produce a better plan, but it's worth trying if it wouldn't\nrequire too much app-side contortion.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Jul 2015 12:02:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wildcard text filter switched to boolean column,\n performance is way worse"
},
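Tom's work_mem advice can be tried per-session before touching the server configuration. A minimal sketch (the 64MB figure is illustrative only, not a tuning recommendation):

```sql
-- Check the current setting, then raise it for this session only.
SHOW work_mem;
SET work_mem = '64MB';  -- illustrative value; size it to fit the subplan hash table

-- Re-run EXPLAIN on the slow query: with enough work_mem the plan should
-- again show "hashed SubPlan 3" instead of the Materialize node.

-- RESET work_mem;     -- return to the configured default afterwards
```

Using SET LOCAL inside a transaction scopes the change to that transaction, which can be safer than a session-wide SET when experimenting.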
{
"msg_contents": "Thanks, very informative! I'll experiment with work_mem settings and report\nback.\n\nOn Tue, Jul 7, 2015 at 11:02 AM, Tom Lane <[email protected]> wrote:\n\n> Mike Broers <[email protected]> writes:\n> > I had a query that was filtering with a wildcard search of a text field\n> for\n> > %SUCCESS%. The query took about 5 seconds and was running often so I\n> wanted\n> > to improve it. I suggested that the engineers include a new boolean\n> column\n> > for successful status. They implemented the requested field, but the\n> query\n> > that filters on that new column runs very long (i kill it after letting\n> it\n> > run for about an hour). Can someone help me understand why that is the\n> > case and how to resolve it?\n>\n> It's hashing the subplan output in the first case and not the second:\n>\n> > Seq Scan on lead (cost=130951.81..158059.21 rows=139957 width=369)\n> (actual\n> > time=4699.619..4699.869 rows=1 loops=1)\n> > Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n> > ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (hashed SubPlan 3))))\n> ^^^^^^^^^^^^^^^^\n> vs\n>\n> > Seq Scan on lead (cost=85775.78..9005687281.12 rows=139957 width=369)\n> > Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n> > ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (SubPlan 3))))\n> ^^^^^^^^^\n>\n> Presumably, the new more-accurate rows count causes the planner to realize\n> that the hash table will exceed work_mem so it doesn't choose to hash ...\n> but for your situation, you'd rather it did, because what you're getting\n> instead is a Materialize node that spills to disk (again, because the data\n> involved exceeds work_mem) and that's a killer for this query. You should\n> be able to get back the old behavior if you raise work_mem enough.\n>\n> Another idea you might think about is changing the OR'd IN conditions\n> to a single IN over a UNION ALL of the subselects. 
I'm not really sure if\n> that would produce a better plan, but it's worth trying if it wouldn't\n> require too much app-side contortion.\n>\n> regards, tom lane\n>",
"msg_date": "Tue, 7 Jul 2015 11:10:16 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wildcard text filter switched to boolean column,\n performance is way worse"
},
{
"msg_contents": "After bumping up work_mem from 12MB to 25MB that last materialize is indeed\nhashing and this cut the query time by about 60%. Thanks, this was very\nhelpful and gives me something else to look for when troubleshooting\nexplains.\n\n\n\nOn Tue, Jul 7, 2015 at 11:10 AM, Mike Broers <[email protected]> wrote:\n\n> Thanks, very informative! I'll experiment with work_mem settings and\n> report back.\n>\n> On Tue, Jul 7, 2015 at 11:02 AM, Tom Lane <[email protected]> wrote:\n>\n>> Mike Broers <[email protected]> writes:\n>> > I had a query that was filtering with a wildcard search of a text field\n>> for\n>> > %SUCCESS%. The query took about 5 seconds and was running often so I\n>> wanted\n>> > to improve it. I suggested that the engineers include a new boolean\n>> column\n>> > for successful status. They implemented the requested field, but the\n>> query\n>> > that filters on that new column runs very long (i kill it after letting\n>> it\n>> > run for about an hour). Can someone help me understand why that is the\n>> > case and how to resolve it?\n>>\n>> It's hashing the subplan output in the first case and not the second:\n>>\n>> > Seq Scan on lead (cost=130951.81..158059.21 rows=139957 width=369)\n>> (actual\n>> > time=4699.619..4699.869 rows=1 loops=1)\n>> > Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n>> > ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (hashed SubPlan 3))))\n>> ^^^^^^^^^^^^^^^^\n>> vs\n>>\n>> > Seq Scan on lead (cost=85775.78..9005687281.12 rows=139957\n>> width=369)\n>> > Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\n>> > ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (SubPlan 3))))\n>> ^^^^^^^^^\n>>\n>> Presumably, the new more-accurate rows count causes the planner to realize\n>> that the hash table will exceed work_mem so it doesn't choose to hash ...\n>> but for your situation, you'd rather it did, because what you're getting\n>> instead is a Materialize node that spills to 
disk (again, because the data\n>> involved exceeds work_mem) and that's a killer for this query. You should\n>> be able to get back the old behavior if you raise work_mem enough.\n>>\n>> Another idea you might think about is changing the OR'd IN conditions\n>> to a single IN over a UNION ALL of the subselects. I'm not really sure if\n>> that would produce a better plan, but it's worth trying if it wouldn't\n>> require too much app-side contortion.\n>>\n>> regards, tom lane\n>>\n>\n>",
"msg_date": "Tue, 7 Jul 2015 11:28:17 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wildcard text filter switched to boolean column,\n performance is way worse"
},
{
"msg_contents": "Hello,\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> From: [email protected] [mailto:[email protected]] On Behalf Of Mike Broers\r\n> Sent: Dienstag, 7. Juli 2015 18:28\r\n> To: Tom Lane\r\n> Cc: [email protected]\r\n> Subject: Re: [PERFORM] wildcard text filter switched to boolean column, performance is way worse\r\n> \r\n> After bumping up work_mem from 12MB to 25MB that last materialize is indeed hashing and this cut the query time by about 60%. Thanks, this was very helpful and gives me something else to look for when troubleshooting explains. \r\n> \r\n> \r\n> \r\n> On Tue, Jul 7, 2015 at 11:10 AM, Mike Broers <[email protected]> wrote:\r\n> Thanks, very informative! I'll experiment with work_mem settings and report back.\r\n> \r\n> On Tue, Jul 7, 2015 at 11:02 AM, Tom Lane <[email protected]> wrote:\r\n> Mike Broers <[email protected]> writes:\r\n> > I had a query that was filtering with a wildcard search of a text field for\r\n> > %SUCCESS%. The query took about 5 seconds and was running often so I wanted\r\n> > to improve it. I suggested that the engineers include a new boolean column\r\n> > for successful status. They implemented the requested field, but the query\r\n> > that filters on that new column runs very long (i kill it after letting it\r\n> > run for about an hour). 
Can someone help me understand why that is the\r\n> > case and how to resolve it?\r\n> \r\n> It's hashing the subplan output in the first case and not the second:\r\n> \r\n> > Seq Scan on lead (cost=130951.81..158059.21 rows=139957 width=369) (actual\r\n> > time=4699.619..4699.869 rows=1 loops=1)\r\n> > Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\r\n> > ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (hashed SubPlan 3))))\r\n> ^^^^^^^^^^^^^^^^\r\n> vs\r\n> \r\n> > Seq Scan on lead (cost=85775.78..9005687281.12 rows=139957 width=369)\r\n> > Filter: ((NOT (hashed SubPlan 1)) AND ((\"ReferenceNumber\")::text <>\r\n> > ''::text) AND ((NOT (hashed SubPlan 2)) OR (NOT (SubPlan 3))))\r\n> ^^^^^^^^^\r\n> \r\n> Presumably, the new more-accurate rows count causes the planner to realize\r\n> that the hash table will exceed work_mem so it doesn't choose to hash ...\r\n> but for your situation, you'd rather it did, because what you're getting\r\n> instead is a Materialize node that spills to disk (again, because the data\r\n> involved exceeds work_mem) and that's a killer for this query. You should\r\n> be able to get back the old behavior if you raise work_mem enough.\r\n> \r\n> Another idea you might think about is changing the OR'd IN conditions\r\n> to a single IN over a UNION ALL of the subselects. 
I'm not really sure if\r\n> that would produce a better plan, but it's worth trying if it wouldn't\r\n> require too much app-side contortion.\r\n\r\n\r\nHello,\r\nyou might try to use a CTE to first collect the IDs to exclude, and join them to your main table.\r\nThis should result in an anti join pattern.\r\n\r\nSomething like:\r\n\r\nWITH IDS as (\r\n SELECT U1.\"lead_id\" AS \"lead_id\" \r\n FROM \"event\" U1 \r\n WHERE U1.\"event_type\" ='type_1'\r\n UNION (\r\n SELECT U1.\"lead_id\" AS \"lead_id\" \r\n FROM \"event\" U1 \r\n WHERE U1.\"event_type\" = 'type_2'\r\n INTERSECT\r\n SELECT U1.\"lead_id\" AS \"lead_id\" \r\n FROM \"event\" U1 \r\n WHERE successful\r\n )\r\n)\r\nSELECT * FROM lead LEFT OUTER JOIN IDS ON (lead.id=IDS.lead_id)\r\nWHERE IDS.lead_id IS NULL;\r\n\r\nregards,\r\n\r\nMarc Mamin\r\n\r\n\r\n> regards, tom lane\r\n> \r\n> \r\n>\r\n",
"msg_date": "Wed, 8 Jul 2015 07:43:54 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wildcard text filter switched to boolean column,\n performance is way worse"
}
] |
[
{
"msg_contents": "Hello,\n\nI wonder how pg_stat_all_indexes works.\n\nWhen I run an EXPLAIN, some indexes are not used, but\npg_stat_all_indexes.idx_scan is incremented for those indexes.\n\nDoes this mean idx_scan is incremented each time the planner checks whether an\nindex could be used, even when it won't use it?\n\nIs there a better way to check which indexes could be deleted?\n\nThanks in advance.",
"msg_date": "Thu, 9 Jul 2015 14:20:04 +0200",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_all_indexes understand"
},
{
"msg_contents": "On Thu, Jul 9, 2015 at 5:20 AM, Nicolas Paris <[email protected]> wrote:\n\n> Hello,\n>\n> I wonder how understanding pg_stat_all_indexes working\n>\n> When I run an explain, some index are not used, but\n> pg_stat_all_indexes.idx_scan is incremented for those indexes.\n>\n\nWhen the planner considers using a merge join on an indexed column, it uses\nan index to check the endpoints of the column (the min and the max) to make\nsure it has the latest values to get the most accurate estimate. This\ncauses the usage counts to get incremented. Even when it doesn't end up\nusing the merge join.\n\n\n> Does this mean idx_scan is incremented each time the planner check if an\n> index could be use whenever it won't use it ?\n>\n\nNot in general, only in a few peculiar cases.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 9 Jul 2015 09:45:25 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_all_indexes understand"
},
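On the original question of spotting candidate indexes for removal: rather than reasoning from EXPLAIN output, the accumulated counters can be queried directly. A minimal sketch (excluding unique indexes is a common precaution, since they enforce constraints even when never scanned, and the counters only cover the period since the statistics were last reset):

```sql
-- Indexes never scanned since the statistics were last reset,
-- largest first, with unique/constraint indexes excluded.
SELECT s.schemaname, s.relname, s.indexrelname, s.idx_scan,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_all_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Given the planner-probe behavior Jeff describes, a small nonzero idx_scan does not by itself prove an index is ever used to answer queries.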
{
"msg_contents": "On Thu, Jul 9, 2015 at 09:45:25AM -0700, Jeff Janes wrote:\n> \n> On Thu, Jul 9, 2015 at 5:20 AM, Nicolas Paris <[email protected]> wrote:\n> \n> Hello,\n> \n> I wonder how understanding pg_stat_all_indexes working\n> \n> When I run an explain, some index are not used, but\n> pg_stat_all_indexes.idx_scan is incremented for those indexes.\n> \n> \n> When the planner considers using a merge join on a indexed column, it uses an\n> index to check the endpoints of the column (the min and the max) to make sure\n> it has the latest values to get the most accurate estimate. This causes the\n> usage counts to get incremented. Even when it doesn't end up using the merge\n> join.\n\nAnd it will be documented in 9.5:\n\n\tcommit 7e9ed623d9988fcb1497a2a8ca7f676a5bfa136f\n\tAuthor: Bruce Momjian <[email protected]>\n\tDate: Thu Mar 19 22:38:12 2015 -0400\n\t\n\t docs: mention the optimizer can increase the index usage count\n\t\n\t Report by Marko Tiikkaja\n\t\n\t+ The optimizer also accesses indexes to check for supplied constants\n\t+ whose values are outside the recorded range of the optimizer statistics\n\t+ because the optimizer statistics might be stale.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +",
"msg_date": "Wed, 9 Sep 2015 17:51:33 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_all_indexes understand"
}
] |
[
{
"msg_contents": "\nThis is a reply to to Andreas's post on the #13495 documentation thread in -bugs. \nI am responding to it here because it relates to #13493 only.\n\nAndres wrote, re: #13493\n\n>> This issue is absolutely critical for performance and scalability of code,\n\n> Pft. In most cases it doesn't actually matter that much because the\n> contained query are the expensive stuff. It's just when you do lots of\n> very short and cheap things that it has such a big effect. Usually the\n> effect on the planner is bigger.\n\nHi Andres,\n\n'Pft' is kinda rude - I wouldn't comment on it normally, but seeing as you just lectured me on -performance on something you perceived as impolite (just like you lectured me on not spreading things onto multiple threads), can you please try to set a good example? You don't encourage new contributors into open source communities this way. \n\nGetting to the point. I think the gap between our viewpoints comes from the fact I (and others here at my institute) have a bunch of pl/pgsql code here with for loops and calculations, which we see as 'code'. Thinking of all the users I know myself, I know there are plenty of GIS people out there using for loops and pgsql to simulate models on data in the DB, and I expect the same is true among e.g. older scientists with DB datasets. \n\nWhereas it sounds like you and Tom see pl/pgsql as 'glue' and don't see any problem. As I have never seen statistics on pl/pgsql use-cases among users at large, I don't know what happens everywhere else outside of GIS-world and pgdev-world. Have you any references/data you can share on that? I would be interested to know because I don't want to overclaim on the importance of these bugs or any other bugs in future. 
In this case, #13493 wrecked the code for estimates on a 20 million euro national roadbuilding project here and it cost me a few weeks of my life, but for all I know you're totally right about the general importance to the world at large.\n\nThough keep in mind: This isn't just only about scaling up one program. It's a db-level problem. If you have a large GIS DB server with many users, long-running queries etc. on large amounts of data, then you only need e.g. 2-3 people to be running some code with for-loops or a long series of calculation in pl/pgsql, and everything will fall apart in pgsql-land. \n\nLast point. When I wrote 'absolutely critical' I was under the impression this bug could have some serious impact on postgis/pgrouting. Since I wanted to double check what you said about 'expensive stuff' vs 'short/cheap stuff', I ran some benchmarks to check on a few functions. \n\nYou are right that only short, looped things are affected. e.g. for loops with calculations and so on. Didn't see any trouble with the calls I made to postgis inside or outside of pgsql. This confirms/replicates your findings. Updated numbers/tests posted to github shortly.\n\nRegards\n\nGraeme Bell\n",
"msg_date": "Thu, 9 Jul 2015 15:59:55 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] BUG #13493: pl/pgsql doesn't scale with cpus (PG9.3, 9.4)"
}
] |
[
{
"msg_contents": "Hello,\n\nMy 9.4 database is used as a data warehouse. I can't change the queries\ngenerated.\n\nfirst index: INDEX COL (A,B,C,D,E)\n\nFor a query based on COL A, the query planner sometimes goes to a seq\nscan instead of using the first composite index.\n\nThe solution is to add a second (redundant) index:\nsecond index: INDEX COL (A)\n\nFor a query based on COL A, B, C, D (without E) as well, it doesn't\nuse the first index and prefers a seq scan.\n\nI could create a third index:\nthird index: INDEX COL (A,B,C,D)\n\nBut I hope there is another solution (the table is huge).\n\nIt seems that the penalty for using composite indexes is high.\n\nQuestion is: is there a way to make the composite index more attractive to\nthe query planner? (ideally equivalent to a single-column index)\n\nThanks in advance",
"msg_date": "Thu, 9 Jul 2015 22:34:25 +0200",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "QUERY PLANNER - Indexe mono column VS composite Index"
},
{
"msg_contents": "2015-07-09 22:34 GMT+02:00 Nicolas Paris <[email protected]>:\n\n> Hello,\n>\n> My 9.4 database is used as datawharehouse. I can't change the queries\n> generated.\n>\n> first index : INDEX COL (A,B,C,D,E)\n>\n>\n> In case of query based on COL A, the query planner sometimes go to a seq\n> scan instead of using the first composite index.\n>\n> The solution is to add a second indexe (redondant)\n> second index : INDEX COL (A)\n>\n> In case of query based on COL A, B, C, D, (without E) as well, it doesn't\n> uses the first index and prefers a seq scan.\n>\n> I could create a third indexe :\n> first index : INDEX COL (A,B,C,D)\n>\n> But I hope there is an other solution for that (table is huge).\n>\n> It seems that the malus for using composite indexes is high.\n>\n> Question is : is there a way to make the composite index more attractive\n> to query planner ? (idealy equivalent to mono column indexe)\n>\n>\nThere's no way we can answer that without seeing actual queries and query\nplans.\n\n\n-- \nGuillaume.\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com",
"msg_date": "Thu, 9 Jul 2015 22:49:04 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: QUERY PLANNER - Indexe mono column VS composite Index"
},
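The prefix behaviour under discussion can be reproduced on a toy table. This is an editor's sketch (all names invented for illustration, not part of the original thread), to be run in a PostgreSQL session:

```sql
-- Toy reproduction (invented names): a five-column composite index.
CREATE TABLE demo (a int, b int, c int, d int, e int);
CREATE INDEX demo_abcde_idx ON demo (a, b, c, d, e);

-- A filter on the leading column alone is index-compatible...
EXPLAIN SELECT * FROM demo WHERE a = 1;

-- ...and so is a filter on a prefix (a, b, c, d). Whether the planner
-- actually chooses the index depends on its estimated cost: a wide
-- five-column index is expensive to scan, so on a big table a seq scan
-- can still win unless the predicate is selective.
EXPLAIN SELECT * FROM demo WHERE a = 1 AND b = 2 AND c = 3 AND d = 4;
```

The point the thread converges on: leading-column prefixes *can* use a composite index; the planner simply costs the wide index against a sequential scan.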
{
"msg_contents": "Ok, here is the problem (it's different than what I explained before)\n==INDEX ==\nCREATE INDEX of_idx_modifier\n ON i2b2data_multi_nomi.observation_fact\n USING btree\n (concept_cd COLLATE pg_catalog.\"default\", modifier_cd COLLATE\npg_catalog.\"default\", valtype_cd COLLATE pg_catalog.\"default\", tval_char\nCOLLATE pg_catalog.\"default\", nval_num);\n\n==QUERY==\n\n EXPLAIN ANALYSE select f.patient_num\nfrom i2b2data_multi_nomi.observation_fact f\nwhere\nf.concept_cd IN (select concept_cd from\n i2b2data_multi_nomi.concept_dimension where concept_path LIKE\n'\\\\i2b2\\\\cim10\\\\A00-B99\\\\%')\n AND ( modifier_cd = '@' AND valtype_cd = 'T' AND tval_char IN\n('DP') )\ngroup by f.patient_num ;\n\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=35153.99..35154.40 rows=41 width=4) (actual\ntime=81.223..82.718 rows=5206 loops=1)\n Group Key: f.patient_num\n -> Nested Loop (cost=4740.02..35089.11 rows=25951 width=4) (actual\ntime=45.393..76.893 rows=7359 loops=1)\n -> HashAggregate (cost=4739.45..4748.64 rows=919 width=10)\n(actual time=45.097..45.586 rows=925 loops=1)\n Group Key: (concept_dimension.concept_cd)::text\n -> Seq Scan on concept_dimension (cost=0.00..4734.73\nrows=1892 width=10) (actual time=17.479..44.573 rows=925 loops=1)\n Filter: ((concept_path)::text ~~\n'\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text)\n Rows Removed by Filter: 186413\n -> Index Scan using of_idx_modifier on observation_fact f\n (cost=0.56..32.86 rows=15 width=14) (actual time=0.025..0.031 rows=8\nloops=925)\n Index Cond: (((concept_cd)::text =\n(concept_dimension.concept_cd)::text) AND ((modifier_cd)::text = '@'::text)\nAND ((valtype_cd)::text\n= 'T'::text) AND ((tval_char)::text = 'DP'::text))\n Planning time: 2.843 ms\n Execution time: 83.273 ms\n(12 rows)\n\n\n\n============2 : without 3 constraint that match index => 
seq\nscan=======================================================================\n\n EXPLAIN ANALYSE select f.patient_num\nfrom i2b2data_multi_nomi.observation_fact f\nwhere\nf.concept_cd IN (select concept_cd from\n i2b2data_multi_nomi.concept_dimension where concept_path LIKE\n'\\\\i2b2\\\\cim10\\\\A00-B99\\\\%')\n -- AND ( modifier_cd = '@' AND valtype_cd = 'T' AND tval_char IN\n('DP') )\ngroup by f.patient_num ;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1345377.85..1346073.80 rows=69595 width=4) (actual\ntime=18043.140..18048.741 rows=16865 loops=1)\n Group Key: f.patient_num\n -> Hash Join (cost=4760.13..1233828.53 rows=44619728 width=4) (actual\ntime=17109.041..18027.763 rows=33835 loops=1)\n Hash Cond: ((f.concept_cd)::text =\n(concept_dimension.concept_cd)::text)\n -> Seq Scan on observation_fact f (cost=0.00..1057264.28\nrows=44619728 width=14) (actual time=0.040..7918.984 rows=44619320 loops=1)\n -> Hash (cost=4748.64..4748.64 rows=919 width=10) (actual\ntime=49.523..49.523 rows=925 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 39kB\n -> HashAggregate (cost=4739.45..4748.64 rows=919 width=10)\n(actual time=48.806..49.117 rows=925 loops=1)\n Group Key: (concept_dimension.concept_cd)::text\n -> Seq Scan on concept_dimension (cost=0.00..4734.73\nrows=1892 width=10) (actual time=18.828..48.191 rows=925 loops=1)\n Filter: ((concept_path)::text ~~\n'\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text)\n Rows Removed by Filter: 186413\n Planning time: 2.588 ms\n Execution time: 18051.031 ms\n(14 rows)\n\n\n=========3: without a constraint on tval_char => seq\nscan========================================================================\n\n\n EXPLAIN ANALYSE select f.patient_num\nfrom i2b2data_multi_nomi.observation_fact f\nwhere\nf.concept_cd IN (select concept_cd from\n i2b2data_multi_nomi.concept_dimension where concept_path 
LIKE\n'\\\\i2b2\\\\cim10\\\\A00-B99\\\\%')\n AND ( modifier_cd = '@' AND valtype_cd = 'T' )\ngroup by f.patient_num ;\n\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=1305637.84..1305688.23 rows=5039 width=4) (actual\ntime=22689.279..22694.583 rows=16865 loops=1)\n Group Key: f.patient_num\n -> Hash Join (cost=4760.13..1297561.67 rows=3230468 width=4) (actual\ntime=12368.418..22674.145 rows=33835 loops=1)\n Hash Cond: ((f.concept_cd)::text =\n(concept_dimension.concept_cd)::text)\n -> Seq Scan on observation_fact f (cost=0.00..1280362.92\nrows=3230468 width=14) (actual time=0.226..22004.808 rows=3195625 loops=1)\n Filter: (((modifier_cd)::text = '@'::text) AND\n((valtype_cd)::text = 'T'::text))\n Rows Removed by Filter: 41423695\n -> Hash (cost=4748.64..4748.64 rows=919 width=10) (actual\ntime=46.833..46.833 rows=925 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 39kB\n -> HashAggregate (cost=4739.45..4748.64 rows=919 width=10)\n(actual time=46.196..46.515 rows=925 loops=1)\n Group Key: (concept_dimension.concept_cd)::text\n -> Seq Scan on concept_dimension (cost=0.00..4734.73\nrows=1892 width=10) (actual time=18.899..45.800 rows=925 loops=1)\n Filter: ((concept_path)::text ~~\n'\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text)\n Rows Removed by Filter: 186413\n Planning time: 1.940 ms\n Execution time: 22695.913 ms\n\nWhat I would like is the planner allways hit of_idx_modifier\n\nThanks !\n\n2015-07-09 22:49 GMT+02:00 Guillaume Lelarge <[email protected]>:\n\n> 2015-07-09 22:34 GMT+02:00 Nicolas Paris <[email protected]>:\n>\n>> Hello,\n>>\n>> My 9.4 database is used as datawharehouse. 
I can't change the queries\n>> generated.\n>>\n>> first index : INDEX COL (A,B,C,D,E)\n>>\n>>\n>> In case of query based on COL A, the query planner sometimes go to a seq\n>> scan instead of using the first composite index.\n>>\n>> The solution is to add a second indexe (redondant)\n>> second index : INDEX COL (A)\n>>\n>> In case of query based on COL A, B, C, D, (without E) as well, it doesn't\n>> uses the first index and prefers a seq scan.\n>>\n>> I could create a third indexe :\n>> first index : INDEX COL (A,B,C,D)\n>>\n>> But I hope there is an other solution for that (table is huge).\n>>\n>> It seems that the malus for using composite indexes is high.\n>>\n>> Question is : is there a way to make the composite index more attractive\n>> to query planner ? (idealy equivalent to mono column indexe)\n>>\n>>\n> There's no way we can answer that without seeing actual queries and query\n> plans.\n>\n>\n> --\n> Guillaume.\n> http://blog.guillaume.lelarge.info\n> http://www.dalibo.com\n>\n\nOk, here is the problem (it's different than what I explained before)==INDEX ==CREATE INDEX of_idx_modifier ON i2b2data_multi_nomi.observation_fact USING btree (concept_cd COLLATE pg_catalog.\"default\", modifier_cd COLLATE pg_catalog.\"default\", valtype_cd COLLATE pg_catalog.\"default\", tval_char COLLATE pg_catalog.\"default\", nval_num);==QUERY== EXPLAIN ANALYSE select f.patient_num from i2b2data_multi_nomi.observation_fact f where f.concept_cd IN (select concept_cd from i2b2data_multi_nomi.concept_dimension where concept_path LIKE '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%') AND ( modifier_cd = '@' AND valtype_cd = 'T' AND tval_char IN ('DP') ) group by f.patient_num ; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=35153.99..35154.40 rows=41 width=4) (actual time=81.223..82.718 rows=5206 loops=1) Group Key: f.patient_num -> Nested Loop 
(cost=4740.02..35089.11 rows=25951 width=4) (actual time=45.393..76.893 rows=7359 loops=1) -> HashAggregate (cost=4739.45..4748.64 rows=919 width=10) (actual time=45.097..45.586 rows=925 loops=1) Group Key: (concept_dimension.concept_cd)::text -> Seq Scan on concept_dimension (cost=0.00..4734.73 rows=1892 width=10) (actual time=17.479..44.573 rows=925 loops=1) Filter: ((concept_path)::text ~~ '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text) Rows Removed by Filter: 186413 -> Index Scan using of_idx_modifier on observation_fact f (cost=0.56..32.86 rows=15 width=14) (actual time=0.025..0.031 rows=8 loops=925) Index Cond: (((concept_cd)::text = (concept_dimension.concept_cd)::text) AND ((modifier_cd)::text = '@'::text) AND ((valtype_cd)::text = 'T'::text) AND ((tval_char)::text = 'DP'::text)) Planning time: 2.843 ms Execution time: 83.273 ms(12 rows)============2 : without 3 constraint that match index => seq scan======================================================================= EXPLAIN ANALYSE select f.patient_num from i2b2data_multi_nomi.observation_fact f where f.concept_cd IN (select concept_cd from i2b2data_multi_nomi.concept_dimension where concept_path LIKE '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%') -- AND ( modifier_cd = '@' AND valtype_cd = 'T' AND tval_char IN ('DP') ) group by f.patient_num ; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=1345377.85..1346073.80 rows=69595 width=4) (actual time=18043.140..18048.741 rows=16865 loops=1) Group Key: f.patient_num -> Hash Join (cost=4760.13..1233828.53 rows=44619728 width=4) (actual time=17109.041..18027.763 rows=33835 loops=1) Hash Cond: ((f.concept_cd)::text = (concept_dimension.concept_cd)::text) -> Seq Scan on observation_fact f (cost=0.00..1057264.28 rows=44619728 width=14) (actual time=0.040..7918.984 rows=44619320 loops=1) -> Hash (cost=4748.64..4748.64 rows=919 width=10) (actual 
time=49.523..49.523 rows=925 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 39kB -> HashAggregate (cost=4739.45..4748.64 rows=919 width=10) (actual time=48.806..49.117 rows=925 loops=1) Group Key: (concept_dimension.concept_cd)::text -> Seq Scan on concept_dimension (cost=0.00..4734.73 rows=1892 width=10) (actual time=18.828..48.191 rows=925 loops=1) Filter: ((concept_path)::text ~~ '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text) Rows Removed by Filter: 186413 Planning time: 2.588 ms Execution time: 18051.031 ms(14 rows)=========3: without a constraint on tval_char => seq scan======================================================================== EXPLAIN ANALYSE select f.patient_num from i2b2data_multi_nomi.observation_fact f where f.concept_cd IN (select concept_cd from i2b2data_multi_nomi.concept_dimension where concept_path LIKE '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%') AND ( modifier_cd = '@' AND valtype_cd = 'T' ) group by f.patient_num ; QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------HashAggregate (cost=1305637.84..1305688.23 rows=5039 width=4) (actual time=22689.279..22694.583 rows=16865 loops=1) Group Key: f.patient_num -> Hash Join (cost=4760.13..1297561.67 rows=3230468 width=4) (actual time=12368.418..22674.145 rows=33835 loops=1) Hash Cond: ((f.concept_cd)::text = (concept_dimension.concept_cd)::text) -> Seq Scan on observation_fact f (cost=0.00..1280362.92 rows=3230468 width=14) (actual time=0.226..22004.808 rows=3195625 loops=1) Filter: (((modifier_cd)::text = '@'::text) AND ((valtype_cd)::text = 'T'::text)) Rows Removed by Filter: 41423695 -> Hash (cost=4748.64..4748.64 rows=919 width=10) (actual time=46.833..46.833 rows=925 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 39kB -> HashAggregate (cost=4739.45..4748.64 rows=919 width=10) (actual time=46.196..46.515 rows=925 loops=1) Group Key: (concept_dimension.concept_cd)::text -> Seq Scan on concept_dimension 
(cost=0.00..4734.73 rows=1892 width=10) (actual time=18.899..45.800 rows=925 loops=1) Filter: ((concept_path)::text ~~ '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text) Rows Removed by Filter: 186413 Planning time: 1.940 ms Execution time: 22695.913 msWhat I would like is the planner allways hit of_idx_modifier Thanks !2015-07-09 22:49 GMT+02:00 Guillaume Lelarge <[email protected]>:2015-07-09 22:34 GMT+02:00 Nicolas Paris <[email protected]>:Hello,My 9.4 database is used as datawharehouse. I can't change the queries generated.first index : INDEX COL (A,B,C,D,E)In case of query based on COL A, the query planner sometimes go to a seq scan instead of using the first composite index.The solution is to add a second indexe (redondant)second index : INDEX COL (A)In case of query based on COL A, B, C, D, (without E) as well, it doesn't uses the first index and prefers a seq scan.I could create a third indexe :first index : INDEX COL (A,B,C,D) But I hope there is an other solution for that (table is huge).It seems that the malus for using composite indexes is high.Question is : is there a way to make the composite index more attractive to query planner ? (idealy equivalent to mono column indexe)There's no way we can answer that without seeing actual queries and query plans.-- Guillaume. http://blog.guillaume.lelarge.info http://www.dalibo.com",
"msg_date": "Fri, 10 Jul 2015 11:34:13 +0200",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: QUERY PLANNER - Indexe mono column VS composite Index"
},
{
"msg_contents": "On Fri, Jul 10, 2015 at 2:34 AM, Nicolas Paris <[email protected]> wrote:\n\n>\n>\n> =========3: without a constraint on tval_char => seq\n> scan========================================================================\n>\n>\n> EXPLAIN ANALYSE select f.patient_num\n> from i2b2data_multi_nomi.observation_fact f\n> where\n> f.concept_cd IN (select concept_cd from\n> i2b2data_multi_nomi.concept_dimension where concept_path LIKE\n> '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%')\n> AND ( modifier_cd = '@' AND valtype_cd = 'T' )\n> group by f.patient_num ;\n>\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=1305637.84..1305688.23 rows=5039 width=4) (actual\n> time=22689.279..22694.583 rows=16865 loops=1)\n> Group Key: f.patient_num\n> -> Hash Join (cost=4760.13..1297561.67 rows=3230468 width=4) (actual\n> time=12368.418..22674.145 rows=33835 loops=1)\n> Hash Cond: ((f.concept_cd)::text =\n> (concept_dimension.concept_cd)::text)\n> -> Seq Scan on observation_fact f (cost=0.00..1280362.92\n> rows=3230468 width=14) (actual time=0.226..22004.808 rows=3195625 loops=1)\n> Filter: (((modifier_cd)::text = '@'::text) AND\n> ((valtype_cd)::text = 'T'::text))\n> Rows Removed by Filter: 41423695\n> -> Hash (cost=4748.64..4748.64 rows=919 width=10) (actual\n> time=46.833..46.833 rows=925 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 39kB\n> -> HashAggregate (cost=4739.45..4748.64 rows=919\n> width=10) (actual time=46.196..46.515 rows=925 loops=1)\n> Group Key: (concept_dimension.concept_cd)::text\n> -> Seq Scan on concept_dimension\n> (cost=0.00..4734.73 rows=1892 width=10) (actual time=18.899..45.800\n> rows=925 loops=1)\n> Filter: ((concept_path)::text ~~\n> '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text)\n> Rows Removed by Filter: 186413\n> Planning time: 1.940 ms\n> Execution time: 22695.913 ms\n>\n> What I would like is the planner allways 
hit of_idx_modifier\n>\n\nWhat does the above explain analyze query give when you have an index on\njust modifier_cd, or maybe on both (modifier_cd, valtype_cd)?\n\nYour original email said it uses the index in that case, but we would need\nto see the numbers in the query plan in order to figure out why it is doing\nthat.\n\nIt seems like that the \"tval_char IN ('DP')\" part of the restriction is\nvery selective, while the other two restrictions are not.\n\nCheers,\n\nJeff\n\nOn Fri, Jul 10, 2015 at 2:34 AM, Nicolas Paris <[email protected]> wrote:=========3: without a constraint on tval_char => seq scan======================================================================== EXPLAIN ANALYSE select f.patient_num from i2b2data_multi_nomi.observation_fact f where f.concept_cd IN (select concept_cd from i2b2data_multi_nomi.concept_dimension where concept_path LIKE '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%') AND ( modifier_cd = '@' AND valtype_cd = 'T' ) group by f.patient_num ; QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------HashAggregate (cost=1305637.84..1305688.23 rows=5039 width=4) (actual time=22689.279..22694.583 rows=16865 loops=1) Group Key: f.patient_num -> Hash Join (cost=4760.13..1297561.67 rows=3230468 width=4) (actual time=12368.418..22674.145 rows=33835 loops=1) Hash Cond: ((f.concept_cd)::text = (concept_dimension.concept_cd)::text) -> Seq Scan on observation_fact f (cost=0.00..1280362.92 rows=3230468 width=14) (actual time=0.226..22004.808 rows=3195625 loops=1) Filter: (((modifier_cd)::text = '@'::text) AND ((valtype_cd)::text = 'T'::text)) Rows Removed by Filter: 41423695 -> Hash (cost=4748.64..4748.64 rows=919 width=10) (actual time=46.833..46.833 rows=925 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 39kB -> HashAggregate (cost=4739.45..4748.64 rows=919 width=10) (actual time=46.196..46.515 rows=925 loops=1) Group Key: (concept_dimension.concept_cd)::text -> 
Seq Scan on concept_dimension (cost=0.00..4734.73 rows=1892 width=10) (actual time=18.899..45.800 rows=925 loops=1) Filter: ((concept_path)::text ~~ '\\\\i2b2\\\\cim10\\\\A00-B99\\\\%'::text) Rows Removed by Filter: 186413 Planning time: 1.940 ms Execution time: 22695.913 msWhat I would like is the planner allways hit of_idx_modifier What does the above explain analyze query give when you have an index on just modifier_cd, or maybe on both (modifier_cd, valtype_cd)?Your original email said it uses the index in that case, but we would need to see the numbers in the query plan in order to figure out why it is doing that. It seems like that the \"tval_char IN ('DP')\" part of the restriction is very selective, while the other two restrictions are not.Cheers,Jeff",
"msg_date": "Fri, 10 Jul 2015 09:20:21 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: QUERY PLANNER - Indexe mono column VS composite Index"
}
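Following Jeff's point that `tval_char IN ('DP')` carries most of the selectivity while `modifier_cd` and `valtype_cd` do not, one option worth testing is a partial index that bakes the low-selectivity constants into the index predicate. This is an editor's sketch against the schema quoted above, not advice given in the thread:

```sql
-- Editor's sketch: index only the rows matching the low-selectivity
-- constants, keyed on the columns the queries actually discriminate on.
-- Queries repeating "modifier_cd = '@' AND valtype_cd = 'T'" verbatim
-- can use this much narrower index instead of the five-column one.
CREATE INDEX of_idx_modifier_partial
    ON i2b2data_multi_nomi.observation_fact (concept_cd, tval_char)
    WHERE modifier_cd = '@' AND valtype_cd = 'T';
```

A partial index like this is smaller than the full composite index, which lowers its estimated scan cost and makes the planner more likely to choose it for the queries shown.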
] |
[
{
"msg_contents": "Hi folks,\n\nI have a fairly simple three-table query (pasted below) with two LEFT JOINs\nand an OR in the WHERE clause that for some reason is doing sequential\nscans on all three tables (two of them large -- several million rows), even\nthough I have indexes on the relevant \"filename\" columns.\n\nNote the two parts of the where clause -- a filter on the image2 table and\na filter on the panoramas table. If I comment out either filter and just\nfilter on i.filename by itself, or p.filename by itself, the query planner\nuses the relevant index and the query takes a few milliseconds. But when I\nhave both clauses (as shown below) it falls back to sequential scanning all\nthree tables for some reason, taking several seconds.\n\nWhat am I missing? There must be some reason PostgreSQL can't use the index\nin this case, but I can't see what it is. If I were PostgreSQL I'd be using\nthe index on i.filename and p.filename to filter to a couple of rows first,\nthen join, making it super-quick.\n\nIn this test I'm running PostgreSQL 9.3.3 on Windows 64-bit, but the same\nthing happens on our Linux-based database. We're using autovacuum on both\ndatabases, and I've tried manually VACUUM ANALYZE-ing all three tables just\nin case, but that doesn't help. Memory config values are set to sensible\nvalues. 
Note: this is a new query, so in terms of \"history\", it's always\nbeen slow.\n\nMy query and PostgreSQL version and the explain and a lot of other table\ndata is pasted below.\n\nQUERY\n----------\nselect ai.position, i.filename as image_filename, p.filename as\npanorama_filename\nfrom album_items ai\nleft join image2 i on i.imageid = ai.image_id\nleft join panoramas p on p.id = ai.panorama_id\nwhere i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg') or\n p.filename in ('pano360--v471', 'pano360-2--v474')\n----------\n\nPOSTGRESQL VERSION\n----------\nPostgreSQL 9.3.3, compiled by Visual C++ build 1600, 64-bit\n----------\n\nEXPLAIN (ANALYZE, BUFFERS)\n----------\nhttp://explain.depesz.com/s/qcpQ\n\nHash Left Join (cost=344184.62..963863.99 rows=376 width=57) (actual\ntime=3157.104..8838.329 rows=2 loops=1)\n Hash Cond: (ai.panorama_id = p.id)\n Filter: ((i.filename = ANY\n('{pano360--v471.jpg,pano360-2--v474.jpg}'::text[])) OR (p.filename = ANY\n('{pano360--v471,pano360-2--v474}'::text[])))\n Rows Removed by Filter: 7347790\n Buffers: shared hit=8967 read=198827, temp read=76936 written=75908\n I/O Timings: read=609.403\n -> Hash Left Join (cost=341001.56..781324.85 rows=7346959 width=39)\n(actual time=2660.821..7842.202 rows=7347792 loops=1)\n Hash Cond: (ai.image_id = i.imageid)\n Buffers: shared hit=6936 read=198827, temp read=76662 written=75640\n I/O Timings: read=609.403\n -> Seq Scan on album_items ai (cost=0.00..156576.59 rows=7346959\nwidth=12) (actual time=0.009..981.074 rows=7347792 loops=1)\n Buffers: shared hit=4297 read=78810\n I/O Timings: read=251.402\n -> Hash (cost=194687.36..194687.36 rows=7203136 width=35) (actual\ntime=2658.643..2658.643 rows=7200287 loops=1)\n Buckets: 2048 Batches: 512 Memory Usage: 976kB\n Buffers: shared hit=2639 read=120017, temp written=49961\n I/O Timings: read=358.000\n -> Seq Scan on image2 i (cost=0.00..194687.36 rows=7203136\nwidth=35) (actual time=0.007..1063.586 rows=7200287 loops=1)\n Buffers: shared 
hit=2639 read=120017\n I/O Timings: read=358.000\n -> Hash (cost=2423.47..2423.47 rows=39247 width=26) (actual\ntime=12.100..12.100 rows=39247 loops=1)\n Buckets: 2048 Batches: 4 Memory Usage: 575kB\n Buffers: shared hit=2031, temp written=170\n -> Seq Scan on panoramas p (cost=0.00..2423.47 rows=39247\nwidth=26) (actual time=0.003..5.470 rows=39247 loops=1)\n Buffers: shared hit=2031\nTotal runtime: 8838.701 ms\n----------\n\nTABLE METADATA\n----------\nNumber of rows in album_items: 7347792\nNumber of rows in image2: 7200287\nNumber of rows in panoramas: 39247\n----------\n\nTABLES (AND THEIR INDEXES) REFERENCED IN QUERY\n----------\nCREATE TABLE content.album_items\n(\n album_id integer NOT NULL,\n image_id integer,\n \"position\" integer,\n caption text,\n active boolean NOT NULL,\n panorama_id integer,\n CONSTRAINT album_items_album_id_fkey FOREIGN KEY (album_id) REFERENCES\ncontent.albums (id),\n CONSTRAINT album_items_image_id_fkey FOREIGN KEY (image_id) REFERENCES\ncontent.image2 (imageid),\n CONSTRAINT album_items_panorama_id_fkey FOREIGN KEY (panorama_id)\nREFERENCES content.panoramas (id),\n CONSTRAINT album_image_unique UNIQUE (album_id, image_id)\n);\n\nCREATE INDEX album_items_album_id_idx ON content.album_items (album_id);\nCREATE INDEX album_items_image_id_idx ON content.album_items (image_id);\nCREATE INDEX album_items_panorama_id_idx ON content.album_items\n(panorama_id);\n\n\nCREATE TABLE content.image2\n(\n imageid integer NOT NULL DEFAULT nextval('image2_imageid_seq'::regclass),\n hotelid integer,\n filename text NOT NULL,\n originalfoldername text NOT NULL,\n width integer,\n height integer,\n active boolean NOT NULL DEFAULT false,\n importid integer,\n timetaken timestamp without time zone,\n state integer NOT NULL DEFAULT 1,\n has_wide boolean NOT NULL DEFAULT false,\n type integer,\n document_id integer,\n property_id integer,\n CONSTRAINT image2_pkey PRIMARY KEY (imageid),\n CONSTRAINT fk_image2_hotelid FOREIGN KEY (hotelid) 
REFERENCES\ncontent.hotel (hotelid),\n CONSTRAINT fk_image2_importid FOREIGN KEY (importid) REFERENCES\ncontent.imageimport (importid),\n CONSTRAINT image2_document_id_fkey FOREIGN KEY (document_id) REFERENCES\ncontent.documents (id),\n CONSTRAINT image2_property_id_fkey FOREIGN KEY (property_id) REFERENCES\ncontent.properties (id),\n CONSTRAINT uq_image2_filename UNIQUE (filename)\n);\n\nCREATE INDEX fki_image2_property_id_fkey ON content.image2 (property_id);\nCREATE INDEX image2_document_id_idx ON content.image2 (document_id);\nCREATE INDEX image2_importid_idx ON content.image2 (importid);\nCREATE INDEX ix_image2_hotelid ON content.image2 (hotelid);\nCREATE INDEX ix_image2_imageid ON content.image2 (imageid);\n\n\nCREATE TABLE content.panoramas\n(\n id integer NOT NULL DEFAULT nextval('panoramas_id_seq'::regclass),\n hotel_id integer,\n filename text NOT NULL,\n folder text NOT NULL,\n import_id integer NOT NULL,\n active boolean NOT NULL DEFAULT false,\n state integer NOT NULL,\n num_images integer NOT NULL DEFAULT 0,\n type integer,\n hdr boolean NOT NULL DEFAULT false,\n has_preview boolean NOT NULL DEFAULT false,\n property_id integer,\n data json,\n previews_created boolean NOT NULL DEFAULT false,\n CONSTRAINT panoramas_pkey PRIMARY KEY (id),\n CONSTRAINT fk_panoramas_hotel_id FOREIGN KEY (hotel_id) REFERENCES\ncontent.hotel (hotelid),\n CONSTRAINT fk_panoramas_import_id FOREIGN KEY (import_id) REFERENCES\ncontent.imageimport (importid),\n CONSTRAINT panoramas_property_id_fkey FOREIGN KEY (property_id)\nREFERENCES content.properties (id),\n CONSTRAINT panoramas_uq_filename UNIQUE (filename)\n);\n\nCREATE INDEX fki_panoramas_property_id_fkey ON content.panoramas\n(property_id);\nCREATE INDEX panoramas_hotel_id_idx ON content.panoramas (hotel_id);\nCREATE INDEX panoramas_import_id_idx ON content.panoramas (import_id);\n----------\n\nThanks in advance,\nBen\n\nHi folks,I have a fairly simple three-table query (pasted below) with two LEFT JOINs and an OR in the 
WHERE clause that for some reason is doing sequential scans on all three tables (two of them large -- several million rows), even though I have indexes on the relevant \"filename\" columns.Note the two parts of the where clause -- a filter on the image2 table and a filter on the panoramas table. If I comment out either filter and just filter on i.filename by itself, or p.filename by itself, the query planner uses the relevant index and the query takes a few milliseconds. But when I have both clauses (as shown below) it falls back to sequential scanning all three tables for some reason, taking several seconds.What am I missing? There must be some reason PostgreSQL can't use the index in this case, but I can't see what it is. If I were PostgreSQL I'd be using the index on i.filename and p.filename to filter to a couple of rows first, then join, making it super-quick.In this test I'm running PostgreSQL 9.3.3 on Windows 64-bit, but the same thing happens on our Linux-based database. We're using autovacuum on both databases, and I've tried manually VACUUM ANALYZE-ing all three tables just in case, but that doesn't help. Memory config values are set to sensible values. 
Note: this is a new query, so in terms of \"history\", it's always been slow.My query and PostgreSQL version and the explain and a lot of other table data is pasted below.QUERY----------select ai.position, i.filename as image_filename, p.filename as panorama_filenamefrom album_items aileft join image2 i on i.imageid = ai.image_idleft join panoramas p on p.id = ai.panorama_idwhere i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg') or p.filename in ('pano360--v471', 'pano360-2--v474')----------POSTGRESQL VERSION----------PostgreSQL 9.3.3, compiled by Visual C++ build 1600, 64-bit----------EXPLAIN (ANALYZE, BUFFERS)----------http://explain.depesz.com/s/qcpQHash Left Join (cost=344184.62..963863.99 rows=376 width=57) (actual time=3157.104..8838.329 rows=2 loops=1) Hash Cond: (ai.panorama_id = p.id) Filter: ((i.filename = ANY ('{pano360--v471.jpg,pano360-2--v474.jpg}'::text[])) OR (p.filename = ANY ('{pano360--v471,pano360-2--v474}'::text[]))) Rows Removed by Filter: 7347790 Buffers: shared hit=8967 read=198827, temp read=76936 written=75908 I/O Timings: read=609.403 -> Hash Left Join (cost=341001.56..781324.85 rows=7346959 width=39) (actual time=2660.821..7842.202 rows=7347792 loops=1) Hash Cond: (ai.image_id = i.imageid) Buffers: shared hit=6936 read=198827, temp read=76662 written=75640 I/O Timings: read=609.403 -> Seq Scan on album_items ai (cost=0.00..156576.59 rows=7346959 width=12) (actual time=0.009..981.074 rows=7347792 loops=1) Buffers: shared hit=4297 read=78810 I/O Timings: read=251.402 -> Hash (cost=194687.36..194687.36 rows=7203136 width=35) (actual time=2658.643..2658.643 rows=7200287 loops=1) Buckets: 2048 Batches: 512 Memory Usage: 976kB Buffers: shared hit=2639 read=120017, temp written=49961 I/O Timings: read=358.000 -> Seq Scan on image2 i (cost=0.00..194687.36 rows=7203136 width=35) (actual time=0.007..1063.586 rows=7200287 loops=1) Buffers: shared hit=2639 read=120017 I/O Timings: read=358.000 -> Hash (cost=2423.47..2423.47 rows=39247 
width=26) (actual time=12.100..12.100 rows=39247 loops=1) Buckets: 2048 Batches: 4 Memory Usage: 575kB Buffers: shared hit=2031, temp written=170 -> Seq Scan on panoramas p (cost=0.00..2423.47 rows=39247 width=26) (actual time=0.003..5.470 rows=39247 loops=1) Buffers: shared hit=2031Total runtime: 8838.701 ms----------TABLE METADATA----------Number of rows in album_items: 7347792Number of rows in image2: 7200287Number of rows in panoramas: 39247----------TABLES (AND THEIR INDEXES) REFERENCED IN QUERY----------CREATE TABLE content.album_items( album_id integer NOT NULL, image_id integer, \"position\" integer, caption text, active boolean NOT NULL, panorama_id integer, CONSTRAINT album_items_album_id_fkey FOREIGN KEY (album_id) REFERENCES content.albums (id), CONSTRAINT album_items_image_id_fkey FOREIGN KEY (image_id) REFERENCES content.image2 (imageid), CONSTRAINT album_items_panorama_id_fkey FOREIGN KEY (panorama_id) REFERENCES content.panoramas (id), CONSTRAINT album_image_unique UNIQUE (album_id, image_id));CREATE INDEX album_items_album_id_idx ON content.album_items (album_id);CREATE INDEX album_items_image_id_idx ON content.album_items (image_id);CREATE INDEX album_items_panorama_id_idx ON content.album_items (panorama_id);CREATE TABLE content.image2( imageid integer NOT NULL DEFAULT nextval('image2_imageid_seq'::regclass), hotelid integer, filename text NOT NULL, originalfoldername text NOT NULL, width integer, height integer, active boolean NOT NULL DEFAULT false, importid integer, timetaken timestamp without time zone, state integer NOT NULL DEFAULT 1, has_wide boolean NOT NULL DEFAULT false, type integer, document_id integer, property_id integer, CONSTRAINT image2_pkey PRIMARY KEY (imageid), CONSTRAINT fk_image2_hotelid FOREIGN KEY (hotelid) REFERENCES content.hotel (hotelid), CONSTRAINT fk_image2_importid FOREIGN KEY (importid) REFERENCES content.imageimport (importid), CONSTRAINT image2_document_id_fkey FOREIGN KEY (document_id) REFERENCES 
content.documents (id), CONSTRAINT image2_property_id_fkey FOREIGN KEY (property_id) REFERENCES content.properties (id), CONSTRAINT uq_image2_filename UNIQUE (filename));CREATE INDEX fki_image2_property_id_fkey ON content.image2 (property_id);CREATE INDEX image2_document_id_idx ON content.image2 (document_id);CREATE INDEX image2_importid_idx ON content.image2 (importid);CREATE INDEX ix_image2_hotelid ON content.image2 (hotelid);CREATE INDEX ix_image2_imageid ON content.image2 (imageid);CREATE TABLE content.panoramas( id integer NOT NULL DEFAULT nextval('panoramas_id_seq'::regclass), hotel_id integer, filename text NOT NULL, folder text NOT NULL, import_id integer NOT NULL, active boolean NOT NULL DEFAULT false, state integer NOT NULL, num_images integer NOT NULL DEFAULT 0, type integer, hdr boolean NOT NULL DEFAULT false, has_preview boolean NOT NULL DEFAULT false, property_id integer, data json, previews_created boolean NOT NULL DEFAULT false, CONSTRAINT panoramas_pkey PRIMARY KEY (id), CONSTRAINT fk_panoramas_hotel_id FOREIGN KEY (hotel_id) REFERENCES content.hotel (hotelid), CONSTRAINT fk_panoramas_import_id FOREIGN KEY (import_id) REFERENCES content.imageimport (importid), CONSTRAINT panoramas_property_id_fkey FOREIGN KEY (property_id) REFERENCES content.properties (id), CONSTRAINT panoramas_uq_filename UNIQUE (filename));CREATE INDEX fki_panoramas_property_id_fkey ON content.panoramas (property_id);CREATE INDEX panoramas_hotel_id_idx ON content.panoramas (hotel_id);CREATE INDEX panoramas_import_id_idx ON content.panoramas (import_id);----------Thanks in advance,Ben",
"msg_date": "Mon, 13 Jul 2015 16:54:29 -0400",
"msg_from": "Ben Hoyt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner not using indexes with JOIN query and OR clause"
},
{
"msg_contents": "On Mon, Jul 13, 2015 at 3:54 PM, Ben Hoyt <[email protected]> wrote:\n> Hi folks,\n>\n> I have a fairly simple three-table query (pasted below) with two LEFT JOINs\n> and an OR in the WHERE clause that for some reason is doing sequential scans\n> on all three tables (two of them large -- several million rows), even though\n> I have indexes on the relevant \"filename\" columns.\n>\n> Note the two parts of the where clause -- a filter on the image2 table and a\n> filter on the panoramas table. If I comment out either filter and just\n> filter on i.filename by itself, or p.filename by itself, the query planner\n> uses the relevant index and the query takes a few milliseconds. But when I\n> have both clauses (as shown below) it falls back to sequential scanning all\n> three tables for some reason, taking several seconds.\n>\n> What am I missing? There must be some reason PostgreSQL can't use the index\n> in this case, but I can't see what it is. If I were PostgreSQL I'd be using\n> the index on i.filename and p.filename to filter to a couple of rows first,\n> then join, making it super-quick.\n>\n> In this test I'm running PostgreSQL 9.3.3 on Windows 64-bit, but the same\n\nFYI, this won'f fix your issue, but upgrade your postgres to the\nlatest bugfix release, 9.3.9.\n\n> My query and PostgreSQL version and the explain and a lot of other table\n> data is pasted below.\n>\n> QUERY\n> ----------\n> select ai.position, i.filename as image_filename, p.filename as\n> panorama_filename\n> from album_items ai\n> left join image2 i on i.imageid = ai.image_id\n> left join panoramas p on p.id = ai.panorama_id\n> where i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg') or\n> p.filename in ('pano360--v471', 'pano360-2--v474')\n\nTry refactoring to:\n\nselect ai.position, i.filename as image_filename, p.filename as\npanorama_filename\nfrom album_items ai\nleft join image2 i on i.imageid = ai.image_id\nleft join panoramas p on p.id = ai.panorama_id\nwhere 
i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg')\nunion all select ai.position, i.filename as image_filename, p.filename\nas panorama_filename\nfrom album_items ai\nleft join image2 i on i.imageid = ai.image_id\nleft join panoramas p on p.id = ai.panorama_id\nwhere p.filename in ('pano360--v471', 'pano360-2--v474')\n\n...and see if that helps. Dealing with 'or' conditions is a general\nweakness of the planner that has gotten better over time but in some\ncases you have to boil it to 'union all'.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Jul 2015 16:01:35 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner not using indexes with JOIN query and OR clause"
},
{
"msg_contents": "On Mon, Jul 13, 2015 at 4:01 PM, Merlin Moncure <[email protected]> wrote:\n> On Mon, Jul 13, 2015 at 3:54 PM, Ben Hoyt <[email protected]> wrote:\n>> Hi folks,\n>>\n>> I have a fairly simple three-table query (pasted below) with two LEFT JOINs\n>> and an OR in the WHERE clause that for some reason is doing sequential scans\n>> on all three tables (two of them large -- several million rows), even though\n>> I have indexes on the relevant \"filename\" columns.\n>>\n>> Note the two parts of the where clause -- a filter on the image2 table and a\n>> filter on the panoramas table. If I comment out either filter and just\n>> filter on i.filename by itself, or p.filename by itself, the query planner\n>> uses the relevant index and the query takes a few milliseconds. But when I\n>> have both clauses (as shown below) it falls back to sequential scanning all\n>> three tables for some reason, taking several seconds.\n>>\n>> What am I missing? There must be some reason PostgreSQL can't use the index\n>> in this case, but I can't see what it is. 
If I were PostgreSQL I'd be using\n>> the index on i.filename and p.filename to filter to a couple of rows first,\n>> then join, making it super-quick.\n>>\n>> In this test I'm running PostgreSQL 9.3.3 on Windows 64-bit, but the same\n>\n> FYI, this won't fix your issue, but upgrade your postgres to the\n> latest bugfix release, 9.3.9.\n>\n>> My query and PostgreSQL version and the explain and a lot of other table\n>> data is pasted below.\n>>\n>> QUERY\n>> ----------\n>> select ai.position, i.filename as image_filename, p.filename as\n>> panorama_filename\n>> from album_items ai\n>> left join image2 i on i.imageid = ai.image_id\n>> left join panoramas p on p.id = ai.panorama_id\n>> where i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg') or\n>> p.filename in ('pano360--v471', 'pano360-2--v474')\n>\n> Try refactoring to:\n>\n> select ai.position, i.filename as image_filename, p.filename as\n> panorama_filename\n> from album_items ai\n> left join image2 i on i.imageid = ai.image_id\n> left join panoramas p on p.id = ai.panorama_id\n> where i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg')\n> union all select ai.position, i.filename as image_filename, p.filename\n> as panorama_filename\n> from album_items ai\n> left join image2 i on i.imageid = ai.image_id\n> left join panoramas p on p.id = ai.panorama_id\n> where p.filename in ('pano360--v471', 'pano360-2--v474')\n\nOne thing: if a row matches both conditions you may get duplicate\nrows. If that's an issue, one possible way to fix that would be to\nconvert 'union all' to 'union'. I'd have to see the query output\nand how it performed. 
Another way to tackle queries like this is with\nEXISTS:\n\nselect ai.position, i.filename as image_filename, p.filename as\npanorama_filename\nfrom album_items ai\nwhere exists (\n select 1 from image2 i\n where i.imageid = ai.image_id\n and i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg')\n union all select 1 from panoramas p\n where p.id = ai.panorama_id\n and p.filename in ('pano360--v471', 'pano360-2--v474')\n)\n\nwhy do this? by pushing the UNION ALL into the exists statement, you\nguarantee your self at most one row from album_items without having to\nresort to performance dangerous tradeoffs like DISTINCT or UNION.\nGive it a try...\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 Jul 2015 08:14:00 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner not using indexes with JOIN query and OR clause"
},
{
"msg_contents": ">\n> Try refactoring to:\n>\n> select ai.position, i.filename as image_filename, p.filename as\n> panorama_filename\n> from album_items ai\n> left join image2 i on i.imageid = ai.image_id\n> left join panoramas p on p.id = ai.panorama_id\n> where i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg')\n> union all select ai.position, i.filename as image_filename, p.filename\n> as panorama_filename\n> from album_items ai\n> left join image2 i on i.imageid = ai.image_id\n> left join panoramas p on p.id = ai.panorama_id\n> where p.filename in ('pano360--v471', 'pano360-2--v474')\n>\n> ...and see if that helps. Dealing with 'or' conditions is a general\n> weakness of the planner that has gotten better over time but in some\n> cases you have to boil it to 'union all'.\n>\n\nYes, this definitely helps and the query performance goes back to normal,\nthanks. It makes the code a bit more complicated, so not ideal, but\ndefinitely works!\n\nThanks for the help. I don't how much you know about PostgreSQL internals\n(I don't!), but what optimization would need to be in place for PostgreSQL\nto be smarter about this query?\n\n-Ben\n\nTry refactoring to:\n\nselect ai.position, i.filename as image_filename, p.filename as\npanorama_filename\nfrom album_items ai\nleft join image2 i on i.imageid = ai.image_id\nleft join panoramas p on p.id = ai.panorama_id\nwhere i.filename in ('pano360--v471.jpg', 'pano360-2--v474.jpg')\nunion all select ai.position, i.filename as image_filename, p.filename\nas panorama_filename\nfrom album_items ai\nleft join image2 i on i.imageid = ai.image_id\nleft join panoramas p on p.id = ai.panorama_id\nwhere p.filename in ('pano360--v471', 'pano360-2--v474')\n\n...and see if that helps. Dealing with 'or' conditions is a general\nweakness of the planner that has gotten better over time but in some\ncases you have to boil it to 'union all'.Yes, this definitely helps and the query performance goes back to normal, thanks. 
It makes the code a bit more complicated, so not ideal, but definitely works!Thanks for the help. I don't how much you know about PostgreSQL internals (I don't!), but what optimization would need to be in place for PostgreSQL to be smarter about this query?-Ben",
"msg_date": "Tue, 14 Jul 2015 16:17:52 -0400",
"msg_from": "Ben Hoyt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner not using indexes with JOIN query and OR clause"
}
] |
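The OR-versus-UNION-ALL behaviour discussed in the thread above can be sketched outside SQL. This is a hedged illustration in Python with toy data (the row values and names are illustrative only, not PostgreSQL internals): a single OR filter returns each matching row once, the UNION ALL rewrite emits a row twice when it matches both branches, and a distinct pass (UNION semantics) restores the original result.

```python
# Toy stand-ins for the album_items join results in the thread.
rows = [
    {"position": 1, "image": "pano360--v471.jpg", "pano": None},
    {"position": 2, "image": "other.jpg", "pano": "pano360-2--v474"},
    {"position": 3, "image": "other2.jpg", "pano": None},
    {"position": 4, "image": "pano360-2--v474.jpg", "pano": "pano360--v471"},
]

IMAGES = {"pano360--v471.jpg", "pano360-2--v474.jpg"}
PANOS = {"pano360--v471", "pano360-2--v474"}

# Single OR filter: each matching row appears exactly once.
or_result = [r for r in rows if r["image"] in IMAGES or r["pano"] in PANOS]

# UNION ALL rewrite: a row matching both branches appears twice.
union_all = ([r for r in rows if r["image"] in IMAGES]
             + [r for r in rows if r["pano"] in PANOS])

# UNION (distinct) removes the duplicate and matches the OR result.
seen, union_distinct = set(), []
for r in union_all:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        union_distinct.append(r)
```

Row 4 matches both filters, so it appears twice in `union_all` but once in `or_result` and `union_distinct`, which is exactly the duplicate-row caveat raised in the thread.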
[
{
"msg_contents": "Apologies ahead of time for not knowing which group to send to, but I\nwanted to see if anyone has encountered and resolved this type of error.\nI'm setting up postgresql 9.2 streaming replication on RH and after copying\nthe master data directory over to the slave, the psql service refuses start\nand gives the following errors.\n\n\n\n 2015-07-13 23:55:41.224 UTC FATAL: could not create shared memory\nsegment: Invalid argument\n 2015-07-13 23:55:41.224 UTC DETAIL: Failed system call was\nshmget(key=5432001, size=1146945536, 03600).\n 2015-07-13 23:55:41.224 UTC HINT: This error usually means that\nPostgreSQL's request for a shared memory segment exceeded your kernel's\nSHMMAX parameter. You can either reduce the request size or reconfigure\nthe kernel with larger SHMMAX. To reduce the request size (currently\n1146945536 bytes), reduce PostgreSQL's shared memory usage, perhaps by\nreducing shared_buffers or max_connections.\n If the request size is already small, it's possible that it is less\nthan your kernel's SHMMIN parameter, in which case raising the request size\nor reconfiguring SHMMIN is called for.\n The PostgreSQL documentation contains more information about shared\nmemory configuration.\n 2015-07-13 23:56:21.344 UTC FATAL: could not create shared memory\nsegment: Invalid argument\n 2015-07-13 23:56:21.344 UTC DETAIL: Failed system call was\nshmget(key=5432001, size=58302464, 03600).\n 2015-07-13 23:56:21.344 UTC HINT: This error usually means that\nPostgreSQL's request for a shared memory segment exceeded your kernel's\nSHMMAX parameter. You can either reduce the request size or reconfigure\nthe kernel with larger SHMMAX. 
To reduce the request size (currently\n58302464 bytes), reduce PostgreSQL's shared memory usage, perhaps by\nreducing shared_buffers or max_connections.\n    If the request size is already small, it's possible that it is less\nthan your kernel's SHMMIN parameter, in which case raising the request size\nor reconfiguring SHMMIN is called for.\n    The PostgreSQL documentation contains more information about shared\nmemory configuration.\n\n\n\nI've set shared_buffer way down to next to nothing along with kernel.shmmax\nand kernel.shmall per some blogs. However, the same error persists, and I'm\ngetting nowhere. I think ultimately the solution is to upgrade, but the\ndevs may not be ready for an upgrade at this point. Any help would be\ngreatly appreciated. Thanks!",
"msg_date": "Mon, 13 Jul 2015 19:08:22 -0500",
"msg_from": "Ryan King - NOAA Affiliate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
{
"msg_contents": "On Jul 13, 2015, at 6:08 PM, Ryan King - NOAA Affiliate <[email protected]> wrote:\n> \n> I've set shared_buffer way down to next to nothing...\n\nYet pg is still trying for 1GB at startup. You should make sure that the postgresql.conf you've been editing is the one that's actually being used. (And, BTW, admin is the right list for this discussion.)\n\n-- \nScott Ribe\[email protected]\nhttp://www.elevated-dev.com/\nhttps://www.linkedin.com/in/scottribe/\n(303) 722-0567 voice\n\n\n\n\n\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Mon, 13 Jul 2015 18:16:57 -0600",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
{
"msg_contents": "On 7/13/2015 7:08 PM, Ryan King - NOAA Affiliate wrote:\n> Apologies ahead of time for not knowing which group to send to, but I\n> wanted to see if anyone has encountered and resolved this type of error.\n> I'm setting up postgresql 9.2 streaming replication on RH and after\n> copying the master data directory over to the slave, the psql service\n> refuses start and gives the following errors.\n>\n>\n>\n> 2015-07-13 23:55:41.224 UTC FATAL: could not create shared memory\n> segment: Invalid argument\n> 2015-07-13 23:55:41.224 UTC DETAIL: Failed system call was\n> shmget(key=5432001, size=1146945536, 03600).\n> 2015-07-13 23:55:41.224 UTC HINT: This error usually means that\n> PostgreSQL's request for a shared memory segment exceeded your kernel's\n> SHMMAX parameter. You can either reduce the request size or reconfigure\n> the kernel with larger SHMMAX. To reduce the request size (currently\n> 1146945536 bytes), reduce PostgreSQL's shared memory usage, perhaps by\n> reducing shared_buffers or max_connections.\n> If the request size is already small, it's possible that it is\n> less than your kernel's SHMMIN parameter, in which case raising the\n> request size or reconfiguring SHMMIN is called for.\n> The PostgreSQL documentation contains more information about\n> shared memory configuration.\n> 2015-07-13 23:56:21.344 UTC FATAL: could not create shared memory\n> segment: Invalid argument\n> 2015-07-13 23:56:21.344 UTC DETAIL: Failed system call was\n> shmget(key=5432001, size=58302464, 03600).\n> 2015-07-13 23:56:21.344 UTC HINT: This error usually means that\n> PostgreSQL's request for a shared memory segment exceeded your kernel's\n> SHMMAX parameter. You can either reduce the request size or reconfigure\n> the kernel with larger SHMMAX. 
To reduce the request size (currently\n> 58302464 bytes), reduce PostgreSQL's shared memory usage, perhaps by\n> reducing shared_buffers or max_connections.\n>      If the request size is already small, it's possible that it is\n> less than your kernel's SHMMIN parameter, in which case raising the\n> request size or reconfiguring SHMMIN is called for.\n>      The PostgreSQL documentation contains more information about\n> shared memory configuration.\n>\n>\n>\n> I've set shared_buffer way down to next to nothing along with\n> kernel.shmmax and kernel.shmall per some blogs. However, the same error\n> persists, and I'm getting no where. I think ultimately the solution is\n> to upgrade, but the devs may not be ready for an upgrade at this point.\n> Any help would be greatly appreciated. Thanks!\n\nYou don't want to decrease kernel.shmmax; you want to set it to the \nrequest size:\n\nsysctl -w kernel.shmmax=1146945536\n\nshmmax is the only thing you really need to play with.\n\n-Andy\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Tue, 14 Jul 2015 08:59:16 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
{
"msg_contents": "I tried that too - same result. I updated another box w/ the same issue to\n9.4.4, and all is well there. Thanks for your reply.\n\nOn Tue, Jul 14, 2015 at 8:59 AM, Andy Colson <[email protected]> wrote:\n\n> On 7/13/2015 7:08 PM, Ryan King - NOAA Affiliate wrote:\n>\n>> Apologies ahead of time for not knowing which group to send to, but I\n>> wanted to see if anyone has encountered and resolved this type of error.\n>> I'm setting up postgresql 9.2 streaming replication on RH and after\n>> copying the master data directory over to the slave, the psql service\n>> refuses start and gives the following errors.\n>>\n>>\n>>\n>> 2015-07-13 23:55:41.224 UTC FATAL: could not create shared memory\n>> segment: Invalid argument\n>> 2015-07-13 23:55:41.224 UTC DETAIL: Failed system call was\n>> shmget(key=5432001, size=1146945536, 03600).\n>> 2015-07-13 23:55:41.224 UTC HINT: This error usually means that\n>> PostgreSQL's request for a shared memory segment exceeded your kernel's\n>> SHMMAX parameter. You can either reduce the request size or reconfigure\n>> the kernel with larger SHMMAX. To reduce the request size (currently\n>> 1146945536 bytes), reduce PostgreSQL's shared memory usage, perhaps by\n>> reducing shared_buffers or max_connections.\n>> If the request size is already small, it's possible that it is\n>> less than your kernel's SHMMIN parameter, in which case raising the\n>> request size or reconfiguring SHMMIN is called for.\n>> The PostgreSQL documentation contains more information about\n>> shared memory configuration.\n>> 2015-07-13 23:56:21.344 UTC FATAL: could not create shared memory\n>> segment: Invalid argument\n>> 2015-07-13 23:56:21.344 UTC DETAIL: Failed system call was\n>> shmget(key=5432001, size=58302464, 03600).\n>> 2015-07-13 23:56:21.344 UTC HINT: This error usually means that\n>> PostgreSQL's request for a shared memory segment exceeded your kernel's\n>> SHMMAX parameter. 
You can either reduce the request size or reconfigure\n>> the kernel with larger SHMMAX.  To reduce the request size (currently\n>> 58302464 bytes), reduce PostgreSQL's shared memory usage, perhaps by\n>> reducing shared_buffers or max_connections.\n>>      If the request size is already small, it's possible that it is\n>> less than your kernel's SHMMIN parameter, in which case raising the\n>> request size or reconfiguring SHMMIN is called for.\n>>      The PostgreSQL documentation contains more information about\n>> shared memory configuration.\n>>\n>>\n>>\n>> I've set shared_buffer way down to next to nothing along with\n>> kernel.shmmax and kernel.shmall per some blogs. However, the same error\n>> persists, and I'm getting no where. I think ultimately the solution is\n>> to upgrade, but the devs may not be ready for an upgrade at this point.\n>> Any help would be greatly appreciated. Thanks!\n>>\n>\n> You don't want to decrease kernel.shmmax you want to set it to the request\n> size:\n>\n> sysctl -w kernel.shmmax=1146945536\n>\n> shmmax is the only thing you really need to play with.\n>\n> -Andy\n>\n",
"msg_date": "Wed, 15 Jul 2015 09:13:53 -0500",
"msg_from": "Ryan King - NOAA Affiliate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
{
"msg_contents": "> On Tue, Jul 14, 2015 at 8:59 AM, Andy Colson <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> On 7/13/2015 7:08 PM, Ryan King - NOAA Affiliate wrote:\n>\n> Apologies ahead of time for not knowing which group to send to,\n> but I\n> wanted to see if anyone has encountered and resolved this type\n> of error.\n> I'm setting up postgresql 9.2 streaming replication on RH and after\n> copying the master data directory over to the slave, the psql\n> service\n> refuses start and gives the following errors.\n>\n>\n>\n> 2015-07-13 23:55:41.224 UTC FATAL: could not create shared\n> memory\n> segment: Invalid argument\n> 2015-07-13 23:55:41.224 UTC DETAIL: Failed system call was\n> shmget(key=5432001, size=1146945536, 03600).\n> 2015-07-13 23:55:41.224 UTC HINT: This error usually means\n> that\n> PostgreSQL's request for a shared memory segment exceeded your\n> kernel's\n> SHMMAX parameter. You can either reduce the request size or\n> reconfigure\n> the kernel with larger SHMMAX. To reduce the request size\n> (currently\n> 1146945536 bytes), reduce PostgreSQL's shared memory usage,\n> perhaps by\n> reducing shared_buffers or max_connections.\n> If the request size is already small, it's possible\n> that it is\n> less than your kernel's SHMMIN parameter, in which case raising the\n> request size or reconfiguring SHMMIN is called for.\n> The PostgreSQL documentation contains more information\n> about\n> shared memory configuration.\n> 2015-07-13 23:56:21.344 UTC FATAL: could not create shared\n> memory\n> segment: Invalid argument\n> 2015-07-13 23:56:21.344 UTC DETAIL: Failed system call was\n> shmget(key=5432001, size=58302464, 03600).\n> 2015-07-13 23:56:21.344 UTC HINT: This error usually means\n> that\n> PostgreSQL's request for a shared memory segment exceeded your\n> kernel's\n> SHMMAX parameter. You can either reduce the request size or\n> reconfigure\n> the kernel with larger SHMMAX. 
To reduce the request size\n> (currently\n> 58302464 bytes), reduce PostgreSQL's shared memory usage, perhaps by\n> reducing shared_buffers or max_connections.\n> If the request size is already small, it's possible\n> that it is\n> less than your kernel's SHMMIN parameter, in which case raising the\n> request size or reconfiguring SHMMIN is called for.\n> The PostgreSQL documentation contains more information\n> about\n> shared memory configuration.\n>\n>\n>\n> I've set shared_buffer way down to next to nothing along with\n> kernel.shmmax and kernel.shmall per some blogs. However, the\n> same error\n> persists, and I'm getting no where. I think ultimately the\n> solution is\n> to upgrade, but the devs may not be ready for an upgrade at this\n> point.\n> Any help would be greatly appreciated. Thanks!\n>\n>\n> You don't want to decrease kernel.shmmax you want to set it to the\n> request size:\n>\n> sysctl -w kernel.shmmax=1146945536\n>\n> shmmax is the only thing you really need to play with.\n>\n> -Andy\n>\n>\n\nOn 7/15/2015 9:13 AM, Ryan King - NOAA Affiliate wrote:\n > I tried that too - same result. I updated another box w/ the same issue\n > to 9.4.4, and all is well there. Thanks for your reply.\n >\n\n\nAh, I assume then that something else is already using some shared memory.\n\nPG needs:\n> To reduce the request size (currently 58302464 bytes),\n\nThat much shared memory *free*. You can check current usage with: ipcs -m\n\nAdd what PG needs to what you are already using, and you should be good \nto go.\n\n-Andy\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 16 Jul 2015 15:22:20 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not create shared memory segment: Invalid\n argument"
}
] |
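The arithmetic behind the HINT message in the thread above can be sketched as a simple comparison. This is a hedged example: the request size is the 58302464-byte figure reported in the log, while the 32 MB SHMMAX is an assumed stand-in for an old kernel default, not a value reported in the thread.

```shell
# Compare PostgreSQL's shared memory request against the kernel limit.
# On a real host the current limit comes from: sysctl -n kernel.shmmax
request_bytes=58302464            # size from the shmget() failure in the log
shmmax=$((32 * 1024 * 1024))      # assumed current kernel.shmmax (32 MB)

if [ "$request_bytes" -gt "$shmmax" ]; then
    echo "SHMMAX too small: need $request_bytes, have $shmmax"
    # On a real system you would raise the limit to at least the request:
    #   sysctl -w kernel.shmmax=$request_bytes
    # and persist it in /etc/sysctl.conf so it survives a reboot.
else
    echo "SHMMAX is large enough"
fi
```

As Andy's follow-up notes, a too-small SHMMAX is only one failure mode: if other segments already consume shared memory (check with `ipcs -m`), the request can fail even when SHMMAX alone looks sufficient.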
[
{
"msg_contents": "What is your kernel SHMMAX? Usually somewhere under /etc/sysconfig. Depends on your distro. This is telling you that your kernel does not have sufficient resources.\r\n\r\n-------- Original message --------\r\nFrom: Ryan King - NOAA Affiliate <[email protected]>\r\nDate: 07/13/2015 7:10 PM (GMT-06:00)\r\nTo: [email protected], [email protected], [email protected]\r\nSubject: Re: [ADMIN] could not create shared memory segment: Invalid argument\r\n\r\nApologies ahead of time for not knowing which group to send to, but I wanted to see if anyone has encountered and resolved this type of error. I'm setting up postgresql 9.2 streaming replication on RH and after copying the master data directory over to the slave, the psql service refuses start and gives the following errors.\r\n\r\n\r\n\r\n 2015-07-13 23:55:41.224 UTC FATAL: could not create shared memory segment: Invalid argument\r\n 2015-07-13 23:55:41.224 UTC DETAIL: Failed system call was shmget(key=5432001, size=1146945536, 03600).\r\n 2015-07-13 23:55:41.224 UTC HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. 
To reduce the request size (currently 1146945536 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.\r\n            If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.\r\n            The PostgreSQL documentation contains more information about shared memory configuration.\r\n    2015-07-13 23:56:21.344 UTC FATAL:  could not create shared memory segment: Invalid argument\r\n    2015-07-13 23:56:21.344 UTC DETAIL:  Failed system call was shmget(key=5432001, size=58302464, 03600).\r\n    2015-07-13 23:56:21.344 UTC HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter.  You can either reduce the request size or reconfigure the kernel with larger SHMMAX.  To reduce the request size (currently 58302464 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.\r\n            If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.\r\n            The PostgreSQL documentation contains more information about shared memory configuration.\r\n\r\n\r\n\r\nI've set shared_buffer way down to next to nothing along with kernel.shmmax and kernel.shmall per some blogs. However, the same error persists, and I'm getting no where. I think ultimately the solution is to upgrade, but the devs may not be ready for an upgrade at this point. Any help would be greatly appreciated. Thanks!\r\n\r\n\r\nJournyx, Inc.\r\n7600 Burnet Road #300\r\nAustin, TX 78757\r\nwww.journyx.com\r\n\r\np 512.834.8888\r\nf 512-834-8858\r\n\r\nDo you receive our promotional emails? You can subscribe or unsubscribe to those emails at http://go.journyx.com/emailPreference/e/4932/714/",
"msg_date": "Tue, 14 Jul 2015 00:15:19 +0000",
"msg_from": "Scott Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
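Before touching anything, it helps to see what the kernel currently enforces. A purely diagnostic sketch for Linux (the values printed are whatever your system has, not recommendations):

```shell
# Inspect the SysV shared-memory limits the kernel is enforcing right now.
cat /proc/sys/kernel/shmmax        # max size of a single segment, in bytes
cat /proc/sys/kernel/shmall        # max total shared memory, in pages
getconf PAGE_SIZE                  # page size used to interpret shmall
ipcs -l 2>/dev/null || true        # fuller IPC limit summary, if util-linux is present
```

The shmget() size shown in the error (1146945536 bytes, roughly 1.1GB) must fit under shmmax, and shmall * PAGE_SIZE must cover the total of all segments.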
{
"msg_contents": "Hi Scott. Removing the other groups and only copying admin.\n\nHere are the current settings - I know very low, but I just kept going\ndownwards until there was no where else to go:\n\n\n# Controls the maximum shared segment size, in bytes\n#kernel.shmmax = 68719476736 - orginal\n#kernel.shmmax = 16833540096\nkernel.shmmax = 168\n\n# Controls the maximum number of shared memory segments, in pages\n#kernel.shmall = 4294967296 - orginal\nkernel.shmall = 4109751\n\nThanks\n\n\nRyan King\n\nInternet Dissemination Group, Kansas City\n\nShared Infrastructure Services Branch\n\nNational Weather Service\n\nContractor / Insight Global <https://www.insightglobal.net/> / Ace Info\nSolutions, Inc. <http://www.aceinfosolutions.com/>\n\nOn Mon, Jul 13, 2015 at 7:15 PM, Scott Whitney <[email protected]> wrote:\n\n> What is your kernel SHMMAX? Usually somewhere under /etc/sysconfig.\n> Depends on your distro. This is telling you that your kernel does not have\n> sufficient resources.\n>\n> -------- Original message --------\n> From: Ryan King - NOAA Affiliate <[email protected]>\n> Date: 07/13/2015 7:10 PM (GMT-06:00)\n> To: [email protected], [email protected],\n> [email protected]\n> Subject: Re: [ADMIN] could not create shared memory segment: Invalid\n> argument\n>\n> Apologies ahead of time for not knowing which group to send to, but\n> I wanted to see if anyone has encountered and resolved this type of error.\n> I'm setting up postgresql 9.2 streaming replication on RH and after copying\n> the master data directory over to the slave, the psql service refuses start\n> and gives the following errors.\n>\n>\n>\n> 2015-07-13 23:55:41.224 UTC FATAL: could not create shared memory\n> segment: Invalid argument\n> 2015-07-13 23:55:41.224 UTC DETAIL: Failed system call was\n> shmget(key=5432001, size=1146945536, 03600).\n> 2015-07-13 23:55:41.224 UTC HINT: This error usually means that\n> PostgreSQL's request for a shared memory segment exceeded your kernel's\n> 
SHMMAX parameter. You can either reduce the request size or reconfigure\n> the kernel with larger SHMMAX. To reduce the request size (currently\n> 1146945536 bytes), reduce PostgreSQL's shared memory usage, perhaps by\n> reducing shared_buffers or max_connections.\n> If the request size is already small, it's possible that it is\n> less than your kernel's SHMMIN parameter, in which case raising the request\n> size or reconfiguring SHMMIN is called for.\n> The PostgreSQL documentation contains more information about\n> shared memory configuration.\n> 2015-07-13 23:56:21.344 UTC FATAL: could not create shared memory\n> segment: Invalid argument\n> 2015-07-13 23:56:21.344 UTC DETAIL: Failed system call was\n> shmget(key=5432001, size=58302464, 03600).\n> 2015-07-13 23:56:21.344 UTC HINT: This error usually means that\n> PostgreSQL's request for a shared memory segment exceeded your kernel's\n> SHMMAX parameter. You can either reduce the request size or reconfigure\n> the kernel with larger SHMMAX. To reduce the request size (currently\n> 58302464 bytes), reduce PostgreSQL's shared memory usage, perhaps by\n> reducing shared_buffers or max_connections.\n> If the request size is already small, it's possible that it is\n> less than your kernel's SHMMIN parameter, in which case raising the request\n> size or reconfiguring SHMMIN is called for.\n> The PostgreSQL documentation contains more information about\n> shared memory configuration.\n>\n>\n>\n> I've set shared_buffer way down to next to nothing along with\n> kernel.shmmax and kernel.shmall per some blogs. However, the same error\n> persists, and I'm getting no where. I think ultimately the solution is to\n> upgrade, but the devs may not be ready for an upgrade at this point. Any\n> help would be greatly appreciated. Thanks!\n>\n>\n> Journyx, Inc.\n> 7600 Burnet Road #300\n> Austin, TX 78757\n> www.journyx.com\n>\n> p 512.834.8888\n> f 512-834-8858\n>\n> Do you receive our promotional emails? 
You can subscribe or unsubscribe\n> to those emails at http://go.journyx.com/emailPreference/e/4932/714/\n>",
"msg_date": "Mon, 13 Jul 2015 19:26:53 -0500",
"msg_from": "Ryan King - NOAA Affiliate <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
{
"msg_contents": "On Jul 13, 2015, at 6:26 PM, Ryan King - NOAA Affiliate <[email protected]> wrote:\n> \n> # Controls the maximum shared segment size, in bytes\n> #kernel.shmmax = 68719476736 - orginal\n> #kernel.shmmax = 16833540096\n> kernel.shmmax = 168\n> \n> # Controls the maximum number of shared memory segments, in pages\n> #kernel.shmall = 4294967296 - orginal\n> kernel.shmall = 4109751\n\nWait, you need to be going *UP* on these. They need to be >= postgres's shared buffers + some minor other stuff.\n\n-- \nScott Ribe\[email protected]\nhttp://www.elevated-dev.com/\nhttps://www.linkedin.com/in/scottribe/\n(303) 722-0567 voice\n\n\n\n\n\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Mon, 13 Jul 2015 18:43:18 -0600",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
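To make "going *UP*" concrete, here is a back-of-envelope sizing sketch. The 15GB shared_buffers figure is the one mentioned later in this thread, and the 20% allowance for PostgreSQL's other shared structures is a rough assumption, not an exact formula:

```shell
# Sketch only: sizing kernel.shmmax / kernel.shmall for PostgreSQL 9.2.
# shared_buffers = 15GB (value from this thread); 20% overhead is an assumption.
SHARED_BUFFERS_BYTES=$((15 * 1024 * 1024 * 1024))
PAGE_SIZE=$(getconf PAGE_SIZE)                      # typically 4096 on x86 Linux
SHMMAX=$((SHARED_BUFFERS_BYTES + SHARED_BUFFERS_BYTES / 5))
SHMALL=$((SHMMAX / PAGE_SIZE))                      # shmall is counted in pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
```

With those two lines placed in /etc/sysctl.conf, `sysctl -p` applies them without a reboot.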
{
"msg_contents": "Yeah thanks I see I was going the wrong way...thanks.\nSo shared_buffers is 15gb and max_conn = 1000.\n\nHowever, after increasing, same issue.\n\n# Controls the maximum shared segment size, in bytes\n#kernel.shmmax = 68719476736\nkernel.shmmax = 268719476736\n\n# Controls the maximum number of shared memory segments, in pages\n#kernel.shmall = 4294967296\nkernel.shmall = 4294967296\n\nThis should be plenty...\n\nOn Mon, Jul 13, 2015 at 7:43 PM, Scott Ribe <[email protected]>\nwrote:\n\n> On Jul 13, 2015, at 6:26 PM, Ryan King - NOAA Affiliate <\n> [email protected]> wrote:\n> >\n> > # Controls the maximum shared segment size, in bytes\n> > #kernel.shmmax = 68719476736 - orginal\n> > #kernel.shmmax = 16833540096\n> > kernel.shmmax = 168\n> >\n> > # Controls the maximum number of shared memory segments, in pages\n> > #kernel.shmall = 4294967296 - orginal\n> > kernel.shmall = 4109751\n>\n> Wait, you need to be going *UP* on these. They need to be >= postgres's\n> shared buffers + some minor other stuff.\n>\n> --\n> Scott Ribe\n> [email protected]\n> http://www.elevated-dev.com/\n> https://www.linkedin.com/in/scottribe/\n> (303) 722-0567 voice",
"msg_date": "Mon, 13 Jul 2015 19:49:59 -0500",
"msg_from": "Ryan King - NOAA Affiliate <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
},
{
"msg_contents": "On Jul 13, 2015, at 6:49 PM, Ryan King - NOAA Affiliate <[email protected]> wrote:\n> \n> Yeah thanks I see I was going the wrong way...thanks. \n> So shared_buffers is 15gb and max_conn = 1000. \n\nOK, you haven't shared your OS or hardware setup, but a few general points:\n\n- 15GB is large for PG shared buffers; it usually doesn't help to go that high; remember, shared buffers is a kind of working cache, not the whole cache, PG depends on the OS caching of recently-used files;\n\n- Your shmall is only 16GB. PG may not be the only user of shared memory. It doesn't make sense to have shmall * page size < shmmax.\n\n- Also, does your platform have some absolute upper limit on shmmax? 250GB seems awfully high...\n\n- With 1,000 clients, you'd likely benefit from connection pooling; after you get things up, you should consider pgbouncer.\n\n-- \nScott Ribe\n[email protected]\nhttp://www.elevated-dev.com/\nhttps://www.linkedin.com/in/scottribe/\n(303) 722-0567 voice",
"msg_date": "Mon, 13 Jul 2015 19:15:12 -0600",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not create shared memory segment: Invalid argument"
}
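The shmall-vs-shmmax mismatch called out above (shmall is counted in pages, and shmall * page size should not end up smaller than shmmax) can be checked mechanically. A sketch, assuming Linux's sysctl interface:

```shell
# Sketch: verify kernel.shmall (in pages) actually covers kernel.shmmax (in bytes).
PAGE_SIZE=$(getconf PAGE_SIZE)
SHMMAX=$(sysctl -n kernel.shmmax)
SHMALL=$(sysctl -n kernel.shmall)
if [ $((SHMALL * PAGE_SIZE)) -lt "$SHMMAX" ]; then
    echo "inconsistent: shmall caps total shared memory at $((SHMALL * PAGE_SIZE)) bytes, below shmmax=$SHMMAX"
fi
```

With the settings posted earlier in the thread (shmall = 4109751 pages on a 4096-byte-page system, i.e. about 16GB total allowed), the check would flag a shmmax of 268719476736.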
] |
[
{
"msg_contents": "First off I apologize if this question has been beaten to death. I've\nlooked around for a simple answer and could not find one.\n\nGiven a database that will not have its PKEY or indices modified, is it\ngenerally faster to INSERT or UPDATE data? And if there is a performance\ndifference, is it substantial?\n\nI have a situation where I can easily do one or the other to the same\neffect. For example, I have a journaling schema with a limited number of\n\"states\" for an \"entry\". Currently each state is its own table so I just\ninsert them as they occur. But I could easily have a single \"entry\" table\nwhere the row is updated with column information for states (after the\nentry's initial insertion).\n\nNot a big deal, but since it's so easy for me to take either approach I was\nwondering if one was more efficient (for a large DB) than another.\n\nThanks!",
"msg_date": "Wed, 15 Jul 2015 09:16:21 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 12:16 PM, Robert DiFalco <[email protected]>\nwrote:\n\n> First off I apologize if this is question has been beaten to death. I've\n> looked around for a simple answer and could not find one.\n>\n> Given a database that will not have it's PKEY or indices modified, is it\n> generally faster to INSERT or UPDATE data. And if there is a performance\n> difference is it substantial?\n>\n> I have a situation where I can easily do one or the other to the same\n> effect. For example, I have a journaling schema with a limited number of\n> \"states\" for an \"entry\". Currently each state is it's own table so I just\n> insert them as they occur. But I could easily have a single \"entry\" table\n> where the row is updated with column information for states (after the\n> entry's initial insertion).\n>\n> Not a big deal but since it's so easy for me to take either approach I was\n> wondering if one was more efficient (for a large DB) than another.\n>\n>\nThere is the HOT (heap-only tuple) optimization that can occur if only\nnon-indexed data is altered. I do not recall the specifics.\n\nDave",
"msg_date": "Wed, 15 Jul 2015 13:10:46 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 12:16 PM, Robert DiFalco <[email protected]>\nwrote:\n\n> First off I apologize if this is question has been beaten to death. I've\n> looked around for a simple answer and could not find one.\n>\n> Given a database that will not have it's PKEY or indices modified, is it\n> generally faster to INSERT or UPDATE data. And if there is a performance\n> difference is it substantial?\n>\n> I have a situation where I can easily do one or the other to the same\n> effect. For example, I have a journaling schema with a limited number of\n> \"states\" for an \"entry\". Currently each state is it's own table so I just\n> insert them as they occur. But I could easily have a single \"entry\" table\n> where the row is updated with column information for states (after the\n> entry's initial insertion).\n>\n> Not a big deal but since it's so easy for me to take either approach I was\n> wondering if one was more efficient (for a large DB) than another.\n>\n> Thanks\n>\n\nIf you think of an update as a delete-insert operation (glossing over the\nfine points of what has to be done for ACID), it seems pretty clear that an\nupdate involves more work than an insert. Measuring that impact on\nperformance is probably a bit more challenging, because it's going to be\ndependent on the specific table and the contents of the row, among other\nthings.\n--\nMike Nolan\[email protected]",
"msg_date": "Wed, 15 Jul 2015 13:15:15 -0400",
"msg_from": "Michael Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wednesday, July 15, 2015, Robert DiFalco <[email protected]>\nwrote:\n\n> First off I apologize if this is question has been beaten to death. I've\n> looked around for a simple answer and could not find one.\n>\n> Given a database that will not have it's PKEY or indices modified, is it\n> generally faster to INSERT or UPDATE data. And if there is a performance\n> difference is it substantial?\n>\n\nThis seems odd. If you have an option to update but choose to insert what\nbecomes of the other record?",
"msg_date": "Wed, 15 Jul 2015 13:33:22 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 10:33 AM, David G. Johnston <\[email protected]> wrote:\n\n> On Wednesday, July 15, 2015, Robert DiFalco <[email protected]>\n> wrote:\n>\n>> First off I apologize if this is question has been beaten to death. I've\n>> looked around for a simple answer and could not find one.\n>>\n>> Given a database that will not have it's PKEY or indices modified, is it\n>> generally faster to INSERT or UPDATE data. And if there is a performance\n>> difference is it substantial?\n>>\n>\n> This seems odd. If you have an option to update but choose to insert what\n> becomes of the other record?\n>\n\n\nConsider the two pseudo-schemas, I'm just making this up for example\npurposes:\n\nSCHEMA A\n=========\nmeal(id SEQUENCE,user_id, started DEFAULT NOW())\nmeal_prepared(ref_meal_id, prepared DEFAULT NOW())\nmeal_abandoned(ref_meal_id, abandoned ...)\nmeal_consumed(ref_meal_id, consumed ...)\netc.\n\nThen in response to different meal events you always have an insert.\n\naMealId = INSERT INTO meal(user_id) VALUES (aUserId);\n\nWhen preparation starts:\n\nINSERT INTO meal_prepared(ref_meal_id) VALUES (aMealId);\n\nAnd so on for each event.\n\nCompare that to this:\n\nSCHEMA B\n=========\nmeal_event(id, started, prepared, abandoned, consumed, ...)\n\nThe start of the meal is an INSERT:\n\naMealId = INSERT INTO meal_event(user_id, started) VALUES (aUserId, NOW());\n\nWhen preparation starts:\n\nUPDATE meal_event SET prepared = NOW() WHERE id = aMealId;\n\nAnd so on.\n\nBasically the same data, in one case you always do inserts and add new\ntables for new events. In the other case you only insert once and then\nupdate for each state, then you add columns if you have new states.\n\nAs I said this is just an example. But in SCHEMA A you have only inserts,\nlots of tables and in SCHEMA B you have a lot of updates and a lot of\npossibly NULL columns if certain events don't occur.\n\nIs that more clear?\n\nR.",
"msg_date": "Wed, 15 Jul 2015 10:56:26 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 1:56 PM, Robert DiFalco <[email protected]>\nwrote:\n\n>\n>\n> On Wed, Jul 15, 2015 at 10:33 AM, David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Wednesday, July 15, 2015, Robert DiFalco <[email protected]>\n>> wrote:\n>>\n>>> First off I apologize if this is question has been beaten to death. I've\n>>> looked around for a simple answer and could not find one.\n>>>\n>>> Given a database that will not have it's PKEY or indices modified, is it\n>>> generally faster to INSERT or UPDATE data. And if there is a performance\n>>> difference is it substantial?\n>>>\n>>\n>> This seems odd. If you have an option to update but choose to insert\n>> what becomes of the other record?\n>>\n>\n>\n> Consider the two pseudo-schemas, I'm just making this up for example\n> purposes:\n>\n> SCHEMA A\n> =========\n> meal(id SEQUENCE,user_id, started DEFAULT NOW())\n> meal_prepared(ref_meal_id, prepared DEFAULT NOW())\n> meal_abandoned(ref_meal_id, abandoned ...)\n> meal_consumed(ref_meal_id, consumed ...)\n> etc.\n>\n> Then in response to different meal events you always have an insert.\n>\n> aMealId = INSERT INTO meal(user_id) VALUES (aUserId);\n>\n> When preparation starts:\n>\n> INSERT INTO meal_prepared(ref_meal_id) VALUES (aMealId);\n>\n> And so on for each event.\n>\n> Compare that to this:\n>\n> SCHEMA B\n> =========\n> meal_event(id, started, prepared, abandoned, consumed, ...)\n>\n> The start of the meal is an INSERT:\n>\n> aMealId = INSERT INTO meal_event(user_id, started) VALUES (aUserId, NOW());\n>\n> When preparation starts:\n>\n> UPDATE meal_event SET prepared = NOW() WHERE id = aMealId;\n>\n> And so on.\n>\n> Basically the same data, in one case you always do inserts and add new\n> tables for new events. In the other case you only insert once and then\n> update for each state, then you add columns if you have new states.\n>\n> As I said this is just an example. 
But in SCHEMA A you have only inserts,\n> lots of tables and in SCHEMA B you have a lot of updates and a lot of\n> possibly NULL columns if certain events don't occur.\n>\n> Is that more clear?\n>\n>\nYes, you are trying to choose between a bunch of one-to-one (optional)\nrelationships versus adding additional columns to a table all of which can\nbe null.\n\nI'd argue that neither option is \"normal\" (in the DB normalization sense).\n\nCREATE TABLE meal (meal_id bigserial)\nCREATE TABLE meal_event_type (meal_event_id bigserial)\nCREATE TABLE meal_event (meal_id bigint, meal_event_id bigint, occurred_at\ntimestamptz)\n\nSo now the decision is one of how to denormalize; materialized views are\none of two ways to do so. The specific solution would depend in part on the\nfinal application queries that you need to write.\n\nIf you do want to model the de-normalized form, which I would likely be\ntempted to do given a fixed set of \"events\" that do not require additional\nrelated attributes, the approach would be to place the few event timestamps\non the main table and UPDATE them to non-null.\n\nIn the normal form you will likely find partial indexes to be quite useful.\n\nDavid J.",
"msg_date": "Wed, 15 Jul 2015 14:15:14 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 11:15 AM, David G. Johnston <\[email protected]> wrote:\n\n>\n> Yes, you are trying to choose between a bunch of one-to-one (optional)\n> relationships versus adding additional columns to a table all of which can\n> be null.\n>\n> I'd argue that neither option is \"normal\" (in the DB normalization sense).\n>\n> CREATE TABLE meal (meal_id bigserial)\n> CREATE TABLE meal_event_type (meal_event_id bigserial)\n> CREATE TABLE meal_event (meal_id bigint, meal_event_id bigint, occurred_at\n> timestamptz)\n>\n> So now the decision is one of how to denormalize. materialzed views and\n> two ways to do so. The specific solution would depend in part on the final\n> application queries that you need to write.\n>\n> If you do want to model the de-normalized form, which I would likely be\n> tempted to do given a fixed set of \"events\" that do not require additional\n> related attributes, would be to place the few event timestamps on the main\n> table and UPDATE them to non-null.\n>\n> In the normal form you will likely find partial indexes to be quite useful.\n>\n> David J.\n> \n>\n>\nThanks David, my example was a big simplification, but I appreciate your\nguidance. The different event types have differing amounts of related data.\nQuery speed on this schema is not important, it's really the write speed\nthat matters. So I was just wondering given the INSERT or UPDATE approach\n(with no indexed data being changed) if one is likely to be substantially\nfaster than the other.\n",
"msg_date": "Wed, 15 Jul 2015 12:16:43 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco <[email protected]>\nwrote:\n\n> The different event types have differing amounts of related data.\n>\n\nOn this basis alone I would select the multiple-table version as my\nbaseline and only consider something different if the performance of this\nwas insufficient and I could prove that an alternative arrangement was more\nperformant.\n\nA single optional date with meta-data embedded in the column name\nis usually workable but if you then have a bunch of other columns with\nnames like:\n\npreparation_date, preparation_col1, preparation_col2, consumed_col1,\nconsumed_col2, consumed_date\n\nI would find that to be undesirable.\n\nYou may be able to put Table Inheritance to good use here...\n\nI do not know (but doubt) if HOT optimization works when going from NULL to\nnon-NULL since the former is stored in a bitmap while the latter occupies\nnormal relation space and thus the update would likely end up writing an\nentirely new record upon each event category recording.\n\nDavid J.\n",
"msg_date": "Wed, 15 Jul 2015 15:32:18 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 12:32 PM, David G. Johnston <\[email protected]> wrote:\n\n>\n> You may be able to put Table Inheritance to good use here...\n>\n> I do not know (but doubt) if HOT optimization works when going from NULL\n> to non-NULL since the former is stored in a bitmap while the later occupies\n> normal relation space and thus the update would likely end up writing an\n> entirely new record upon each event category recording.\n>\n> David J.\n>\n>\n>\nThanks!\n",
"msg_date": "Wed, 15 Jul 2015 12:49:31 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco <[email protected]>\nwrote:\n\n>\n> Thanks David, my example was a big simplification, but I appreciate your\n> guidance. The different event types have differing amounts of related data.\n> Query speed on this schema is not important, it's really the write speed\n> that matters. So I was just wondering given the INSERT or UPDATE approach\n> (with no indexed data being changed) if one is likely to be substantially\n> faster than the other.\n>\n>\nAs I understand how ACID compliance is done, updating a record will require\nupdating any indexes for that record, even if the index keys are not\nchanging. That's because any pending transactions still need to be able to\nfind the 'old' data, while new transactions need to be able to find the\n'new' data. And ACID also means an update is essentially a\ndelete-and-insert.\n--\nMike Nolan\n",
"msg_date": "Wed, 15 Jul 2015 16:53:10 -0400",
"msg_from": "Michael Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Wed, Jul 15, 2015 at 4:53 PM, Michael Nolan <[email protected]> wrote:\n\n> On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco <[email protected]>\n> wrote:\n>\n>>\n>> Thanks David, my example was a big simplification, but I appreciate your\n>> guidance. The different event types have differing amounts of related data.\n>> Query speed on this schema is not important, it's really the write speed\n>> that matters. So I was just wondering given the INSERT or UPDATE approach\n>> (with no indexed data being changed) if one is likely to be substantially\n>> faster than the other.\n>>\n>>\n> As I understand how ACID compliance is done, updating a record will\n> require updating any indexes for that record, even if the index keys are\n> not changing. That's because any pending transactions still need to be\n> able to find the 'old' data, while new transactions need to be able to find\n> the 'new' data. And ACID also means an update is essentially a\n> delete-and-insert.\n>\n\nI might be a bit pedantic here but what you describe is a byproduct of the\nspecific implementation that PostgreSQL uses to affect Consistency (the C\nin ACID) as opposed to a foregone outcome in being ACID compliant.\n\nhttp://www.postgresql.org/docs/9.4/static/mvcc-intro.html\n\nI'm out of my comfort zone here but the HOT optimization is designed to\nleverage the fact that an update to a row that does not affect indexed\nvalues is able to leave the index alone and instead during index lookup the\nindex points to the old tuple, notices that there is a chain present, and\nwalks that chain to find the currently active tuple.\n\nIn short, if the only index is a PK an update of the row can avoid touching\nthat index.\n\nI mentioned that going from NULL to Not NULL may disrupt this but I'm\nthinking I may have mis-spoken.\n\nAlso, with separate tables the amount of data to write is going to be less\nbecause you'd have fewer columns on the affected tables.\n\nWhile an update is a delete+insert, a delete is mostly just a bit-flip\naction - at least mid-transaction. Depending on volume, though, the\nperiodic impact of vacuuming may want to be taken into consideration.\n\nDavid J.\n",
"msg_date": "Wed, 15 Jul 2015 17:14:42 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
},
{
"msg_contents": "On Jul 15, 2015 at 11:16 PM, \"David G. Johnston\" <[email protected]>\nwrote:\n>\n> On Wed, Jul 15, 2015 at 4:53 PM, Michael Nolan <[email protected]> wrote:\n>>\n>> On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco <[email protected]>\nwrote:\n>>>\n>>>\n>>> Thanks David, my example was a big simplification, but I appreciate\nyour guidance. The different event types have differing amounts of related\ndata. Query speed on this schema is not important, it's really the write\nspeed that matters. So I was just wondering given the INSERT or UPDATE\napproach (with no indexed data being changed) if one is likely to be\nsubstantially faster than the other.\n>>>\n>>\n>> As I understand how ACID compliance is done, updating a record will\nrequire updating any indexes for that record, even if the index keys are\nnot changing. That's because any pending transactions still need to be\nable to find the 'old' data, while new transactions need to be able to find\nthe 'new' data. And ACID also means an update is essentially a\ndelete-and-insert.\n>\n>\n> I might be a bit pedantic here but what you describe is a byproduct of\nthe specific implementation that PostgreSQL uses to affect Consistency\n(the C in ACID) as opposed to a forgone outcome in being ACID compliant.\n>\n> http://www.postgresql.org/docs/9.4/static/mvcc-intro.html\n>\n> I'm out of my comfort zone here but the HOT optimization is designed to\nleverage the fact that an update to a row that does not affect indexed\nvalues is able to leave the index alone and instead during index lookup the\nindex points to the old tuple, notices that there is a chain present, and\nwalks that chain to find the currently active tuple.\n>\n\nThat's true as long as the old and new tuples are stored in the same block.\n\n> In short, if the only index is a PK an update of the row can avoid\ntouching that index.\n>\n> I mentioned that going from NULL to Not NULL may disrupt this but I'm\nthinking I may have mis-spoken.\n>\n> Also, with separate tables the amount of data to write is going to be\nless because you'd have fewer columns on the affected tables.\n>\n> While an update is a delete+insert a delete is mostly just a bit-flip\naction - at least mid-transaction. Depending on volume, though, the\nperiodic impact of vaccuming may want to be taken into consideration.\n\n-- \nGuillaume\n",
"msg_date": "Thu, 16 Jul 2015 07:13:40 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert vs Update"
}
] |
[
{
"msg_contents": "hi everyone,\n\nI recently updated a database to a machine with RHEL7, but I see that the \nperformance is better if the hyperthreading technology is deactivated \nand only 32 cores are used.\n\nIs it normal that the machine performance is better with 32 cores than \nwith 64 cores?\n\nDB: postgresql 9.3.5\nMachine: Dell PE R820\nProcessor: 4x Intel(R) Xeon(R) CPU E5-4620 v2 @ 2.60GHz eight-core\nRAM: 128GB\nStorage: SSD SAN\n\nThanks for your help\n\n-- \nSincerely,\n\n\nJEISON BEDOYA DELGADO\nServer and communications admin\nAUDIFARMA S.A.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Jul 2015 21:59:18 -0500",
"msg_from": "Jeison Bedoya Delgado <[email protected]>",
"msg_from_op": true,
"msg_subject": "hyperthreading low performance"
},
{
"msg_contents": "On 21 July 2015 at 14:59, Jeison Bedoya Delgado <[email protected]>\nwrote:\n\n> hi everyone,\n>\n> Recently update a database to machine with RHEL7, but i see that the\n> performance is betther if the hyperthreading tecnology is deactivated and\n> use only 32 cores.\n>\n> is normal that the machine performance is better with 32 cores that 64\n> cores?.\n>\n>\nYou might be interested in\nhttp://www.postgresql.org/message-id/[email protected]\n\nRegards\n\nDavid Rowley\n\n--\n David Rowley http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Tue, 21 Jul 2015 20:04:32 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyperthreading low performance"
},
{
"msg_contents": "On 21/07/15 20:04, David Rowley wrote:\n> On 21 July 2015 at 14:59, Jeison Bedoya Delgado\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> hi everyone,\n>\n> Recently update a database to machine with RHEL7, but i see that the\n> performance is betther if the hyperthreading tecnology is\n> deactivated and use only 32 cores.\n>\n> is normal that the machine performance is better with 32 cores that\n> 64 cores?.\n>\n>\n> You might be interested in\n> http://www.postgresql.org/message-id/[email protected]\n>\n\nHowever I do wonder if we have been misinterpreting these tests. We tend \nto assume the position of \"see hyperthreading is bad, switch it off\".\n\nThe linked post under the one above:\n\nhttp://www.postgresql.org/message-id/[email protected]\n\nshows that 60 core (no hyperthreading) performance is also pessimal, \nleading me to conclude that *perhaps* it is simply the number of cores \nthat is the problem - particularly as benchmark results for single \nsocket cpus clearly show hyperthreading helps performance...\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Jul 2015 21:07:46 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyperthreading low performance"
},
{
"msg_contents": "Is the problem also in PostgreSQL 9.4.x?\nI'm going to buy a production server with 4 sockets E7-4850 12 cores\nso 12*4 = 48 cores (and 96 threads using HT).\n\nWhat do you suggest?\nUsing HT or not?\n\nBR\nDomenico\n\n2015-07-21 11:07 GMT+02:00 Mark Kirkwood <[email protected]>:\n> On 21/07/15 20:04, David Rowley wrote:\n>>\n>> On 21 July 2015 at 14:59, Jeison Bedoya Delgado\n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>> hi everyone,\n>>\n>> Recently update a database to machine with RHEL7, but i see that the\n>> performance is betther if the hyperthreading tecnology is\n>> deactivated and use only 32 cores.\n>>\n>> is normal that the machine performance is better with 32 cores that\n>> 64 cores?.\n>>\n>>\n>> You might be interested in\n>> http://www.postgresql.org/message-id/[email protected]\n>>\n>\n> However I do wonder if we have been misinterpreting these tests. We tend to\n> assume the position of \"see hyperthreading is bad, switch it off\".\n>\n> The linked post under the one above:\n>\n> http://www.postgresql.org/message-id/[email protected]\n>\n> shows that 60 core (no hyperthreading) performance is also pessimal, leading\n> me to conclude that *perhaps* it is simply the number of cores that is the\n> problem - particularly as benchmark results for single socket cpus clearly\n> show hyperthreading helps performance...\n>\n> Regards\n>\n> Mark\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n>\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 Jul 2015 13:37:29 +0200",
"msg_from": "domenico febbo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyperthreading low performance"
},
{
"msg_contents": "On 23 Jul 2015, at 13:37, domenico febbo <[email protected]> wrote:\n\n> is the problem also in PostgreSQL 9.4.x?\n> I'm going to buy a production's server with 4 sockets E7-4850 12 cores\n> so 12*4 = 48 cores (and 96 threads using HT).\n> \n> What do you suggest?\n> Using or not HT?\n> \n> BR\n\n\n1. If you have enough money to buy a 4-socket E7, then you certainly have enough money to pay someone (maybe yourself) for the 30 minutes of work needed to run a benchmark on the machine with and without hyperthreading and compare them. I mean literally, run pgbench, reboot, turn on/off HT, run pgbench. Then you'll know what works best for your configuration. Don't be lazy about this, it's as important as the money you're throwing at the hardware. \n\n2. Keep in mind most of the numbers people throw around are pgbench numbers. Pgbench is representative of some workloads (e.g. bank transactions) and less representative of others (mixed query types, GIS work, scientific work, heavy IO, interaction with other applications/libraries...). Are you using the server for other tasks besides postgres, for example? I find I get better performance with HT when I'm using postgres with GDAL on the same server. Probably because the HT cores are being asked to do two different types of things, which is where HT shines. \n\n3. IMPORTANT : it doesn't matter how pgbench performs for other people on other computers and what they think is best.\nWhat matters is 'how does YOUR normal workload perform on YOUR computer'.\nThe best way to do that is to put together a simple simulated workload that looks like your intended use of the system.\nLeave it running.\nIf it's for an important system, look at all aspects of performance: transactions per second, I/O stalls, latency, ... \nIf you can't do that, pgbench can be used instead.\n\n====\n\nFinally. A serious point. 
The lack of diversity in postgres benchmarking is quite amazing, to my mind, and is probably at the root of the eternal disagreements about optimal settings as well as the existence of long-standing hidden scaling/performance bugs (or weird kernel interactions). pgbench is useful, but really... let's make some more tools (or share links, if you know of them). \n\nSince contribution >>> gripe, here is my own (first, tiny) contribution, which I mentioned earlier in the month: https://github.com/gbb/ppppt. \n\nAs a point of contrast, take a look at how computer game players measure the performance of graphics cards and disk drives in their product reviews. \nhttp://www.guru3d.com/articles-pages/radeon-r9-290-review-benchmarks,32.html\n\n32 pages of data and discussion to test the performance of a single model (among thousands of possibilities and millions of configurations)! And this article is ordinary, run of the mill stuff in the gaming scene, literally the first link I hit in Google. Has anyone ever in the history of these lists ever posted so much diverse and structured evidence in support of their beliefs about a postgres setting?\n\nGaming reviewers use a multitude of real-world games, synthetic benchmarks, theoretical estimates... as someone with a foot in both worlds it is quite amusing to see that game-players address benchmarking and optimisation of performance far more seriously, scientifically (and successfully) than most professional database admins. \n\nMany graphics card reviews care very much about reproducibility/repeated results, surrounding test conditions (very detailed information about other components used in the test, software versioning), warmup effects, benchmark quirks, performance at different scales/settings, and so on... writing 'I saw some post where someone said they got a better result from XYZ' would certainly not be good enough in that community. \n\nGraeme Bell. 
\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 Jul 2015 12:41:11 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyperthreading low performance (and some discussion\n about benchmarking)"
},
{
"msg_contents": "On 23/07/15 23:37, domenico febbo wrote:\n> is the problem also in PostgreSQL 9.4.x?\n> I'm going to buy a production's server with 4 sockets E7-4850 12 cores\n> so 12*4 = 48 cores (and 96 threads using HT).\n>\n> What do you suggest?\n> Using or not HT?\n>\n\nFrom my experience 9.4 is considerably better (we are using it on the \n60 core box mentioned previously).\n\n48 cores should be fine; enabling HT and asking Postgres to effectively \nhandle 96 could provoke issues. However it is reasonably easy to test - \ntune shared_buffers and checkpoint segments sensibly and run pgbench for \na steadily increasing number of clients. With 48 cores you should \n(hopefully) see a tps curve that increases and then gently flattens off \nsomewhere. If 96 cores are \"too many\" then you will see a tps curve that \ninitially increases then sharply drops.\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 25 Jul 2015 16:10:30 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyperthreading low performance"
}
] |
[
{
"msg_contents": "Hi everyone,\n I host a PostgreSQL server on Ubuntu 12.04 and I am facing server \nload spikes (if I run top, it goes up to 25-30 on a 4-core system)...\nIn some cases, I have to restart the postgresql service because users call us \ncomplaining of the slowness, but in some cases I can leave things on \ntheir way and I see that after a bunch of minutes (about 5-10) the \nsituation drops back to normal (0.50-2 load).\nThe problem is, as in most cases, the I/O, but I need a small hand \nto know some methods or tools that can help me to investigate who or \nwhat is causing me these spikes.\n\nAny help would be appreciated.\nBest regards,\nMoreno.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Jul 2015 14:50:48 +0200",
"msg_from": "Moreno Andreo <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to find the culprit in server load spikes?"
},
{
"msg_contents": "Moreno Andreo <[email protected]> wrote:\n\n> I host a Postgresql server on Ubuntu 12.04 and I am facing server\n> load spikes (if I run top, it goes up to 25-30 on a 4-core system)...\n> In some cases, I have to restart potgresql service because users call us\n> complaining of the slowness, but in some cases I can leave things on\n> their way and I see that after a bunch of minutes (about 5-10) the\n> situations drops to the normality (0.50-2 load).\n>\n> The problem is, as in the most cases, the I/O,\n\nIf you have confirmed that there is an I/O glut during these\nepisodes, it is probably that a cascade of dirty cache pages\ncaused the OS dirty pages to hit the vm.dirty_ratio percentage. If\nyou were seeing a high number for system CPU time the below would\nprobably not help.\n\nMake sure you are using a storage system that has a persistent\ncache configured for write-back (rather than write-through).\nReduce the OS vm.dirty_background_bytes setting to less than the\nsize of the persistent write cache. Make sure that vm.dirty_ratio\nis at least 20, possibly higher. Configure the PostgreSQL\nbackground writer to be more aggressive. If those don't do it,\nreduce the size of shared_buffers.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Jul 2015 16:40:55 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to find the culprit in server load spikes?"
},
{
"msg_contents": "On Wed, Jul 22, 2015 at 5:50 AM, Moreno Andreo <[email protected]>\nwrote:\n\n> Hi everyone,\n> I host a Postgresql server on Ubuntu 12.04 and I am facing server load\n> spikes (if I run top, it goes up to 25-30 on a 4-core system)...\n> In some cases, I have to restart potgresql service because users call us\n> complaining of the slowness, but in some cases I can leave things on their\n> way and I see that after a bunch of minutes (about 5-10) the situations\n> drops to the normality (0.50-2 load).\n> The problem is, as in the most cases, the I/O, but I need a small hand to\n> know some methods or tools that can help me to investigate who or what is\n> causing me these spikes.\n>\n\nI always run systems starting out with logging cranked up to at least these\nsettings:\n\nlog_checkpoints = on\nlog_lock_waits = on\nshared_preload_libraries = 'pg_stat_statements,auto_explain'\nauto_explain.log_min_duration = '10s'\ntrack_io_timing = on\nlog_autovacuum_min_duration = 1000\nlog_min_duration_statement = 1000 ## or less\n\nIn particular, you would want to see what the reported \"sync\" time is for\nthe checkpoint, and whether the slowness (as shown by the frequency of\nstatement min duration log events) is occurring in a pattern around the\nbeginning and end of a checkpoint.\n\nI'd also set up vmstat to run continuously capturing output to a logfile\nwith a timestamp, which can later be correlated to the postgres log file\nentries.\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 22 Jul 2015 10:18:48 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to find the culprit in server load spikes?"
}
] |
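Jeff's last suggestion above (run vmstat continuously and timestamp each line so it can be correlated with the postgres log) can be sketched in Python. This is a hypothetical helper, not from the thread; the command and output destination are assumptions, and in practice the command would be `["vmstat", "5"]` redirected to a logfile.

```python
# Sketch: prefix each output line of a monitoring command with a timestamp
# so it can later be lined up with postgres log entries.
import subprocess
import sys
from datetime import datetime


def log_with_timestamps(cmd, out):
    """Copy each output line of cmd to out, prefixed with a timestamp."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        out.write(f"{datetime.now():%Y-%m-%d %H:%M:%S} {line}")
    proc.wait()
    return proc.returncode


if __name__ == "__main__":
    # In production this would be ["vmstat", "5"] writing to a logfile;
    # "echo" keeps the demo finite.
    log_with_timestamps(["echo", "procs memory swap io"], sys.stdout)
```

The timestamp format matches the default `%Y-%m-%d %H:%M:%S` prefix commonly used in `log_line_prefix`, which makes the correlation step a simple side-by-side read.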
[
{
"msg_contents": "\nHi all,\n\n1. For those that don't like par_psql (http://github.com/gbb/par_psql), here's an alternative approach that uses the Gnu Parallel command to organise parallelism for queries that take days to run usually. Short script and GIS-focused, but may give you a few ideas about how to parallelise your own code with Gnu Parallel. \n\nhttps://github.com/gbb/fast_map_intersection\n\n\n2. Also, I gave a talk at FOSS4G Como about these tools, and how to get better performance from your DB with parallelisation. May be helpful to people who are new to parallelisation / multi-core work with postgres. \n\nhttp://graemebell.net/foss4gcomo.pdf \n\n\nGraeme Bell.\n\np.s. (this version of the slides still has a few typos, which will be fixed soon when I get the source ppts back from my colleague's laptop).\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 Jul 2015 13:45:54 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "parallelisation provides postgres performance (script example + ppt\n slides)"
}
] |
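The GNU Parallel + psql pattern described in the script and slides (fan independent SQL statements out across several workers) can be mirrored in plain Python. This is a hedged sketch, not code from the repositories above: `runner` is a placeholder, and in practice it would submit each statement to Postgres via `psql` in a subprocess or a driver such as psycopg2, one connection per worker.

```python
# Sketch: run independent SQL statements concurrently, in the spirit of
# the GNU Parallel approach for multi-core postgres work.
from concurrent.futures import ThreadPoolExecutor


def run_parallel(statements, runner, workers=4):
    """Execute each statement with `runner`, up to `workers` at a time.

    Results come back in input order (ThreadPoolExecutor.map preserves it).
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(runner, statements))


if __name__ == "__main__":
    # Stand-in runner; a real one would send each statement to Postgres.
    print(run_parallel(["vacuum t1", "vacuum t2"], str.upper))
```

As with the shell version, this only pays off when the statements are genuinely independent: no ordering dependencies, and no two workers writing the same rows.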
[
{
"msg_contents": "Hi,\n\nI have read that GIN indexes don't require a recheck cond for full text\nsearch as long as work_mem is big enough, otherwise you get lossy blocks,\nand the recheck cond.\n\nIn my case, I have no lossy blocks (from what I could tell), but I do have\na recheck...\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(1) FROM enterprises WHERE fts @@\n'activ'::tsquery\n\n\"Aggregate (cost=264555.07..264555.08 rows=1 width=0) (actual\ntime=25813.920..25813.921 rows=1 loops=1)\"\n\" Buffers: shared hit=1 read=178192\"\n\" -> Bitmap Heap Scan on enterprises (cost=5004.86..263202.54\nrows=541014 width=0) (actual time=170.546..25663.048 rows=528376 loops=1)\"\n\" Recheck Cond: (fts @@ '''activ'''::tsquery)\"\n\" Heap Blocks: exact=178096\"\n\" Buffers: shared hit=1 read=178192\"\n\" -> Bitmap Index Scan on enterprises_fts_idx (cost=0.00..4869.61\nrows=541014 width=0) (actual time=120.214..120.214 rows=528376 loops=1)\"\n\" Index Cond: (fts @@ '''activ'''::tsquery)\"\n\" Buffers: shared hit=1 read=96\"\n\"Planning time: 2.383 ms\"\n\"Execution time: 25824.476 ms\"\n\nAny advice would be greatly appreciated. 
I'm running PostgreSQL 9.4.1.\n\nThank you,\n\nLaurent Debacker",
"msg_date": "Thu, 23 Jul 2015 18:58:03 +0200",
"msg_from": "Laurent Debacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "bitmap heap scan recheck for gin/fts with no lossy blocks"
},
{
"msg_contents": "On Thu, Jul 23, 2015 at 9:58 AM, Laurent Debacker <[email protected]>\nwrote:\n\n> Hi,\n>\n> I have read that GIN indexes don't require a recheck cond for full text\n> search as long as work_mem is big enough, otherwise you get lossy blocks,\n> and the recheck cond.\n>\n> In my case, I have no lossy blocks (from what I could tell), but I do have\n> a recheck...\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(1) FROM enterprises WHERE fts @@\n> 'activ'::tsquery\n>\n> \"Aggregate (cost=264555.07..264555.08 rows=1 width=0) (actual\n> time=25813.920..25813.921 rows=1 loops=1)\"\n> \" Buffers: shared hit=1 read=178192\"\n> \" -> Bitmap Heap Scan on enterprises (cost=5004.86..263202.54\n> rows=541014 width=0) (actual time=170.546..25663.048 rows=528376 loops=1)\"\n> \" Recheck Cond: (fts @@ '''activ'''::tsquery)\"\n> \" Heap Blocks: exact=178096\"\n> \" Buffers: shared hit=1 read=178192\"\n> \" -> Bitmap Index Scan on enterprises_fts_idx (cost=0.00..4869.61\n> rows=541014 width=0) (actual time=120.214..120.214 rows=528376 loops=1)\"\n> \" Index Cond: (fts @@ '''activ'''::tsquery)\"\n> \" Buffers: shared hit=1 read=96\"\n> \"Planning time: 2.383 ms\"\n> \"Execution time: 25824.476 ms\"\n>\n> Any advice would be greatly appreciated. I'm running PostgreSQL 9.4.1.\n>\n\nThe Recheck Cond line is a plan-time piece of info, not a run-time piece.\nIt only tells you what condition is going to be rechecked if a recheck is\nfound to be necessary.\n\nIt doesn't indicate how many times it was found it to be necessary to do\nthe recheck. 
Presumably that number was zero.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 23 Jul 2015 11:04:16 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bitmap heap scan recheck for gin/fts with no lossy blocks"
},
{
"msg_contents": "Thanks Jeff! That makes sense indeed.\n\nI'm a bit surprised a COUNT(1) would need a bitmap heap scan since we know\nthe row count from the index, but okay.\n\nHave a nice day,\n\nLaurent\n\nOn Thu, Jul 23, 2015 at 8:04 PM, Jeff Janes <[email protected]> wrote:\n\n> On Thu, Jul 23, 2015 at 9:58 AM, Laurent Debacker <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> I have read that GIN indexes don't require a recheck cond for full text\n>> search as long as work_mem is big enough, otherwise you get lossy blocks,\n>> and the recheck cond.\n>>\n>> In my case, I have no lossy blocks (from what I could tell), but I do\n>> have a recheck...\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(1) FROM enterprises WHERE fts @@\n>> 'activ'::tsquery\n>>\n>> \"Aggregate (cost=264555.07..264555.08 rows=1 width=0) (actual\n>> time=25813.920..25813.921 rows=1 loops=1)\"\n>> \" Buffers: shared hit=1 read=178192\"\n>> \" -> Bitmap Heap Scan on enterprises (cost=5004.86..263202.54\n>> rows=541014 width=0) (actual time=170.546..25663.048 rows=528376 loops=1)\"\n>> \" Recheck Cond: (fts @@ '''activ'''::tsquery)\"\n>> \" Heap Blocks: exact=178096\"\n>> \" Buffers: shared hit=1 read=178192\"\n>> \" -> Bitmap Index Scan on enterprises_fts_idx\n>> (cost=0.00..4869.61 rows=541014 width=0) (actual time=120.214..120.214\n>> rows=528376 loops=1)\"\n>> \" Index Cond: (fts @@ '''activ'''::tsquery)\"\n>> \" Buffers: shared hit=1 read=96\"\n>> \"Planning time: 2.383 ms\"\n>> \"Execution time: 25824.476 ms\"\n>>\n>> Any advice would be greatly appreciated. I'm running PostgreSQL 9.4.1.\n>>\n>\n> The Recheck Cond line is a plan-time piece of info, not a run-time piece.\n> It only tells you what condition is going to be rechecked if a recheck is\n> found to be necessary.\n>\n> It doesn't indicate how many times it was found it to be necessary to do\n> the recheck. Presumably that number was zero.\n>\n> Cheers,\n>\n> Jeff\n>\n\nThanks Jeff! 
That makes sense indeed.I'm a bit surprised a COUNT(1) would need a bitmap heap scan since we know the row count from the index, but okay.Have a nice day,LaurentOn Thu, Jul 23, 2015 at 8:04 PM, Jeff Janes <[email protected]> wrote:On Thu, Jul 23, 2015 at 9:58 AM, Laurent Debacker <[email protected]> wrote:Hi,I have read that GIN indexes don't require a recheck cond for full text search as long as work_mem is big enough, otherwise you get lossy blocks, and the recheck cond.In my case, I have no lossy blocks (from what I could tell), but I do have a recheck...EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(1) FROM enterprises WHERE fts @@ 'activ'::tsquery\"Aggregate (cost=264555.07..264555.08 rows=1 width=0) (actual time=25813.920..25813.921 rows=1 loops=1)\"\" Buffers: shared hit=1 read=178192\"\" -> Bitmap Heap Scan on enterprises (cost=5004.86..263202.54 rows=541014 width=0) (actual time=170.546..25663.048 rows=528376 loops=1)\"\" Recheck Cond: (fts @@ '''activ'''::tsquery)\"\" Heap Blocks: exact=178096\"\" Buffers: shared hit=1 read=178192\"\" -> Bitmap Index Scan on enterprises_fts_idx (cost=0.00..4869.61 rows=541014 width=0) (actual time=120.214..120.214 rows=528376 loops=1)\"\" Index Cond: (fts @@ '''activ'''::tsquery)\"\" Buffers: shared hit=1 read=96\"\"Planning time: 2.383 ms\"\"Execution time: 25824.476 ms\"Any advice would be greatly appreciated. I'm running PostgreSQL 9.4.1.The Recheck Cond line is a plan-time piece of info, not a run-time piece. It only tells you what condition is going to be rechecked if a recheck is found to be necessary.It doesn't indicate how many times it was found it to be necessary to do the recheck. Presumably that number was zero.Cheers,Jeff",
"msg_date": "Fri, 24 Jul 2015 23:40:37 +0200",
"msg_from": "Laurent Debacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bitmap heap scan recheck for gin/fts with no lossy blocks"
},
{
"msg_contents": "On Fri, Jul 24, 2015 at 2:40 PM, Laurent Debacker <[email protected]>\nwrote:\n\n\nThe Recheck Cond line is a plan-time piece of info, not a run-time piece.\n> It only tells you what condition is going to be rechecked if a recheck is\n> found to be necessary.\n\n\nThanks Jeff! That makes sense indeed.\n>\n> I'm a bit surprised a COUNT(1) would need a bitmap heap scan since we know\n> the row count from the index, but okay.\n>\n\nGin indexes do not (yet) implement index only scans. It has to visit the\nblock to check the visibility of the rows, as visibility data is not stored\nin the index.\n\nCheers,\n\nJeff\n\nOn Fri, Jul 24, 2015 at 2:40 PM, Laurent Debacker <[email protected]> wrote:The Recheck Cond line is a plan-time piece of info, not a run-time piece. It only tells you what condition is going to be rechecked if a recheck is found to be necessary.Thanks Jeff! That makes sense indeed.I'm a bit surprised a COUNT(1) would need a bitmap heap scan since we know the row count from the index, but okay.Gin indexes do not (yet) implement index only scans. It has to visit the block to check the visibility of the rows, as visibility data is not stored in the index.Cheers,Jeff",
"msg_date": "Fri, 24 Jul 2015 15:27:28 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bitmap heap scan recheck for gin/fts with no lossy blocks"
}
] |
[
{
"msg_contents": "The canonical advice here is to avoid more connections than you have CPUs,\nand to use something like pg_pooler to achieve that under heavy load.\n\nWe are considering using the Apache mod_perl \"fast-CGI\" system and perl's\nApache::DBI module, which caches persistent connections in order to improve\nperformance for lightweight web requests. Due to the way our customers are\norganized (a separate schema per client company), it's possible that there\nwould be (for example) 32 fast-CGI processes, each of which had hundreds of\ncached connections open at any given time. This would result in a thousand\nor so Postgres connections on a machine with 32 CPUs.\n\nBut, Apache's fast-CGI mechanism allows you to specify the maximum number\nof fast-CGI processes that can run at one time; requests are queue by the\nApache server if the load exceeds this maximum. That means that there would\nnever be more than a configured maximum number of active connections; the\nrest would be idle.\n\nSo we'd have a situation where there there could be thousands of\nconnections, but the actual workload would be throttled to any limit we\nlike. We'd almost certainly limit it to match the number of CPUs.\n\nSo the question is: do idle connections impact performance?\n\nThanks,\nCraig\n\nThe canonical advice here is to avoid more connections than you have CPUs, and to use something like pg_pooler to achieve that under heavy load.We are considering using the Apache mod_perl \"fast-CGI\" system and perl's Apache::DBI module, which caches persistent connections in order to improve performance for lightweight web requests. Due to the way our customers are organized (a separate schema per client company), it's possible that there would be (for example) 32 fast-CGI processes, each of which had hundreds of cached connections open at any given time. 
This would result in a thousand or so Postgres connections on a machine with 32 CPUs.But, Apache's fast-CGI mechanism allows you to specify the maximum number of fast-CGI processes that can run at one time; requests are queue by the Apache server if the load exceeds this maximum. That means that there would never be more than a configured maximum number of active connections; the rest would be idle.So we'd have a situation where there there could be thousands of connections, but the actual workload would be throttled to any limit we like. We'd almost certainly limit it to match the number of CPUs.So the question is: do idle connections impact performance?Thanks,Craig",
"msg_date": "Sat, 25 Jul 2015 07:50:35 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Are many idle connections bad?"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> ... This would result in a thousand\n> or so Postgres connections on a machine with 32 CPUs.\n\n> So the question is: do idle connections impact performance?\n\nYes. Those connections have to be examined when gathering snapshot\ninformation, since you don't know that they're idle until you look.\nSo the cost of taking a snapshot is proportional to the total number\nof connections, even when most are idle. This sort of situation\nis known to aggravate contention for the ProcArrayLock, which is a\nperformance bottleneck if you've got lots of CPUs.\n\nYou'd be a lot better off with a pooler.\n\n(There has been, and continues to be, interest in getting rid of this\nbottleneck ... but it's a problem in all existing Postgres versions.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 25 Jul 2015 11:04:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are many idle connections bad?"
},
{
"msg_contents": "On Sat, Jul 25, 2015 at 8:04 AM, Tom Lane <[email protected]> wrote:\n\n> Craig James <[email protected]> writes:\n> > ... This would result in a thousand\n> > or so Postgres connections on a machine with 32 CPUs.\n>\n> > So the question is: do idle connections impact performance?\n>\n> Yes. Those connections have to be examined when gathering snapshot\n> information, since you don't know that they're idle until you look.\n> So the cost of taking a snapshot is proportional to the total number\n> of connections, even when most are idle. This sort of situation\n> is known to aggravate contention for the ProcArrayLock, which is a\n> performance bottleneck if you've got lots of CPUs.\n>\n> You'd be a lot better off with a pooler.\n>\n\nOK, thanks for the info, that answers the question.\n\nAnother choice we have, since all schemas are in the same database, is to\nuse a single \"super user\" connection that has access to every schema. Each\nfast-CGI would then only need a single connection. That's a lot more work,\nas it requires altering our security, altering all of the SQL statements,\netc. It moves the security model from the database to the application.\n\nA pooler isn't an idea solution here, because there is still overhead\nassociated with each connection. Persistent connections are blazingly fast\n(we already use them in a more limited fast-CGI application).\n\nCraig\n\n\n>\n> (There has been, and continues to be, interest in getting rid of this\n> bottleneck ... but it's a problem in all existing Postgres versions.)\n>\n> regards, tom lane\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nOn Sat, Jul 25, 2015 at 8:04 AM, Tom Lane <[email protected]> wrote:Craig James <[email protected]> writes:\n> ... 
This would result in a thousand\n> or so Postgres connections on a machine with 32 CPUs.\n\n> So the question is: do idle connections impact performance?\n\nYes. Those connections have to be examined when gathering snapshot\ninformation, since you don't know that they're idle until you look.\nSo the cost of taking a snapshot is proportional to the total number\nof connections, even when most are idle. This sort of situation\nis known to aggravate contention for the ProcArrayLock, which is a\nperformance bottleneck if you've got lots of CPUs.\n\nYou'd be a lot better off with a pooler.OK, thanks for the info, that answers the question.Another choice we have, since all schemas are in the same database, is to use a single \"super user\" connection that has access to every schema. Each fast-CGI would then only need a single connection. That's a lot more work, as it requires altering our security, altering all of the SQL statements, etc. It moves the security model from the database to the application.A pooler isn't an idea solution here, because there is still overhead associated with each connection. Persistent connections are blazingly fast (we already use them in a more limited fast-CGI application).Craig \n\n(There has been, and continues to be, interest in getting rid of this\nbottleneck ... but it's a problem in all existing Postgres versions.)\n\n regards, tom lane\n-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------",
"msg_date": "Sat, 25 Jul 2015 09:06:53 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are many idle connections bad?"
},
{
"msg_contents": "On Sat, Jul 25, 2015 at 7:50 AM, Craig James <[email protected]> wrote:\n\n> The canonical advice here is to avoid more connections than you have CPUs,\n> and to use something like pg_pooler to achieve that under heavy load.\n>\n> We are considering using the Apache mod_perl \"fast-CGI\" system and perl's\n> Apache::DBI module, which caches persistent connections in order to improve\n> performance for lightweight web requests. Due to the way our customers are\n> organized (a separate schema per client company),\n>\n\nAnd presumably with a different PostgreSQL user to go with each schema?\n\n\n> it's possible that there would be (for example) 32 fast-CGI processes,\n> each of which had hundreds of cached connections open at any given time.\n> This would result in a thousand or so Postgres connections on a machine\n> with 32 CPUs.\n>\n\nWhy would it need so many cached connections per fast-CGI process? Could\nyou set up affinity so that the same client (or at least the same web\nsession) usually ends up at the same fast-CGI process (when it is\navailable), so the other fast-CGI processes don't need to cache DBI\nconnections for every DB user, but just for the ones they habitually serve?\n\n\n>\n> But, Apache's fast-CGI mechanism allows you to specify the maximum number\n> of fast-CGI processes that can run at one time; requests are queue by the\n> Apache server if the load exceeds this maximum. That means that there would\n> never be more than a configured maximum number of active connections; the\n> rest would be idle.\n>\n> So we'd have a situation where there there could be thousands of\n> connections, but the actual workload would be throttled to any limit we\n> like. 
We'd almost certainly limit it to match the number of CPUs.\n>\n> So the question is: do idle connections impact performance?\n>\n\nIn my hands, truly idle connections are very very cheap, other than the\ngeneral overhead of having a process in the process table and some local\nmemory. Where people usually run into trouble are:\n\n1) that the idle connections are only idle \"normally\", and as soon as the\nsystem runs into trouble the app starts trying to use all of those\nusually-idle connections. So you get increased use at the exact moment\nwhen you can't deal with it--when the system is already under stress. It\nsounds like you have that base covered.\n\n2) That the idle connections are \"idle in transaction\", not truly idle, and\nthis causes a variety of troubles, like vacuum not working effectively and\nhint bits that are permanently unsettable.\n\n2b) A special case of 2 is that a transaction has inserted a bunch of\nuncommitted tuples and then gone idle (or is just doing some other time\nconsuming things) before either committing them or rolling them back. This\ncan create an enormous amount of contention on the proclock, as every process\nwhich stumbles across the tuple then has to ask every other active process\n\"Is this your tuple? Are you done with it?\". 
This could be particularly\nproblematic if for example you are bulk loading a vendor catalog in a\nsingle transaction and therefore have a bunch of uncommitted tuples that\nare hanging around for a long time.\n\nIf you have a reasonably good load generator, it is pretty easy to spin up a\nbunch of idle connections and see what happens on your own hardware with\nyour own workload and your own version of PostgreSQL.\n\nCheers,\n\nJeff",
"msg_date": "Sat, 25 Jul 2015 17:43:06 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are many idle connections bad?"
},
{
"msg_contents": "On Sat, Jul 25, 2015 at 8:50 AM, Craig James <[email protected]> wrote:\n\n> The canonical advice here is to avoid more connections than you have CPUs,\n> and to use something like pg_pooler to achieve that under heavy load.\n>\n> We are considering using the Apache mod_perl \"fast-CGI\" system and perl's\n> Apache::DBI module, which caches persistent connections in order to improve\n> performance for lightweight web requests. Due to the way our customers are\n> organized (a separate schema per client company), it's possible that there\n> would be (for example) 32 fast-CGI processes, each of which had hundreds of\n> cached connections open at any given time. This would result in a thousand\n> or so Postgres connections on a machine with 32 CPUs.\n>\n\nI don't have any hard performance numbers, but I ditched Apache::DBI years\nago in favor of pgbouncer.\n\nBTW if you are starting something new, my advice would be to use PSGI/Plack\ninstead of apache/mod_perl. Granted, you can still use apache & mod_perl\nwith PSGI if you want. It's more flexible in that it gives you the option\nof switching to another server willy nilly. I've found Starlet and Gazelle\nto be easier to work with and more performant than apache + mod_perl.\n\nOn Sat, Jul 25, 2015 at 8:50 AM, Craig James <[email protected]> wrote:The canonical advice here is to avoid more connections than you have CPUs, and to use something like pg_pooler to achieve that under heavy load.We are considering using the Apache mod_perl \"fast-CGI\" system and perl's Apache::DBI module, which caches persistent connections in order to improve performance for lightweight web requests. Due to the way our customers are organized (a separate schema per client company), it's possible that there would be (for example) 32 fast-CGI processes, each of which had hundreds of cached connections open at any given time. 
This would result in a thousand or so Postgres connections on a machine with 32 CPUs.I don't have any hard performance numbers, but I ditched Apache::DBI years ago in favor of pgbouncer.BTW if you are starting something new, my advice would be to use PSGI/Plack instead of apache/mod_perl. Granted, you can still use apache & mod_perl with PSGI if you want. It's more flexible in that it gives you the option of switching to another server willy nilly. I've found Starlet and Gazelle to be easier to work with and more performant than apache + mod_perl.",
"msg_date": "Thu, 30 Jul 2015 00:28:19 -0600",
"msg_from": "Alex Hunsaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are many idle connections bad?"
}
] |
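Jeff's closing suggestion in the thread above (spin up a bunch of idle connections with a load generator and measure the effect yourself) can be sketched as a small helper. This is hypothetical code, not from the thread: `connect` is a placeholder callable, and in practice it would be something like `psycopg2.connect(dsn)`; the returned connections are simply held open so they sit idle during the benchmark.

```python
# Sketch: open a batch of connections and keep them alive but idle,
# so you can benchmark your real workload against them.
def open_idle_connections(n, connect):
    """Open n connections and return them so they stay alive (and idle)."""
    return [connect() for _ in range(n)]


def close_all(conns):
    """Close every connection opened by open_idle_connections."""
    for c in conns:
        c.close()


if __name__ == "__main__":
    # Stand-in connection factory; a real run would pass a DB connect call.
    class FakeConn:
        def close(self):
            pass

    conns = open_idle_connections(1000, FakeConn)
    print(len(conns))
    close_all(conns)
```

Note that connections opened this way are truly idle (no open transaction), which per the discussion above is the cheap case; to reproduce the "idle in transaction" problems you would have to begin a transaction on each before going idle.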
[
{
"msg_contents": "Hi,\n\nI have following table definition with 6209888 rows in it. It stores the\noccurrences of species in various regions.\n\n*TABLE DEFINITION*\n\n Column | Type | Modifiers\n\n\n--------------+------------------------+----------------------------------------------------------\n\n id | integer | not null default\nnextval('occurrences_id_seq'::regclass)\n\n gbifid | integer | not null\n\n sname | character varying(512) |\n\n cname | character varying(512) |\n\n species | character varying(512) |\n\n location | geometry | not null\n\n month | integer |\n\n year | integer |\n\n event_date | date |\n\n dataset_key | character varying(512) |\n\n taxon_key | character varying(512) |\n\n taxon_rank | character varying(512) |\n\n record_basis | character varying(512) |\n\n category_id | integer |\n\n country | character varying(512) |\n\n lat | double precision |\n\n lng | double precision |\n\nIndexes:\n\n \"occurrences_pkey\" PRIMARY KEY, btree (id)\n\n \"unique_occurrences_gbifid\" UNIQUE, btree (gbifid)\n\n \"index_occurences_taxon_key\" btree (taxon_key)\n\n \"index_occurrences_category_id\" btree (category_id)\n\n \"index_occurrences_cname\" btree (cname)\n\n \"index_occurrences_country\" btree (country)\n\n \"index_occurrences_lat\" btree (lat)\n\n \"index_occurrences_lng\" btree (lng)\n\n \"index_occurrences_month\" btree (month)\n\n \"index_occurrences_sname\" btree (sname)\n\n \"occurrence_location_gix\" gist (location)\n\nI am trying to fetch the count of number of occurrences within a certain\nregion. I save the location of each occurrence as a geometric field as well\nas lat, lng combination. Both fields are indexed. 
The query that is issued\nis as follows.\n\n*QUERY*\n\n SELECT COUNT(*) FROM \"occurrences\" WHERE (\"lat\" >= -27.91550355958 AND\n\"lat\" <= -27.015680440420002 AND \"lng\" >= 152.13307044728307 AND \"lng\" <=\n153.03137355271693 AND \"category_id\" = 1 AND (ST_Intersects(\nST_Buffer(ST_PointFromText('POINT(152.582222 -27.465592)')::geography,\n50000)::geography, location::geography)));\n\nThe problem is it takes more than acceptable time to execute the query.\nBelow is the explain analyze output for the same query.\n\n*EXPLAIN ANALYZE QUERY OUTPUT (**http://explain.depesz.com/s/p2a\n<http://explain.depesz.com/s/p2a>)*\n\nAggregate (cost=127736.06..127736.07 rows=1 width=0) (actual\ntime=13491.678..13491.679 rows=1 loops=1)\n\n Buffers: shared hit=3 read=56025\n\n -> Bitmap Heap Scan on occurrences (cost=28249.46..127731.08 rows=1995\nwidth=0) (actual time=528.053..13388.458 rows=167511 loops=1)\n\n Recheck Cond: ((lat >= (-27.91550355958)::double precision) AND\n(lat <= (-27.01568044042)::double precision) AND (lng >=\n152.133070447283::double precision) AND (lng <= 153.031373552717::double\nprecision))\n\n Rows Removed by Index Recheck: 748669\n\n Filter: ((category_id = 1) 
AND\n('0103000020E6100000010000002100000090D8AD28D32263403905504558773BC0CADDAF0384226340E7AD43F4E38D3BC0B559D93A98216340B7BE554692A33BC0C904C18C18206340DF8EA9338DB73BC052F75181131E6340A1D9E30E0FC93BC00BDCB5E39C1B6340A1A40E496AD73BC074D30D03CD1863405DD7BF5110E23BC05DD3A2C0BF156340439B784797E83BC078E9287593126340EF9E5C37BEEA3BC072EB40B9670F63409E25A1BA6FE83BC06964481F5C0C6340B331B5D2C2E13BC08F6785ED8E0963409979FBF9F9D63BC0135DB3E71B0763402F78807480C83BC0E321E8351B0563405E96CB00E6B63BC0672CD874A00363403BC84B1BD9A23BC018FAE8F8B90263400314CD15208D3BC039DE8C4A70026340653B324F91763BC0F5BA5CDFC502634086322DDE0A603BC0E90E8A10B7036340C5FA1C046A4A3BC01ECCC14C3A056340CE4011BC82363BC022F38481400763407655DBB517253BC05B3B5AB6B50963404C9AA306D3163BC079EF01D3810C634010A732D03F0C3BC05152F188890F634044CE5D16C5053BC0EDDABA57AF126340926E14EFA1033BC08D7E9CA3D41563401EFB9E2DEB053BC071A929D5DA186340A3F1F29C8A0C3BC049A7E478A41B63404F8BD2CF3F173BC04B409855161E634057CC1080A2253BC0F9C65E70182063404BEB146926373BC0349D83F596216340449EEA7C204B3BC07803D7FD82226340A16F1747CD603BC090D8AD28D32263403905504558773BC0'::geography\n&& (location)::geography) 
AND\n(_st_distance('0103000020E6100000010000002100000090D8AD28D32263403905504558773BC0CADDAF0384226340E7AD43F4E38D3BC0B559D93A98216340B7BE554692A33BC0C904C18C18206340DF8EA9338DB73BC052F75181131E6340A1D9E30E0FC93BC00BDCB5E39C1B6340A1A40E496AD73BC074D30D03CD1863405DD7BF5110E23BC05DD3A2C0BF156340439B784797E83BC078E9287593126340EF9E5C37BEEA3BC072EB40B9670F63409E25A1BA6FE83BC06964481F5C0C6340B331B5D2C2E13BC08F6785ED8E0963409979FBF9F9D63BC0135DB3E71B0763402F78807480C83BC0E321E8351B0563405E96CB00E6B63BC0672CD874A00363403BC84B1BD9A23BC018FAE8F8B90263400314CD15208D3BC039DE8C4A70026340653B324F91763BC0F5BA5CDFC502634086322DDE0A603BC0E90E8A10B7036340C5FA1C046A4A3BC01ECCC14C3A056340CE4011BC82363BC022F38481400763407655DBB517253BC05B3B5AB6B50963404C9AA306D3163BC079EF01D3810C634010A732D03F0C3BC05152F188890F634044CE5D16C5053BC0EDDABA57AF126340926E14EFA1033BC08D7E9CA3D41563401EFB9E2DEB053BC071A929D5DA186340A3F1F29C8A0C3BC049A7E478A41B63404F8BD2CF3F173BC04B409855161E634057CC1080A2253BC0F9C65E70182063404BEB146926373BC0349D83F596216340449EEA7C204B3BC07803D7FD82226340A16F1747CD603BC090D8AD28D32263403905504558773BC0'::geography,\n(location)::geography, 0::double precision, false) < 1e-05::double\nprecision))\n\n Rows Removed by Filter: 6357\n\n Heap Blocks: exact=29947 lossy=22601\n\n Buffers: shared hit=3 read=56025\n\n -> BitmapAnd (cost=28249.46..28249.46 rows=32476 width=0)\n(actual time=519.091..519.091 rows=0 loops=1)\n\n Buffers: shared read=3477\n\n -> Bitmap Index Scan on index_occurrences_lat\n(cost=0.00..11691.20 rows=365877 width=0) (actual time=218.999..218.999\nrows=392415 loops=1)\n\n Index Cond: ((lat >= (-27.91550355958)::double\nprecision) AND (lat <= (-27.01568044042)::double precision))\n\n Buffers: shared read=1444\n\n -> Bitmap Index Scan on index_occurrences_lng\n(cost=0.00..16557.01 rows=517658 width=0) (actual time=285.211..285.211\nrows=550523 loops=1)\n\n Index Cond: ((lng >= 152.133070447283::double\nprecision) AND (lng <= 153.031373552717::double 
precision))\n\n               Buffers: shared read=2033\n\n Planning time: 2.812 ms\n\n Execution time: 13493.617 ms\n\n(19 rows)\n\n\nIt seems that the planner is underestimating the number of rows\nreturned in Bitmap\nHeap Scan on occurrences. I have run vacuum analyze on this table couple of\ntimes, but it still produces the same result. Any idea how I can speed up\nthis query? How I can assist planner in providing better row estimates for\nBitmap Heap Scan section?\n\n*POSTGRESQL VERSION INFO*\n\n                                               version\n\n\n------------------------------------------------------------------------------------------------------\n\n PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu\n4.8.2-19ubuntu1) 4.8.2, 64-bit\n\n*HARDWARE*\n\nI am running the Postgresql instance on a digital ocean vm with 1 core, SSD\ndisk and 1 GB of ram.\n\n\nAppreciate your help.\n\nThanks,\nPriyank",
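A common rewrite for this kind of radius count, sketched here as an untested suggestion: ST_DWithin on geography replaces the ST_Buffer/ST_Intersects pair, and a functional geography index (the index name below is illustrative) lets it use GiST directly, since the existing occurrence_location_gix index is on geometry, not geography.

```sql
-- Sketch only: a functional index over location::geography.
-- The geometry -> geography cast is immutable, so this is allowed.
CREATE INDEX occurrences_location_geog_gix
    ON occurrences USING gist (((location)::geography));

-- Count occurrences within 50 km of the point, without buffering:
SELECT count(*)
FROM occurrences
WHERE category_id = 1
  AND ST_DWithin((location)::geography,
                 ST_PointFromText('POINT(152.582222 -27.465592)')::geography,
                 50000);  -- 50 km, matching the original buffer radius
```

This also sidesteps the lossy lat/lng rectangle pre-filter, though keeping the rectangle as an extra cheap filter can still help the planner.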
"msg_date": "Tue, 28 Jul 2015 13:22:16 +0530",
"msg_from": "Priyank Tiwari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Any ideas how can I speed up this query?"
},
{
    "msg_contents": "> \n> QUERY\n> \n> SELECT COUNT(*) FROM \"occurrences\" WHERE (\"lat\" >= -27.91550355958 AND \"lat\" <= -27.015680440420002 AND \"lng\" >= 152.13307044728307 AND \"lng\" <= 153.03137355271693 AND \"category_id\" = 1 AND (ST_Intersects( ST_Buffer(ST_PointFromText('POINT(152.582222 -27.465592)')::geography, 50000)::geography, location::geography)));\n\n> How I can assist planner in providing better row estimates for Bitmap Heap Scan section?\n\nBy googling this phrase from your EXPLAIN: \"Rows Removed by Index Recheck: 748669\" - you can find this explanation: \n\nhttp://stackoverflow.com/questions/26418715/postgresql-rows-removed-by-index\n\n\"The inner Bitmap Index Scan node is producing a bitmap, putting 1 to all the places where records that match your search key are found, and 0 otherwise. As your table is quite big, the size of the bitmap gets bigger than the memory available for these kinds of operations, configured via work_mem, which becomes too small to keep the whole bitmap.\n\nWhen short of memory, the inner node will start producing 1 not for records, but rather for blocks that are known to contain matching records. This means that the outer node, Bitmap Heap Scan, has to read all records from such blocks and re-check them. Obviously, there'll be some non-matching ones, and their number is what you see as Rows Removed by Index Recheck.\"\n\nTherefore, try substantially increasing your work_mem (use SET so that it's on a per-session basis, not global) so that you don't have to read in all the rows to re-check them.\nThis is why Googling phrases from your explain before list-posting is always a good idea :-)\n\nBTW - what are your statistics set to? If you have a huge table, it can be worth raising them from the default. \n    http://www.postgresql.org/docs/9.4/static/planner-stats.html\nALTER TABLE ... ALTER COLUMN ... SET STATISTICS; try raising this to 1000.\n\n> POSTGRESQL VERSION INFO\n\nFor postgis-related questions, remember to also include the postgis version. 
\n\nHope this helps and good luck\n\nGraeme Bell.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
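The two suggestions above can be sketched as follows; the values are illustrative starting points, not tuned recommendations. A larger work_mem lets the bitmap stay exact instead of lossy (the plan shows "Heap Blocks: exact=29947 lossy=22601"), and the statistics target is raised per column, followed by ANALYZE.

```sql
-- Session-local only; other backends keep the global setting.
SET work_mem = '64MB';   -- illustrative; size to the RAM you can spare

-- Raise the statistics target for the filtered columns, then re-analyze:
ALTER TABLE occurrences ALTER COLUMN lat SET STATISTICS 1000;
ALTER TABLE occurrences ALTER COLUMN lng SET STATISTICS 1000;
ANALYZE occurrences;
```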
"msg_date": "Tue, 28 Jul 2015 08:19:00 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any ideas how can I speed up this query?"
},
{
    "msg_contents": "1 GB of RAM is quite small.\nI think it is worth trying to create an index on a combination of\ncolumns (lat, lng),\nso that the Bitmap Heap Scan would be omitted.",
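A sketch of that suggestion (the index name is illustrative). Note that with two range predicates, a btree on (lat, lng) can only use the lng bound as an in-index filter within the lat range, but it still replaces the BitmapAnd of two separate index scans with a single scan.

```sql
-- Illustrative: one composite index instead of two single-column ones.
CREATE INDEX index_occurrences_lat_lng ON occurrences (lat, lng);
```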
"msg_date": "Tue, 28 Jul 2015 17:57:51 +0900",
"msg_from": "=?UTF-8?B?5p6X5aOr5Y2a?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any ideas how can I speed up this query?"
}
] |
[
{
    "msg_contents": "Some of you may have had annoying problems in the past with autofreeze or autovacuum running at unexpected moments and dropping the performance of your server randomly. \n\nOn our SSD-RAID10 based system we found a 20GB table finished its vacuum freeze in about 100 seconds. There were no noticeable interruptions to our services; maybe a tiny little bit of extra latency on the web maps, very hard to tell if it was real or imagination.\n\nIf auto-stuff in postgresql has been a pain point for you in the past, I can confirm that SSD drives are a nice solution (and also for any other autovacuum/analyze type stuff) since they can handle incoming random IO very nicely while also making very fast progress with the housekeeping work. \n\nGraeme Bell\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Jul 2015 15:39:41 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "autofreeze/vacuuming - avoiding the random performance hit"
},
{
    "msg_contents": "Did you put your entire database on SSD or just the WAL/indexes?\n\nOn 28 July 2015 at 23:39, Graeme B. Bell <[email protected]> wrote:\n\n> Some of you may have had annoying problems in the past with autofreeze or\n> autovacuum running at unexpected moments and dropping the performance of\n> your server randomly.\n>\n> On our SSD-RAID10 based system we found a 20GB table finished it's vacuum\n> freeze in about 100 seconds. There were no noticeable interruptions to our\n> services; maybe a tiny little bit of extra latency on the web maps, very\n> hard to tell if it was real or imagination.\n>\n> If auto-stuff in postgresql has been a pain point for you in the past, I\n> can confirm that SSD drives are a nice solution (and also for any other\n> autovacuum/analyze type stuff) since they can handle incoming random IO\n> very nicely while also making very fast progress with the housekeeping work.\n>\n> Graeme Bell\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards,\nAng Wei Shan",
"msg_date": "Tue, 28 Jul 2015 23:51:01 +0800",
"msg_from": "Wei Shan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autofreeze/vacuuming - avoiding the random performance hit"
},
{
"msg_contents": "\nEntire database. People have talked about using SSDs for data/indices and spinning disks for WAL. However I find having everything on the same disks is good for 3 reasons. \n\n1. The SSD is simply vastly faster than the disks. That means if huge amount of WAL is being written out (e.g. tons of data inserted), WAL isn't lagging at all. Anyone arguing that WAL suits spinning disk because they write fast sequentially should acknowledge that SSDs also write fast sequentially - considerably faster. \n\n2. By having extra 'fsync' events, IO is less bumpy. Every time wal is written out, all your buffers are getting flushed out (in principle), which helps to avoid huge IO spikes. \n\n3. Simpler setup, less volumes to worry about in linux or disk types to manage. For example, we only need spare SSDs in the hotspare bay and on the shelf. Even a single HDD for wal requires a mirrored HDD, plus a hotspare (that's 3 bays gone from e.g. 8), plus some more on the shelf... all to get worse performance. \n\nOur DBs have been a total dream since I put SSDs everywhere. It got rid of every throughput/latency/io spike problem. The only thing I'd do differently today is that I'd buy intel ssds instead of the ones we chose; and preferably a NVMe direct connect with software raid in place of hardware raid and sata.\n\nGraeme Bell.\n\nOn 28 Jul 2015, at 17:51, Wei Shan <[email protected]> wrote:\n\n> Did you put your entire database on SSD or just the WAL/indexes?\n> \n> On 28 July 2015 at 23:39, Graeme B. Bell <[email protected]> wrote:\n> Some of you may have had annoying problems in the past with autofreeze or autovacuum running at unexpected moments and dropping the performance of your server randomly.\n> \n> On our SSD-RAID10 based system we found a 20GB table finished it's vacuum freeze in about 100 seconds. 
There were no noticeable interruptions to our services; maybe a tiny little bit of extra latency on the web maps, very hard to tell if it was real or imagination.\n> \n> If auto-stuff in postgresql has been a pain point for you in the past, I can confirm that SSD drives are a nice solution (and also for any other autovacuum/analyze type stuff) since they can handle incoming random IO very nicely while also making very fast progress with the housekeeping work.\n> \n> Graeme Bell\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> -- \n> Regards,\n> Ang Wei Shan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Jul 2015 16:16:27 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autofreeze/vacuuming - avoiding the random\n performance hit"
}
] |
[
{
"msg_contents": "Entering production, availability 2016\n1000x faster than nand flash/ssd , eg dram-latency\n10x denser than dram\n1000x write endurance of nand\nPriced between flash and dram\nManufactured by intel/micron\nNon-volatile\n\nGuess what's going in my 2016 db servers :-)\n\nPlease, don't be vapourware... \n\nhttp://hothardware.com/news/intel-and-micron-jointly-drop-disruptive-game-changing-3d-xpoint-cross-point-memory-1000x-faster-than-nand\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Jul 2015 20:29:23 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "incredible surprise news from intel/micron right now..."
},
{
"msg_contents": "On 28 Jul 2015, at 22:29, Graeme B. Bell <[email protected]> wrote:\n\n> Entering production, availability 2016\n> 1000x faster than nand flash/ssd , eg dram-latency\n> 10x denser than dram\n> 1000x write endurance of nand\n> Priced between flash and dram\n> Manufactured by intel/micron\n> Non-volatile\n\nhttp://www.anandtech.com/show/9541/intel-announces-optane-storage-brand-for-3d-xpoint-products\n\nSome new information (for anyone putting thought into 2016 DB hardware purchases). \n\nThroughput seems to be good. \n>7x better IOPS than one of the best enterprise PCIe SSDs on the market, with queue depth 1, \n>5x better as queue depth gets higher. \n\nGraeme. \n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 20 Aug 2015 08:26:52 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incredible surprise news from intel/micron right\n now..."
}
] |
[
{
"msg_contents": "Hi,\n\nI am trying to see if I can do anything to optimize the following plan.\n\nI have two tables and I am doing a join between them. After joining it\ncalculates aggregates (Sum and Count)\nTable 1 : timestamp (one per day) for 2 years (730 records)\nTable 2 : Window based validity records. Window here means start and end\ntimestamp indicating a period of validity for a record.\nHash some 10 odd columns including start_time and end_time. (1 million\nrecords)\n\nMachine has 244 GB RAM. Queries are taking more than a min and in some case\n2-3 mins.\n\nBelow is the plan I am getting. The Nested loop blows up the number of\nrecords and we expect that. I have tried playing around work_mem and cache\nconfigs which hasn't helped.\n\nQuery\nselect sum(a), count(id), a.ts, st from table1 a, table2 b where a.ts >\nb.start_date and a.ts < b.end_date and a.ts > '2015-01-01 20:50:44.000000\n+00:00:00' and a.ts < '2015-07-01 19:50:44.000000 +00:00:00' group by a.ts,\nst order by a.ts\n\nPlan (EXPLAIN ANALYZE)\n\"Sort (cost=10005447874.54..10005447879.07 rows=1810 width=44) (actual\ntime=178883.936..178884.159 rows=1355 loops=1)\"\n\" Output: (sum(b.a)), (count(b.id)), a.ts, b.st\"\n\" Sort Key: a.ts\"\n\" Sort Method: quicksort Memory: 154kB\"\n\" Buffers: shared hit=47068722 read=102781\"\n\" I/O Timings: read=579.946\"\n\" -> HashAggregate (cost=10005447758.51..10005447776.61 rows=1810\nwidth=44) (actual time=178882.874..178883.320 rows=1355 loops=1)\"\n\" Output: sum(b.a), count(b.id), a.ts, b.st\"\n\" Group Key: a.ts, b.st\"\n\" Buffers: shared hit=47068719 read=102781\"\n\" I/O Timings: read=579.946\"\n\" -> Nested Loop (cost=10000000000.43..10004821800.38\nrows=62595813 width=44) (actual time=0.167..139484.854 rows=73112419\nloops=1)\"\n\" Output: a.ts, b.st, b.a, b.id\"\n\" Buffers: shared hit=47068719 read=102781\"\n\" I/O Timings: read=579.946\"\n\" -> Seq Scan on public.table1 a (cost=0.00..14.81 rows=181\nwidth=8) (actual time=0.058..0.563 rows=181 
loops=1)\"\n\"               Output: a.ts\"\n\"               Filter: ((a.ts > '2015-01-01 20:50:44+00'::timestamp\nwith time zone) AND (a.ts < '2015-07-01 19:50:44+00'::timestamp with time\nzone))\"\n\"               Rows Removed by Filter: 540\"\n\"               Buffers: shared read=4\"\n\"               I/O Timings: read=0.061\"\n\"         ->  Index Scan using end_date_idx on public.table2 b\n (cost=0.43..23181.37 rows=345833 width=52) (actual time=0.063..622.274\nrows=403936 loops=181)\"\n\"               Output: b.serial_no, b.name, b.st, b.end_date, b.a,\nb.start_date\"\n\"               Index Cond: (a.ts < b.end_date)\"\n\"               Filter: (a.ts > b.start_date)\"\n\"               Rows Removed by Filter: 392642\"\n\"               Buffers: shared hit=47068719 read=102777\"\n\"               I/O Timings: read=579.885\"\n\"Planning time: 0.198 ms\"\n\"Execution time: 178884.467 ms\"\n\nAny pointers on how to go about optimizing this?\n\n--yr",
"msg_date": "Thu, 30 Jul 2015 00:51:44 -0700",
"msg_from": "Ram N <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue with NestedLoop query"
},
{
    "msg_contents": "On Thu, Jul 30, 2015 at 12:51 AM, Ram N <[email protected]> wrote:\n> \"         ->  Index Scan using end_date_idx on public.table2 b\n> (cost=0.43..23181.37 rows=345833 width=52) (actual time=0.063..622.274\n> rows=403936 loops=181)\"\n> \"               Output: b.serial_no, b.name, b.st, b.end_date, b.a,\n> b.start_date\"\n> \"               Index Cond: (a.ts < b.end_date)\"\n> \"               Filter: (a.ts > b.start_date)\"\n> \"               Rows Removed by Filter: 392642\"\n\nIn your case, do you have indexes built for both b.end_date and\nb.start_date? If so, can you try\n\nSET enable_indexscan = off\n\nto see if a bitmap heap scan helps?\n\nRegards,\nQingqing\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
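A sketch of that experiment, session-local only. The planner GUC that disables plain index scans is enable_indexscan (bitmap scans stay enabled via enable_bitmapscan), and RESET restores the default afterwards.

```sql
SET enable_indexscan = off;   -- per-session; nudges the planner toward
                              -- a bitmap index + bitmap heap scan on table2
EXPLAIN ANALYZE
SELECT sum(a), count(id), a.ts, st
FROM table1 a, table2 b
WHERE a.ts > b.start_date AND a.ts < b.end_date
GROUP BY a.ts, st ORDER BY a.ts;

RESET enable_indexscan;
```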
"msg_date": "Thu, 30 Jul 2015 13:24:51 -0700",
"msg_from": "Qingqing Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with NestedLoop query"
},
{
    "msg_contents": "Thanks Qingqing for responding. That didn't help. It in fact increased the\nscan time. Looks like a lot of time is being spent on the NestedLoop Join\nrather than index lookups, though I am not sure how to optimize the join. I am\nassuming it's an in-memory join, so I am not sure why it should take such a lot\nof time. Increasing work_mem has helped in reducing the processing time but\nit's still > 1 min.\n\n--yr\n\nOn Thu, Jul 30, 2015 at 1:24 PM, Qingqing Zhou <[email protected]>\nwrote:\n\n> On Thu, Jul 30, 2015 at 12:51 AM, Ram N <[email protected]> wrote:\n> > \"         ->  Index Scan using end_date_idx on public.table2 b\n> > (cost=0.43..23181.37 rows=345833 width=52) (actual time=0.063..622.274\n> > rows=403936 loops=181)\"\n> > \"               Output: b.serial_no, b.name, b.st, b.end_date, b.a,\n> > b.start_date\"\n> > \"               Index Cond: (a.ts < b.end_date)\"\n> > \"               Filter: (a.ts > b.start_date)\"\n> > \"               Rows Removed by Filter: 392642\"\n>\n> In your case, do you have indexes built for both b.end_date and\n> b.start_date? If so, can you try\n>\n> SET enable_indexscan = off\n>\n> to see if a bitmap heap scan helps?\n>\n> Regards,\n> Qingqing\n>",
"msg_date": "Fri, 31 Jul 2015 10:55:44 -0700",
"msg_from": "Ram N <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue with NestedLoop query"
},
{
    "msg_contents": "On Thu, Jul 30, 2015 at 4:51 AM, Ram N <[email protected]> wrote:\n\n> select sum(a), count(id), a.ts, st from table1 a, table2 b where a.ts >\n> b.start_date and a.ts < b.end_date and a.ts > '2015-01-01 20:50:44.000000\n> +00:00:00' and a.ts < '2015-07-01 19:50:44.000000 +00:00:00' group by a.ts,\n> st order by a.ts\n\n\nYou could try to use a range type:\n\n    CREATE INDEX ON table2 USING gin (tstzrange(start_date, end_date,\n'()'));\n\nThen:\n\n    select sum(a), count(id), a.ts, st\n    from table1 a, table2 b\n    where tstzrange(b.start_date, b.end_date, '()') @> a.ts\n    and a.ts < '2015-07-01 19:50:44.000000 +00:00:00'\n    group by a.ts, st\n    order by a.ts\n\nRegards,\n-- \nMatheus de Oliveira",
"msg_date": "Fri, 31 Jul 2015 15:06:23 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with NestedLoop query"
},
{
"msg_contents": "On Fri, Jul 31, 2015 at 3:06 PM, Matheus de Oliveira <\[email protected]> wrote:\n\n> CREATE INDEX ON table2 USING gin (tstzrange(start_date, end_date,\n> '()'));\n\n\nThe index should be USING GIST, not GIN. Sorry.\n\n\n-- \nMatheus de Oliveira\n\nOn Fri, Jul 31, 2015 at 3:06 PM, Matheus de Oliveira <[email protected]> wrote: CREATE INDEX ON table2 USING gin (tstzrange(start_date, end_date, '()'));The index should be USING GIST, not GIN. Sorry.-- Matheus de Oliveira",
"msg_date": "Fri, 31 Jul 2015 15:08:08 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with NestedLoop query"
},
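The `'()'` bounds in the `tstzrange` rewrite above make both endpoints exclusive, so `tstzrange(b.start_date, b.end_date, '()') @> a.ts` matches exactly the strict comparisons `a.ts > b.start_date AND a.ts < b.end_date` from the original query. A minimal Python sketch (with made-up timestamps, only modeling the SQL semantics) of that equivalence:

```python
from datetime import datetime, timedelta

def in_open_range(ts, start, end):
    # tstzrange(start, end, '()') @> ts: the '()' bounds make both
    # endpoints exclusive, i.e. start < ts < end (strict comparisons).
    return start < ts < end

base = datetime(2015, 1, 1)
rows_b = [(base, base + timedelta(days=10)),                     # contains ts
          (base + timedelta(days=1), base + timedelta(days=5)),  # ends before ts
          (base + timedelta(days=6), base + timedelta(days=9))]  # starts after ts

ts = base + timedelta(days=5, hours=12)

via_range = [r for r in rows_b if in_open_range(ts, r[0], r[1])]
via_compare = [r for r in rows_b if ts > r[0] and ts < r[1]]
assert via_range == via_compare  # both formulations pick the same rows
```

The benefit of the range form is not the predicate itself but that a GiST index on the range expression can answer the containment test directly.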
{
"msg_contents": "On Fri, Jul 31, 2015 at 10:55 AM, Ram N <[email protected]> wrote:\n>\n> Thanks Qingqing for responding. That didn't help. It in fact increased the\n> scan time. Looks like a lot of time is being spent on the NestedLoop Join\n> than index lookups though I am not sure how to optimize the join.\n>\n\nGood news is that optimizer is right this time :-). The NLJ here does\nalmost nothing but schedule each outer row to probing the inner index.\nSo the index seek is the major cost.\n\nHave you tried build a two column index on (b.start_date, b.end_date)?\n\nRegards,\nQingqing\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 31 Jul 2015 11:37:41 -0700",
"msg_from": "Qingqing Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with NestedLoop query"
},
{
"msg_contents": "Thanks much for responding guys. I have tried both, building multi column\nindexes and GIST, with no improvement. I have reduced the window from 180\ndays to 30 days and below are the numbers\n\nComposite index - takes 30 secs\n\nWith Btree indexing - takes 9 secs\n\nWith GIST - takes >30 secs with kind of materialize plan in explain\n\nAny other ideas I can do for window based joins.\n\n--yr\n\n\nOn Fri, Jul 31, 2015 at 11:37 AM, Qingqing Zhou <[email protected]>\nwrote:\n\n> On Fri, Jul 31, 2015 at 10:55 AM, Ram N <[email protected]> wrote:\n> >\n> > Thanks Qingqing for responding. That didn't help. It in fact increased\n> the\n> > scan time. Looks like a lot of time is being spent on the NestedLoop Join\n> > than index lookups though I am not sure how to optimize the join.\n> >\n>\n> Good news is that optimizer is right this time :-). The NLJ here does\n> almost nothing but schedule each outer row to probing the inner index.\n> So the index seek is the major cost.\n>\n> Have you tried build a two column index on (b.start_date, b.end_date)?\n>\n> Regards,\n> Qingqing\n>\n\nThanks much for responding guys. I have tried both, building multi column indexes and GIST, with no improvement. I have reduced the window from 180 days to 30 days and below are the numbersComposite index - takes 30 secsWith Btree indexing - takes 9 secsWith GIST - takes >30 secs with kind of materialize plan in explainAny other ideas I can do for window based joins. --yrOn Fri, Jul 31, 2015 at 11:37 AM, Qingqing Zhou <[email protected]> wrote:On Fri, Jul 31, 2015 at 10:55 AM, Ram N <[email protected]> wrote:\n>\n> Thanks Qingqing for responding. That didn't help. It in fact increased the\n> scan time. Looks like a lot of time is being spent on the NestedLoop Join\n> than index lookups though I am not sure how to optimize the join.\n>\n\nGood news is that optimizer is right this time :-). 
The NLJ here does\nalmost nothing but schedule each outer row to probing the inner index.\nSo the index seek is the major cost.\n\nHave you tried build a two column index on (b.start_date, b.end_date)?\n\nRegards,\nQingqing",
"msg_date": "Tue, 4 Aug 2015 20:40:18 -0700",
"msg_from": "Ram N <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue with NestedLoop query"
},
{
"msg_contents": "On Tue, Aug 4, 2015 at 8:40 PM, Ram N <[email protected]> wrote:\n>\n> Thanks much for responding guys. I have tried both, building multi column\n> indexes and GIST, with no improvement. I have reduced the window from 180\n> days to 30 days and below are the numbers\n>\n> Composite index - takes 30 secs\n>\n> With Btree indexing - takes 9 secs\n>\n> With GIST - takes >30 secs with kind of materialize plan in explain\n>\n> Any other ideas I can do for window based joins.\n>\n\n From this query:\n\nselect sum(a), count(id), a.ts, st from table1 a, table2 b where a.ts\n> b.start_date and a.ts < b.end_date and a.ts > '2015-01-01\n20:50:44.000000 +00:00:00' and a.ts < '2015-07-01 19:50:44.000000\n+00:00:00' group by a.ts, st order by a.ts\n\nWe can actually derive that b.start_date > '2015-07-01 19:50:44.000000\n+00:00:00' and b.end_date < '2015-01-01 20:50:44.000000 +00:00:00'. If\nwe add these two predicates to the original query, does it help?\n\nThanks,\nQingqing\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Aug 2015 10:12:46 -0700",
"msg_from": "Qingqing Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with NestedLoop query"
}
] |
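The derived-predicate idea in the thread's last message can be sanity-checked on toy data. Since a matching pair needs b.start_date < a.ts < b.end_date with a.ts also inside the outer window, any matching table2 row must have its start below the window's upper bound and its end above the window's lower bound, so pre-filtering table2 with those two predicates cannot change the join result. A Python sketch with made-up integer "timestamps":

```python
LO, HI = 20, 70            # stand-ins for the two literal a.ts bounds
table1 = list(range(100))  # a.ts values
table2 = [(10, 30), (60, 90), (80, 95), (0, 15)]  # (start_date, end_date)

def join(rows_b):
    # Original join: b.start_date < a.ts < b.end_date, a.ts in the window.
    return sorted((ts, b) for ts in table1 for b in rows_b
                  if b[0] < ts < b[1] and LO < ts < HI)

# Derived predicates: any matching b row must satisfy
# b.start_date < HI and b.end_date > LO.
filtered = [b for b in table2 if b[0] < HI and b[1] > LO]

assert (80, 95) not in filtered and (0, 15) not in filtered
assert join(table2) == join(filtered)  # pre-filtering preserves the result
```

In SQL the same bounds, added as explicit WHERE predicates on table2, give the planner index-friendly conditions to shrink the inner side of the nested loop.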
[
{
"msg_contents": "Hi,\n\nFirst, sorry to compare Post with other database system, but I know nothing\nabout Oracle...\n\nThis customer have an application made with a framework thats generates the\nSQL statements (so, We can't make any query optimizations) .\n\nWe did the following tests:\n\n1) Postgresql 9.3 and Oracle 10 in a desktop machine(8 GB RAM, 1 SATA\ndisk,Core i5)\n2) Postgresql 9.3 in a server + FC storage (128 GB RAM, Xeon 32 cores, SAS\ndisks)\n\nIn the first machine, postgresql takes from 20,000 to 40,000 ms to complete\nthe query and from 1,200 to 2,000 ms in the others runs. Oracle in this\nmachine takes 2,000ms in the first run and *70ms* using cache.\n\nIn the second machine, postgresql takes about 2,000ms in the first run and\nabout 800ms in the others. 11x slow than Oracle times, in a much more\npowefull machine.\n\nBellow is the 2 explains in the second server:\n\ndatabase=# explain (analyze,buffers) SELECT\nT1.fr13baixa,T1.fr13dtlanc,T2.fr02empfo,COALESCE( T4.fr13TotQtd, 0) AS\nfr13TotQtd,T1.fr13codpr,T1.fr13categ,COALESCE( T5.fr13TotBx, 0) AS\nfr13TotBx,COALESCE( T4.fr13VrTot, 0) AS fr13VrTot,T2.fr09cod, T3.fr09desc,\nT1.fr02codigo,T1.fr01codemp FROM((((FR13T T1 LEFT JOIN FR02T T2 ON\nT2.fr01codemp = T1.fr01codemp AND T2.fr02codigo = T1.fr02codigo)LEFT JOIN\nFR09T T3 ON T3.fr01codemp = T1.fr01codemp AND T3.fr09cod = T2.fr09cod) LEFT\nJOIN (SELECT SUM(fr13quant) AS fr13TotQtd, fr01codemp, fr02codigo,\nfr13dtlanc, SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS\nNUMERIC(18,10))) AS fr13VrTot\nFROM FR13T1 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T4 ON\nT4.fr01codemp = T1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND\nT4.fr13dtlanc = T1.fr13dtlanc)\nLEFT JOIN (SELECT SUM(fr13VrBx) AS fr13TotBx, fr01codemp, fr02codigo,\nfr13dtlanc FROM FR13T3 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T5 ON\nT5.fr01codemp = T1.fr01codemp AND T5.fr02codigo = T1.fr02codigo AND\nT5.fr13dtlanc = T1.fr13dtlanc)\nWHERE (T1.fr01codemp = '1' and 
T1.fr13codpr = '60732' and T1.fr13dtlanc >=\n'01/05/2014') AND (T1.fr02codigo >= '0' and T1.fr02codigo <= '9999999999')\nAND (T1.fr13dtlanc <= '31/05/2014') ORDER BY T1.fr01codemp, T1.fr13codpr,\nT1.fr13dtlanc;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=30535.97..33804.07 rows=1 width=130) (actual\ntime=1371.548..1728.058 rows=2 loops=1)\n Join Filter: ((fr13t3.fr01codemp = t1.fr01codemp) AND (fr13t3.fr02codigo\n= t1.fr02codigo) AND (fr13t3.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 368\n Buffers: shared hit=95 read=21267\n -> Nested Loop Left Join (cost=30529.83..33796.84 rows=1 width=98)\n(actual time=1345.565..1701.990 rows=2 loops=1)\n Join Filter: (t3.fr01codemp = t1.fr01codemp)\n Buffers: shared hit=95 read=21265\n -> Nested Loop Left Join (cost=30529.70..33796.67 rows=1\nwidth=87) (actual time=1340.393..1696.793 rows=2 loops=1)\n Join Filter: ((fr13t1.fr01codemp = t1.fr01codemp) AND\n(fr13t1.fr02codigo = t1.fr02codigo) AND (fr13t1.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 500202\n Buffers: shared hit=93 read=21263\n -> Nested Loop Left Join (cost=0.70..2098.42 rows=1\nwidth=23) (actual time=36.424..66.841 rows=2 loops=1)\n Buffers: shared hit=93 read=88\n -> Index Scan using ufr13t2 on fr13t t1\n (cost=0.42..2094.11 rows=1 width=19) (actual time=27.518..57.910 rows=2\nloops=1)\n Index Cond: ((fr01codemp = 1::smallint) AND\n(fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date))\n\n Filter: ((fr02codigo >= 0::numeric) AND\n(fr02codigo <= 9999999999::numeric) AND (fr13codpr = 60732))\n\n Rows Removed by Filter: 5621\n\n\n Buffers: shared hit=90 read=85\n\n\n -> Index Scan using fr02t_pkey on fr02t t2\n (cost=0.28..4.30 rows=1 width=12) (actual time=4.455..4.458 rows=1 loops=2)\n Index Cond: ((fr01codemp = t1.fr01codemp) AND\n(fr01codemp = 1::smallint) AND 
(fr02codigo = t1.fr02codigo))\n Buffers: shared hit=3 read=3\n -> HashAggregate (cost=30529.00..30840.80 rows=31180\nwidth=21) (actual time=630.594..753.406 rows=250102 loops=2)\n Buffers: shared read=21175\n -> Seq Scan on fr13t1 (cost=0.00..25072.50\nrows=311800 width=21) (actual time=6.354..720.037 rows=311800 loops=1)\n Filter: (fr01codemp = 1::smallint)\n Buffers: shared read=21175\n -> Index Scan using fr09t_pkey on fr09t t3 (cost=0.14..0.16\nrows=1 width=15) (actual time=2.584..2.586 rows=1 loops=2)\n Index Cond: ((fr01codemp = 1::smallint) AND (fr09cod =\nt2.fr09cod))\n Buffers: shared hit=2 read=2\n -> HashAggregate (cost=6.14..6.43 rows=29 width=17) (actual\ntime=12.906..12.972 rows=184 loops=2)\n Buffers: shared read=2\n -> Seq Scan on fr13t3 (cost=0.00..4.30 rows=184 width=17)\n(actual time=25.570..25.624 rows=184 loops=1)\n Filter: (fr01codemp = 1::smallint)\n Buffers: shared read=2\n Total runtime: 1733.320 ms\n(35 rows)\n\ndatabase=# explain (analyze,buffers) SELECT\nT1.fr13baixa,T1.fr13dtlanc,T2.fr02empfo,COALESCE( T4.fr13TotQtd, 0) AS\nfr13TotQtd,T1.fr13codpr,T1.fr13categ,COALESCE( T5.fr13TotBx, 0) AS\nfr13TotBx,COALESCE( T4.fr13VrTot, 0) AS fr13VrTot,T2.fr09cod, T3.fr09desc,\nT1.fr02codigo,T1.fr01codemp FROM((((FR13T T1 LEFT JOIN FR02T T2 ON\nT2.fr01codemp = T1.fr01codemp AND T2.fr02codigo = T1.fr02codigo)LEFT JOIN\nFR09T T3 ON T3.fr01codemp = T1.fr01codemp AND T3.fr09cod = T2.fr09cod) LEFT\nJOIN (SELECT SUM(fr13quant) AS fr13TotQtd, fr01codemp, fr02codigo,\nfr13dtlanc, SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS\nNUMERIC(18,10))) AS fr13VrTot\nFROM FR13T1 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T4 ON\nT4.fr01codemp = T1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND\nT4.fr13dtlanc = T1.fr13dtlanc)\nLEFT JOIN (SELECT SUM(fr13VrBx) AS fr13TotBx, fr01codemp, fr02codigo,\nfr13dtlanc FROM FR13T3 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T5 ON\nT5.fr01codemp = T1.fr01codemp AND T5.fr02codigo = T1.fr02codigo 
AND\nT5.fr13dtlanc = T1.fr13dtlanc)\nWHERE (T1.fr01codemp = '1' and T1.fr13codpr = '60732' and T1.fr13dtlanc >=\n'01/05/2014') AND (T1.fr02codigo >= '0' and T1.fr02codigo <= '9999999999')\nAND (T1.fr13dtlanc <= '31/05/2014') ORDER BY T1.fr01codemp, T1.fr13codpr,\nT1.fr13dtlanc;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=30535.97..33804.07 rows=1 width=130) (actual\ntime=492.669..763.313 rows=2 loops=1)\n Join Filter: ((fr13t3.fr01codemp = t1.fr01codemp) AND (fr13t3.fr02codigo\n= t1.fr02codigo) AND (fr13t3.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 368\n Buffers: shared hit=21362\n -> Nested Loop Left Join (cost=30529.83..33796.84 rows=1 width=98)\n(actual time=492.462..763.015 rows=2 loops=1)\n Join Filter: (t3.fr01codemp = t1.fr01codemp)\n Buffers: shared hit=21360\n -> Nested Loop Left Join (cost=30529.70..33796.67 rows=1\nwidth=87) (actual time=492.423..762.939 rows=2 loops=1)\n Join Filter: ((fr13t1.fr01codemp = t1.fr01codemp) AND\n(fr13t1.fr02codigo = t1.fr02codigo) AND (fr13t1.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 500202\n Buffers: shared hit=21356\n -> Nested Loop Left Join (cost=0.70..2098.42 rows=1\nwidth=23) (actual time=0.855..2.268 rows=2 loops=1)\n Buffers: shared hit=181\n -> Index Scan using ufr13t2 on fr13t t1\n (cost=0.42..2094.11 rows=1 width=19) (actual time=0.844..2.229 rows=2\nloops=1)\n Index Cond: ((fr01codemp = 1::smallint) AND\n(fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date))\n Filter: ((fr02codigo >= 0::numeric) AND\n(fr02codigo <= 9999999999::numeric) AND (fr13codpr = 60732))\n Rows Removed by Filter: 5621\n Buffers: shared hit=175\n -> Index Scan using fr02t_pkey on fr02t t2\n (cost=0.28..4.30 rows=1 width=12) (actual time=0.009..0.012 rows=1 loops=2)\n Index Cond: ((fr01codemp = t1.fr01codemp) AND\n(fr01codemp = 1::smallint) 
AND (fr02codigo = t1.fr02codigo))\n               Buffers: shared hit=6\n         ->  HashAggregate  (cost=30529.00..30840.80 rows=31180\nwidth=21) (actual time=229.435..325.660 rows=250102 loops=2)\n               Buffers: shared hit=21175\n               ->  Seq Scan on fr13t1  (cost=0.00..25072.50\nrows=311800 width=21) (actual time=0.003..74.088 rows=311800 loops=1)\n                     Filter: (fr01codemp = 1::smallint)\n                     Buffers: shared hit=21175\n   ->  Index Scan using fr09t_pkey on fr09t t3  (cost=0.14..0.16\nrows=1 width=15) (actual time=0.023..0.024 rows=1 loops=2)\n         Index Cond: ((fr01codemp = 1::smallint) AND (fr09cod =\nt2.fr09cod))\n         Buffers: shared hit=4\n   ->  HashAggregate  (cost=6.14..6.43 rows=29 width=17) (actual\ntime=0.065..0.098 rows=184 loops=2)\n         Buffers: shared hit=2\n         ->  Seq Scan on fr13t3  (cost=0.00..4.30 rows=184 width=17)\n(actual time=0.006..0.029 rows=184 loops=1)\n               Filter: (fr01codemp = 1::smallint)\n               Buffers: shared hit=2\n Total runtime: 763.536 ms\n(35 rows)\n\n\nThanks for any help.\n\nBest regards,\n\nAlexandre",
"msg_date": "Tue, 4 Aug 2015 22:41:13 -0300",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow HashAggregate/cache access"
},
{
"msg_contents": "Alexandre de Arruda Paes <[email protected]> wrote:\n\n> We did the following tests:\n>\n> 1) Postgresql 9.3 and Oracle 10 in a desktop machine(8 GB RAM, 1 SATA disk,Core i5)\n> 2) Postgresql 9.3 in a server + FC storage (128 GB RAM, Xeon 32 cores, SAS disks)\n\nThat's only part of the information we would need to be able to\ngive specific advice. Please read this page:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nOne possibility is that you are running with the default\nconfiguration, rather than having tuned for the hardware. You are\nvery likely to need to adjust shared_buffers, effective_cache_size,\nwork_mem, maintenance_work_mem, random_page_cost, cpu_tuple_cost,\nand (at least for the second machine) effective_io_concurrency. If\nthe queries have a lot of joins you may need to increase\nfrom_collapse_limit and/or join_collapse_limit. You also may need\nto adjust [auto]vacuum and/or background writer settings. Various\nOS settings may matter, too.\n\nTo get a handle on all this, it might be worth looking for Greg\nSmith's book on PostgreSQL high performance.\n\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Aug 2015 17:24:48 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
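The GUCs Kevin lists can be collected into a postgresql.conf sketch for the second machine (128 GB RAM, SAS/FC storage). Every value below is an illustrative assumption to be benchmarked against the actual workload, not a recommendation from this thread:

```ini
# Illustrative starting points only -- tune and benchmark per workload.
shared_buffers = 8GB              # a common starting fraction of 128 GB RAM
effective_cache_size = 96GB       # rough estimate of OS cache + shared_buffers
work_mem = 64MB                   # applies per sort/hash node, so keep modest
maintenance_work_mem = 2GB
random_page_cost = 2.0            # below the 4.0 default for fast SAS/FC storage
effective_io_concurrency = 64     # SAS array behind Fibre Channel
from_collapse_limit = 12          # only if queries join many relations
join_collapse_limit = 12
```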
{
"msg_contents": "On Wed, Aug 5, 2015 at 11:41 AM, Alexandre de Arruda Paes <\[email protected]> wrote:\n\n> Hi,\n>\n> First, sorry to compare Post with other database system, but I know\n> nothing about Oracle...\n>\n> This customer have an application made with a framework thats generates\n> the SQL statements (so, We can't make any query optimizations) .\n>\n> We did the following tests:\n>\n> 1) Postgresql 9.3 and Oracle 10 in a desktop machine(8 GB RAM, 1 SATA\n> disk,Core i5)\n> 2) Postgresql 9.3 in a server + FC storage (128 GB RAM, Xeon 32 cores, SAS\n> disks)\n>\n>\n> database=# explain (analyze,buffers)\n> \n> SELECT T1.fr13baixa,T1.fr13dtlanc,T2.fr02empfo,COALESCE( T4.fr13TotQtd, 0)\n> AS fr13TotQtd,T1.fr13codpr,T1.fr13categ,COALESCE( T5.fr13TotBx, 0) AS\n> fr13TotBx,COALESCE( T4.fr13VrTot, 0) AS fr13VrTot,T2.fr09cod, T3.fr09desc,\n> T1.fr02codigo,T1.fr01codemp FROM((((FR13T T1 LEFT JOIN FR02T T2 ON\n> T2.fr01codemp = T1.fr01codemp AND T2.fr02codigo = T1.fr02codigo)LEFT JOIN\n> FR09T T3 ON T3.fr01codemp = T1.fr01codemp AND T3.fr09cod = T2.fr09cod) LEFT\n> JOIN (SELECT SUM(fr13quant) AS fr13TotQtd, fr01codemp, fr02codigo,\n> fr13dtlanc, SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS\n> NUMERIC(18,10))) AS fr13VrTot\n> FROM FR13T1 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T4 ON\n> T4.fr01codemp = T1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND\n> T4.fr13dtlanc = T1.fr13dtlanc)\n> \n> LEFT JOIN (SELECT SUM(fr13VrBx) AS fr13TotBx, fr01codemp, fr02codigo,\n> fr13dtlanc FROM FR13T3 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T5 ON\n> T5.fr01codemp = T1.fr01codemp AND T5.fr02codigo = T1.fr02codigo AND\n> T5.fr13dtlanc = T1.fr13dtlanc)\n> WHERE (T1.fr01codemp = '1' and T1.fr13codpr = '60732' and T1.fr13dtlanc >=\n> '01/05/2014') AND (T1.fr02codigo >= '0' and T1.fr02codigo <= '9999999999')\n> AND (T1.fr13dtlanc <= '31/05/2014') ORDER BY T1.fr01codemp, T1.fr13codpr,\n> T1.fr13dtlanc;\n>\n>\n\nI think I know where issue is.\nThe PostgreSQL 
planner is unable to push join conditions down into a subquery with\naggregate functions (a well-known limitation).\n\nFor example, to calculate this part:\nLEFT JOIN (SELECT SUM(fr13VrBx) AS fr13TotBx, fr01codemp, fr02codigo,\nfr13dtlanc FROM FR13T3 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T5 ON\nT5.fr01codemp = T1.fr01codemp AND T5.fr02codigo = T1.fr02codigo AND\nT5.fr13dtlanc = T1.fr13dtlanc)\nPostgreSQL is forced to compute the full aggregate subquery instead of\npushing the JOIN conditions into it.\n\nI suggest rewriting the query into the following form:\nSELECT T1.fr13baixa,T1.fr13dtlanc,T2.fr02empfo,\nCOALESCE((SELECT SUM(fr13quant) FROM FR13T1 AS T4 WHERE T4.fr01codemp =\nT1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND T4.fr13dtlanc =\nT1.fr13dtlanc), 0) AS fr13TotQtd,\nT1.fr13codpr,T1.fr13categ,\nCOALESCE((SELECT SUM(fr13VrBx) FROM FR13T3 AS T5 WHERE T5.fr01codemp =\nT1.fr01codemp AND T5.fr02codigo = T1.fr02codigo AND T5.fr13dtlanc =\nT1.fr13dtlanc), 0) AS fr13TotBx,\nCOALESCE((SELECT SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS\nNUMERIC(18,10))) FROM FR13T1 AS T4 WHERE T4.fr01codemp =\nT1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND T4.fr13dtlanc =\nT1.fr13dtlanc), 0) AS fr13VrTot,\nT2.fr09cod, T3.fr09desc, T1.fr02codigo,T1.fr01codemp\nFROM\nFR13T T1 LEFT JOIN FR02T T2 ON T2.fr01codemp = T1.fr01codemp AND\nT2.fr02codigo = T1.fr02codigo\nLEFT JOIN FR09T T3 ON T3.fr01codemp = T1.fr01codemp AND T3.fr09cod =\nT2.fr09cod\nWHERE\n(T1.fr01codemp = '1' and T1.fr13codpr = '60732' and T1.fr13dtlanc >=\n'01/05/2014') AND (T1.fr02codigo >= '0' and T1.fr02codigo <= '9999999999')\nAND (T1.fr13dtlanc <= '31/05/2014') ORDER BY T1.fr01codemp, T1.fr13codpr,\nT1.fr13dtlanc;\n\nAnd re-test performance again.\n\n\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try 
technology.\nPeople will then wish they'd listened at the first stage.\"",
"msg_date": "Thu, 6 Aug 2015 04:25:25 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
    "msg_contents": "Hi,\n\nKevin:\n\nSecond machine config parameters:\n\nshared_buffers = 8GB\nwork_mem = 1 GB (was 512MB)\nmaintenance_work_mem = 4 GB\n\n#seq_page_cost = 1.0\n#cpu_tuple_cost = 0.01\n#cpu_index_tuple_cost = 0.005\n#cpu_operator_cost = 0.0025\n\nrandom_page_cost = 2.0\neffective_cache_size = 110GB\n\nI tried to change from_collapse_limit, join_collapse_limit and io_con, w/o\nsuccess.\n\nI created a database with these tables only, vacuum analyzed them, and tested\nwith only my connection to postgresql.\nNow we have other queries (all with aggregates) whose times are 15x - 20x\nslower than Oracle and SQL Server.\nAll tables have indexes (btree) with fields in the where/order/group\nparameters.\n\nMaxim:\n\nThe developer is changing from a Desktop application (ODBC with Use\nDeclare/Fetch, 'single' queries with local summing and aggregation) to a\nclient/server web application (.NET, most queries with aggregates). Unfortunately\nwe can't change these queries, but I will try your solution to see what\nhappens.\n\nTake a look at another big query generated by the development tool.\nOracle/SQL Server runs the same query (with the same data but on a slower\nmachine) in about 2 seconds:\n\n\nhttp://explain.depesz.com/s/wxq\n\n\nBest regards,\n\nAlexandre\n\n\n2015-08-05 14:24 GMT-03:00 Kevin Grittner <[email protected]>:\n\n> Alexandre de Arruda Paes <[email protected]> wrote:\n>\n> > We did the following tests:\n> >\n> > 1) Postgresql 9.3 and Oracle 10 in a desktop machine(8 GB RAM, 1 SATA\n> disk,Core i5)\n> > 2) Postgresql 9.3 in a server + FC storage (128 GB RAM, Xeon 32 cores,\n> SAS disks)\n>\n> That's only part of the information we would need to be able to\n> give specific advice.  Please read this page:\n>\n> https://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> One possibility is that you are running with the default\n> configuration, rather than having tuned for the hardware. You are\n> very likely to need to adjust shared_buffers, effective_cache_size,\n> work_mem, maintenance_work_mem, random_page_cost, cpu_tuple_cost,\n> and (at least for the second machine) effective_io_concurrency.  If\n> the queries have a lot of joins you may need to increase\n> from_collapse_limit and/or join_collapse_limit.  You also may need\n> to adjust [auto]vacuum and/or background writer settings.  Various\n> OS settings may matter, too.\n>\n> To get a handle on all this, it might be worth looking for Greg\n> Smith's book on PostgreSQL high performance.\n>\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Wed, 5 Aug 2015 16:29:36 -0300",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
    "msg_contents": "On Wednesday 5 August 2015 at 20:25:25, Maxim Boguk <[email protected]\n <mailto:[email protected]>> wrote: [snip] I think I know where issue is.\nThe PostgreSQL planner unable pass join conditions into subquery with \naggregate functions (it's well known limitation).\n[snip]\n\nI'm curious; will 9.5 help here as it has \"WHERE clause pushdown in subqueries \nwith window functions\"?\n\nhttp://michael.otacoo.com/postgresql-2/postgres-9-5-feature-highlight-where-pushdown-with-window-function/\n\nAre you able to try 9.5 and post the results?\n\nThanks.\n\n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>",
"msg_date": "Wed, 5 Aug 2015 21:55:58 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
"msg_contents": "Hi Andreas,\n\nSame plan in 9.5, but the execution time was greater than 9.3 (maybe need\nsome tunning):\n\npostgres@hw-prox01-fac:~/PG95$ /usr/PG95/bin/psql copro95 -p 5444\npsql (9.5alpha1)\nType \"help\" for help.\n\ncopro95=# explain (analyze,buffers) SELECT\nT1.fr13baixa,T1.fr13dtlanc,T2.fr02empfo,COALESCE( T4.fr13TotQtd, 0) AS\nfr13TotQtd,T1.fr13codpr,T1.fr13categ,COALESCE( T5.fr13TotBx, 0) AS\nfr13TotBx,COALESCE( T4.fr13VrTot, 0) AS fr13VrTot,T2.fr09cod, T3.fr09desc,\nT1.fr02codigo,T1.fr01codemp FROM((((FR13T T1 LEFT JOIN FR02T T2 ON\nT2.fr01codemp = T1.fr01codemp AND T2.fr02codigo = T1.fr02codigo)LEFT JOIN\nFR09T T3 ON T3.fr01codemp = T1.fr01codemp AND T3.fr09cod = T2.fr09cod) LEFT\nJOIN (SELECT SUM(fr13quant) AS fr13TotQtd, fr01codemp, fr02codigo,\nfr13dtlanc, SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS\nNUMERIC(18,10))) AS fr13VrTot\nFROM FR13T1 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T4 ON\nT4.fr01codemp = T1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND\nT4.fr13dtlanc = T1.fr13dtlanc)\nLEFT JOIN (SELECT SUM(fr13VrBx) AS fr13TotBx, fr01codemp, fr02codigo,\nfr13dtlanc FROM FR13T3 GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T5 ON\nT5.fr01codemp = T1.fr01codemp AND T5.fr02codigo = T1.fr02codigo AND\nT5.fr13dtlanc = T1.fr13dtlanc)\nWHERE (T1.fr01codemp = '1' and T1.fr13codpr = '60732' and T1.fr13dtlanc >=\n'01/05/2014') AND (T1.fr02codigo >= '0' and T1.fr02codigo <= '9999999999')\nAND (T1.fr13dtlanc <= '31/05/2014') ORDER BY T1.fr01codemp, T1.fr13codpr,\nT1.fr13dtlanc;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=30535.97..33949.17 rows=1 width=130) (actual\ntime=623.008..1029.130 rows=2 loops=1)\n Join Filter: ((fr13t3.fr01codemp = t1.fr01codemp) AND (fr13t3.fr02codigo\n= t1.fr02codigo) AND (fr13t3.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 368\n 
Buffers: shared hit=21362\n -> Nested Loop Left Join (cost=30529.83..33941.87 rows=1 width=98)\n(actual time=622.761..1028.782 rows=2 loops=1)\n Join Filter: (t3.fr01codemp = t1.fr01codemp)\n Buffers: shared hit=21360\n -> Nested Loop Left Join (cost=30529.70..33941.71 rows=1\nwidth=87) (actual time=622.709..1028.699 rows=2 loops=1)\n Join Filter: ((fr13t1.fr01codemp = t1.fr01codemp) AND\n(fr13t1.fr02codigo = t1.fr02codigo) AND (fr13t1.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 500202\n Buffers: shared hit=21356\n -> Nested Loop Left Join (cost=0.70..2087.56 rows=1\nwidth=23) (actual time=1.021..2.630 rows=2 loops=1)\n Buffers: shared hit=181\n -> Index Scan using ufr13t2 on fr13t t1\n (cost=0.42..2083.24 rows=1 width=19) (actual time=0.996..2.576 rows=2\nloops=1)\n Index Cond: ((fr01codemp = '1'::smallint) AND\n(fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date))\n Filter: ((fr02codigo >= '0'::numeric) AND\n(fr02codigo <= '9999999999'::numeric) AND (fr13codpr = 60732))\n Rows Removed by Filter: 5621\n Buffers: shared hit=175\n -> Index Scan using fr02t_pkey on fr02t t2\n (cost=0.28..4.30 rows=1 width=12) (actual time=0.013..0.016 rows=1 loops=2)\n Index Cond: ((fr01codemp = t1.fr01codemp) AND\n(fr01codemp = '1'::smallint) AND (fr02codigo = t1.fr02codigo))\n Buffers: shared hit=6\n -> HashAggregate (cost=30529.00..30996.70 rows=31180\nwidth=21) (actual time=286.123..457.848 rows=250102 loops=2)\n Group Key: fr13t1.fr01codemp, fr13t1.fr02codigo,\nfr13t1.fr13dtlanc\n Buffers: shared hit=21175\n -> Seq Scan on fr13t1 (cost=0.00..25072.50\nrows=311800 width=21) (actual time=0.007..115.766 rows=311800 loops=1)\n Filter: (fr01codemp = '1'::smallint)\n Buffers: shared hit=21175\n -> Index Scan using fr09t_pkey on fr09t t3 (cost=0.14..0.16\nrows=1 width=15) (actual time=0.026..0.027 rows=1 loops=2)\n Index Cond: ((fr01codemp = '1'::smallint) AND (fr09cod =\nt2.fr09cod))\n Buffers: shared hit=4\n -> HashAggregate (cost=6.14..6.50 
rows=29 width=17) (actual\ntime=0.082..0.128 rows=184 loops=2)\n Group Key: fr13t3.fr01codemp, fr13t3.fr02codigo, fr13t3.fr13dtlanc\n Buffers: shared hit=2\n -> Seq Scan on fr13t3 (cost=0.00..4.30 rows=184 width=17)\n(actual time=0.011..0.033 rows=184 loops=1)\n Filter: (fr01codemp = '1'::smallint)\n Buffers: shared hit=2\n Planning time: 2.394 ms\n Execution time: 1038.785 ms\n(38 rows)\n\ncopro95=#\n\n\n2015-08-05 16:55 GMT-03:00 Andreas Joseph Krogh <[email protected]>:\n\n> På onsdag 05. august 2015 kl. 20:25:25, skrev Maxim Boguk <\n> [email protected]>:\n>\n> [snip]\n>\n> I think I know where issue is.\n> The PostgreSQL planner unable pass join conditions into subquery with\n> aggregate functions (it's well known limitation).\n> [snip]\n>\n>\n> I'm curious; will 9.5 help here as it has \"WHERE clause pushdown in\n> subqueries with window functions\"?\n>\n> http://michael.otacoo.com/postgresql-2/postgres-9-5-feature-highlight-where-pushdown-with-window-function/\n>\n> Are you able to try 9.5 and post the results?\n>\n> Thanks.\n>\n> --\n> *Andreas Joseph Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> <https://www.visena.com>\n>\n>",
"msg_date": "Wed, 5 Aug 2015 17:53:25 -0300",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
    "msg_contents": "On Wednesday 5 August 2015 at 22:53:25, Alexandre de Arruda Paes <\[email protected] <mailto:[email protected]>> wrote:\nHi Andreas, \nSame plan in 9.5, but the execution time was greater than 9.3 (maybe need some \ntuning):\n\nThanks for sharing.\nMaybe some @hackers will chime in and comment.\n\n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>",
"msg_date": "Wed, 5 Aug 2015 23:00:07 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
    "msg_contents": "On 6 August 2015 at 07:55, Andreas Joseph Krogh <[email protected]> wrote:\n\n> On Wednesday 5 August 2015 at 20:25:25, Maxim Boguk <\n> [email protected]> wrote:\n>\n> [snip]\n>\n> I think I know where issue is.\n> The PostgreSQL planner unable pass join conditions into subquery with\n> aggregate functions (it's well known limitation).\n> [snip]\n>\n>\n> I'm curious; will 9.5 help here as it has \"WHERE clause pushdown in\n> subqueries with window functions\"?\n>\n> http://michael.otacoo.com/postgresql-2/postgres-9-5-feature-highlight-where-pushdown-with-window-function/\n>\n>\n>\nI've not looked at the query in any detail, but that particular patch won't\nhelp as it only allows pushdown of predicate into subqueries with window\nfunctions where the predicate is part of all of the subquery's PARTITION BY\nclauses.\n\nThe query in question has no window clauses, so qual pushdown is not\ndisabled for that reason.\n\nRegards\n\nDavid Rowley\n--\n David Rowley                   http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 6 Aug 2015 09:06:31 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
    "msg_contents": "On 6 August 2015 at 06:25, Maxim Boguk <[email protected]> wrote:\n\n>\n>\n> On Wed, Aug 5, 2015 at 11:41 AM, Alexandre de Arruda Paes <\n> [email protected]> wrote:\n>\n>> Hi,\n>>\n>> First, sorry to compare Post with other database system, but I know\n>> nothing about Oracle...\n>>\n>> This customer have an application made with a framework thats generates\n>> the SQL statements (so, We can't make any query optimizations) .\n>>\n>> We did the following tests:\n>>\n>> 1) Postgresql 9.3 and Oracle 10 in a desktop machine(8 GB RAM, 1 SATA\n>> disk,Core i5)\n>> 2) Postgresql 9.3 in a server + FC storage (128 GB RAM, Xeon 32 cores,\n>> SAS disks)\n>>\n>>\n>> I think I know where issue is.\n> The PostgreSQL planner unable pass join conditions into subquery with\n> aggregate functions (it's well known limitation).\n>\n>\nI think this statement is quite misleading. Let's look at an example:\n\ncreate table t1 (a int not null, v int not null);\ncreate table t2 (a int not null);\ninsert into t1 select s.i,10 from generate_series(1,1000)\ns(i),generate_series(1,1000);\ninsert into t2 select generate_series(1,1000);\ncreate index on t1 (a);\n\n\nexplain select t2.a,s.sumv from (select a,sum(v) sumv from t1 group by a) s\ninner join t2 on t2.a = s.a where t2.a = 1;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Nested Loop (cost=0.42..59.76 rows=1 width=12)\n -> GroupAggregate (cost=0.42..42.24 rows=1 width=8)\n Group Key: t1.a\n -> Index Scan using t1_a_idx on t1 (cost=0.42..37.38 rows=969\nwidth=8)\n Index Cond: (a = 1)\n -> Seq Scan on t2 (cost=0.00..17.50 rows=1 width=4)\n Filter: (a = 1)\n(7 rows)\n\nAs you can see, the predicate is pushed down just fine into a subquery with\naggregates.\n\nThe likely reason that PostgreSQL is not behaving the same as SQL Server\nand Oracle is because the predicate pushdowns are limited to equality\noperators only as internally these are all represented by a series of\n\"equivalence classes\" which in this case say that 1 = t2.a = t1.a,\ntherefore it's possible to apply t1.a = 1 at the lowest level.\n\nThese equivalence classes don't currently handle non-equality operators.\nHere's an example:\n\nexplain select t2.a,s.sumv from (select a,sum(v) sumv from t1 group by a) s\ninner join t2 on t2.a = s.a where t2.a <= 1;\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=19442.51..19466.27 rows=1 width=12)\n Hash Cond: (t1.a = t2.a)\n -> HashAggregate (cost=19425.00..19435.00 rows=1000 width=8)\n Group Key: t1.a\n -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=8)\n -> Hash (cost=17.50..17.50 rows=1 width=4)\n -> Seq Scan on t2 (cost=0.00..17.50 rows=1 width=4)\n Filter: (a <= 1)\n(8 rows)\n\nNotice the seq scan on t1 instead of the index scan on t1_a_idx.\n\nA way around this is to manually push the predicate down into the subquery:\n\nexplain select t2.a,s.sumv from (select a,sum(v) sumv from t1 where t1.a <=\n1 group by a) s inner join t2 on t2.a = s.a where t2.a <= 1;\n QUERY PLAN\n-------------------------------------------------------------------------------\n Nested Loop (cost=0.42..21.98 rows=1 width=12)\n Join Filter: (t1.a = t2.a)\n -> GroupAggregate (cost=0.42..4.46 rows=1 width=8)\n Group Key: t1.a\n -> Index Scan using t1_a_idx on t1 (cost=0.42..4.44 rows=1\nwidth=8)\n Index Cond: (a <= 1)\n -> Seq Scan on t2 (cost=0.00..17.50 rows=1 width=4)\n Filter: (a <= 1)\n(8 rows)\n\n\nThe query in question is likely performing badly because of this:\n\n -> Seq Scan on fr13t1 (cost=0.00..25072.50\nrows=311800 width=21) (actual time=0.007..115.766 rows=311800 loops=1)\n Filter: (fr01codemp = '1'::smallint)\n Buffers: shared hit=21175\n\nJust how selective is fr01codemp = '1'::smallint ? Is there an index on\nthat column ?\n\nRegards\n\nDavid Rowley\n\n--\n David Rowley                   http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 6 Aug 2015 10:07:27 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
    "msg_contents": ">\n>\n> The query in question is likely performing badly because of this:\n>\n>  -> Seq Scan on fr13t1  (cost=0.00..25072.50\n> rows=311800 width=21) (actual time=0.007..115.766 rows=311800 loops=1)\n>  Filter: (fr01codemp = '1'::smallint)\n>  Buffers: shared hit=21175\n>\n> Just how selective is fr01codemp = '1'::smallint ?  Is there an index on\n> that column ?\n>\n>\nHi David,\n\nIn this case, fr13t1 has only value '1' in all fr01codemp:\n\ncopro95=# select fr01codemp,count(*) from fr13t1 group by fr01codemp;\n fr01codemp | count\n------------+--------\n          1 | 311800\n(1 row)\n\nTable \"public.fr13t1\"\n   Column   |            Type             | Modifiers\n------------+-----------------------------+-----------\n fr01codemp | smallint                    | not null\n fr02codigo | numeric(10,0)               | not null\n fr13dtlanc | date                        | not null\n fr13sequen | smallint                    | not null\n(...)\nIndexes:\n    \"fr13t1_pkey\" PRIMARY KEY, btree (fr01codemp, fr02codigo, fr13dtlanc,\nfr13sequen)\n    \"ifr13t1\" btree (fr01codemp, fr07cod)\n    \"ifr13t12\" btree (co18codord)\n    \"ifr13t14\" btree (fr01codemp, fr52mot)\n(...)\n\nIf the planner needs to scan the whole table, can indexscan/indexonlyscan take\nany advantage?\n\nBest regards,\n\nAlexandre",
"msg_date": "Wed, 5 Aug 2015 20:58:56 -0300",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
"msg_contents": ">\n>\n> Notice the seq scan on t1 instead of the index scan on t1_a_idx.\n>\n> A way around this is to manually push the predicate down into the subquery:\n>\n> explain select t2.a,s.sumv from (select a,sum(v) sumv from t1 where t1.a\n> <= 1 group by a) s inner join t2 on t2.a = s.a where t2.a <= 1;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------\n> Nested Loop (cost=0.42..21.98 rows=1 width=12)\n> Join Filter: (t1.a = t2.a)\n> -> GroupAggregate (cost=0.42..4.46 rows=1 width=8)\n> Group Key: t1.a\n> -> Index Scan using t1_a_idx on t1 (cost=0.42..4.44 rows=1\n> width=8)\n> Index Cond: (a <= 1)\n> -> Seq Scan on t2 (cost=0.00..17.50 rows=1 width=4)\n> Filter: (a <= 1)\n> (8 rows)\n>\n>\n>\nHi David,\n\nYou are right. If the subquery includes the same filters of the main select\n(of the existing fields, sure), the times down to the floor (50 ms in the\nfirst execution and *18* ms by cache. Superb! ):\n\n(...) (SELECT SUM(fr13quant) AS fr13TotQtd, fr01codemp, fr02codigo,\nfr13dtlanc, SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS\nNUMERIC(18,10))) AS fr13VrTot\nFROM FR13T1 *WHERE (fr01codemp = '1' and fr13dtlanc >= '01/05/2014') AND\n(fr02codigo >= '0' and fr02codigo <= '9999999999') AND (fr13dtlanc <=\n'31/05/2014') *GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T4 ON\nT4.fr01codemp = T1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND\nT4.fr13dtlanc = T1.fr13dtlanc)\n(...)\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=5770.32..7894.70 rows=1 width=130) (actual\ntime=13.715..18.366 rows=2 loops=1)\n Join Filter: ((fr13t3.fr01codemp = t1.fr01codemp) AND (fr13t3.fr02codigo\n= t1.fr02codigo) AND (fr13t3.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 368\n 
Buffers: shared hit=5920\n -> Nested Loop Left Join (cost=5764.18..7887.47 rows=1 width=98)\n(actual time=13.529..18.108 rows=2 loops=1)\n Join Filter: (t3.fr01codemp = t1.fr01codemp)\n Buffers: shared hit=5918\n -> Nested Loop Left Join (cost=5764.04..7887.30 rows=1 width=87)\n(actual time=13.519..18.094 rows=2 loops=1)\n Join Filter: ((fr13t1.fr01codemp = t1.fr01codemp) AND\n(fr13t1.fr02codigo = t1.fr02codigo) AND (fr13t1.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 11144\n Buffers: shared hit=5914\n -> Nested Loop Left Join (cost=0.70..2098.42 rows=1\nwidth=23) (actual time=0.796..2.071 rows=2 loops=1)\n Buffers: shared hit=181\n -> Index Scan using ufr13t2 on fr13t t1\n (cost=0.42..2094.11 rows=1 width=19) (actual time=0.787..2.054 rows=2\nloops=1)\n Index Cond: ((fr01codemp = 1::smallint) AND\n(fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date))\n Filter: ((fr02codigo >= 0::numeric) AND\n(fr02codigo <= 9999999999::numeric) AND (fr13codpr = 60732))\n Rows Removed by Filter: 5621\n Buffers: shared hit=175\n -> Index Scan using fr02t_pkey on fr02t t2\n (cost=0.28..4.30 rows=1 width=12) (actual time=0.005..0.005 rows=1 loops=2)\n Index Cond: ((fr01codemp = t1.fr01codemp) AND\n(fr01codemp = 1::smallint) AND (fr02codigo = t1.fr02codigo))\n Buffers: shared hit=6\n -> HashAggregate (cost=5763.34..5770.15 rows=681 width=21)\n(actual time=5.576..6.787 rows=5573 loops=2)\n Buffers: shared hit=5733\n -> Index Scan using ufr13t15 on fr13t1\n (cost=0.42..5644.31 rows=6802 width=21) (actual time=0.020..3.371\nrows=7053 loops=1)\n Index Cond: ((fr01codemp = 1::smallint) AND\n(fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date)\nAND (fr02codigo >= 0::numeric) AND (fr02codigo <= 9999999999::numeric))\n Buffers: shared hit=5733\n -> Index Scan using fr09t_pkey on fr09t t3 (cost=0.14..0.16\nrows=1 width=15) (actual time=0.005..0.005 rows=1 loops=2)\n Index Cond: ((fr01codemp = 1::smallint) AND (fr09cod =\nt2.fr09cod))\n 
Buffers: shared hit=4\n   ->  HashAggregate  (cost=6.14..6.43 rows=29 width=17) (actual\ntime=0.056..0.086 rows=184 loops=2)\n         Buffers: shared hit=2\n         ->  Seq Scan on fr13t3  (cost=0.00..4.30 rows=184 width=17)\n(actual time=0.003..0.027 rows=184 loops=1)\n               Filter: (fr01codemp = 1::smallint)\n               Buffers: shared hit=2\n Total runtime: 18.528 ms\n(35 rows)\n\n\nTomorrow I will try to do the same with the other slow query, reporting\nhere.\n\nBest regards,\n\nAlexandre",
"msg_date": "Wed, 5 Aug 2015 22:09:55 -0300",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
"msg_contents": "On Thursday, 6 August 2015 at 03:09:55, Alexandre de Arruda Paes <\[email protected] <mailto:[email protected]>> wrote:\n \nNotice the seq scan on t1 instead of the index scan on t1_a_idx.\n \nA way around this is to manually push the predicate down into the subquery:\n \nexplain select t2.a,s.sumv from (select a,sum(v) sumv from t1 where t1.a <= 1 \ngroup by a) s inner join t2 on t2.a = s.a where t2.a <= 1;\n QUERY PLAN\n-------------------------------------------------------------------------------\n Nested Loop (cost=0.42..21.98 rows=1 width=12)\n Join Filter: (t1.a = t2.a)\n -> GroupAggregate (cost=0.42..4.46 rows=1 width=8)\n Group Key: t1.a\n -> Index Scan using t1_a_idx on t1 (cost=0.42..4.44 rows=1 width=8)\n Index Cond: (a <= 1)\n -> Seq Scan on t2 (cost=0.00..17.50 rows=1 width=4)\n Filter: (a <= 1)\n(8 rows) \n \n \n\n\n\n \nHi David,\n \nYou are right. If the subquery includes the same filters of the main select \n(of the existing fields, sure), the times down to the floor (50 ms in the first \nexecution and *18* ms by cache. Superb! ):\n \n(...)
(SELECT SUM(fr13quant) AS fr13TotQtd, fr01codemp, fr02codigo, \nfr13dtlanc, SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS \nNUMERIC(18,10))) AS fr13VrTot\nFROM FR13T1 WHERE (fr01codemp = '1' and fr13dtlanc >= '01/05/2014') AND \n(fr02codigo >= '0' and fr02codigo <= '9999999999') AND (fr13dtlanc <= \n'31/05/2014')GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T4 ON T4.fr01codemp \n= T1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND T4.fr13dtlanc = \nT1.fr13dtlanc)\n(...)\n \n \n QUERY PLAN \n \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=5770.32..7894.70 rows=1 width=130) (actual \ntime=13.715..18.366 rows=2 loops=1)\n Join Filter: ((fr13t3.fr01codemp = t1.fr01codemp) AND (fr13t3.fr02codigo = \nt1.fr02codigo) AND (fr13t3.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 368\n Buffers: shared hit=5920\n -> Nested Loop Left Join (cost=5764.18..7887.47 rows=1 width=98) (actual \ntime=13.529..18.108 rows=2 loops=1)\n Join Filter: (t3.fr01codemp = t1.fr01codemp)\n Buffers: shared hit=5918\n -> Nested Loop Left Join (cost=5764.04..7887.30 rows=1 width=87) \n(actual time=13.519..18.094 rows=2 loops=1)\n Join Filter: ((fr13t1.fr01codemp = t1.fr01codemp) AND \n(fr13t1.fr02codigo = t1.fr02codigo) AND (fr13t1.fr13dtlanc = t1.fr13dtlanc))\n Rows Removed by Join Filter: 11144\n Buffers: shared hit=5914\n -> Nested Loop Left Join (cost=0.70..2098.42 rows=1 width=23) \n(actual time=0.796..2.071 rows=2 loops=1)\n Buffers: shared hit=181\n -> Index Scan using ufr13t2 on fr13t t1 \n (cost=0.42..2094.11 rows=1 width=19) (actual time=0.787..2.054 rows=2 loops=1)\n Index Cond: ((fr01codemp = 1::smallint) AND \n(fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date))\n Filter: ((fr02codigo >= 0::numeric) AND (fr02codigo \n<= 
9999999999::numeric) AND (fr13codpr = 60732))\n Rows Removed by Filter: 5621\n Buffers: shared hit=175\n -> Index Scan using fr02t_pkey on fr02t t2 \n (cost=0.28..4.30 rows=1 width=12) (actual time=0.005..0.005 rows=1 loops=2)\n Index Cond: ((fr01codemp = t1.fr01codemp) AND \n(fr01codemp = 1::smallint) AND (fr02codigo = t1.fr02codigo))\n Buffers: shared hit=6\n -> HashAggregate (cost=5763.34..5770.15 rows=681 width=21) \n(actual time=5.576..6.787 rows=5573 loops=2)\n Buffers: shared hit=5733\n -> Index Scan using ufr13t15 on fr13t1 \n (cost=0.42..5644.31 rows=6802 width=21) (actual time=0.020..3.371 rows=7053 \nloops=1)\n Index Cond: ((fr01codemp = 1::smallint) AND \n(fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date) AND \n(fr02codigo >= 0::numeric) AND (fr02codigo <= 9999999999::numeric))\n Buffers: shared hit=5733\n -> Index Scan using fr09t_pkey on fr09t t3 (cost=0.14..0.16 rows=1 \nwidth=15) (actual time=0.005..0.005 rows=1 loops=2)\n Index Cond: ((fr01codemp = 1::smallint) AND (fr09cod = \nt2.fr09cod))\n Buffers: shared hit=4\n -> HashAggregate (cost=6.14..6.43 rows=29 width=17) (actual \ntime=0.056..0.086 rows=184 loops=2)\n Buffers: shared hit=2\n -> Seq Scan on fr13t3 (cost=0.00..4.30 rows=184 width=17) (actual \ntime=0.003..0.027 rows=184 loops=1)\n Filter: (fr01codemp = 1::smallint)\n Buffers: shared hit=2\n Total runtime: 18.528 ms\n(35 rows)\n\n \n \nTomorrow I will try to do the same with the other slow query, reporting here.\n\n\n\n \nIt will be interesting to see how Oracle and SQL-Server perform with the \nre-written query too.\nThanks.\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>",
"msg_date": "Thu, 6 Aug 2015 12:05:46 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
},
{
"msg_contents": "On 6 August 2015 at 22:05, Andreas Joseph Krogh <[email protected]> wrote:\n\n> På torsdag 06. august 2015 kl. 03:09:55, skrev Alexandre de Arruda Paes <\n> [email protected]>:\n>\n>\n>> Notice the seq scan on t1 instead of the index scan on t1_a_idx.\n>>\n>> A way around this is to manually push the predicate down into the\n>> subquery:\n>>\n>> explain select t2.a,s.sumv from (select a,sum(v) sumv from t1 where t1.a\n>> <= 1 group by a) s inner join t2 on t2.a = s.a where t2.a <= 1;\n>> QUERY PLAN\n>>\n>> -------------------------------------------------------------------------------\n>> Nested Loop (cost=0.42..21.98 rows=1 width=12)\n>> Join Filter: (t1.a = t2.a)\n>> -> GroupAggregate (cost=0.42..4.46 rows=1 width=8)\n>> Group Key: t1.a\n>> -> Index Scan using t1_a_idx on t1 (cost=0.42..4.44 rows=1\n>> width=8)\n>> Index Cond: (a <= 1)\n>> -> Seq Scan on t2 (cost=0.00..17.50 rows=1 width=4)\n>> Filter: (a <= 1)\n>> (8 rows)\n>>\n>>\n>>\n>\n> Hi David,\n>\n> You are right. If the subquery includes the same filters of the main\n> select (of the existing fields, sure), the times down to the floor (50 ms\n> in the first execution and *18* ms by cache. Superb! ):\n>\n> (...) 
(SELECT SUM(fr13quant) AS fr13TotQtd, fr01codemp, fr02codigo,\n> fr13dtlanc, SUM(COALESCE( fr13quant, 0) * CAST(COALESCE( fr13preco, 0) AS\n> NUMERIC(18,10))) AS fr13VrTot\n> FROM FR13T1 *WHERE (fr01codemp = '1' and fr13dtlanc >= '01/05/2014') AND\n> (fr02codigo >= '0' and fr02codigo <= '9999999999') AND (fr13dtlanc <=\n> '31/05/2014') *GROUP BY fr01codemp, fr02codigo, fr13dtlanc ) T4 ON\n> T4.fr01codemp = T1.fr01codemp AND T4.fr02codigo = T1.fr02codigo AND\n> T4.fr13dtlanc = T1.fr13dtlanc)\n> (...)\n>\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=5770.32..7894.70 rows=1 width=130) (actual\n> time=13.715..18.366 rows=2 loops=1)\n> Join Filter: ((fr13t3.fr01codemp = t1.fr01codemp) AND\n> (fr13t3.fr02codigo = t1.fr02codigo) AND (fr13t3.fr13dtlanc = t1.fr13dtlanc))\n> Rows Removed by Join Filter: 368\n> Buffers: shared hit=5920\n> -> Nested Loop Left Join (cost=5764.18..7887.47 rows=1 width=98)\n> (actual time=13.529..18.108 rows=2 loops=1)\n> Join Filter: (t3.fr01codemp = t1.fr01codemp)\n> Buffers: shared hit=5918\n> -> Nested Loop Left Join (cost=5764.04..7887.30 rows=1\n> width=87) (actual time=13.519..18.094 rows=2 loops=1)\n> Join Filter: ((fr13t1.fr01codemp = t1.fr01codemp) AND\n> (fr13t1.fr02codigo = t1.fr02codigo) AND (fr13t1.fr13dtlanc = t1.fr13dtlanc))\n> Rows Removed by Join Filter: 11144\n> Buffers: shared hit=5914\n> -> Nested Loop Left Join (cost=0.70..2098.42 rows=1\n> width=23) (actual time=0.796..2.071 rows=2 loops=1)\n> Buffers: shared hit=181\n> -> Index Scan using ufr13t2 on fr13t t1\n> (cost=0.42..2094.11 rows=1 width=19) (actual time=0.787..2.054 rows=2\n> loops=1)\n> Index Cond: ((fr01codemp = 1::smallint) AND\n> (fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date))\n> Filter: ((fr02codigo >= 
0::numeric) AND\n> (fr02codigo <= 9999999999::numeric) AND (fr13codpr = 60732))\n> Rows Removed by Filter: 5621\n> Buffers: shared hit=175\n> -> Index Scan using fr02t_pkey on fr02t t2\n> (cost=0.28..4.30 rows=1 width=12) (actual time=0.005..0.005 rows=1 loops=2)\n> Index Cond: ((fr01codemp = t1.fr01codemp) AND\n> (fr01codemp = 1::smallint) AND (fr02codigo = t1.fr02codigo))\n> Buffers: shared hit=6\n> -> HashAggregate (cost=5763.34..5770.15 rows=681\n> width=21) (actual time=5.576..6.787 rows=5573 loops=2)\n> Buffers: shared hit=5733\n> -> Index Scan using ufr13t15 on fr13t1\n> (cost=0.42..5644.31 rows=6802 width=21) (actual time=0.020..3.371\n> rows=7053 loops=1)\n> Index Cond: ((fr01codemp = 1::smallint) AND\n> (fr13dtlanc >= '2014-05-01'::date) AND (fr13dtlanc <= '2014-05-31'::date)\n> AND (fr02codigo >= 0::numeric) AND (fr02codigo <= 9999999999::numeric))\n> Buffers: shared hit=5733\n> -> Index Scan using fr09t_pkey on fr09t t3 (cost=0.14..0.16\n> rows=1 width=15) (actual time=0.005..0.005 rows=1 loops=2)\n> Index Cond: ((fr01codemp = 1::smallint) AND (fr09cod =\n> t2.fr09cod))\n> Buffers: shared hit=4\n> -> HashAggregate (cost=6.14..6.43 rows=29 width=17) (actual\n> time=0.056..0.086 rows=184 loops=2)\n> Buffers: shared hit=2\n> -> Seq Scan on fr13t3 (cost=0.00..4.30 rows=184 width=17)\n> (actual time=0.003..0.027 rows=184 loops=1)\n> Filter: (fr01codemp = 1::smallint)\n> Buffers: shared hit=2\n> Total runtime: 18.528 ms\n> (35 rows)\n>\n>\n> Tomorrow I will try to do the same with the other slow query, reporting\n> here.\n>\n>\n> It will be interesting to see how Oracle and SQL-Server perform with the\n> re-written query too.\n> Thanks.\n>\n>\nGlad that's looking better for you.\n\nI'd guess that they're likely already pushing down those predicates into\nthe subquery going by the execution times that you posted.\n\nI can't imagine Oracle can perform a seq scan / table scan that much faster\nthan Postgres\n\nInterested to hear the results of your tests 
though.\n\nRegards\n\nDavid Rowley\n\n--\n David Rowley http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 10 Aug 2015 15:16:16 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow HashAggregate/cache access"
}
] |
[
{
"msg_contents": "Platform: pg 9.2.9 on Ubuntu 12.04.4 LTS.\nI have a table which is partitioned into about 80 children. There are usually\nseveral dozens of connections accessing these tables concurrently. I found\nthat sometimes the query planning time is very long if I query against the\nparent table with the partition key. The connections are shown with status\n'BIND' by the ps command.\n\nIn the normal case, the planning time of the query is about several hundred\nmilliseconds, while the same query accessing the child table directly takes\nless than 1 millisecond:\n# explain select 1 from article where cid=729 and\nurl_hash='6851f596f55a994b2df417b53523fe45';\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..8.68 rows=2 width=0)\n -> Append (cost=0.00..8.68 rows=2 width=0)\n -> Seq Scan on article (cost=0.00..0.00 rows=1 width=0)\n Filter: ((cid = 729) AND (url_hash =\n'6851f596f55a994b2df417b53523fe45'::bpchar))\n -> Index Scan using article_729_url_hash on article_729 article\n(cost=0.00..8.68 rows=1 width=0)\n Index Cond: (url_hash =\n'6851f596f55a994b2df417b53523fe45'::bpchar)\n Filter: (cid = 729)\n(7 rows)\n\nTime: 361.401 ms\n\n# explain select 1 from article_729 where\nurl_hash='6851f596f55a994b2df417b53523fe45';\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------\n Index Only Scan using article_729_url_hash on article_729\n(cost=0.00..8.67 rows=1 width=0)\n Index Cond: (url_hash = '6851f596f55a994b2df417b53523fe45'::bpchar)\n(2 rows)\n\nTime: 0.898 ms\n\nThis is only the normal case. In extreme cases, the planning time\ncan be several minutes. There seems to be some locking issue in query\nplanning. How can I improve the planning performance? Or is it bad to\npartition a table into 80 children in PostgreSQL?",
"msg_date": "Tue, 11 Aug 2015 16:46:30 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "On Tue, Aug 11, 2015 at 6:46 PM, Rural Hunter <[email protected]> wrote:\n\n> Platform: pg 9.2.9 on Ubuntu 12.04.4 LTS.\n> I have a table which is partitioned to about 80 children. There are usualy\n> several dozens of connections accessing these tables concurrently. I found\n> sometimes the query planing time is very long if I query against the parent\n> table with partition key. The connections are shown with status 'BIND' by\n> ps command.\n>\n> In normal condition, the plan time of the query is about several hundred\n> of million seconds while the same query accessing child table directly is\n> less than 1 million seconds:\n> # explain select 1 from article where cid=729 and\n> url_hash='6851f596f55a994b2df417b53523fe45';\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------\n> Result (cost=0.00..8.68 rows=2 width=0)\n> -> Append (cost=0.00..8.68 rows=2 width=0)\n> -> Seq Scan on article (cost=0.00..0.00 rows=1 width=0)\n> Filter: ((cid = 729) AND (url_hash =\n> '6851f596f55a994b2df417b53523fe45'::bpchar))\n> -> Index Scan using article_729_url_hash on\n> \n> article_729 article (cost=0.00..8.68 rows=1 width=0)\n> Index Cond: (url_hash =\n> '6851f596f55a994b2df417b53523fe45'::bpchar)\n> Filter: (cid = 729)\n> (7 rows)\n>\n> Time: 361.401 ms\n>\n> # explain select 1 from article_729 where\n> url_hash='6851f596f55a994b2df417b53523fe45';\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------------\n> Index Only Scan using article_729_url_hash on article_729\n> (cost=0.00..8.67 rows=1 width=0)\n> Index Cond: (url_hash = '6851f596f55a994b2df417b53523fe45'::bpchar)\n> (2 rows)\n>\n> Time: 0.898 ms\n>\n> This is only in normal condition. In extreme condition, the planing time\n> could take several minutes. There seems some locking issue in query\n> planing. How can I increase the plan performance? 
Or is it bad to partition\n> table to 80 children in PostgreSQL?\n>\n>\nHi,\n\nCould you provide the full definition of the article_729 table (\\dt+\narticle_729)?\n80 partitions is a reasonable number of partitions for PostgreSQL, so\nsomething unusual is going on (I suspect it may be related to the\npartitioning scheme used).\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>",
"msg_date": "Tue, 11 Aug 2015 21:43:41 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "# \\dt+ article_729\n List of relations\n Schema | Name | Type | Owner | Size | Description\n--------+-------------+-------+--------+--------+-------------\n public | article_729 | table | omuser1 | 655 MB |\n(1 row)\nThe problem exists on not only this specific child table, but with all of\nthem.\n\n2015-08-11 19:43 GMT+08:00 Maxim Boguk <[email protected]>:\n\n>\n>\n> On Tue, Aug 11, 2015 at 6:46 PM, Rural Hunter <[email protected]>\n> wrote:\n>\n>> Platform: pg 9.2.9 on Ubuntu 12.04.4 LTS.\n>> I have a table which is partitioned to about 80 children. There are\n>> usualy several dozens of connections accessing these tables concurrently. I\n>> found sometimes the query planing time is very long if I query against the\n>> parent table with partition key. The connections are shown with status\n>> 'BIND' by ps command.\n>>\n>> In normal condition, the plan time of the query is about several hundred\n>> of million seconds while the same query accessing child table directly is\n>> less than 1 million seconds:\n>> # explain select 1 from article where cid=729 and\n>> url_hash='6851f596f55a994b2df417b53523fe45';\n>> QUERY\n>> PLAN\n>>\n>> ------------------------------------------------------------------------------------------------------------\n>> Result (cost=0.00..8.68 rows=2 width=0)\n>> -> Append (cost=0.00..8.68 rows=2 width=0)\n>> -> Seq Scan on article (cost=0.00..0.00 rows=1 width=0)\n>> Filter: ((cid = 729) AND (url_hash =\n>> '6851f596f55a994b2df417b53523fe45'::bpchar))\n>> -> Index Scan using article_729_url_hash on\n>> \n>> article_729 article (cost=0.00..8.68 rows=1 width=0)\n>> Index Cond: (url_hash =\n>> '6851f596f55a994b2df417b53523fe45'::bpchar)\n>> Filter: (cid = 729)\n>> (7 rows)\n>>\n>> Time: 361.401 ms\n>>\n>> # explain select 1 from article_729 where\n>> url_hash='6851f596f55a994b2df417b53523fe45';\n>> QUERY\n>> PLAN\n>>\n>> ---------------------------------------------------------------------------------------------\n>> Index 
Only Scan using article_729_url_hash on article_729\n>> (cost=0.00..8.67 rows=1 width=0)\n>> Index Cond: (url_hash = '6851f596f55a994b2df417b53523fe45'::bpchar)\n>> (2 rows)\n>>\n>> Time: 0.898 ms\n>>\n>> This is only in normal condition. In extreme condition, the planing time\n>> could take several minutes. There seems some locking issue in query\n>> planing. How can I increase the plan performance? Or is it bad to partition\n>> table to 80 children in PostgreSQL?\n>>\n>>\n> Hi,\n>\n> Could you provide full definition of article_729 table (\\dt+\n> article_729)?\n> 80 partitions is adequate amount of partitions for the PostgreSQL, so\n> there are going something unusual (I suspect it may be related to used\n> partitioning schema).\n>\n>\n> --\n> Maxim Boguk\n> Senior Postgresql DBA\n> http://www.postgresql-consulting.ru/\n> <http://www.postgresql-consulting.com/>\n>\n>",
"msg_date": "Tue, 11 Aug 2015 21:44:14 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "On Tue, Aug 11, 2015 at 11:44 PM, Rural Hunter <[email protected]>\nwrote:\n\n> # \\dt+ article_729\n> List of relations\n> Schema | Name | Type | Owner | Size | Description\n> --------+-------------+-------+--------+--------+-------------\n> public | article_729 | table | omuser1 | 655 MB |\n> (1 row)\n> The problem exists on not only this specific child table, but with all of\n> them.\n>\n\nOops, sorry, of course I meant \"\\d+ article_729\" (to see the criteria used for partitioning).\n\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>",
"msg_date": "Tue, 11 Aug 2015 23:53:06 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "# \\d article_729\n Table \"public.article_729\"\n Column | Type |\nModifiers\n--------------+-----------------------------+-------------------------------------------------------\n aid | bigint | not null default\nnextval('article_aid_seq'::regclass)\n style | smallint | not null default 0\n oaid | bigint | default 0\n fid | integer |\n bid | integer | default 0\n cid | integer |\n tid | integer |\n url | text | default NULL::bpchar\n tm_post | timestamp without time zone |\n tm_last_rply | timestamp without time zone |\n author | character varying(100) | default NULL::bpchar\n title | character varying(255) | default NULL::bpchar\n content | text |\n ab_content | text |\n rply_cnt | integer |\n read_cnt | integer |\n url_hash | character(32) | not null\n hash_plain | text | default NULL::bpchar\n title_hash | character(32) | default NULL::bpchar\n guid | character(32) | default NULL::bpchar\n neg_pos | smallint | not null default 0\n match_code | character(32) | default NULL::bpchar\n tm_spider | timestamp without time zone |\n tm_update | timestamp without time zone |\n stage | smallint | not null default 0\n rply_cut | integer | not null default 0\n read_cut | integer | not null default 0\n src | integer | default 0\n rfid | integer |\n labels | integer[] |\n kwds | integer[] |\n like_cnt | integer |\nIndexes:\n \"article_729_pkey\" PRIMARY KEY, btree (aid), tablespace \"indextbs\"\n \"article_729_url_hash\" UNIQUE CONSTRAINT, btree (url_hash), tablespace\n\"indextbs\"\n \"article_729_bid_titlehash_idx\" btree (bid, title_hash), tablespace\n\"indextbs\"\n \"article_729_fid_idx\" btree (fid), tablespace \"indextbs\"\n \"article_729_guid_idx\" btree (guid), tablespace \"indextbs\"\n \"article_729_labels_idx\" gin (labels), tablespace \"data1tbs\"\n \"article_729_mtcode_idx\" btree (match_code), tablespace \"indextbs\"\n \"article_729_rfid_author_idx\" btree (rfid, author), tablespace\n\"indextbs\"\n \"article_729_stage_idx\" btree (stage), 
tablespace \"data1tbs\"\n \"article_729_time_style_idx\" btree (tm_post DESC, style), tablespace\n\"data1tbs\"\n \"article_729_tm_spider_idx\" btree (tm_spider), tablespace \"indextbs\"\n \"article_729_tm_update_idx\" btree (tm_update), tablespace \"data1tbs\"\nCheck constraints:\n \"article_729_cid_check\" CHECK (cid = 729)\nForeign-key constraints:\n \"article_729_cid_fk\" FOREIGN KEY (cid) REFERENCES company(cid) ON\nDELETE CASCADE\nTriggers:\n trg_article_729_delete AFTER DELETE ON article_729 FOR EACH ROW EXECUTE\nPROCEDURE fn_article_delete()\n trg_article_729_insert AFTER INSERT ON article_729 FOR EACH ROW EXECUTE\nPROCEDURE fn_article_insert()\n trg_article_729_update AFTER UPDATE ON article_729 FOR EACH ROW EXECUTE\nPROCEDURE fn_article_update()\nInherits: article\n\n2015-08-11 21:53 GMT+08:00 Maxim Boguk <[email protected]>:\n\n>\n>\n> On Tue, Aug 11, 2015 at 11:44 PM, Rural Hunter <[email protected]>\n> wrote:\n>\n>> # \\dt+\n>> \n>> article_729\n>> List of relations\n>> Schema | Name | Type | Owner | Size | Description\n>> --------+-------------+-------+--------+--------+-------------\n>> public | article_729 | table | omuser1 | 655 MB |\n>> (1 row)\n>> The problem exists on not only this specific child table, but with all of\n>> them.\n>>\n>\n> Oops sorry, оf course I mean \"\\d+\n> article_729\n> \" (to see criteria used for partitioning).\n>\n>\n>\n> --\n> Maxim Boguk\n> Senior Postgresql DBA\n> http://www.postgresql-consulting.ru/\n> <http://www.postgresql-consulting.com/>\n>\n\n# \\d article_729 Table \"public.article_729\" Column | Type | Modifiers --------------+-----------------------------+------------------------------------------------------- aid | bigint | not null default nextval('article_aid_seq'::regclass) style | smallint | not null default 0 oaid | bigint | default 0 fid | integer | bid | integer | default 0 cid | integer | tid | integer | url | text | default NULL::bpchar tm_post | timestamp without time zone | tm_last_rply | timestamp 
without time zone | author | character varying(100) | default NULL::bpchar title | character varying(255) | default NULL::bpchar content | text | ab_content | text | rply_cnt | integer | read_cnt | integer | url_hash | character(32) | not null hash_plain | text | default NULL::bpchar title_hash | character(32) | default NULL::bpchar guid | character(32) | default NULL::bpchar neg_pos | smallint | not null default 0 match_code | character(32) | default NULL::bpchar tm_spider | timestamp without time zone | tm_update | timestamp without time zone | stage | smallint | not null default 0 rply_cut | integer | not null default 0 read_cut | integer | not null default 0 src | integer | default 0 rfid | integer | labels | integer[] | kwds | integer[] | like_cnt | integer | Indexes: \"article_729_pkey\" PRIMARY KEY, btree (aid), tablespace \"indextbs\" \"article_729_url_hash\" UNIQUE CONSTRAINT, btree (url_hash), tablespace \"indextbs\" \"article_729_bid_titlehash_idx\" btree (bid, title_hash), tablespace \"indextbs\" \"article_729_fid_idx\" btree (fid), tablespace \"indextbs\" \"article_729_guid_idx\" btree (guid), tablespace \"indextbs\" \"article_729_labels_idx\" gin (labels), tablespace \"data1tbs\" \"article_729_mtcode_idx\" btree (match_code), tablespace \"indextbs\" \"article_729_rfid_author_idx\" btree (rfid, author), tablespace \"indextbs\" \"article_729_stage_idx\" btree (stage), tablespace \"data1tbs\" \"article_729_time_style_idx\" btree (tm_post DESC, style), tablespace \"data1tbs\" \"article_729_tm_spider_idx\" btree (tm_spider), tablespace \"indextbs\" \"article_729_tm_update_idx\" btree (tm_update), tablespace \"data1tbs\"Check constraints: \"article_729_cid_check\" CHECK (cid = 729)Foreign-key constraints: \"article_729_cid_fk\" FOREIGN KEY (cid) REFERENCES company(cid) ON DELETE CASCADETriggers: trg_article_729_delete AFTER DELETE ON article_729 FOR EACH ROW EXECUTE PROCEDURE fn_article_delete() trg_article_729_insert AFTER INSERT ON article_729 FOR EACH 
ROW EXECUTE PROCEDURE fn_article_insert() trg_article_729_update AFTER UPDATE ON article_729 FOR EACH ROW EXECUTE PROCEDURE fn_article_update()Inherits: article2015-08-11 21:53 GMT+08:00 Maxim Boguk <[email protected]>:On Tue, Aug 11, 2015 at 11:44 PM, Rural Hunter <[email protected]> wrote:# \\dt+ article_729 List of relations Schema | Name | Type | Owner | Size | Description --------+-------------+-------+--------+--------+------------- public | article_729 | table | omuser1 | 655 MB | (1 row)The problem exists on not only this specific child table, but with all of them.Oops sorry, оf course I mean \"\\d+ article_729\" (to see criteria used for partitioning).-- Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/",
"msg_date": "Tue, 11 Aug 2015 22:00:52 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "# \\d+ article_729\n Table\n\"public.article_729\"\n Column | Type |\nModifiers | Storage | Stats target | Description\n--------------+-----------------------------+-------------------------------------------------------+----------+--------------+-------------\n aid | bigint | not null default\nnextval('article_aid_seq'::regclass) | plain | |\n style | smallint | not null default\n0 | plain | |\n oaid | bigint | default\n0 | plain | |\n fid | integer\n| | plain\n| |\n bid | integer | default\n0 | plain | |\n cid | integer\n| | plain\n| |\n tid | integer\n| | plain\n| |\n url | text | default\nNULL::bpchar | extended | |\n tm_post | timestamp without time zone\n| | plain\n| |\n tm_last_rply | timestamp without time zone\n| | plain\n| |\n author | character varying(100) | default\nNULL::bpchar | extended | |\n title | character varying(255) | default\nNULL::bpchar | extended | |\n content | text\n| | extended\n| |\n ab_content | text\n| | extended\n| |\n rply_cnt | integer\n| | plain\n| |\n read_cnt | integer\n| | plain\n| |\n url_hash | character(32) | not\nnull | extended |\n|\n hash_plain | text | default\nNULL::bpchar | extended | |\n title_hash | character(32) | default\nNULL::bpchar | extended | |\n guid | character(32) | default\nNULL::bpchar | extended | |\n neg_pos | smallint | not null default\n0 | plain | |\n match_code | character(32) | default\nNULL::bpchar | extended | |\n tm_spider | timestamp without time zone\n| | plain\n| |\n tm_update | timestamp without time zone\n| | plain\n| |\n stage | smallint | not null default\n0 | plain | |\n rply_cut | integer | not null default\n0 | plain | |\n read_cut | integer | not null default\n0 | plain | |\n src | integer | default\n0 | plain | |\n rfid | integer\n| | plain\n| |\n labels | integer[]\n| | extended\n| |\n kwds | integer[]\n| | extended\n| |\n like_cnt | integer\n| | plain\n| |\nIndexes:\n \"article_729_pkey\" PRIMARY KEY, btree (aid), tablespace \"indextbs\"\n 
\"article_729_url_hash\" UNIQUE CONSTRAINT, btree (url_hash), tablespace\n\"indextbs\"\n \"article_729_bid_titlehash_idx\" btree (bid, title_hash), tablespace\n\"indextbs\"\n \"article_729_fid_idx\" btree (fid), tablespace \"indextbs\"\n \"article_729_guid_idx\" btree (guid), tablespace \"indextbs\"\n \"article_729_labels_idx\" gin (labels), tablespace \"data1tbs\"\n \"article_729_mtcode_idx\" btree (match_code), tablespace \"indextbs\"\n \"article_729_rfid_author_idx\" btree (rfid, author), tablespace\n\"indextbs\"\n \"article_729_stage_idx\" btree (stage), tablespace \"data1tbs\"\n \"article_729_time_style_idx\" btree (tm_post DESC, style), tablespace\n\"data1tbs\"\n \"article_729_tm_spider_idx\" btree (tm_spider), tablespace \"indextbs\"\n \"article_729_tm_update_idx\" btree (tm_update), tablespace \"data1tbs\"\nCheck constraints:\n \"article_729_cid_check\" CHECK (cid = 729)\nForeign-key constraints:\n \"article_729_cid_fk\" FOREIGN KEY (cid) REFERENCES company(cid) ON\nDELETE CASCADE\nTriggers:\n trg_article_729_delete AFTER DELETE ON article_729 FOR EACH ROW EXECUTE\nPROCEDURE fn_article_delete()\n trg_article_729_insert AFTER INSERT ON article_729 FOR EACH ROW EXECUTE\nPROCEDURE fn_article_insert()\n trg_article_729_update AFTER UPDATE ON article_729 FOR EACH ROW EXECUTE\nPROCEDURE fn_article_update()\nInherits: article\nHas OIDs: no\n\n2015-08-11 22:00 GMT+08:00 Rural Hunter <[email protected]>:\n\n> # \\d article_729\n> Table \"public.article_729\"\n> Column | Type |\n> Modifiers\n>\n> --------------+-----------------------------+-------------------------------------------------------\n> aid | bigint | not null default\n> nextval('article_aid_seq'::regclass)\n> style | smallint | not null default 0\n> oaid | bigint | default 0\n> fid | integer |\n> bid | integer | default 0\n> cid | integer |\n> tid | integer |\n> url | text | default NULL::bpchar\n> tm_post | timestamp without time zone |\n> tm_last_rply | timestamp without time zone |\n> author | 
character varying(100) | default NULL::bpchar\n> title | character varying(255) | default NULL::bpchar\n> content | text |\n> ab_content | text |\n> rply_cnt | integer |\n> read_cnt | integer |\n> url_hash | character(32) | not null\n> hash_plain | text | default NULL::bpchar\n> title_hash | character(32) | default NULL::bpchar\n> guid | character(32) | default NULL::bpchar\n> neg_pos | smallint | not null default 0\n> match_code | character(32) | default NULL::bpchar\n> tm_spider | timestamp without time zone |\n> tm_update | timestamp without time zone |\n> stage | smallint | not null default 0\n> rply_cut | integer | not null default 0\n> read_cut | integer | not null default 0\n> src | integer | default 0\n> rfid | integer |\n> labels | integer[] |\n> kwds | integer[] |\n> like_cnt | integer |\n> Indexes:\n> \"article_729_pkey\" PRIMARY KEY, btree (aid), tablespace \"indextbs\"\n> \"article_729_url_hash\" UNIQUE CONSTRAINT, btree (url_hash), tablespace\n> \"indextbs\"\n> \"article_729_bid_titlehash_idx\" btree (bid, title_hash), tablespace\n> \"indextbs\"\n> \"article_729_fid_idx\" btree (fid), tablespace \"indextbs\"\n> \"article_729_guid_idx\" btree (guid), tablespace \"indextbs\"\n> \"article_729_labels_idx\" gin (labels), tablespace \"data1tbs\"\n> \"article_729_mtcode_idx\" btree (match_code), tablespace \"indextbs\"\n> \"article_729_rfid_author_idx\" btree (rfid, author), tablespace\n> \"indextbs\"\n> \"article_729_stage_idx\" btree (stage), tablespace \"data1tbs\"\n> \"article_729_time_style_idx\" btree (tm_post DESC, style), tablespace\n> \"data1tbs\"\n> \"article_729_tm_spider_idx\" btree (tm_spider), tablespace \"indextbs\"\n> \"article_729_tm_update_idx\" btree (tm_update), tablespace \"data1tbs\"\n> Check constraints:\n> \"article_729_cid_check\" CHECK (cid = 729)\n> Foreign-key constraints:\n> \"article_729_cid_fk\" FOREIGN KEY (cid) REFERENCES company(cid) ON\n> DELETE CASCADE\n> Triggers:\n> trg_article_729_delete AFTER DELETE ON article_729 FOR 
EACH ROW\n> EXECUTE PROCEDURE fn_article_delete()\n> trg_article_729_insert AFTER INSERT ON article_729 FOR EACH ROW\n> EXECUTE PROCEDURE fn_article_insert()\n> trg_article_729_update AFTER UPDATE ON article_729 FOR EACH ROW\n> EXECUTE PROCEDURE fn_article_update()\n> Inherits: article\n>\n> 2015-08-11 21:53 GMT+08:00 Maxim Boguk <[email protected]>:\n>\n>>\n>>\n>> On Tue, Aug 11, 2015 at 11:44 PM, Rural Hunter <[email protected]>\n>> wrote:\n>>\n>>> # \\dt+\n>>> \n>>> article_729\n>>> List of relations\n>>> Schema | Name | Type | Owner | Size | Description\n>>> --------+-------------+-------+--------+--------+-------------\n>>> public | article_729 | table | omuser1 | 655 MB |\n>>> (1 row)\n>>> The problem exists on not only this specific child table, but with all\n>>> of them.\n>>>\n>>\n>> Oops sorry, оf course I mean \"\\d+\n>> article_729\n>> \" (to see criteria used for partitioning).\n>>\n>>\n>>\n>> --\n>> Maxim Boguk\n>> Senior Postgresql DBA\n>> http://www.postgresql-consulting.ru/\n>> <http://www.postgresql-consulting.com/>\n>>\n>\n>",
"msg_date": "Tue, 11 Aug 2015 22:01:42 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "Check constraints:\n> \"article_729_cid_check\" CHECK (cid = 729)\n>\n\n\nUsed partition schema looks very simple and straightforward, and should\nhave no issues with 80 partitions.\nAre you sure that you have only 80 partitions but not (lets say) 800?\nAre every other partition of the article table use the same general idea of\npartition check (cid=something)?\n\n\nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try technology.\nPeople will then wish they'd listened at the first stage.\"\n\nCheck constraints: \"article_729_cid_check\" CHECK (cid = 729)Used partition schema looks very simple and straightforward, and should have no issues with 80 partitions.Are you sure that you have only 80 partitions but not (lets say) 800?Are every other partition of the article table use the same general idea of partition check (cid=something)? Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.bogukJabber: [email protected]МойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Wed, 12 Aug 2015 00:42:15 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "yes i'm very sure. from what i observed, it has something to do with the\nconcurrent query planing. if i disconnect other connections, the plan is\nvery quick.\n\n2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:\n\n>\n>\n> Check constraints:\n>> \"article_729_cid_check\" CHECK (cid = 729)\n>>\n>\n>\n> Used partition schema looks very simple and straightforward, and should\n> have no issues with 80 partitions.\n> Are you sure that you have only 80 partitions but not (lets say) 800?\n> Are every other partition of the article table use the same general idea\n> of partition check (cid=something)?\n>\n>\n> Maxim Boguk\n> Senior Postgresql DBA\n> http://www.postgresql-consulting.ru/\n> <http://www.postgresql-consulting.com/>\n>\n> Phone RU: +7 910 405 4718\n> Phone AU: +61 45 218 5678\n>\n> LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\n> Skype: maxim.boguk\n> Jabber: [email protected]\n> МойКруг: http://mboguk.moikrug.ru/\n>\n> \"People problems are solved with people.\n> If people cannot solve the problem, try technology.\n> People will then wish they'd listened at the first stage.\"\n>\n>\n\nyes i'm very sure. from what i observed, it has something to do with the concurrent query planing. if i disconnect other connections, the plan is very quick.2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:Check constraints: \"article_729_cid_check\" CHECK (cid = 729)Used partition schema looks very simple and straightforward, and should have no issues with 80 partitions.Are you sure that you have only 80 partitions but not (lets say) 800?Are every other partition of the article table use the same general idea of partition check (cid=something)? 
Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.bogukJabber: [email protected]МойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Tue, 11 Aug 2015 22:51:04 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
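The symptom that planning is only slow when other sessions are active is consistent with lock-manager contention: to plan a query over the parent, the backend acquires AccessShareLock on the parent, all 80 children, and every index on them, which at roughly a dozen indexes per child is on the order of a thousand locks per planning cycle. A way to observe this while a slow EXPLAIN runs in another session (the pid below is a placeholder, and the `fastpath` column exists in PostgreSQL 9.2 and later):

```sql
-- List the locks held by the planning backend; only the first few relation
-- locks can use the per-backend fastpath, the rest go through the shared
-- lock manager, where concurrent planners can contend.
SELECT locktype, relation::regclass AS rel, mode, fastpath
FROM pg_locks
WHERE pid = 12345          -- placeholder: pid of the backend running EXPLAIN
ORDER BY fastpath DESC, rel::text;
```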
{
"msg_contents": "Hi Rural Hunter,\nTry to create an index on cid attribute.\nHow many rows has article_729?\n\nPietro Pugni\nIl 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:\n\n> yes i'm very sure. from what i observed, it has something to do with the\n> concurrent query planing. if i disconnect other connections, the plan is\n> very quick.\n>\n> 2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:\n>\n>>\n>>\n>> Check constraints:\n>>> \"article_729_cid_check\" CHECK (cid = 729)\n>>>\n>>\n>>\n>> Used partition schema looks very simple and straightforward, and should\n>> have no issues with 80 partitions.\n>> Are you sure that you have only 80 partitions but not (lets say) 800?\n>> Are every other partition of the article table use the same general idea\n>> of partition check (cid=something)?\n>>\n>>\n>> Maxim Boguk\n>> Senior Postgresql DBA\n>> http://www.postgresql-consulting.ru/\n>> <http://www.postgresql-consulting.com/>\n>>\n>> Phone RU: +7 910 405 4718\n>> Phone AU: +61 45 218 5678\n>>\n>> LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\n>> Skype: maxim.boguk\n>> Jabber: [email protected]\n>> МойКруг: http://mboguk.moikrug.ru/\n>>\n>> \"People problems are solved with people.\n>> If people cannot solve the problem, try technology.\n>> People will then wish they'd listened at the first stage.\"\n>>\n>>\n>\n\nHi Rural Hunter, \nTry to create an index on cid attribute.\nHow many rows has article_729?\n\n Pietro Pugni\nIl 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:yes i'm very sure. from what i observed, it has something to do with the concurrent query planing. 
if i disconnect other connections, the plan is very quick.2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:Check constraints: \"article_729_cid_check\" CHECK (cid = 729)Used partition schema looks very simple and straightforward, and should have no issues with 80 partitions.Are you sure that you have only 80 partitions but not (lets say) 800?Are every other partition of the article table use the same general idea of partition check (cid=something)? Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.bogukJabber: [email protected]МойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Tue, 11 Aug 2015 19:03:25 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "article_729 has about 0.8 million rows. The rows of the children tables are\nvariance from several thousands to dozens of millions. How can it help to\ncreate index on the partition key?\n\n2015-08-12 1:03 GMT+08:00 Pietro Pugni <[email protected]>:\n\n> Hi Rural Hunter,\n> Try to create an index on cid attribute.\n> How many rows has article_729?\n>\n> Pietro Pugni\n> Il 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:\n>\n>> yes i'm very sure. from what i observed, it has something to do with the\n>> concurrent query planing. if i disconnect other connections, the plan is\n>> very quick.\n>>\n>> 2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:\n>>\n>>>\n>>>\n>>> Check constraints:\n>>>> \"article_729_cid_check\" CHECK (cid = 729)\n>>>>\n>>>\n>>>\n>>> Used partition schema looks very simple and straightforward, and should\n>>> have no issues with 80 partitions.\n>>> Are you sure that you have only 80 partitions but not (lets say) 800?\n>>> Are every other partition of the article table use the same general idea\n>>> of partition check (cid=something)?\n>>>\n>>>\n>>> Maxim Boguk\n>>> Senior Postgresql DBA\n>>> http://www.postgresql-consulting.ru/\n>>> <http://www.postgresql-consulting.com/>\n>>>\n>>> Phone RU: +7 910 405 4718\n>>> Phone AU: +61 45 218 5678\n>>>\n>>> LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\n>>> Skype: maxim.boguk\n>>> Jabber: [email protected]\n>>> МойКруг: http://mboguk.moikrug.ru/\n>>>\n>>> \"People problems are solved with people.\n>>> If people cannot solve the problem, try technology.\n>>> People will then wish they'd listened at the first stage.\"\n>>>\n>>>\n>>\n\narticle_729 has about 0.8 million rows. The rows of the children tables are variance from several thousands to dozens of millions. 
How can it help to create index on the partition key?2015-08-12 1:03 GMT+08:00 Pietro Pugni <[email protected]>:Hi Rural Hunter, \nTry to create an index on cid attribute.\nHow many rows has article_729?\n\n Pietro Pugni\nIl 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:yes i'm very sure. from what i observed, it has something to do with the concurrent query planing. if i disconnect other connections, the plan is very quick.2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:Check constraints: \"article_729_cid_check\" CHECK (cid = 729)Used partition schema looks very simple and straightforward, and should have no issues with 80 partitions.Are you sure that you have only 80 partitions but not (lets say) 800?Are every other partition of the article table use the same general idea of partition check (cid=something)? Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.bogukJabber: [email protected]МойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Wed, 12 Aug 2015 09:49:42 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "You can give it a try only on that partition just to see if your query plan\ngets better. I prefer defining partitioning over ranging attributes like,\nfor example: cid between 123 and 456. It makes more sense, especially when\nthere are attributes which value strictly depends on the check attribute.\nBtw, dozens of millions is not a problem on modern systems. I remember of\nreading about a recommended 20 millions per partition but I usually work\nwith 60 millions per partition without any problem.\n\nDo you autovacuum? How frequently do the updates and insert operations\noccur?\nGive us your configuration about work_mem, shared_buffers, max_connections\netc. Kernel version? If possible avoid 3.2 and 3.8-3.13. Also think to\nupgrade your OS version.\n\n From today I'm on vacancy, so others could help :)\n\nPietro Pugni\nIl 12/ago/2015 03:49, \"Rural Hunter\" <[email protected]> ha scritto:\n\n> article_729 has about 0.8 million rows. The rows of the children tables\n> are variance from several thousands to dozens of millions. How can it help\n> to create index on the partition key?\n>\n> 2015-08-12 1:03 GMT+08:00 Pietro Pugni <[email protected]>:\n>\n>> Hi Rural Hunter,\n>> Try to create an index on cid attribute.\n>> How many rows has article_729?\n>>\n>> Pietro Pugni\n>> Il 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:\n>>\n>>> yes i'm very sure. from what i observed, it has something to do with the\n>>> concurrent query planing. 
if i disconnect other connections, the plan is\n>>> very quick.\n>>>\n>>> 2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:\n>>>\n>>>>\n>>>>\n>>>> Check constraints:\n>>>>> \"article_729_cid_check\" CHECK (cid = 729)\n>>>>>\n>>>>\n>>>>\n>>>> Used partition schema looks very simple and straightforward, and should\n>>>> have no issues with 80 partitions.\n>>>> Are you sure that you have only 80 partitions but not (lets say) 800?\n>>>> Are every other partition of the article table use the same general\n>>>> idea of partition check (cid=something)?\n>>>>\n>>>>\n>>>> Maxim Boguk\n>>>> Senior Postgresql DBA\n>>>> http://www.postgresql-consulting.ru/\n>>>> <http://www.postgresql-consulting.com/>\n>>>>\n>>>> Phone RU: +7 910 405 4718\n>>>> Phone AU: +61 45 218 5678\n>>>>\n>>>> LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\n>>>> Skype: maxim.boguk\n>>>> Jabber: [email protected]\n>>>> МойКруг: http://mboguk.moikrug.ru/\n>>>>\n>>>> \"People problems are solved with people.\n>>>> If people cannot solve the problem, try technology.\n>>>> People will then wish they'd listened at the first stage.\"\n>>>>\n>>>>\n>>>\n>\n\nYou can give it a try only on that partition just to see if your query plan gets better. I prefer defining partitioning over ranging attributes like, for example: cid between 123 and 456. It makes more sense, especially when there are attributes which value strictly depends on the check attribute. Btw, dozens of millions is not a problem on modern systems. I remember of reading about a recommended 20 millions per partition but I usually work with 60 millions per partition without any problem.\nDo you autovacuum? How frequently do the updates and insert operations occur?\nGive us your configuration about work_mem, shared_buffers, max_connections etc. Kernel version? If possible avoid 3.2 and 3.8-3.13. Also think to upgrade your OS version. 
\nFrom today I'm on vacancy, so others could help :)\nPietro Pugni\nIl 12/ago/2015 03:49, \"Rural Hunter\" <[email protected]> ha scritto:article_729 has about 0.8 million rows. The rows of the children tables are variance from several thousands to dozens of millions. How can it help to create index on the partition key?2015-08-12 1:03 GMT+08:00 Pietro Pugni <[email protected]>:Hi Rural Hunter, \nTry to create an index on cid attribute.\nHow many rows has article_729?\n\n Pietro Pugni\nIl 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:yes i'm very sure. from what i observed, it has something to do with the concurrent query planing. if i disconnect other connections, the plan is very quick.2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:Check constraints: \"article_729_cid_check\" CHECK (cid = 729)Used partition schema looks very simple and straightforward, and should have no issues with 80 partitions.Are you sure that you have only 80 partitions but not (lets say) 800?Are every other partition of the article table use the same general idea of partition check (cid=something)? Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.bogukJabber: [email protected]МойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Wed, 12 Aug 2015 09:00:45 +0200",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
},
{
"msg_contents": "I tried to add index on partition key and it didn't help. we have\nautovacuum running. The updates and inserts are very frequent on these\ntables. The server kernel version is 3.5.0-22-generic. It has 376G memory.\n\nmax_connections = 2500 # (change requires restart)\nshared_buffers = 32GB # min 128kB\nwork_mem = 8MB # min 64kB\nmaintenance_work_mem = 20GB # min 1MB\n\nWe usually have around 400 active connections on the db. Most of them are\nidle. There are about 100 connections are in active status and I can see\nmost of the time they are in 'BIND' status in ps command.\n\nWe have heavy IO load on the disk of the default tablespace where I believe\ntable statistics tables are in. Will that impact the query planing greatly?\n\n\n2015-08-12 15:00 GMT+08:00 Pietro Pugni <[email protected]>:\n\n> You can give it a try only on that partition just to see if your query\n> plan gets better. I prefer defining partitioning over ranging attributes\n> like, for example: cid between 123 and 456. It makes more sense, especially\n> when there are attributes which value strictly depends on the check\n> attribute. Btw, dozens of millions is not a problem on modern systems. I\n> remember of reading about a recommended 20 millions per partition but I\n> usually work with 60 millions per partition without any problem.\n>\n> Do you autovacuum? How frequently do the updates and insert operations\n> occur?\n> Give us your configuration about work_mem, shared_buffers, max_connections\n> etc. Kernel version? If possible avoid 3.2 and 3.8-3.13. Also think to\n> upgrade your OS version.\n>\n> From today I'm on vacancy, so others could help :)\n>\n> Pietro Pugni\n> Il 12/ago/2015 03:49, \"Rural Hunter\" <[email protected]> ha scritto:\n>\n>> article_729 has about 0.8 million rows. The rows of the children tables\n>> are variance from several thousands to dozens of millions. 
How can it help\n>> to create index on the partition key?\n>>\n>> 2015-08-12 1:03 GMT+08:00 Pietro Pugni <[email protected]>:\n>>\n>>> Hi Rural Hunter,\n>>> Try to create an index on cid attribute.\n>>> How many rows has article_729?\n>>>\n>>> Pietro Pugni\n>>> Il 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:\n>>>\n>>>> yes i'm very sure. from what i observed, it has something to do with\n>>>> the concurrent query planing. if i disconnect other connections, the plan\n>>>> is very quick.\n>>>>\n>>>> 2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:\n>>>>\n>>>>>\n>>>>>\n>>>>> Check constraints:\n>>>>>> \"article_729_cid_check\" CHECK (cid = 729)\n>>>>>>\n>>>>>\n>>>>>\n>>>>> Used partition schema looks very simple and straightforward, and\n>>>>> should have no issues with 80 partitions.\n>>>>> Are you sure that you have only 80 partitions but not (lets say) 800?\n>>>>> Are every other partition of the article table use the same general\n>>>>> idea of partition check (cid=something)?\n>>>>>\n>>>>>\n>>>>> Maxim Boguk\n>>>>> Senior Postgresql DBA\n>>>>> http://www.postgresql-consulting.ru/\n>>>>> <http://www.postgresql-consulting.com/>\n>>>>>\n>>>>> Phone RU: +7 910 405 4718\n>>>>> Phone AU: +61 45 218 5678\n>>>>>\n>>>>> LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\n>>>>> Skype: maxim.boguk\n>>>>> Jabber: [email protected]\n>>>>> МойКруг: http://mboguk.moikrug.ru/\n>>>>>\n>>>>> \"People problems are solved with people.\n>>>>> If people cannot solve the problem, try technology.\n>>>>> People will then wish they'd listened at the first stage.\"\n>>>>>\n>>>>>\n>>>>\n>>\n\nI tried to add index on partition key and it didn't help. we have autovacuum running. The updates and inserts are very frequent on these tables. The server kernel version is 3.5.0-22-generic. 
It has 376G memory.max_connections = 2500 # (change requires restart)shared_buffers = 32GB # min 128kBwork_mem = 8MB # min 64kBmaintenance_work_mem = 20GB # min 1MBWe usually have around 400 active connections on the db. Most of them are idle. There are about 100 connections are in active status and I can see most of the time they are in 'BIND' status in ps command. We have heavy IO load on the disk of the default tablespace where I believe table statistics tables are in. Will that impact the query planing greatly?2015-08-12 15:00 GMT+08:00 Pietro Pugni <[email protected]>:You can give it a try only on that partition just to see if your query plan gets better. I prefer defining partitioning over ranging attributes like, for example: cid between 123 and 456. It makes more sense, especially when there are attributes which value strictly depends on the check attribute. Btw, dozens of millions is not a problem on modern systems. I remember of reading about a recommended 20 millions per partition but I usually work with 60 millions per partition without any problem.\nDo you autovacuum? How frequently do the updates and insert operations occur?\nGive us your configuration about work_mem, shared_buffers, max_connections etc. Kernel version? If possible avoid 3.2 and 3.8-3.13. Also think to upgrade your OS version. \nFrom today I'm on vacancy, so others could help :)\nPietro Pugni\nIl 12/ago/2015 03:49, \"Rural Hunter\" <[email protected]> ha scritto:article_729 has about 0.8 million rows. The rows of the children tables are variance from several thousands to dozens of millions. How can it help to create index on the partition key?2015-08-12 1:03 GMT+08:00 Pietro Pugni <[email protected]>:Hi Rural Hunter, \nTry to create an index on cid attribute.\nHow many rows has article_729?\n\n Pietro Pugni\nIl 11/ago/2015 16:51, \"Rural Hunter\" <[email protected]> ha scritto:yes i'm very sure. from what i observed, it has something to do with the concurrent query planing. 
if i disconnect other connections, the plan is very quick.2015-08-11 22:42 GMT+08:00 Maxim Boguk <[email protected]>:Check constraints: \"article_729_cid_check\" CHECK (cid = 729)Used partition schema looks very simple and straightforward, and should have no issues with 80 partitions.Are you sure that you have only 80 partitions but not (lets say) 800?Are every other partition of the article table use the same general idea of partition check (cid=something)? Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.bogukJabber: [email protected]МойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Wed, 12 Aug 2015 16:25:39 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Plan Performance on Partitioned Table"
}
] |
[
{
"msg_contents": "Hi,\n\nI am new to optimizing queries and i'm getting a slow running time\n(~1.5secs) with the following SQL:\n\nSELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n\"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n, \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n\"Occupation\", \"Vacancy\".\"PositionNo\"\n, \"Vacancy\".\"Template\" from \n \"Vacancy\"\nLEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" \n ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\" \n and \"c_22\".\"Category_TableID\" = 22)\nLEFT JOIN \"CategoryOption\" as \"Occupation\"\n ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\nLEFT JOIN \"TableRow_TableRow\" as \"t_33\" \n ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\" \n and \"t_33\".\"Table_TableID\" = 33 )\nLEFT JOIN \"Department\"\n ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n\"Department\".\"Active\" = 't' and \"Department\"\n.\"ClientID\" = 263)\nJOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\nand \"c_50\".\"RowID\" = \"Vacancy\"\n.\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\nWHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\nDISTINCT(\"Vacancy\".\"ID\") \n FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n(\"ct126\".\"Category_TableID\" = 126\n and \"RowID\" = \"Vacancy\".\"ID\") \n left join \"Workflow\" on (\"Workflow\".\"VacancyID\" = \"Vacancy\".\"ID\"\nand \"Workflow\".\"Level\" \n= 1) \n left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n\"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n = 30 and \"c30\".\"CategoryOptionID\" = 21923) \n WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\nIN(34024,35254,35255,35256)) and \"Vacancy\"\n.\"Template\" = 't'\nGROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n\"Vacancy\".\"CustomAccess\", \"Department\"\n.\"Name\", \"Vacancy\".\"PositionNo\", 
\"Vacancy\".\"Template\"\nUNION\nSELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n\"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n, \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n\"Occupation\", \"Vacancy\".\"PositionNo\"\n, \"Vacancy\".\"Template\" from \n \"Vacancy\"\nLEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" \n ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\" \n and \"c_22\".\"Category_TableID\" = 22)\nLEFT JOIN \"CategoryOption\" as \"Occupation\"\n ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\nLEFT JOIN \"TableRow_TableRow\" as \"t_33\" \n ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\" \n and \"t_33\".\"Table_TableID\" = 33 )\nLEFT JOIN \"Department\"\n ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n\"Department\".\"Active\" = 't' and \"Department\"\n.\"ClientID\" = 263)\nJOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\nand \"c_50\".\"RowID\" = \"Vacancy\"\n.\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\nWHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\nDISTINCT(\"Vacancy\".\"ID\") \n FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n(\"ct126\".\"Category_TableID\" = 126\n and \"RowID\" = \"Vacancy\".\"ID\") \n left join \"Workflow\" on (\"Workflow\".\"VacancyID\" = \"Vacancy\".\"ID\"\nand \"Workflow\".\"Level\" \n= 1) \n left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n\"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n = 30 and \"c30\".\"CategoryOptionID\" = 21923) \n WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\nIN(34024,35254,35255,35256)) and \"Vacancy\"\n.\"Template\" <> 't' AND \"Vacancy\".\"Level\" = 1 \nGROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n\"Vacancy\".\"CustomAccess\", \"Department\"\n.\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n ORDER BY \"JobTitle\"\n\nRunning explain analyze gives me the 
following information: \nhttp://explain.depesz.com/s/pdC <http://explain.depesz.com/s/pdC> \n\nFor a total runtime: 2877.157 ms\n\nIf I remove the left joins on Department and TableRow_TableRow, this reduces\nthe run time by about a third.\nAdditionally, removing the CategoryOption and CategoryOption_TableRow joins\nfurther reduces it by about a third.\n\nGiven that I need both these joins for the information retrieved by them,\nwhat would be the best way to re-factor this query so it runs faster?\n\nLooking at the output of explain analyze, the hash aggregates and sort seem\nto be the primary issue.\n\nThanks in advance\n\n--\nView this message in context: http://postgresql.nabble.com/Slow-Query-tp5861835.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 11 Aug 2015 19:34:20 -0700 (MST)",
"msg_from": "robbyc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query"
},
{
"msg_contents": "On Wed, Aug 12, 2015 at 12:34 PM, robbyc <[email protected]> wrote:\n\n> Hi,\n>\n> I am new to optimizing queries and i'm getting a slow running time\n> (~1.5secs) with the following SQL:\n>\n> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n> , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n> \"Occupation\", \"Vacancy\".\"PositionNo\"\n> , \"Vacancy\".\"Template\" from\n> \"Vacancy\"\n> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n> and \"c_22\".\"Category_TableID\" = 22)\n> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n> LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n> and \"t_33\".\"Table_TableID\" = 33 )\n> LEFT JOIN \"Department\"\n> ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n> \"Department\".\"Active\" = 't' and \"Department\"\n> .\"ClientID\" = 263)\n> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\n> and \"c_50\".\"RowID\" = \"Vacancy\"\n> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n> DISTINCT(\"Vacancy\".\"ID\")\n> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n> (\"ct126\".\"Category_TableID\" = 126\n> and \"RowID\" = \"Vacancy\".\"ID\")\n> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n> \"Vacancy\".\"ID\"\n> and \"Workflow\".\"Level\"\n> = 1)\n> left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n> = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n> IN(34024,35254,35255,35256)) and \"Vacancy\"\n> .\"Template\" = 't'\n> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", 
\"Vacancy\".\"DateCreated\",\n> \"Vacancy\".\"CustomAccess\", \"Department\"\n> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n> UNION\n> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n> , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n> \"Occupation\", \"Vacancy\".\"PositionNo\"\n> , \"Vacancy\".\"Template\" from\n> \"Vacancy\"\n> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n> and \"c_22\".\"Category_TableID\" = 22)\n> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n> LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n> and \"t_33\".\"Table_TableID\" = 33 )\n> LEFT JOIN \"Department\"\n> ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n> \"Department\".\"Active\" = 't' and \"Department\"\n> .\"ClientID\" = 263)\n> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\n> and \"c_50\".\"RowID\" = \"Vacancy\"\n> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n> DISTINCT(\"Vacancy\".\"ID\")\n> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n> (\"ct126\".\"Category_TableID\" = 126\n> and \"RowID\" = \"Vacancy\".\"ID\")\n> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n> \"Vacancy\".\"ID\"\n> and \"Workflow\".\"Level\"\n> = 1)\n> left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n> = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n> IN(34024,35254,35255,35256)) and \"Vacancy\"\n> .\"Template\" <> 't' AND \"Vacancy\".\"Level\" = 1\n> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n> 
\"Vacancy\".\"CustomAccess\", \"Department\"\n> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n> ORDER BY \"JobTitle\"\n>\n> Running explain analyze gives me the following information:\n> http://explain.depesz.com/s/pdC <http://explain.depesz.com/s/pdC>\n\n\n> For a total runtime: 2877.157 ms\n>\n> If i remove the left joins on Department and TableRow_TableRow this reduces\n> the run time by about a third.\n> Additionally removing CategoryOption and CategoryOption_TableRow joins\n> further reduces by a about a third.\n>\n> Given that i need both these joins for the information retrieved by them,\n> what would be the best way to re-factor this query so it runs faster?\n>\n> Looking at the output of explain analyze the hash aggregates and sort seem\n> to be the primary issue.\n>\n\nThe query has got a distinct and group-by/order-by clauses which seems to\nbe taking time. Without looking at much details of the query code and Table\nsize etc, did you try increasing the work_mem and then execute the query\nand see if that helps ? This will reduce the on-disk IO for sorting. 
Also,\nVacancy.JobTitle seems to be a non-indexed column.\n\nRegards,\nVenkata Balaji\n\nFujitsu Australia",
"msg_date": "Wed, 12 Aug 2015 14:08:44 +1000",
"msg_from": "Venkata Balaji N <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query"
},
{
"msg_contents": "Hi Venkata,\n\nwork_mem was set to 72MB, increased to 144MB, no change.\n\nAdded an index of type varchar_pattern_ops to Vacancy.JobTitle, this did\nnot help either.\n\nOn Wed, Aug 12, 2015 at 2:09 PM, Venkata Balaji N [via PostgreSQL] <\[email protected]> wrote:\n\n> On Wed, Aug 12, 2015 at 12:34 PM, robbyc <[hidden email]\n> <http:///user/SendEmail.jtp?type=node&node=5861839&i=0>> wrote:\n>\n>> Hi,\n>>\n>> I am new to optimizing queries and i'm getting a slow running time\n>> (~1.5secs) with the following SQL:\n>>\n>> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n>> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n>> , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n>> \"Occupation\", \"Vacancy\".\"PositionNo\"\n>> , \"Vacancy\".\"Template\" from\n>> \"Vacancy\"\n>> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n>> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n>> and \"c_22\".\"Category_TableID\" = 22)\n>> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n>> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n>> LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n>> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n>> and \"t_33\".\"Table_TableID\" = 33 )\n>> LEFT JOIN \"Department\"\n>> ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n>> \"Department\".\"Active\" = 't' and \"Department\"\n>> .\"ClientID\" = 263)\n>> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\n>> and \"c_50\".\"RowID\" = \"Vacancy\"\n>> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n>> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n>> DISTINCT(\"Vacancy\".\"ID\")\n>> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n>> (\"ct126\".\"Category_TableID\" = 126\n>> and \"RowID\" = \"Vacancy\".\"ID\")\n>> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n>> \"Vacancy\".\"ID\"\n>> and \"Workflow\".\"Level\"\n>> = 1)\n>> left 
join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n>> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n>> = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n>> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n>> IN(34024,35254,35255,35256)) and \"Vacancy\"\n>> .\"Template\" = 't'\n>> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n>> \"Vacancy\".\"CustomAccess\", \"Department\"\n>> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n>> UNION\n>> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n>> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n>> , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n>> \"Occupation\", \"Vacancy\".\"PositionNo\"\n>> , \"Vacancy\".\"Template\" from\n>> \"Vacancy\"\n>> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n>> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n>> and \"c_22\".\"Category_TableID\" = 22)\n>> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n>> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n>> LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n>> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n>> and \"t_33\".\"Table_TableID\" = 33 )\n>> LEFT JOIN \"Department\"\n>> ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n>> \"Department\".\"Active\" = 't' and \"Department\"\n>> .\"ClientID\" = 263)\n>> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\n>> and \"c_50\".\"RowID\" = \"Vacancy\"\n>> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n>> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n>> DISTINCT(\"Vacancy\".\"ID\")\n>> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n>> (\"ct126\".\"Category_TableID\" = 126\n>> and \"RowID\" = \"Vacancy\".\"ID\")\n>> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n>> \"Vacancy\".\"ID\"\n>> and \"Workflow\".\"Level\"\n>> = 1)\n>> left join 
\"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n>> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n>> = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n>> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n>> IN(34024,35254,35255,35256)) and \"Vacancy\"\n>> .\"Template\" <> 't' AND \"Vacancy\".\"Level\" = 1\n>> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n>> \"Vacancy\".\"CustomAccess\", \"Department\"\n>> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n>> ORDER BY \"JobTitle\"\n>>\n>> Running explain analyze gives me the following information:\n>> http://explain.depesz.com/s/pdC <http://explain.depesz.com/s/pdC>\n>\n>\n>> For a total runtime: 2877.157 ms\n>>\n>> If i remove the left joins on Department and TableRow_TableRow this\n>> reduces\n>> the run time by about a third.\n>> Additionally removing CategoryOption and CategoryOption_TableRow joins\n>> further reduces by a about a third.\n>>\n>> Given that i need both these joins for the information retrieved by them,\n>> what would be the best way to re-factor this query so it runs faster?\n>>\n>> Looking at the output of explain analyze the hash aggregates and sort seem\n>> to be the primary issue.\n>>\n>\n> The query has got a distinct and group-by/order-by clauses which seems to\n> be taking time. Without looking at much details of the query code and Table\n> size etc, did you try increasing the work_mem and then execute the query\n> and see if that helps ? This will reduce the on-disk IO for sorting. 
Also,\n> Vacancy.JobTitle seems to be a non-index column.\n>\n> Regards,\n> Venkata Balaji\n>\n> Fujitsu Australia\n>\n\n\n-- \nRegards\n\nRobert Campbell\n+61412062971\[email protected]\n\n\n--\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Tue, 11 Aug 2015 22:29:45 -0700 (MST)",
"msg_from": "robbyc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query"
},
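Since raising work_mem from 72MB to 144MB showed no change in the exchange above, it is worth confirming whether the sorts in the plan spill to disk at all before tuning further. A minimal sketch of that check; the setting value is illustrative, and the query is a cut-down stand-in for the full query in this thread, not the poster's actual statement:

```sql
-- Session-local experiment; the value is illustrative, not a recommendation.
SET work_mem = '144MB';

-- Re-run the slow query under EXPLAIN ANALYZE and inspect each Sort node:
--   "Sort Method: external merge  Disk: ...kB" -> the sort spilled; more
--       work_mem (or feeding less data into the sort) can help.
--   "Sort Method: quicksort  Memory: ...kB"    -> already in memory;
--       work_mem is not the bottleneck and the query itself needs rewriting.
EXPLAIN ANALYZE
SELECT "Vacancy"."ID", "Vacancy"."JobTitle"
FROM "Vacancy"
WHERE "Vacancy"."ClientID" = 263
ORDER BY "JobTitle";

-- Undo the session setting when done.
RESET work_mem;
```

If every Sort and HashAggregate node already fits in memory, raising work_mem further cannot improve the runtime, which would match the "no change" result reported above.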
{
"msg_contents": "On Wed, Aug 12, 2015 at 3:29 PM, robbyc <[email protected]> wrote:\n\n> Hi Venkata,\n>\n> work_mem was set to 72MB, increased to 144MB, no change.\n>\n\nIncreasing work_mem depends on various other factors like Table size\n(amount of data being sorted), available memory etc.\n\n\n> Added an index of type varchar_pattern_ops to Vacancy.JobTitle, this did\n> not help either.\n>\n\nSorry, I did not mean to say that an Index must be added straight away. The\ncolumn must be eligible to have an Index. Meaning, Index will be\nbeneficial if created on a column with high number of distinct values.\n\nIf either of the above does not help, then options to rewrite the query\nmust be explored.\n\nThanks,\nVenkata Balaji N\n\nFujitsu Australia\n\n\n>\n> On Wed, Aug 12, 2015 at 2:09 PM, Venkata Balaji N [via PostgreSQL] <[hidden\n> email] <http:///user/SendEmail.jtp?type=node&node=5861850&i=0>> wrote:\n>\n>> On Wed, Aug 12, 2015 at 12:34 PM, robbyc <[hidden email]\n>> <http:///user/SendEmail.jtp?type=node&node=5861839&i=0>> wrote:\n>>\n>>> Hi,\n>>>\n>>> I am new to optimizing queries and i'm getting a slow running time\n>>> (~1.5secs) with the following SQL:\n>>>\n>>> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n>>> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n>>> , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n>>> \"Occupation\", \"Vacancy\".\"PositionNo\"\n>>> , \"Vacancy\".\"Template\" from\n>>> \"Vacancy\"\n>>> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n>>> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n>>> and \"c_22\".\"Category_TableID\" = 22)\n>>> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n>>> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n>>> LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n>>> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n>>> and \"t_33\".\"Table_TableID\" = 33 )\n>>> LEFT JOIN \"Department\"\n>>> ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" 
and\n>>> \"Department\".\"Active\" = 't' and \"Department\"\n>>> .\"ClientID\" = 263)\n>>> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"=\n>>> 50\n>>> and \"c_50\".\"RowID\" = \"Vacancy\"\n>>> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n>>> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n>>> DISTINCT(\"Vacancy\".\"ID\")\n>>> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n>>> (\"ct126\".\"Category_TableID\" = 126\n>>> and \"RowID\" = \"Vacancy\".\"ID\")\n>>> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n>>> \"Vacancy\".\"ID\"\n>>> and \"Workflow\".\"Level\"\n>>> = 1)\n>>> left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n>>> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n>>> = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n>>> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n>>> IN(34024,35254,35255,35256)) and \"Vacancy\"\n>>> .\"Template\" = 't'\n>>> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n>>> \"Vacancy\".\"CustomAccess\", \"Department\"\n>>> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n>>> UNION\n>>> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n>>> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n>>> , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n>>> \"Occupation\", \"Vacancy\".\"PositionNo\"\n>>> , \"Vacancy\".\"Template\" from\n>>> \"Vacancy\"\n>>> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n>>> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n>>> and \"c_22\".\"Category_TableID\" = 22)\n>>> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n>>> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n>>> LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n>>> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n>>> and \"t_33\".\"Table_TableID\" = 33 )\n>>> LEFT JOIN \"Department\"\n>>> ON (\"Department\".\"ID\" = 
\"t_33\".\"Table2RowID\" and\n>>> \"Department\".\"Active\" = 't' and \"Department\"\n>>> .\"ClientID\" = 263)\n>>> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"=\n>>> 50\n>>> and \"c_50\".\"RowID\" = \"Vacancy\"\n>>> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n>>> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n>>> DISTINCT(\"Vacancy\".\"ID\")\n>>> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n>>> (\"ct126\".\"Category_TableID\" = 126\n>>> and \"RowID\" = \"Vacancy\".\"ID\")\n>>> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n>>> \"Vacancy\".\"ID\"\n>>> and \"Workflow\".\"Level\"\n>>> = 1)\n>>> left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n>>> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n>>> = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n>>> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n>>> IN(34024,35254,35255,35256)) and \"Vacancy\"\n>>> .\"Template\" <> 't' AND \"Vacancy\".\"Level\" = 1\n>>> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n>>> \"Vacancy\".\"CustomAccess\", \"Department\"\n>>> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n>>> ORDER BY \"JobTitle\"\n>>>\n>>> Running explain analyze gives me the following information:\n>>> http://explain.depesz.com/s/pdC <http://explain.depesz.com/s/pdC>\n>>\n>>\n>>> For a total runtime: 2877.157 ms\n>>>\n>>> If i remove the left joins on Department and TableRow_TableRow this\n>>> reduces\n>>> the run time by about a third.\n>>> Additionally removing CategoryOption and CategoryOption_TableRow joins\n>>> further reduces by a about a third.\n>>>\n>>> Given that i need both these joins for the information retrieved by them,\n>>> what would be the best way to re-factor this query so it runs faster?\n>>>\n>>> Looking at the output of explain analyze the hash aggregates and sort\n>>> seem\n>>> to be the primary issue.\n>>>\n>>\n>> 
The query has got a distinct and group-by/order-by clauses which seems to\n>> be taking time. Without looking at much details of the query code and Table\n>> size etc, did you try increasing the work_mem and then execute the query\n>> and see if that helps ? This will reduce the on-disk IO for sorting. Also,\n>> Vacancy.JobTitle seems to be a non-index column.\n>>\n>> Regards,\n>> Venkata Balaji\n>>\n>> Fujitsu Australia\n>>\n>\n>\n> --\n> Regards\n>\n> Robert Campbell\n> +61412062971\n> [hidden email]\n>\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n",
"msg_date": "Wed, 12 Aug 2015 16:08:46 +1000",
"msg_from": "Venkata Balaji N <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query"
},
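Venkata's point about index eligibility (an index helps on columns with many distinct values) can be checked directly from the planner statistics rather than by trial and error. A sketch using the standard pg_stats view; the view and its columns are real PostgreSQL, while the table and column names are the ones from this thread:

```sql
-- n_distinct is negative when PostgreSQL estimates a ratio: -1 means every
-- row has a distinct value; a small positive number means few distinct
-- values, where a plain btree index on the column rarely pays off.
SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'Vacancy'
  AND attname IN ('JobTitle', 'ClientID', 'Template');
```

Consistent with the report above, an index on "JobTitle" would mainly benefit filtering or ordering on that one column, not the joins and aggregation that dominate this plan, so the varchar_pattern_ops index not helping is unsurprising.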
{
"msg_contents": "On 08/12/2015 04:34 AM, robbyc wrote:\n> Hi,\n> \n> I am new to optimizing queries and i'm getting a slow running time\n> (~1.5secs) with the following SQL:\n\nBefore mucking about with work_mem and indexes, the first thing to do is\nrewrite this query correctly. Here are just some of the things wrong\nwith the query as written:\n\n* You're doing a DISTINCT on the same set of columns also in a GROUP BY.\n This is redundant and causes needless deduplication.\n\n* You're joining two GROUPed BY then DISTINCTed queries using the UNION\n operator which will do yet another pass for deduplication.\n\n* You've got the entire query repeated for just a simple difference in\n the global WHERE clause. These can be merged.\n\n* You've kept LEFT JOINs in the subquery but you don't use any values\n from them. These can be safely removed altogether.\n\n* You're using a NOT IN clause which is almost never what you want. Use\n NOT EXISTS instead.\n\nWhat is this list() function? How is it defined? 
Can it be replaced\nwith string_agg()?\n\nYou're not doing yourself any favors at all with all this quoting and\nmixed case stuff.\n\nHere is a rewritten version, please let me know how it performs:\n\nSELECT \"Vacancy\".\"ID\",\n \"Vacancy\".\"JobTitle\",\n \"Vacancy\".\"DateCreated\",\n \"Vacancy\".\"CustomAccess\",\n \"Department\".\"Name\" as \"Department\",\n list(\"Occupation\".\"Name\") as \"Occupation\",\n \"Vacancy\".\"PositionNo\",\n \"Vacancy\".\"Template\"\nFROM \"Vacancy\"\nJOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\n \"c_50\".\"Category_TableID\"= 50\n AND \"c_50\".\"RowID\" = \"Vacancy\".\"ID\"\n AND \"c_50\".\"CategoryOptionID\"=19205)\nLEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" ON (\n \"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n AND \"c_22\".\"Category_TableID\" = 22)\nLEFT JOIN \"CategoryOption\" as \"Occupation\" ON (\n \"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\nLEFT JOIN \"TableRow_TableRow\" as \"t_33\" ON (\n \"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n AND \"t_33\".\"Table_TableID\" = 33)\nLEFT JOIN \"Department\" ON (\n \"Department\".\"ID\" = \"t_33\".\"Table2RowID\"\n AND \"Department\".\"Active\" = 't'\n AND \"Department\".\"ClientID\" = 263)\nWHERE \"Vacancy\".\"ClientID\" = 263\n AND NOT EXISTS (\n SELECT 1\n FROM \"Vacancy\" as _Vacancy\n JOIN \"CategoryOption_TableRow\" \"ct126\" on (\n \"ct126\".\"Category_TableID\" = 126\n AND \"RowID\" = _Vacancy.\"ID\")\n WHERE _Vacancy.\"Template\"\n AND \"ct126\".\"CategoryOptionID\" IN (34024,35254,35255,35256)\n AND _Vacancy.\"ID\" = \"Vacancy\".\"ID\")\n AND (\"Vacancy\".\"Template\" = 't' OR \"Vacancy\".\"Level\" = 1)\nGROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n \"Vacancy\".\"CustomAccess\", \"Department\".\"Name\",\n \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n\n\n> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n> , 
\"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n> \"Occupation\", \"Vacancy\".\"PositionNo\"\n> , \"Vacancy\".\"Template\" from \n> \"Vacancy\"\n> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" \n> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\" \n> and \"c_22\".\"Category_TableID\" = 22)\n> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n> LEFT JOIN \"TableRow_TableRow\" as \"t_33\" \n> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\" \n> and \"t_33\".\"Table_TableID\" = 33 )\n> LEFT JOIN \"Department\"\n> ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n> \"Department\".\"Active\" = 't' and \"Department\"\n> .\"ClientID\" = 263)\n> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\n> and \"c_50\".\"RowID\" = \"Vacancy\"\n> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n> DISTINCT(\"Vacancy\".\"ID\") \n> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n> (\"ct126\".\"Category_TableID\" = 126\n> and \"RowID\" = \"Vacancy\".\"ID\") \n> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" = \"Vacancy\".\"ID\"\n> and \"Workflow\".\"Level\" \n> = 1) \n> left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n> = 30 and \"c30\".\"CategoryOptionID\" = 21923) \n> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n> IN(34024,35254,35255,35256)) and \"Vacancy\"\n> .\"Template\" = 't'\n> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n> \"Vacancy\".\"CustomAccess\", \"Department\"\n> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n> UNION\n> SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n> \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n> , \"Department\".\"Name\" as \"Department\", 
list(\"Occupation\".\"Name\") as\n> \"Occupation\", \"Vacancy\".\"PositionNo\"\n> , \"Vacancy\".\"Template\" from \n> \"Vacancy\"\n> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" \n> ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\" \n> and \"c_22\".\"Category_TableID\" = 22)\n> LEFT JOIN \"CategoryOption\" as \"Occupation\"\n> ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n> LEFT JOIN \"TableRow_TableRow\" as \"t_33\" \n> ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\" \n> and \"t_33\".\"Table_TableID\" = 33 )\n> LEFT JOIN \"Department\"\n> ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n> \"Department\".\"Active\" = 't' and \"Department\"\n> .\"ClientID\" = 263)\n> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"= 50\n> and \"c_50\".\"RowID\" = \"Vacancy\"\n> .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n> WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n> DISTINCT(\"Vacancy\".\"ID\") \n> FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n> (\"ct126\".\"Category_TableID\" = 126\n> and \"RowID\" = \"Vacancy\".\"ID\") \n> left join \"Workflow\" on (\"Workflow\".\"VacancyID\" = \"Vacancy\".\"ID\"\n> and \"Workflow\".\"Level\" \n> = 1) \n> left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\" =\n> \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n> = 30 and \"c30\".\"CategoryOptionID\" = 21923) \n> WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n> IN(34024,35254,35255,35256)) and \"Vacancy\"\n> .\"Template\" <> 't' AND \"Vacancy\".\"Level\" = 1 \n> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n> \"Vacancy\".\"CustomAccess\", \"Department\"\n> .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n> ORDER BY \"JobTitle\"\n> \n> Running explain analyze gives me the following information: \n> http://explain.depesz.com/s/pdC <http://explain.depesz.com/s/pdC> \n> \n> For a total runtime: 2877.157 ms\n> 
\n> If i remove the left joins on Department and TableRow_TableRow this reduces\n> the run time by about a third.\n> Additionally removing CategoryOption and CategoryOption_TableRow joins\n> further reduces by a about a third.\n> \n> Given that i need both these joins for the information retrieved by them,\n> what would be the best way to re-factor this query so it runs faster?\n> \n> Looking at the output of explain analyze the hash aggregates and sort seem\n> to be the primary issue.\n> \n> Thanks in advance \n\n\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 12 Aug 2015 13:34:44 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query"
},
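Vik's advice to replace NOT IN with NOT EXISTS is not only about speed: the two differ in result whenever the subquery can yield a NULL. A self-contained illustration (toy data, not the poster's schema) runnable in psql:

```sql
-- NOT IN returns no rows at all once the subquery produces a NULL, because
-- e.g. "2 NOT IN (1, NULL)" evaluates to NULL, which filters the row out.
WITH vacancy(id) AS (VALUES (1), (2), (3)),
     excluded(id) AS (VALUES (1), (NULL))
SELECT id FROM vacancy
WHERE id NOT IN (SELECT id FROM excluded);   -- returns 0 rows

-- NOT EXISTS treats the NULL as "no match" and keeps ids 2 and 3; it also
-- lets the planner use an anti-join instead of a hashed subplan.
WITH vacancy(id) AS (VALUES (1), (2), (3)),
     excluded(id) AS (VALUES (1), (NULL))
SELECT id FROM vacancy v
WHERE NOT EXISTS (SELECT 1 FROM excluded e WHERE e.id = v.id);  -- returns 2, 3
```

In this thread the NOT IN subquery selects "Vacancy"."ID", which is presumably non-null, so the rewrite is safe and the anti-join is the main win.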
{
"msg_contents": "Hi Vik,\n\nThanks for your feedback, very helpful.\n\nI modified your query slightly; it returns all vacancy templates and\nall level 1 vacancies which aren't templates, and does so in about\n~800-900ms less, a great improvement on the original query.\n\nSELECT \"Vacancy\".\"ID\",\n \"Vacancy\".\"JobTitle\",\n \"Vacancy\".\"DateCreated\",\n \"Vacancy\".\"CustomAccess\",\n \"Department\".\"Name\" as \"Department\",\n list(\"Occupation\".\"Name\") as \"Occupation\",\n \"Vacancy\".\"PositionNo\",\n \"Vacancy\".\"Template\"\nFROM \"Vacancy\"\nJOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\n \"c_50\".\"Category_TableID\"= 50\n AND \"c_50\".\"RowID\" = \"Vacancy\".\"ID\"\n AND \"c_50\".\"CategoryOptionID\"=19205)\nLEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" ON (\n \"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n AND \"c_22\".\"Category_TableID\" = 22)\nLEFT JOIN \"CategoryOption\" as \"Occupation\" ON (\n \"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\nLEFT JOIN \"TableRow_TableRow\" as \"t_33\" ON (\n \"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n AND \"t_33\".\"Table_TableID\" = 33)\nLEFT JOIN \"Department\" ON (\n \"Department\".\"ID\" = \"t_33\".\"Table2RowID\"\n AND \"Department\".\"Active\" = 't'\n AND \"Department\".\"ClientID\" = 263)\nWHERE \"Vacancy\".\"ClientID\" = 263\n AND NOT EXISTS (\n SELECT 1\n FROM \"Vacancy\" as \"v\"\n JOIN \"CategoryOption_TableRow\" \"ct126\" on (\n \"ct126\".\"Category_TableID\" = 126\n AND \"RowID\" = \"v\".\"ID\")\n WHERE \"v\".\"Template\"\n AND \"ct126\".\"CategoryOptionID\" IN (34024,35254,35255,35256)\n AND \"v\".\"ID\" = \"Vacancy\".\"ID\")\n AND (\"Vacancy\".\"Template\" OR (\"Vacancy\".\"Template\" = 'f' AND\n\"Vacancy\".\"Level\" = 1))\n GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n \"Vacancy\".\"CustomAccess\", \"Department\".\"Name\",\n \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n\n\n\n\n\n\n\n\nOn Wed, Aug 12, 2015 at 9:35 PM, Vik 
Fearing-3 [via PostgreSQL] <\[email protected]> wrote:\n\n> On 08/12/2015 04:34 AM, robbyc wrote:\n> > Hi,\n> >\n> > I am new to optimizing queries and i'm getting a slow running time\n> > (~1.5secs) with the following SQL:\n>\n> Before mucking about with work_mem and indexes, the first thing to do is\n> rewrite this query correctly. Here are just some of the things wrong\n> with the query as written:\n>\n> * You're doing a DISTINCT on the same set of columns also in a GROUP BY.\n> This is redundant and causes needless deduplication.\n>\n> * You're joining two GROUPed BY then DISTINCTed queries using the UNION\n> operator which will do yet another pass for deduplication.\n>\n> * You've got the entire query repeated for just a simple difference in\n> the global WHERE clause. These can be merged.\n>\n> * You've kept LEFT JOINs in the subquery but you don't use any values\n> from them. These can be safely removed altogether.\n>\n> * You're using a NOT IN clause which is almost never what you want. Use\n> NOT EXISTS instead.\n>\n> What is this list() function? How is it defined? 
Can it be replaced\n> with string_agg()?\n>\n> You're not doing yourself any favors at all with all this quoting and\n> mixed case stuff.\n>\n> Here is a rewritten version, please let me know how it performs:\n>\n> SELECT \"Vacancy\".\"ID\",\n> \"Vacancy\".\"JobTitle\",\n> \"Vacancy\".\"DateCreated\",\n> \"Vacancy\".\"CustomAccess\",\n> \"Department\".\"Name\" as \"Department\",\n> list(\"Occupation\".\"Name\") as \"Occupation\",\n> \"Vacancy\".\"PositionNo\",\n> \"Vacancy\".\"Template\"\n> FROM \"Vacancy\"\n> JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\n> \"c_50\".\"Category_TableID\"= 50\n> AND \"c_50\".\"RowID\" = \"Vacancy\".\"ID\"\n> AND \"c_50\".\"CategoryOptionID\"=19205)\n> LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" ON (\n> \"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n> AND \"c_22\".\"Category_TableID\" = 22)\n> LEFT JOIN \"CategoryOption\" as \"Occupation\" ON (\n> \"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n> LEFT JOIN \"TableRow_TableRow\" as \"t_33\" ON (\n> \"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n> AND \"t_33\".\"Table_TableID\" = 33)\n> LEFT JOIN \"Department\" ON (\n> \"Department\".\"ID\" = \"t_33\".\"Table2RowID\"\n> AND \"Department\".\"Active\" = 't'\n> AND \"Department\".\"ClientID\" = 263)\n> WHERE \"Vacancy\".\"ClientID\" = 263\n> AND NOT EXISTS (\n> SELECT 1\n> FROM \"Vacancy\" as _Vacancy\n> JOIN \"CategoryOption_TableRow\" \"ct126\" on (\n> \"ct126\".\"Category_TableID\" = 126\n> AND \"RowID\" = _Vacancy.\"ID\")\n> WHERE _Vacancy.\"Template\"\n> AND \"ct126\".\"CategoryOptionID\" IN (34024,35254,35255,35256)\n> AND _Vacancy.\"ID\" = \"Vacancy\".\"ID\")\n> AND (\"Vacancy\".\"Template\" = 't' OR \"Vacancy\".\"Level\" = 1)\n> GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n> \"Vacancy\".\"CustomAccess\", \"Department\".\"Name\",\n> \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n>\n>\n> > SELECT distinct(\"Vacancy\".\"ID\"), \"Vacancy\".\"JobTitle\",\n> > 
\"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n> > , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n> > \"Occupation\", \"Vacancy\".\"PositionNo\"\n> > , \"Vacancy\".\"Template\" from\n> > \"Vacancy\"\n> > LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n> > ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n> > and \"c_22\".\"Category_TableID\" = 22)\n> > LEFT JOIN \"CategoryOption\" as \"Occupation\"\n> > ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n> > LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n> > ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n> > and \"t_33\".\"Table_TableID\" = 33 )\n> > LEFT JOIN \"Department\"\n> > ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n> > \"Department\".\"Active\" = 't' and \"Department\"\n> > .\"ClientID\" = 263)\n> > JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"=\n> 50\n> > and \"c_50\".\"RowID\" = \"Vacancy\"\n> > .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n> > WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n> > DISTINCT(\"Vacancy\".\"ID\")\n> > FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n> > (\"ct126\".\"Category_TableID\" = 126\n> > and \"RowID\" = \"Vacancy\".\"ID\")\n> > left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n> \"Vacancy\".\"ID\"\n> > and \"Workflow\".\"Level\"\n> > = 1)\n> > left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\"\n> =\n> > \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n> > = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n> > WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n> > IN(34024,35254,35255,35256)) and \"Vacancy\"\n> > .\"Template\" = 't'\n> > GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n> > \"Vacancy\".\"CustomAccess\", \"Department\"\n> > .\"Name\", \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"\n> > UNION\n> > SELECT distinct(\"Vacancy\".\"ID\"), 
\"Vacancy\".\"JobTitle\",\n> > \"Vacancy\".\"DateCreated\", \"Vacancy\".\"CustomAccess\"\n> > , \"Department\".\"Name\" as \"Department\", list(\"Occupation\".\"Name\") as\n> > \"Occupation\", \"Vacancy\".\"PositionNo\"\n> > , \"Vacancy\".\"Template\" from\n> > \"Vacancy\"\n> > LEFT JOIN \"CategoryOption_TableRow\" as \"c_22\"\n> > ON (\"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n> > and \"c_22\".\"Category_TableID\" = 22)\n> > LEFT JOIN \"CategoryOption\" as \"Occupation\"\n> > ON (\"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\n> > LEFT JOIN \"TableRow_TableRow\" as \"t_33\"\n> > ON (\"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n> > and \"t_33\".\"Table_TableID\" = 33 )\n> > LEFT JOIN \"Department\"\n> > ON (\"Department\".\"ID\" = \"t_33\".\"Table2RowID\" and\n> > \"Department\".\"Active\" = 't' and \"Department\"\n> > .\"ClientID\" = 263)\n> > JOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\"c_50\".\"Category_TableID\"=\n> 50\n> > and \"c_50\".\"RowID\" = \"Vacancy\"\n> > .\"ID\" and \"c_50\".\"CategoryOptionID\"=19205)\n> > WHERE \"Vacancy\".\"ClientID\" = 263 AND \"Vacancy\".\"ID\" NOT IN(SELECT\n> > DISTINCT(\"Vacancy\".\"ID\")\n> > FROM \"Vacancy\" join \"CategoryOption_TableRow\" \"ct126\" on\n> > (\"ct126\".\"Category_TableID\" = 126\n> > and \"RowID\" = \"Vacancy\".\"ID\")\n> > left join \"Workflow\" on (\"Workflow\".\"VacancyID\" =\n> \"Vacancy\".\"ID\"\n> > and \"Workflow\".\"Level\"\n> > = 1)\n> > left join \"CategoryOption_TableRow\" \"c30\" on (\"c30\".\"RowID\"\n> =\n> > \"Workflow\".\"ID\" and \"c30\".\"Category_TableID\"\n> > = 30 and \"c30\".\"CategoryOptionID\" = 21923)\n> > WHERE \"Template\" AND \"ct126\".\"CategoryOptionID\"\n> > IN(34024,35254,35255,35256)) and \"Vacancy\"\n> > .\"Template\" <> 't' AND \"Vacancy\".\"Level\" = 1\n> > GROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n> > \"Vacancy\".\"CustomAccess\", \"Department\"\n> > .\"Name\", \"Vacancy\".\"PositionNo\", 
\"Vacancy\".\"Template\"\n> > ORDER BY \"JobTitle\"\n> >\n> > Running explain analyze gives me the following information:\n> > http://explain.depesz.com/s/pdC <http://explain.depesz.com/s/pdC>\n> >\n> > For a total runtime: 2877.157 ms\n> >\n> > If i remove the left joins on Department and TableRow_TableRow this\n> reduces\n> > the run time by about a third.\n> > Additionally removing CategoryOption and CategoryOption_TableRow joins\n> > further reduces by a about a third.\n> >\n> > Given that i need both these joins for the information retrieved by\n> them,\n> > what would be the best way to re-factor this query so it runs faster?\n> >\n> > Looking at the output of explain analyze the hash aggregates and sort\n> seem\n> > to be the primary issue.\n> >\n> > Thanks in advance\n>\n>\n> --\n> Vik Fearing +33 6 46 75 15 36\n> http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([hidden email]\n> <http:///user/SendEmail.jtp?type=node&node=5861873&i=0>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> ------------------------------\n> If you reply to this email, your message will be added to the discussion\n> below:\n> http://postgresql.nabble.com/Slow-Query-tp5861835p5861873.html\n> To unsubscribe from Slow Query, click here\n> <http://postgresql.nabble.com/template/NamlServlet.jtp?macro=unsubscribe_by_code&node=5861835&code=cm9iY2FtcGJlbGw3M0BnbWFpbC5jb218NTg2MTgzNXwxOTc1MDc2ODM4>\n> .\n> NAML\n> <http://postgresql.nabble.com/template/NamlServlet.jtp?macro=macro_viewer&id=instant_html%21nabble%3Aemail.naml&base=nabble.naml.namespaces.BasicNamespace-nabble.view.web.template.NabbleNamespace-nabble.view.web.template.NodeNamespace&breadcrumbs=notify_subscribers%21nabble%3Aemail.naml-instant_emails%21nabble%3Aemail.naml-send_instant_email%21nabble%3Aemail.naml>\n>\n\n\n\n-- \nRegards\n\nRobert Campbell\n+61412062971\[email 
protected]\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Slow-Query-tp5861835p5861961.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nHi Vik,\n\nThanks for your feedback, very helpful. I modified your query slightly; this\nwill return all vacancy templates and all level 1 vacancies which aren't\ntemplates, and does so in about ~800-900ms less, a great improvement on the\noriginal query.\n\nSELECT \"Vacancy\".\"ID\",\n \"Vacancy\".\"JobTitle\",\n \"Vacancy\".\"DateCreated\",\n \"Vacancy\".\"CustomAccess\",\n \"Department\".\"Name\" as \"Department\",\n list(\"Occupation\".\"Name\") as \"Occupation\",\n \"Vacancy\".\"PositionNo\",\n \"Vacancy\".\"Template\"\nFROM \"Vacancy\"\nJOIN \"CategoryOption_TableRow\" as \"c_50\" ON (\n \"c_50\".\"Category_TableID\" = 50\n AND \"c_50\".\"RowID\" = \"Vacancy\".\"ID\"\n AND \"c_50\".\"CategoryOptionID\" = 19205)\nLEFT JOIN \"CategoryOption_TableRow\" as \"c_22\" ON (\n \"c_22\".\"RowID\" = \"Vacancy\".\"ID\"\n AND \"c_22\".\"Category_TableID\" = 22)\nLEFT JOIN \"CategoryOption\" as \"Occupation\" ON (\n \"Occupation\".\"ID\" = \"c_22\".\"CategoryOptionID\")\nLEFT JOIN \"TableRow_TableRow\" as \"t_33\" ON (\n \"t_33\".\"Table1RowID\" = \"Vacancy\".\"ID\"\n AND \"t_33\".\"Table_TableID\" = 33)\nLEFT JOIN \"Department\" ON (\n \"Department\".\"ID\" = \"t_33\".\"Table2RowID\"\n AND \"Department\".\"Active\" = 't'\n AND \"Department\".\"ClientID\" = 263)\nWHERE \"Vacancy\".\"ClientID\" = 263\n AND NOT EXISTS (\n SELECT 1\n FROM \"Vacancy\" as \"v\"\n JOIN \"CategoryOption_TableRow\" \"ct126\" on (\n \"ct126\".\"Category_TableID\" = 126\n AND \"RowID\" = \"v\".\"ID\")\n WHERE \"v\".\"Template\"\n AND \"ct126\".\"CategoryOptionID\" IN (34024,35254,35255,35256)\n AND \"v\".\"ID\" = \"Vacancy\".\"ID\")\n AND (\"Vacancy\".\"Template\" OR (\"Vacancy\".\"Template\" = 'f' AND \"Vacancy\".\"Level\" = 1))\nGROUP BY \"Vacancy\".\"ID\", \"Vacancy\".\"JobTitle\", \"Vacancy\".\"DateCreated\",\n \"Vacancy\".\"CustomAccess\", \"Department\".\"Name\",\n \"Vacancy\".\"PositionNo\", \"Vacancy\".\"Template\"",
"msg_date": "Wed, 12 Aug 2015 17:50:39 -0700 (MST)",
"msg_from": "robbyc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query"
},
{
"msg_contents": "In the NOT EXISTS clause, you do not have to search table \"Vacancy\" as \"v\"\nagain.\nI think it would be faster to use the outer Vacancy table as below.\nIn your case, that does the same work.\n\nNOT EXISTS (\n SELECT 1\n FROM \"CategoryOption_TableRow\" \"ct126\"\n WHERE \"Vacancy\".\"Template\"\n AND \"ct126\".\"CategoryOptionID\" IN (34024,35254,35255,35256)\n AND \"ct126\".\"Category_TableID\" = 126\n AND \"ct126\".\"RowID\" = \"Vacancy\".\"ID\"\n )",
"msg_date": "Thu, 13 Aug 2015 18:21:23 +0900",
"msg_from": "=?UTF-8?B?5p6X5aOr5Y2a?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query"
},
{
"msg_contents": "Hi,\n\nDoing this returns 0 records\n\nOn Thu, Aug 13, 2015 at 7:22 PM, 林士博 [via PostgreSQL] <\[email protected]> wrote:\n\n> In the 'not exists' cluster, you do not have to search table \"Vacancy as\n> v\" again.\n> I think it would be faster to use the outer Vacancy table as below.\n> In your case, that do the same work.\n>\n> NOT EXISTS (\n> SELECT 1\n> FROM \"CategoryOption_TableRow\" \"ct126\"\n> WHERE \"Vacancy\".\"Template\"\n> AND \"ct126\".\"CategoryOptionID\" IN (34024,35254,35255,35256)\n> AND \"ct126\".\"Category_TableID\" = 126\n> AND \"ct126\".\"RowID\" = \"Vacancy\".\"ID\"\n> )\n\n-- \nRegards\n\nRobert Campbell\n+61412062971\[email protected]\n\n\n--\nView this message in context: http://postgresql.nabble.com/Slow-Query-tp5861835p5862122.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Thu, 13 Aug 2015 20:15:01 -0700 (MST)",
"msg_from": "robbyc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query"
},
{
"msg_contents": "Is the \"Vacancy\".\"ID\" a primary key?\nOr is it unique in the Vacancy table?",
"msg_date": "Fri, 14 Aug 2015 14:10:14 +0900",
"msg_from": "=?UTF-8?B?5p6X5aOr5Y2a?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query"
},
{
"msg_contents": "OK.\n\nIf you benchmark that correctly,\nthen it seems that adding some redundant search can get the query faster in\nsome special cases.\n\nAnd without further info, I can not see any reason.\n\n2015-08-14 14:35 GMT+09:00 Robert Campbell <[email protected]>:\n\n> Hi,\n>\n> My mistake, didnt apply the sub query properly the first time.\n>\n> It does return records but not quite as fast as original query, about\n> 200ms slower\n>\n> Vacancy ID is a primary key.\n>\n> On Fri, Aug 14, 2015 at 3:10 PM, 林士博 <[email protected]> wrote:\n>\n>> Is the \"Vacancy\".\"ID\" a primary key?\n>> Or is unique in Vacancy table?\n>>\n>>\n>\n>\n> --\n> Regards\n>\n> Robert Campbell\n> +61412062971\n> [email protected]\n>",
"msg_date": "Fri, 14 Aug 2015 15:09:35 +0900",
"msg_from": "=?UTF-8?B?5p6X5aOr5Y2a?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query"
}
] |
[
{
"msg_contents": "Setup:\n\n* PostgreSQL 9.3.9\n* 1 master, 1 replica\n* Tiny database, under 0.5GB, completely cached in shared_buffers\n* 90% read query traffic, which is handled by replica\n* Traffic in the 1000's QPS.\n\nThe weirdness:\n\nPeriodically the master runs an \"update all rows\" query on the main\ntable in the database. When this update hits the replica via\nreplication stream, *some* (about 5%) of the queries which do seq scans\nwill stall for 22 to 32 seconds (these queries normally take about\n75ms). Queries which do index scans seem not to be affected.\n\nThing is, the update all rows only takes 2.5 seconds to execute on the\nmaster. So even if the update is blocking the seq scans on the replica\n(and I can't see why it would), it should only block them for < 3 seconds.\n\nAnyone seen anything like this?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 13 Aug 2015 10:09:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange query stalls on replica in 9.3.9"
},
{
"msg_contents": "Josh Berkus <[email protected]> wrote:\n\n> Periodically the master runs an \"update all rows\" query on the main\n> table in the database. When this update hits the replica via\n> replication stream, *some* (about 5%) of the queries which do seq scans\n> will stall for 22 to 32 seconds (these queries normally take about\n> 75ms). Queries which do index scans seem not to be affected.\n>\n> Thing is, the update all rows only takes 2.5 seconds to execute on the\n> master. So even if the update is blocking the seq scans on the replica\n> (and I can't see why it would), it should only block them for < 3 seconds.\n\nVisibility hinting and/or hot pruning?\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 13 Aug 2015 20:24:28 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query stalls on replica in 9.3.9"
},
{
"msg_contents": "On Thu, Aug 13, 2015 at 10:09 AM, Josh Berkus <[email protected]> wrote:\n\n> Setup:\n>\n> * PostgreSQL 9.3.9\n> * 1 master, 1 replica\n> * Tiny database, under 0.5GB, completely cached in shared_buffers\n> * 90% read query traffic, which is handled by replica\n> * Traffic in the 1000's QPS.\n>\n> The wierdness:\n>\n> Periodically the master runs an \"update all rows\" query on the main\n> table in the database. When this update hits the replica via\n> replication stream, *some* (about 5%) of the queries which do seq scans\n> will stall for 22 to 32 seconds (these queries normally take about\n> 75ms). Queries which do index scans seem not to be affected.\n>\n> Thing is, the update all rows only takes 2.5 seconds to execute on the\n> master. So even if the update is blocking the seq scans on the replica\n> (and I can't see why it would), it should only block them for < 3 seconds.\n>\n> Anyone seen anything like this?\n>\n\nSounds like another manifestation of this: \"[PERFORM] Planner performance\nextremely affected by an hanging transaction (20-30 times)?\"\n\nhttp://www.postgresql.org/message-id/CAMkU=1yy-YEQVvqj2xJitT1EFkyuFk7uTV_hrOMGyGMxpU=N+Q@mail.gmail.com\n\n\nEach backend that does a seqscan, for each tuple it scans which is not yet\nresolved (which near the end of the bulk update is going to be nearly equal\nto 2*reltuples, as every tuple has both an old and a new version so one\nxmax from one and one xmin from the other must be checked), it has to lock\nand scan the proc array lock to see if the tuple-inserting transaction has\ncommitted yet. This creates profound contention on the lock. 
Every\nscanning backend is looping over every other backend for every tuple.\n\nOnce the commit of the whole-table update has replayed, the problem should\ngo away instantly, because at that point each backend doing the seqscan will\nfind that the transaction has committed and so will set the hint bit that\nmeans all of the other seqscan backends that come after it can skip the\nproc array scan for that tuple.\n\nSo perhaps the commit of the whole-table update is delayed because the\nstartup process is also getting bogged down on the same contended lock? I\ndon't know how hard WAL replay hits the proc array lock.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 13 Aug 2015 13:59:42 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query stalls on replica in 9.3.9"
},
{
"msg_contents": "On 08/13/2015 01:24 PM, Kevin Grittner wrote:\n>> Thing is, the update all rows only takes 2.5 seconds to execute on the\n>> master. So even if the update is blocking the seq scans on the replica\n>> (and I can't see why it would), it should only block them for < 3 seconds.\n> \n> Visibility hinting and/or hot pruning?\n\nUnlikely; I can VACUUM FULL the entire database in 30 seconds. This\ndatabase is small. Jeff's answer seems more likely ...\n\nOn 08/13/2015 01:59 PM, Jeff Janes wrote:\n\n> Once the commit of the whole-table update has replayed, the problem\n> should go way instantly because at that point each backend doing the\n> seqscan will find the the transaction has committed and so will set the\n> hint bit that means all of the other seqscan backends that come after it\n> can skip the proc array scan for that tuple.\n\nYes ... and given that the commit on the master took < 3 seconds, it's\nnot likely to take 30 seconds on the replica. That aside, the pattern\nof behavior does look similar to the planner issue.\n\n> So perhaps the commit of the whole-table update is delayed because the\n> startup process as also getting bogged down on the same contended lock?\n> I don't know how hard WAL replay hits the proc array lock.\n\nI don't know; we don't have any visibility into the replay process, and\nno way to tell if replay is waiting on some kind of lock. A regular\nUPDATE should not block against any select activity on the replica, though.\n\nAlso, why would this affect *only* the query which does seq scans? Is\nthere some difference between seqscan and index scan here, or is it\nsimply because they take longer, and since this issue is timing-based,\nthey're more likely to be hit? Significantly, the seqscan query is also\nthe most complex query run against the replica, so maybe the seqscan is\nirrelevant and it's being affected by planner issues?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com",
"msg_date": "Fri, 14 Aug 2015 09:34:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange query stalls on replica in 9.3.9"
},
{
"msg_contents": "On Fri, Aug 14, 2015 at 9:34 AM, Josh Berkus <[email protected]> wrote:\n\n>\n> On 08/13/2015 01:59 PM, Jeff Janes wrote: execute on the\n>\n> > Once the commit of the whole-table update has replayed, the problem\n> > should go way instantly because at that point each backend doing the\n> > seqscan will find the the transaction has committed and so will set the\n> > hint bit that means all of the other seqscan backends that come after it\n> > can skip the proc array scan for that tuple.\n>\n> Yes ... and given that the commit on the master took < 3 seconds, it's\n> not likely to take 30 seconds on the replica. That aside, the pattern\n> of behavior does look similar to the planner issue.\n>\n\nAnother thought. Who actually sets the hint bits on a replica?\n\nDo the read-only processes on the replica which discovers a tuple to have\nbeen securely committed set the hint bits?\n\nMy benchmarking suggests not.\n\nOr does it wait for the hint bits to get set on master, and then for a\ncheckpoint to occur on the master, and then for that page to get changed\nagain and FPW to the log, and then for the log to get replayed? If so,\nthat explains why the issue doesn't clear up on the replica immediately\nafter the commit gets replayed.\n\n\n\n>\n> > So perhaps the commit of the whole-table update is delayed because the\n> > startup process as also getting bogged down on the same contended lock?\n> > I don't know how hard WAL replay hits the proc array lock.\n>\n> I don't know; we don't have any visibility into the replay process, and\n> no way to tell if replay is waiting on some kind of lock. A regular\n> UPDATE should not block against any select activity on the replay, though.\n>\n> Also, why would this affect *only* the query which does seq scans? 
Is\n> there some difference between seqscan and index scan here, or is it\n> simply because they take longer, and since this issue is timing-based,\n> they're more likely to be hit?\n\n\nAn index scan only has to check the commit status of rows which meet the\nindex quals, which is presumably a small fraction of the rows.\n\nA seq scan checks the visibility of every row first, before checking the\nwhere clause.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 14 Aug 2015 09:54:05 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query stalls on replica in 9.3.9"
},
{
"msg_contents": "On Fri, Aug 14, 2015 at 9:54 AM, Jeff Janes <[email protected]> wrote:\n\n> On Fri, Aug 14, 2015 at 9:34 AM, Josh Berkus <[email protected]> wrote:\n>\n>>\n>> On 08/13/2015 01:59 PM, Jeff Janes wrote: execute on the\n>>\n>> > Once the commit of the whole-table update has replayed, the problem\n>> > should go way instantly because at that point each backend doing the\n>> > seqscan will find the the transaction has committed and so will set the\n>> > hint bit that means all of the other seqscan backends that come after it\n>> > can skip the proc array scan for that tuple.\n>>\n>> Yes ... and given that the commit on the master took < 3 seconds, it's\n>> not likely to take 30 seconds on the replica. That aside, the pattern\n>> of behavior does look similar to the planner issue.\n>>\n>\n> Another thought. Who actually sets the hint bits on a replica?\n>\n> Do the read-only processes on the replica which discovers a tuple to have\n> been securely committed set the hint bits?\n>\n> My benchmarking suggests not.\n>\n\nThe hint bits only get set if the commit lsn of the transaction of the\ntuple being hinted (*not* the page lsn) thinks it has already been flushed\nto WAL. On master the transaction commit record usually would have already\nflushed its own WAL, or if async then wal writer is going to take care of\nthis fairly soon if nothing else gets to it first.\n\nOn the standby, it looks like the only thing that updates the\nthinks-it-has-been-flushed-to marker (which is stored in the control file,\nrather than memory) is either the eviction of a dirty buffer, or the\ncompletion of a restartpoint. I could easily be wrong on that, though.\n\nIn any case, you empirically can have committed but unhintable tuples\nhanging around for prolonged amounts of time on the standby. 
Perhaps\nstandbys need a wal writer process.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 18 Aug 2015 10:44:41 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query stalls on replica in 9.3.9"
}
] |
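Jeff's explanation above (the first backend that finds the transaction committed sets the hint bit, so every later seqscan backend can skip the contended proc-array/clog lookup for that tuple) can be sketched as a toy model. This is illustrative Python only; the names and data structures are mine, not PostgreSQL internals:

```python
# Toy model of hint bits (illustrative only -- not real PostgreSQL code).
# Checking an unhinted tuple requires a lookup in shared commit state
# (the contended part); the first scanner that sees the xid as committed
# sets a hint bit so every later scanner can skip that lookup.

committed_xids = {100}     # stand-in for shared commit state (clog/proc array)
expensive_lookups = 0      # how many contended lookups were performed

def make_table(n, xid=100):
    return [{"xid": xid, "hint_committed": False, "value": i} for i in range(n)]

def seqscan(table):
    """Visibility-check every row, as a seq scan must."""
    global expensive_lookups
    visible = []
    for tup in table:
        if tup["hint_committed"]:          # cheap path: hint bit already set
            visible.append(tup["value"])
            continue
        expensive_lookups += 1             # contended shared-state lookup
        if tup["xid"] in committed_xids:
            tup["hint_committed"] = True   # hint for everyone who comes after
            visible.append(tup["value"])
    return visible

table = make_table(1000)
seqscan(table)             # first scan pays 1000 expensive lookups
first = expensive_lookups
seqscan(table)             # second scan pays zero: all tuples are hinted
```

The model also shows why the problem described in the thread clears up only once *someone* is able to set the hints: until then, every concurrent seqscan pays the expensive lookup for every tuple.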
[
{
"msg_contents": "Pretty bad subject description... but let me try to explain.\n\n\nI'm trying to figure out what would be the most efficient way to query data\nfrom multiple tables using a foreign key.\n\nRight now the table design is such that I have multiple tables that share\nsome common information, and some specific information. (in OO world we\ncould see them as derived tables) For the sake of simplicity let's assume\nthere are only 5,\n\ntable type1(int id, varchar(24) reference_id, ....specific columns)\ntable type2(int id, varchar(24) reference_id, ....specific columns)\ntable type3(int id, varchar(24) reference_id, ....specific columns)\ntable type4(int id, varchar(24) reference_id, ....specific columns)\ntable type5(int id, varchar(24) reference_id, ....specific columns)\n\nNB: you could imagine those 5 tables inheriting from a base_type table that\nshares a few attributes.\n\nI have a list of reference ids, those reference ids could be in any of\nthose 5 tables but I don't know in which one.\n\nI want to most efficiently retrieve the data on N out of 5 relevant tables\nbut still want to query individually those 5 tables (for orm simplicity\nreason).\n\nSo the idea is first to identify which tables I should query for.\n\nThe naive implementation would be to always query those 5 tables for the\nlist of reference ids, however 90% of the time the reference ids would only\nbe stored in one single table though. 
So 4/5th of the queries would then be\nirrelevant.\n\nwhat I initially came up with was doing a union of the tables such as:\n\n\nSELECT 'type1', id FROM type5 WHERE reference_id IN (....)\nUNION\nSELECT 'type2', id FROM type4 WHERE reference_id IN (....)\nUNION\n...\nSELECT 'type2', id FROM type3 WHERE reference_id IN (....)\n\nthen effectively figuring the list of which reference ids are in type1,\ntype2, type3, ...etc..\n\nand then issuing the right select to the tables for the related reference\nids.\n\nwhich means in best case scenario I would only do 2 queries instead of 5.\n1 to retrieve the list of reference ids per 'type'\n1 to retrieve the list of types with the corresponding reference ids\n\nI'm trying to figure out if there is a more efficient way to retrieve this\ninformation than doing a union across all tables (there can be a couple\nhundreds reference ids to query for in the list)\n\nI was thinking worse case scenario I could maintain this information in\nanother table via triggers to avoid doing this union, but that seems a bit\nof a hammer solution initially and wondering if there is not something\nsimpler via joins that could be more performant.\n\nThanks for any suggestions.",
"msg_date": "Thu, 20 Aug 2015 20:03:32 -0400",
"msg_from": "Stephane Bailliez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Most efficient way of querying M 'related' tables where N out of M\n may contain the key"
},
{
"msg_contents": "On Thu, Aug 20, 2015 at 8:03 PM, Stephane Bailliez <[email protected]>\nwrote:\n\n> Pretty bad subject description... but let me try to explain.\n>\n>\n> I'm trying to figure out what would be the most efficient way to query\n> data from multiple tables using a foreign key.\n>\n>\nSELECT [...]\nFROM (SELECT reference_id, [...] FROM table_where_referenced_id_is_a_pk\nWHERE reference_id EXISTS/IN/JOIN)\n\nsrc\nLEFT JOIN type1 USING (reference_id)\nLEFT JOIN type2 USING (reference_id)\n[...]\n\nOr consider whether PostgreSQL Inheritance would work - though basically\nits a friendly API over the \"UNION ALL\" query you proposed.\n\nDavid J.",
"msg_date": "Thu, 20 Aug 2015 20:19:22 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Most efficient way of querying M 'related' tables where\n N out of M may contain the key"
},
{
"msg_contents": "On Thu, Aug 20, 2015 at 8:19 PM, David G. Johnston <\[email protected]> wrote:\n\n>\n> SELECT [...]\n> FROM (SELECT reference_id, [...] FROM table_where_referenced_id_is_a_pk\n> WHERE reference_id EXISTS/IN/JOIN)\n>\n> src\n> LEFT JOIN type1 USING (reference_id)\n> LEFT JOIN type2 USING (reference_id)\n> [...]\n>\n\n\nthere are no tables where reference_id is a pk, I could create one or do :\nselect reference_id from ( values (..), (...), (...) .... )\n\nthe tricky part with the join (and where I was not clear about it in my\noriginal description) is that a reference_id can match in multiple tables\n(eg. it can be a fk in type1 and type2), so it then becomes a bit harder to\ncollect all the common attributes and 'types' when doing joins like this.\n\nFor example let's assume there is a group_id to be be retrieved among all\ntables as a common attribute:\n\nif reference_id was existing only in one table, I could do\ncoalesce(type1.group_id, ... type5.group_id) as group_id in the main select\nhowever that would not work in this case.\n\nI could work around the common attributes however.\n\nBut for retrieving the types, what I really need to have as a return of\nthis query is data that allows me to partition the reference_id for each\ntype like:\n\ntype1 -> ref1, ref2, ref5\ntype2 -> ref1, ref3\ntype3 -> ref4, ref3\n\n I guess I could try to return an array and fill it with case/when for each\ntable eg. something like\nARRAY( CASE WHEN type1.id IS NOT NULL THEN 'type1' END, ... 
CASE WHEN\ntype1.id IS NOT NULL THEN 'type5' END)\n\nand then collect all the non-null values in the code.\n\n\n\n>\n> Or consider whether PostgreSQL Inheritance would work - though basically\n> its a friendly API over the \"UNION ALL\" query you proposed.\n>\n\n\nThe problem with postgresql inheritance is that it would not play well with\nthe orm and substantially complicates implementation.\n\n\nThanks for the all the ideas, that helps me a lot to brainstorm more.",
"msg_date": "Fri, 21 Aug 2015 08:07:26 -0400",
"msg_from": "Stephane Bailliez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Most efficient way of querying M 'related' tables where\n N out of M may contain the key"
},
{
"msg_contents": "On Fri, Aug 21, 2015 at 8:07 AM, Stephane Bailliez <[email protected]>\nwrote:\n\n>\n> On Thu, Aug 20, 2015 at 8:19 PM, David G. Johnston <\n> [email protected]> wrote:\n>\n>>\n>> SELECT [...]\n>> FROM (SELECT reference_id, [...] FROM table_where_referenced_id_is_a_pk\n>> WHERE reference_id EXISTS/IN/JOIN)\n>>\n>> src\n>> LEFT JOIN type1 USING (reference_id)\n>> LEFT JOIN type2 USING (reference_id)\n>> [...]\n>>\n>\n>\nPlace ^ in a CTE named (find_all)\n\n\n> there are no tables where reference_id is a pk, I could create one or do :\n> select reference_id from ( values (..), (...), (...) .... )\n>\n> the tricky part with the join (and where I was not clear about it in my\n> original description) is that a reference_id can match in multiple tables\n> (eg. it can be a fk in type1 and type2), so it then becomes a bit harder to\n> collect all the common attributes and 'types' when doing joins like this.\n>\n> For example let's assume there is a group_id to be be retrieved among all\n> tables as a common attribute:\n>\n> if reference_id was existing only in one table, I could do\n> coalesce(type1.group_id, ... type5.group_id) as group_id in the main select\n> however that would not work in this case.\n>\n>\nWITH find_all (reference_id, type_identifier, type_id) AS ( ... )\nSELECT type_identifier, array_agg(reference_id), array_agg(type_id)\nFROM find_all\nWHERE type_identifier IS NOT NULL\nGROUP BY type_identifier\n\nfind_all will return at least one row, possibly empty if no matches are\npresent, and will return multiple rows if more than one matches. You can\nuse array_agg as shown, or play around with custom composite types, or\neven build a JSON document.\n\nDavid J.",
"msg_date": "Fri, 21 Aug 2015 08:24:37 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Most efficient way of querying M 'related' tables where\n N out of M may contain the key"
}
] |
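The approach settled on in this thread — one pass over all N tables to learn which types hold each reference id, then one targeted query per non-empty type — can be sketched in Python. The table names and contents below are hypothetical; the grouping mirrors David's `array_agg ... GROUP BY type_identifier` CTE:

```python
# Toy model of the UNION / array_agg partitioning step: given per-type
# tables and a list of reference ids, build a type -> matching-ids map,
# so the application only queries the types that actually matched.
# Note a reference id may legitimately appear under several types.

tables = {                      # hypothetical contents of type1..type3
    "type1": {"ref1", "ref2", "ref5"},
    "type2": {"ref1", "ref3"},
    "type3": {"ref3", "ref4"},
}

def partition_by_type(reference_ids, tables):
    wanted = set(reference_ids)
    result = {}
    for type_name, ref_set in tables.items():   # the UNION over all tables
        hits = sorted(wanted & ref_set)
        if hits:                                # GROUP BY type, drop empties
            result[type_name] = hits
    return result

parts = partition_by_type(["ref1", "ref3", "ref9"], tables)
# ref1 matches type1 and type2; ref3 matches type2 and type3; ref9 nowhere
```

In the best case (all ids in one table) the map has a single key, so the application issues two queries total instead of N, exactly as the original poster hoped.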
[
{
"msg_contents": "Hey,\n\ni have a very long running stored procedure, due to array manipulation in a stored procedure. The following procedure takes 13 seconds to finish.\n\nBEGIN\n point_ids_older_than_one_hour := '{}';\n object_ids_to_be_invalidated := '{}';\n\n select ARRAY(SELECT\n point_id\n from ONLY\n public.ims_point as p\n where\n p.timestamp < m_before_one_hour\n )\n into point_ids_older_than_one_hour ; -- this array has a size of 20k\n\n select ARRAY(SELECT\n object_id\n from\n public.ims_object_header h\n WHERE\n h.last_point_id= ANY(point_ids_older_than_one_hour)\n )\n into object_ids_to_be_invalidated; -- this array has a size of 100\n\n -- current_last_point_ids will have a size of 100k\n current_last_point_ids := ARRAY( SELECT\n last_point_id\n from\n public.ims_object_header h\n );\n -- START OF PERFORMANCE BOTTLENECK\n IF(array_length(current_last_point_ids, 1) > 0)\n THEN\n FOR i IN 0 .. array_upper(current_last_point_ids, 1)\n LOOP\n point_ids_older_than_one_hour = array_remove(point_ids_older_than_one_hour, current_last_point_ids[i]::bigint);\n END LOOP;\n END IF;\n -- END OF PERFORMANCE BOTTLENECK\nEND;\n\nThe array manipulation part is the performance bottleneck. I am pretty sure, that there is a better way of doing this, however I couldn't find one.\nWhat I have is two table, lets call them ims_point and ims_object_header. ims_object_header references some entries of ims_point in the column last_point_id.\nNow I want to delete all entries from ims_point, where the timestamp is older than one hour. The currently being referenced ids of the table ims_object_header should be excluded from this deletion. 
Therefore I stored the ids in arrays and iterate over those arrays to exclude the referenced values from being deleted.\n\nHowever, I'm not sure if using an array for an operation like this is the best approach.\n\nCan anyone give me some advice how this could be enhanced.\n\nThanks in advance.",
"msg_date": "Fri, 21 Aug 2015 12:48:37 +0000",
"msg_from": "Genc, Ömer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance bottleneck due to array manipulation "
},
{
"msg_contents": "=?iso-8859-1?Q?Genc=2C_=D6mer?= <[email protected]> writes:\n> i have a very long running stored procedure, due to array manipulation in a stored procedure. The following procedure takes 13 seconds to finish.\n\n> BEGIN\n> point_ids_older_than_one_hour := '{}';\n> object_ids_to_be_invalidated := '{}';\n\n> select ARRAY(SELECT\n> point_id\n> from ONLY\n> public.ims_point as p\n> where\n> p.timestamp < m_before_one_hour\n> )\n> into point_ids_older_than_one_hour ; -- this array has a size of 20k\n\n> select ARRAY(SELECT\n> object_id\n> from\n> public.ims_object_header h\n> WHERE\n> h.last_point_id= ANY(point_ids_older_than_one_hour)\n> )\n> into object_ids_to_be_invalidated; -- this array has a size of 100\n\n> -- current_last_point_ids will have a size of 100k\n> current_last_point_ids := ARRAY( SELECT\n> last_point_id\n> from\n> public.ims_object_header h\n> );\n> -- START OF PERFORMANCE BOTTLENECK\n> IF(array_length(current_last_point_ids, 1) > 0)\n> THEN\n> FOR i IN 0 .. array_upper(current_last_point_ids, 1)\n> LOOP\n> point_ids_older_than_one_hour = array_remove(point_ids_older_than_one_hour, current_last_point_ids[i]::bigint);\n> END LOOP;\n> END IF;\n> -- END OF PERFORMANCE BOTTLENECK\n> END;\n\nWell, in the first place, this is the cardinal sin of working with SQL\ndatabases: doing procedurally that which should be done relationally.\nForget the arrays entirely and use EXCEPT, ie\n\n SELECT point_id FROM ...\n EXCEPT\n SELECT last_point_id FROM ...\n\nOr maybe you want EXCEPT ALL, depending on whether duplicates are possible\nand what you want to do with them if so.\n\nHaving said that, the way you have this is necessarily O(N^2) because\narray_remove has to search the entire array on each call, and then\nreconstruct the entire array less any removed element(s). 
The new\n\"expanded array\" infrastructure in 9.5 could perhaps reduce some of the\nconstant factor, but the array search loop remains so it would still be\nO(N^2); and anyway array_remove has not been taught about expanded arrays\n(which means this example is probably even slower in 9.5 :-().\n\nIf you were using integers, you could possibly get decent performance from\ncontrib/intarray's int[] - int[] operator (which I think does a sort and\nmerge internally); but I gather that these are bigints, so that won't\nwork.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 21 Aug 2015 10:06:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bottleneck due to array manipulation"
},
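Tom's EXCEPT suggestion can be checked against the original loop with a small Python model (illustrative only, not the plpgsql itself): the loop rebuilds the array once per referenced id, O(N*M) overall, while a set difference — the semantics of EXCEPT over duplicate-free id lists — does the whole job in one pass and yields the same ids:

```python
# remove_loop mirrors the FOR / array_remove loop from the procedure:
# each iteration scans and rebuilds the entire array, so total work is
# O(N * M). except_query mirrors
#   SELECT point_id FROM ... EXCEPT SELECT last_point_id FROM ...
# as a single set difference.

def remove_loop(old_points, referenced):
    result = list(old_points)
    for ref in referenced:
        # array_remove(result, ref): full scan + full rebuild every time
        result = [p for p in result if p != ref]
    return result

def except_query(old_points, referenced):
    return sorted(set(old_points) - set(referenced))

old = list(range(2000))          # point ids older than one hour
refs = list(range(0, 2000, 5))   # ids still referenced by object headers

same = sorted(remove_loop(old, refs)) == except_query(old, refs)
```

At the sizes in the original post (20k old points, 100k referenced ids) the quadratic version does on the order of 2 billion element comparisons, which is consistent with the reported 13 seconds.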
{
"msg_contents": "Hello,\n\nOn Fri, Aug 21, 2015 at 2:48 PM, Genc, Ömer <[email protected]>\nwrote:\n\n> Now I want to delete all entries from ims_point, where the timestamp is\n> older than one hour. The currently being referenced ids of the table\n> ims_object_header should be excluded from this deletion.\n>\n>\n>\n\ndelete from public.ims_point ip\n where ip.timestamp < current_timestamp - interval '1 hour'\n and not exists ( select 'reference exists'\n from public.ims_object_header ioh\n where ioh.last_point_id = ip.point_id\n )\n;\n\nDoes this work for you?",
"msg_date": "Fri, 21 Aug 2015 16:11:38 +0200",
"msg_from": "Félix GERZAGUET <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bottleneck due to array manipulation"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Genc, Ömer\nSent: Friday, August 21, 2015 8:49 AM\nTo: [email protected]\nSubject: [PERFORM] Performance bottleneck due to array manipulation\n\nHey,\n\ni have a very long running stored procedure, due to array manipulation in a stored procedure. The following procedure takes 13 seconds to finish.\n\nBEGIN\n point_ids_older_than_one_hour := '{}';\n object_ids_to_be_invalidated := '{}';\n\n select ARRAY(SELECT\n point_id\n from ONLY\n public.ims_point as p\n where\n p.timestamp < m_before_one_hour\n )\n into point_ids_older_than_one_hour ; -- this array has a size of 20k\n\n select ARRAY(SELECT\n object_id\n from\n public.ims_object_header h\n WHERE\n h.last_point_id= ANY(point_ids_older_than_one_hour)\n )\n into object_ids_to_be_invalidated; -- this array has a size of 100\n\n -- current_last_point_ids will have a size of 100k\n current_last_point_ids := ARRAY( SELECT\n last_point_id\n from\n public.ims_object_header h\n );\n -- START OF PERFORMANCE BOTTLENECK\n IF(array_length(current_last_point_ids, 1) > 0)\n THEN\n FOR i IN 0 .. array_upper(current_last_point_ids, 1)\n LOOP\n point_ids_older_than_one_hour = array_remove(point_ids_older_than_one_hour, current_last_point_ids[i]::bigint);\n END LOOP;\n END IF;\n -- END OF PERFORMANCE BOTTLENECK\nEND;\n\nThe array manipulation part is the performance bottleneck. I am pretty sure, that there is a better way of doing this, however I couldn't find one.\nWhat I have is two table, lets call them ims_point and ims_object_header. ims_object_header references some entries of ims_point in the column last_point_id.\nNow I want to delete all entries from ims_point, where the timestamp is older than one hour. The currently being referenced ids of the table ims_object_header should be excluded from this deletion. 
Therefore I stored the ids in arrays and iterate over those arrays to exclude the referenced values from being deleted.\n\nHowever, I not sure if using an array for an operation like this is the best approach.\n\nCan anyone give me some advice how this could be enhanced.\n\nThanks in advance.\n\n\nI think in this case (as is in many other cases) \"pure\" SQL does the job much better than procedural language:\n\nDELETE FROM public.ims_point as P\nWHERE P.timestamp < m_before_one_hour\n AND NOT EXISTS (SELECT 1 FROM public.ims_object_header OH\n WHERE OH.last_point_id = P.object_id);\n\nIs that what you are trying to accomplish?\n\nRegards,\nIgor Neyman",
"msg_date": "Fri, 21 Aug 2015 14:50:18 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bottleneck due to array manipulation "
},
{
"msg_contents": "Thanks a lot,\n\nThe mentioned advices helped me a lot. I used an approach similar to the one mentioned by Igor and Felix and now the stored procedure runs fast.\n\nKind regards,\n\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]]<mailto:[mailto:[email protected]]> On Behalf Of Genc, Ömer\nSent: Friday, August 21, 2015 8:49 AM\nTo: [email protected]<mailto:[email protected]>\nSubject: [PERFORM] Performance bottleneck due to array manipulation\n\nHey,\n\ni have a very long running stored procedure, due to array manipulation in a stored procedure. The following procedure takes 13 seconds to finish.\n\nBEGIN\n point_ids_older_than_one_hour := '{}';\n object_ids_to_be_invalidated := '{}';\n\n select ARRAY(SELECT\n point_id\n from ONLY\n public.ims_point as p\n where\n p.timestamp < m_before_one_hour\n )\n into point_ids_older_than_one_hour ; -- this array has a size of 20k\n\n select ARRAY(SELECT\n object_id\n from\n public.ims_object_header h\n WHERE\n h.last_point_id= ANY(point_ids_older_than_one_hour)\n )\n into object_ids_to_be_invalidated; -- this array has a size of 100\n\n -- current_last_point_ids will have a size of 100k\n current_last_point_ids := ARRAY( SELECT\n last_point_id\n from\n public.ims_object_header h\n );\n -- START OF PERFORMANCE BOTTLENECK\n IF(array_length(current_last_point_ids, 1) > 0)\n THEN\n FOR i IN 0 .. array_upper(current_last_point_ids, 1)\n LOOP\n point_ids_older_than_one_hour = array_remove(point_ids_older_than_one_hour, current_last_point_ids[i]::bigint);\n END LOOP;\n END IF;\n -- END OF PERFORMANCE BOTTLENECK\nEND;\n\nThe array manipulation part is the performance bottleneck. I am pretty sure, that there is a better way of doing this, however I couldn't find one.\nWhat I have is two table, lets call them ims_point and ims_object_header. 
ims_object_header references some entries of ims_point in the column last_point_id.\nNow I want to delete all entries from ims_point, where the timestamp is older than one hour. The currently being referenced ids of the table ims_object_header should be excluded from this deletion. Therefore I stored the ids in arrays and iterate over those arrays to exclude the referenced values from being deleted.\n\nHowever, I not sure if using an array for an operation like this is the best approach.\n\nCan anyone give me some advice how this could be enhanced.\n\nThanks in advance.\n\n\nI think in this case (as is in many other cases) \"pure\" SQL does the job much better than procedural language:\n\nDELETE FROM public.ims_point as P\nWHERE P.timestamp < m_before_one_hour\n  AND NOT EXISTS (SELECT 1 FROM public.ims_object_header OH\n                  WHERE OH.last_point_id = P.object_id);\n\nIs that what you are trying to accomplish?\n\nRegards,\nIgor Neyman",
"msg_date": "Mon, 24 Aug 2015 10:22:27 +0000",
"msg_from": "=?iso-8859-1?Q?Genc=2C_=D6mer?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bottleneck due to array manipulation "
}
] |
[
{
    "msg_contents": "Hi,\n\nWe have a query on a column with GIN index, but query plan chooses not using the index but do an seq scan whichi is must slower\n\nCREATE INDEX idx_access_grants_on_access_tokens ON access_grants USING gin (access_tokens);\n\nexplain analyze SELECT \"access_grants\".* FROM \"access_grants\" WHERE (access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..7.46 rows=1 width=157) (actual time=260.376..260.377 rows=1 loops=1)\n   ->  Seq Scan on access_grants  (cost=0.00..29718.03 rows=3985 width=157) (actual time=260.373..260.373 rows=1 loops=1)\n         Filter: (access_tokens @> '{124e5a1f9de325fc176a7c89152ac734}'::text[])\n         Rows Removed by Filter: 796818\n Total runtime: 260.408 ms\n\n\nWe tested on smaller table in development region and it chooses to use the index there. However, in production size table it decides to ignore the index for unknown reasons.\n\nIs the large number of tuples skewing the query planner’s decision or the index itself is larger than the table therefor it would decide to do table scan?\n\nAny suggestions are greatly appreciated!\n\nYun",
"msg_date": "Sat, 22 Aug 2015 01:55:56 +0000",
"msg_from": "\"Guo, Yun\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query not using GIN index"
},
{
"msg_contents": "Hi,\n\nOn 08/22/2015 03:55 AM, Guo, Yun wrote:\n> Hi,\n>\n> We have a query on a column with GIN index, but query plan chooses not\n> using the index but do an seq scan whichi is must slower\n>\n> CREATE INDEX idx_access_grants_on_access_tokens ON access_grants USING\n> gin (access_tokens);\n>\n> explain analyze SELECT \"access_grants\".* FROM \"access_grants\" WHERE\n> (access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) ;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..7.46 rows=1 width=157) (actual\n> time=260.376..260.377 rows=1 loops=1)\n> -> Seq Scan on access_grants (cost=0.00..29718.03 rows=3985\n> width=157) (actual time=260.373..260.373 rows=1 loops=1)\n> Filter: (access_tokens @>\n> '{124e5a1f9de325fc176a7c89152ac734}'::text[])\n> Rows Removed by Filter: 796818\n> Total runtime: 260.408 ms\n>\n\nI find it very likely that the explain output actually comes from a \nslightly different query, including a LIMIT 1 clause.\n\nThat might easily be the problem here, because the optimizer expects the \n3985 \"matches\" to be uniformly distributed in the table, so it thinks \nit'll scan just a tiny fraction of the table (1/3985) until the first \nmatch. But it's quite possible all at rows are end of the table, and the \nexecutor has to actually scan the whole table.\n\nIt's difficult to say without further details of the table and how the \ndata are generated.\n\n> We tested on smaller table in development region and it chooses to use\n> the index there. However, in production size table it decides to ignore\n> the index for unknown reasons.\n\nPlease provide explain output from that table. It's difficult to say \nwhat's different without seeing the details.\n\nAlso please provide important details about the system (e.g. 
which \nPostgreSQL version, how much RAM, what work_mem/shared_buffers and such \nstuff).\n\n> Is the large number of tuples skewing the query planner's decision\n> or the index itself is larger than the table therefor it would decide\n> to do table scan?\n\nWhat large number of tuples? The indexes are supposed to be more \nefficient the larger the table is.\n\nregards\n\n--\nTomas Vondra                   http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 22 Aug 2015 06:36:15 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query not using GIN index"
},
{
    "msg_contents": "Hi Tom,\nThanks for you valuable input. You're right, the plan was coming from\nexplain analyze SELECT \"access_grants\".* FROM \"access_grants\" WHERE\n(access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) limit 1;\nWe tried removing \"limit 1\", which did give us the benefit of using index\nfor sometime. However, after a while, it went back to the old behavior of\nignoring the index for \" SELECT \"access_grants\".* FROM \"access_grants\"\nWHERE (access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) ;\" .\nWe had to turn off sequential scans (enable_seqscan) to force it use the\nindex. But I'm not sure this should be the permanent fix.\nThe access_grants table has 797415 rows and the schema as below:\n         Column          |            Type             |\n Modifiers\n-------------------------+-----------------------------+-------------------\n-----------------------------------------\n id                      | integer                     | not null default\nnextval('access_grants_id_seq'::regclass)\n user_id                 | integer                     | not null\n code                    | text                        | not null\n client_application_name | text                        | not null\n access_tokens           | text[]                      | default\n'{}'::text[]\n created_at              | timestamp without time zone | not null\n updated_at              | timestamp without time zone | not null\n mongo_id                | text                        |\nIndexes:\n    \"access_grants_pkey\" PRIMARY KEY, btree (id)\n    \"index_access_grants_on_code\" UNIQUE, btree (code)\n    \"index_access_grants_on_mongo_id\" UNIQUE, btree (mongo_id)\n    \"idx_access_grants_on_access_tokens\" gin (access_tokens)\n    \"index_access_grants_on_user_id\" btree (user_id)\n\n\n\nThe array length distribution of access_token is below:\n309997 rows has only one element, 248334 rows has empty array, 432 rows\nhas array length >100, and 1 row has array length 3575.\nThe table size is 154MB, and the index size is 180MB.\n \n \n\n It's on AWS db.r3.xlarge instance with 4 virtual cores, Memory 30.5 GiB,\nGeneral purpose ssd, with shared_buffers 1048576 and work_mem 159744.\n\n\n\n\n\n\nOn 8/22/15, 12:36 AM, \"[email protected] on behalf 
of\nTomas Vondra\" <[email protected] on behalf of\[email protected]> wrote:\n\n>Hi,\n>\n>On 08/22/2015 03:55 AM, Guo, Yun wrote:\n>> Hi,\n>>\n>> We have a query on a column with GIN index, but query plan chooses not\n>> using the index but do an seq scan whichi is must slower\n>>\n>> CREATE INDEX idx_access_grants_on_access_tokens ON access_grants USING\n>> gin (access_tokens);\n>>\n>> explain analyze SELECT \"access_grants\".* FROM \"access_grants\" WHERE\n>> (access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) ;\n>> QUERY PLAN\n>> \n>>-------------------------------------------------------------------------\n>>-------------------------------------------------\n>> Limit (cost=0.00..7.46 rows=1 width=157) (actual\n>> time=260.376..260.377 rows=1 loops=1)\n>> -> Seq Scan on access_grants (cost=0.00..29718.03 rows=3985\n>> width=157) (actual time=260.373..260.373 rows=1 loops=1)\n>> Filter: (access_tokens @>\n>> '{124e5a1f9de325fc176a7c89152ac734}'::text[])\n>> Rows Removed by Filter: 796818\n>> Total runtime: 260.408 ms\n>>\n>\n>I find it very likely that the explain output actually comes from a\n>slightly different query, including a LIMIT 1 clause.\n>\n>That might easily be the problem here, because the optimizer expects the\n>3985 \"matches\" to be uniformly distributed in the table, so it thinks\n>it'll scan just a tiny fraction of the table (1/3985) until the first\n>match. But it's quite possible all at rows are end of the table, and the\n>executor has to actually scan the whole table.\n>\n>It's difficult to say without further details of the table and how the\n>data are generated.\n>\n>> We tested on smaller table in development region and it chooses to use\n>> the index there. However, in production size table it decides to ignore\n>> the index for unknown reasons.\n>\n>Please provide explain output from that table. It's difficult to say\n>what's different without seeing the details.\n>\n>Also please provide important details about the system (e.g. 
which\n>PostgreSQL version, how much RAM, what work_mem/shared_buffers and such\n>stuff).\n>\n>> Is the large number of tuples skewing the query planner's decision\n>> or the index itself is larger than the table therefor it would decide\n>> to do table scan?\n>\n>What large number of tuples? The indexes are supposed to be more\n>efficient the larger the table is.\n>\n>regards\n>\n>--\n>Tomas Vondra                   http://www.2ndQuadrant.com\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Aug 2015 14:45:16 +0000",
"msg_from": "\"Guo, Yun\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query not using GIN index"
},
{
    "msg_contents": "\r\n\r\nHi Tom,\r\nThanks for you valuable input.\r\n\r\n\r\n\r\n>Hi,\r\n>\r\n>On 08/22/2015 03:55 AM, Guo, Yun wrote:\r\n>> Hi,\r\n>>\r\n>> We have a query on a column with GIN index, but query plan chooses not\r\n>> using the index but do an seq scan whichi is must slower\r\n>>\r\n>> CREATE INDEX idx_access_grants_on_access_tokens ON access_grants USING\r\n>> gin (access_tokens);\r\n>>\r\n>> explain analyze SELECT \"access_grants\".* FROM \"access_grants\" WHERE\r\n>> (access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) ;\r\n>> QUERY PLAN\r\n>> \r\n>>-------------------------------------------------------------------------\r\n>>-------------------------------------------------\r\n>> Limit (cost=0.00..7.46 rows=1 width=157) (actual\r\n>> time=260.376..260.377 rows=1 loops=1)\r\n>> -> Seq Scan on access_grants (cost=0.00..29718.03 rows=3985\r\n>> width=157) (actual time=260.373..260.373 rows=1 loops=1)\r\n>> Filter: (access_tokens @>\r\n>> '{124e5a1f9de325fc176a7c89152ac734}'::text[])\r\n>> Rows Removed by Filter: 796818\r\n>> Total runtime: 260.408 ms\r\n>>\r\n>\r\n>I find it very likely that the explain output actually comes from a\r\n>slightly different query, including a LIMIT 1 clause.\r\n\r\n You're right, the plan was coming from\r\nexplain analyze SELECT \"access_grants\".* FROM \"access_grants\" WHERE\r\n(access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) limit 1;\r\nWe tried removing \"limit 1\", which did give us the benefit of using index\r\nfor sometime. However, after a while, it went back to the old behavior of\r\nignoring the index for \" SELECT \"access_grants\".* FROM \"access_grants\"\r\nWHERE (access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) ;\" .\r\nWe had to turn off sequential scans (enable_seqscan) to force it use the\r\nindex. 
But I'm not sure this should be the permanent fix.\r\n\r\n\r\n>\r\n>That might easily be the problem here, because the optimizer expects the\r\n>3985 \"matches\" to be uniformly distributed in the table, so it thinks\r\n>it'll scan just a tiny fraction of the table (1/3985) until the first\r\n>match. But it's quite possible all at rows are end of the table, and the\r\n>executor has to actually scan the whole table.\r\n>\r\n>It's difficult to say without further details of the table and how the\r\n>data are generated.\r\n\r\nThe access_grants table has 797415 rows and the schema as below:\r\nColumn | Type |\r\nModifiers\r\n-------------------------+-----------------------------+-------------------\r\n-----------------------------------------\r\nid | integer | not null default\r\nnextval('access_grants_id_seq'::regclass)\r\nuser_id | integer | not null\r\ncode | text | not null\r\nclient_application_name | text | not null\r\naccess_tokens | text[] | default\r\n'{}'::text[]\r\ncreated_at | timestamp without time zone | not null\r\nupdated_at | timestamp without time zone | not null\r\nmongo_id | text |\r\nIndexes:\r\n\"access_grants_pkey\" PRIMARY KEY, btree (id)\r\n\"index_access_grants_on_code\" UNIQUE, btree (code)\r\n\"index_access_grants_on_mongo_id\" UNIQUE, btree (mongo_id)\r\n\"idx_access_grants_on_access_tokens\" gin (access_tokens)\r\n\"index_access_grants_on_user_id\" btree (user_id)\r\n\r\n\r\n\r\n>\r\n>> We tested on smaller table in development region and it chooses to use\r\n>> the index there. However, in production size table it decides to ignore\r\n>> the index for unknown reasons.\r\n>\r\n>Please provide explain output from that table. It's difficult to say\r\n>what's different without seeing the details.\r\n>\r\n>Also please provide important details about the system (e.g. 
which\r\n>PostgreSQL version, how much RAM, what work_mem/shared_buffers and such\r\n>stuff).\r\n\r\nThe array length distribution of access_token is below:\r\n309997 rows has only one element, 248334 rows has empty array, 432 rows\r\nhas array length >100, and 1 row has array length 3575.\r\nThe table size is 154MB, and the index size is 180MB.\r\n \r\n\r\nIt's on AWS db.r3.xlarge instance with 4 virtual cores, Memory 30.5 GiB,\r\nGeneral purpose ssd, with shared_buffers 1048576 and work_mem 159744.\r\n\r\n\r\n>\r\n>> Is the large number of tuples skewing the query planner’s decision\r\n>> or the index itself is larger than the table therefor it would decide\r\n>> to do table scan?\r\n>\r\n>What large number of tuples? The indexes are supposed to be more\r\n>efficient the larger the table is.\r\n>\r\n>regards\r\n>\r\n>--\r\n>Tomas Vondra                   http://www.2ndQuadrant.com\r\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\r\n>\r\n>\r\n>-- \r\n>Sent via pgsql-performance mailing list ([email protected])\r\n>To make changes to your subscription:\r\n>http://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Aug 2015 15:08:29 +0000",
"msg_from": "\"Guo, Yun\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query not using GIN index"
},
{
"msg_contents": "On Mon, Aug 24, 2015 at 8:18 AM, Guo, Yun <[email protected]> wrote:\n\n>\n>\n> From: Jeff Janes <[email protected]>\n> Date: Friday, August 21, 2015 at 10:44 PM\n> To: Yun <[email protected]>\n> Subject: Re: [PERFORM] query not using GIN index\n>\n> On Fri, Aug 21, 2015 at 6:55 PM, Guo, Yun <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> We have a query on a column with GIN index, but query plan chooses not\n>> using the index but do an seq scan whichi is must slower\n>>\n>> CREATE INDEX idx_access_grants_on_access_tokens ON access_grants USING\n>> gin (access_tokens);\n>>\n>> explain analyze SELECT \"access_grants\".* FROM \"access_grants\" WHERE\n>> (access_tokens @> ARRAY['124e5a1f9de325fc176a7c89152ac734']) ;\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..7.46 rows=1 width=157) (actual time=260.376..260.377\n>> rows=1 loops=1)\n>> -> Seq Scan on access_grants (cost=0.00..29718.03 rows=3985\n>> width=157) (actual time=260.373..260.373 rows=1 loops=1)\n>> Filter: (access_tokens @>\n>> '{124e5a1f9de325fc176a7c89152ac734}'::text[])\n>> Rows Removed by Filter: 796818\n>> Total runtime: 260.408 ms\n>>\n>\n>\n> What version are you running? What are your non-default configuration\n> settings (particularly for the *_cost parameters)?\n>\n> select name,setting from pg_settings where name like '%cost';\n> name | setting\n> ----------------------+---------\n> cpu_index_tuple_cost | 0.005\n> cpu_operator_cost | 0.0025\n> cpu_tuple_cost | 0.01\n> random_page_cost | 4\n> seq_page_cost | 1\n>\n\n\nOK, thanks. I had overlooked the \"LIMIT\" in the first plan you posted, and\nso thought you must have some pretty weird settings. 
But noticing the\nLIMIT, it makes more sense with the normal settings, like the ones you show.\n\n\n>\n> Can you turn track_io_timing on and then report a explain (analyze,\n> buffers) of the same query?\n>\n> I didn’t try this as our prod instance is on AWS and setting this would\n> require a reboot.\n>\n\nOK, but you can still do an \"explain (analyze,buffers)\". It is less useful\nthan with track_io_timing on, but it is still more useful than just\n\"explain analyze\".\n\n\n>\n> Then do a \"set enable_seqscan=off\" and repeat.\n>\n> This is the life saver! After applying this, it’s able to use the index.\n> But should we consider it as the permanent solution?\n>\n\nNo, probably not a permanent solution. Or at least, I only do things like\nthat in production as a last resort. I suggested doing that so you can\nforce it to use the index and so see what the explain (analyze,buffers)\nlook like when it does use the index. Sorry for not being more clear.\n\nThe seq scan thinks it is going to find a matching row pretty early in the\nscan and can stop at the first one, but based on \"Rows Removed by Filter:\n796818\" it isn't actually finding a match until the end. There probably\nisn't much you can do about this, other than not using a LIMIT.\n\nThe reason it thinks it will find a row soon is that it thinks 0.5% of the\nrows meet your criteria. That is default selectivity estimate it uses when\nit has nothing better to use. Raising the statistics target on the column\nmight help. But I doubt it, because access tokens are probably nearly\nunique, and so even the highest possible setting for statistics target is\nnot going get it to record MCE statistics. See\nhttps://commitfest.postgresql.org/6/323/ for a possible solution, but any\nfix for that won't be released to production for a long time.\n\n\nIf your gin index has a large pending list, that will make the index scan\nlook very expensive. vacuuming the table will clear that up. 
Setting\nfastupdate off for the index will prevent it growing again.  Based on your\ndescription of most lists having 0 or 1 element in them, and my assumption\nthat a table named \"access_grants\" isn't getting updated hundreds of times\na second, I don't think fast_update being off is going to cause any\nproblems at all.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 24 Aug 2015 10:04:42 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query not using GIN index"
}
] |
[
{
    "msg_contents": "Hello,\n\nI have a table with 12 columns and 20 Million rows. While writing the table\nI do not find any problem but when reading that I have some issues faced.\nWhen I perform a 'select * from table limit 14000000;' (selecting 14million\nrows), it is working fine. If the limit value is 15000000, it is throwing\nthe error as 'out of memory'.\n\nIf the query is 'select * from table' , The process is getting killed by\ndisplaying the message 'killed'.\n\nKindly tell me where it is going wrong. I have 6MB cache, 1.6GHz CPU, linux\n14.04 OS, 8GB RAM.\n\nThanks.",
"msg_date": "Mon, 24 Aug 2015 12:34:07 +0530",
"msg_from": "bhuvan Mitra <[email protected]>",
"msg_from_op": true,
"msg_subject": "problem with select *"
},
{
    "msg_contents": "Hi,\n\nPlease share with us on the configuration in postgresql.conf\n\nThanks!\n\nOn 24 August 2015 at 15:04, bhuvan Mitra <[email protected]> wrote:\n\n> Hello,\n>\n> I have a table with 12 columns and 20 Million rows. While writing the\n> table I do not find any problem but when reading that I have some issues\n> faced. When I perform a 'select * from table limit 14000000;' (selecting\n> 14million rows), it is working fine. If the limit value is 15000000, it is\n> throwing the error as 'out of memory'.\n>\n> If the query is 'select * from table' , The process is getting killed by\n> displaying the message 'killed'.\n>\n> Kindly tell me where it is going wrong. I have 6MB cache, 1.6GHz CPU,\n> linux 14.04 OS, 8GB RAM.\n>\n> Thanks.\n>\n\n\n\n-- \nRegards,\nAng Wei Shan",
"msg_date": "Mon, 24 Aug 2015 15:13:42 +0800",
"msg_from": "Wei Shan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problem with select *"
},
{
"msg_contents": "På mandag 24. august 2015 kl. 09:04:07, skrev bhuvan Mitra <[email protected] \n<mailto:[email protected]>>:\nHello,\n \n I have a table with 12 columns and 20 Million rows. While writing the table I \ndo not find any problem but when reading that I have some issues faced. When I \nperform a 'select * from table limit 14000000;' (selecting 14million rows), it \nis working fine. If the limit value is 15000000, it is throwing the error as \n'out of memory'.\n \n If the query is 'select * from table' , The process is getting killed by \ndisplaying the message 'killed'.\n \n Kindly tell me where it is going wrong. I have 6MB cache, 1.6GHz CPU, linux \n14.04 OS, 8GB RAM.\n\n \nIn what application are you performing these queries?\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>",
"msg_date": "Mon, 24 Aug 2015 10:59:11 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problem with select *"
},
{
"msg_contents": "\n\nOn 08/24/2015 03:04 AM, bhuvan Mitra wrote:\n> Hello,\n>\n> I have a table with 12 columns and 20 Million rows. While writing the \n> table I do not find any problem but when reading that I have some \n> issues faced. When I perform a 'select * from table limit 14000000;' \n> (selecting 14million rows), it is working fine. If the limit value is \n> 15000000, it is throwing the error as 'out of memory'.\n>\n> If the query is 'select * from table' , The process is getting killed \n> by displaying the message 'killed'.\n>\n> Kindly tell me where it is going wrong. I have 6MB cache, 1.6GHz CPU, \n> linux 14.04 OS, 8GB RAM.\n>\n>\n\n\nYou should be using a cursor.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Aug 2015 07:44:17 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problem with select *"
}
] |
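A note on Andrew's cursor suggestion above: the out-of-memory failure comes from the client buffering the entire result set, and a cursor avoids that by fetching in batches. The sketch below is illustrative, not from the thread; the psycopg2 usage, connection string, and table name are assumptions, and only the batching helper is generic:

```python
def iter_rows(cursor, batch_size=10_000):
    """Yield rows from a DB-API cursor in fixed-size batches via
    fetchmany(), so the client never holds the full result set."""
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            return
        for row in batch:
            yield row

# Hypothetical usage with psycopg2. A *named* cursor keeps the result
# set on the server and streams it, instead of transferring all
# 20 million rows into client memory at once:
#
#   import psycopg2
#   conn = psycopg2.connect("dbname=test")   # placeholder DSN
#   cur = conn.cursor(name="big_scan")       # server-side cursor
#   cur.execute("SELECT * FROM bigtable")    # placeholder table
#   for row in iter_rows(cur):
#       process(row)
```

The same idea is available at the SQL level with DECLARE ... CURSOR and FETCH.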
[
{
"msg_contents": "Working with 9.4.\n\nWe are in the process of unpacking complicated XML-data into tables.\nXML-data are already in a table with two fields (id, xml) - 47+ million\nrecords.\n\nSome of hour queries to extract the data and insert it in other tables runs\nfor days and in one case we have created a table with 758million unique\nrecords.\n\nNow my question. Is there a way to monitor the progress of a long running\nquery like this?\n\nI have recently read that it is probably better for processes like this to\ncopy result of the query to a csv-file and then import it again with copy\nas an insert. Next time I will try that.\n\nThe following query has been running for 6 days now and are still running\n(I have anonymized it a little bit) on a server with 768 GB RAM. It has\ncreated 44 temporary files so far:\n\nINSERT INTO table_a_link(uid,gn_id)\n\nWITH p AS\n (SELECT ARRAY[ARRAY['t','some_xpath']] AS some_xpath),\n q AS\n (SELECT (xpath('//t:UID/text()',xml,some_xpath))[1] uid,\n unnest(xpath('//t:grant',xml,some_xpath)) AS gr\n FROM source.xml_data a,\n p\n WHERE xpath_exists('//t:grant/t:grant_agency', xml ,some_xpath)),\n r AS\n (\n SELECT\n CASE WHEN\n xpath_exists('//t:grant_ids', gr, some_xpath)\n THEN\n unnest(xpath('//t:grant_ids', gr, some_xpath))\n ELSE\n NULL\n END\n AS GR_ids\n FROM q,\n p ) ,\n y as (SELECT A.UUID AS FO_ID,\n/* unnest(xpath('//t:grant_agency/text()',GR,ns))::citext AS agency,\n*/ CASE WHEN\n xpath_exists('//t:grant_id', gr_ids, some_xpath)\n THEN\n unnest(xpath('//t:grant_id/text()', gr_ids, some_xpath))::citext\n ELSE\n NULL\n END\n grant_NO,\n uid::varchar(19)\n from WOS.FUNDING_ORG A, p,q\n left join r on (xpath('//t:grant/t:grant_ids/t:grant_id/text()',gr,\n ARRAY[ARRAY['t','some_xpath']])::citext =\n\nxpath('//t:grant_id/text()',GR_IDS,ARRAY[ARRAY['t','some_xpath']])::citext)\n WHERE A.FUNDING_ORG =\n(xpath('//t:grant_agency/text()',GR,some_xpath))[1]::CITEXT\n )\n\n select distinct y.uid, B.uuid gn_id\n\n from y, 
table_b B\n where\n y.fo_id = B.fo_id\n and\n y.grant_no is not distinct from b.grant_no\n\n\nRegards.\nJohann\n\nWorking with 9.4.We are in the process of unpacking complicated XML-data into tables. XML-data are already in a table with two fields (id, xml) - 47+ million records.Some of hour queries to extract the data and insert it in other tables runs for days and in one case we have created a table with 758million unique records.Now my question. Is there a way to monitor the progress of a long running query like this?I have recently read that it is probably better for processes like this to copy result of the query to a csv-file and then import it again with copy as an insert. Next time I will try that. The following query has been running for 6 days now and are still running (I have anonymized it a little bit) on a server with 768 GB RAM. It has created 44 temporary files so far:INSERT INTO table_a_link(uid,gn_id)WITH p AS (SELECT ARRAY[ARRAY['t','some_xpath']] AS some_xpath), q AS (SELECT (xpath('//t:UID/text()',xml,some_xpath))[1] uid, unnest(xpath('//t:grant',xml,some_xpath)) AS gr FROM source.xml_data a, p WHERE xpath_exists('//t:grant/t:grant_agency', xml ,some_xpath)), r AS ( SELECT CASE WHEN xpath_exists('//t:grant_ids', gr, some_xpath) THEN unnest(xpath('//t:grant_ids', gr, some_xpath)) ELSE NULL END AS GR_ids FROM q, p ) , y as (SELECT A.UUID AS FO_ID,/* unnest(xpath('//t:grant_agency/text()',GR,ns))::citext AS agency,*/ CASE WHEN xpath_exists('//t:grant_id', gr_ids, some_xpath) THEN unnest(xpath('//t:grant_id/text()', gr_ids, some_xpath))::citext ELSE NULL END grant_NO, uid::varchar(19) from WOS.FUNDING_ORG A, p,q left join r on (xpath('//t:grant/t:grant_ids/t:grant_id/text()',gr, ARRAY[ARRAY['t','some_xpath']])::citext = xpath('//t:grant_id/text()',GR_IDS,ARRAY[ARRAY['t','some_xpath']])::citext) WHERE A.FUNDING_ORG = (xpath('//t:grant_agency/text()',GR,some_xpath))[1]::CITEXT ) select distinct y.uid, B.uuid gn_id from y, table_b B where y.fo_id = 
B.fo_id and y.grant_no is not distinct from b.grant_noRegards.Johann",
"msg_date": "Tue, 25 Aug 2015 09:22:45 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Long running query: How to monitor the progress"
},
{
"msg_contents": "Johann Spies <[email protected]> writes:\n> Some of hour queries to extract the data and insert it in other tables runs\n> for days and in one case we have created a table with 758million unique\n> records.\n\n> Now my question. Is there a way to monitor the progress of a long running\n> query like this?\n\nYou could watch how fast the target table is physically growing, perhaps.\nOr I think contrib/pgstattuple could be used to count uncommitted tuples\nin it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Aug 2015 09:50:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long running query: How to monitor the progress"
}
] |
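One way to act on Tom's first suggestion — watching how fast the target table physically grows — is to sample `pg_relation_size()` on a timer and derive a growth rate. A rough sketch follows; the polling part assumes psycopg2, and the table name and interval are placeholders, so only the rate calculation is generic:

```python
def growth_rate_mb_per_s(samples):
    """Given [(unix_time, size_in_bytes), ...] samples of
    pg_relation_size() for the target table, return the average
    growth rate in MB/s (0.0 if there are too few samples)."""
    if len(samples) < 2:
        return 0.0
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    if t1 <= t0:
        return 0.0
    return (b1 - b0) / (t1 - t0) / (1024 * 1024)

# Hypothetical polling loop (needs psycopg2 and a live server):
#
#   import time
#   samples = []
#   while True:
#       cur.execute("SELECT pg_relation_size('table_a_link')")
#       samples.append((time.time(), cur.fetchone()[0]))
#       print(f"{growth_rate_mb_per_s(samples):.2f} MB/s")
#       time.sleep(60)
```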
[
{
"msg_contents": "Hi\n\nI'm trying to get a query to run fast enough for interactive use. I've gotten\nsome speed-up, but still not there. It is for a tool called IRRExplorer\n(http://irrexplorer.nlnog.net/) that correlates IP routes between Internet\nRoute Registries and real-world routing information. We landed on PostgreSQL\nlargely due to indexing of the cidr type with gist indexing.\n\n* Preliminaries:\n\nThe query is about getting data from this table:\n\nirrexplorer=> \\d routes\n Table \"public.routes\"\n Column | Type | Modifiers \n-----------+---------+-----------\n route | cidr | not null\n asn | bigint | not null\n source_id | integer | not null\nIndexes:\n \"unique_entry\" UNIQUE CONSTRAINT, btree (route, asn, source_id)\n \"route_gist\" gist (route inet_ops)\nCheck constraints:\n \"positive_asn\" CHECK (asn > 0)\nForeign-key constraints:\n \"routes_source_id_fkey\" FOREIGN KEY (source_id) REFERENCES sources(id)\n\nComplete DDL: https://github.com/job/irrexplorer/blob/master/data/schema.sql\n\nData set: 125 MB on disk, 2.3 million rows.\n\nRunning stock PostgreSQL 9.4.4 on Ubuntu 14.10 (hosted on VMWare on OS X)\n\nHave done VACUUM, ANALYZE, and checked memory settings, and tried to increase\nwork_mem, but with no success. The issue seems cpu bound (100% cpu load during\nquery).\n\n\n* Query\n\nWhen a user inputs an AS number a simple match on the asn column will return\nthe stuff relevant. However, the interesting thing to display is\nconflicting/rogue routes. This means matching routes with the && operator to\nfind all covered/covering routes. This would look something like this:\n\nSELECT rv.route, rv.asn, rv.source\nFROM routes_view rv\nLEFT OUTER JOIN routes_view r ON (rv.route && r.route)\nWHERE rv.route && r.route AND r.asn = %s\n\nWhile this is fairly fast if the initial set of routes is relatively small \n(<100) it runs with a second or so, but if the number of routes matching \nthe asn is large (>1000), it takes quite a while (+30 seconds). 
Explain analyze link:\n\nhttp://explain.depesz.com/s/dHqo\n\nI am not terribly good at reading the output, but it seems most of the time is\nactually spent on the bitmap scan for the gist index. Is there another type of\nindexing that would behave better here?\n\nSince there are often identical routes in the initial set of routes (from the AS number matching), I tried reducing the initial set of matching routes:\n\nSELECT rv.route, rv.asn, rv.source\nFROM\n (SELECT DISTINCT route FROM routes_view WHERE asn = %s) r\nLEFT OUTER JOIN routes_view rv ON (r.route && rv.route)\nORDER BY rv.route;\n\nThis often cuts the set of initial matching routes by 25-50%, and cuts a\nsimilar amount of time from the query time. Explain analyze link:\n\nhttp://explain.depesz.com/s/kf13\n\nI tried further reducing the routes to the minimal set:\n\nWITH distinct_routes AS (SELECT DISTINCT route FROM routes_view WHERE asn = %s)\nSELECT route FROM distinct_routes\nEXCEPT\nSELECT r1.route\nFROM distinct_routes r1\n INNER JOIN distinct_routes r2 ON (r1.route << r2.route);\n\nBut this typically only yields a 10-20% reduction of the initial route set, and \nadds query complexity (putting the above in a CTE/WITH seems to make the \nquery significantly slower for some reason).\n\nThe main issue seems to be with the gist bitmap index. Is there a better way to\napproach this style of query?\n\n\n Best regards, Henrik",
"msg_date": "Tue, 25 Aug 2015 15:11:43 +0200 (CEST)",
"msg_from": "Henrik Thostrup Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Gist indexing performance with cidr types "
},
{
"msg_contents": "> I'm trying to get a query to run fast enough for interactive use. I've gotten\n> some speed-up, but still not there. It is for a tool called IRRExplorer\n> (http://irrexplorer.nlnog.net/) that correlates IP routes between Internet\n> Route Registries and real-world routing information. We landed on PostgreSQL\n> largely due to indexing of the cidr type with gist indexing.\n\nIt is nice to hear about someone making use of the feature.\n\n> * Query\n>\n> When a user inputs an AS number a simple match on the asn column will return\n> the stuff relevant. However, the interesting thing to display is\n> conflicting/rogue routes. This means matching routes with the && operator to\n> find all covered/covering routes. This would look something like this:\n>\n> SELECT rv.route, rv.asn, rv.source\n> FROM routes_view rv\n> LEFT OUTER JOIN routes_view r ON (rv.route && r.route)\n> WHERE rv.route && r.route AND r.asn = %s\n\nWhy don't you just use INNER JOIN like this:\n\nSELECT rv.route, rv.asn, rv.source\nFROM routes_view rv\nJOIN routes_view r ON rv.route && r.route\nWHERE r.asn = %s\n\n> While this is fairly fast if the initial set of routes is relatively small\n> (<100) it runs with a second or so, but if the number of routes matching the\n> asn is large (>1000), it takes quite a while (+30 seconds).Explain analyze\n> link:\n>\n> http://explain.depesz.com/s/dHqo\n>\n> I am not terribly good at reading the output, but it seem most of the time is\n> actually spend on the bitmap scan for the gist index. It there another type of\n> indexing that would behave better here?\n\nAn index to the \"asn\" column would probably help to the outer side,\nbut more time seems to be consumed on the inner side. Plain index\nscan would probably be faster for it. You can test it by setting\nenable_bitmapscan to false.\n\nThe problem about bitmap index scan is selectivity estimation. 
The\nplanner estimates a lot more rows would match the condition, so it\nchooses bitmap index scan. Selectivity estimation functions for inet\non PostgreSQL 9.4 just return some constants, so it is expected. We\ndeveloped better ones for 9.5. PostgreSQL 9.5 also supports index\nonly scans with GiST which can be even better than plain index scan.\nCan you try 9.5 to see if they help?",
"msg_date": "Tue, 25 Aug 2015 16:25:52 +0200",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "Hi, thanks for the reply.\n\nOn Tue, 25 Aug 2015, Emre Hasegeli wrote:\n\n>> I'm trying to get a query to run fast enough for interactive use. I've gotten\n>> some speed-up, but still not there. It is for a tool called IRRExplorer\n>> (http://irrexplorer.nlnog.net/) that correlates IP routes between Internet\n>> Route Registries and real-world routing information.\n\n>> We landed on PostgreSQL largely due to indexing of the cidr type with \n>> gist indexing.\n>\n> It is nice to hear about someone making use of the feature.\n\nThanks to whoever made it. It is probably a niche-feature though.\n\n>> SELECT rv.route, rv.asn, rv.source\n>> FROM routes_view rv\n>> LEFT OUTER JOIN routes_view r ON (rv.route && r.route)\n>> WHERE rv.route && r.route AND r.asn = %s\n>\n> Why don't you just use INNER JOIN like this:\n>\n> SELECT rv.route, rv.asn, rv.source\n> FROM routes_view rv\n> JOIN routes_view r ON rv.route && r.route\n> WHERE r.asn = %s\n\nI probably have a habit of thinking in outer joins. The inner join turns \nout to slightly slower though (but faster in planning), but it looks like \nit depends on a dice roll by the planner (it does bitmap heap scan on \ninner, and index scan on left outer).\n\n>> I am not terribly good at reading the output, but it seem most of the time is\n>> actually spend on the bitmap scan for the gist index. It there another type of\n>> indexing that would behave better here?\n>\n> An index to the \"asn\" column would probably help to the outer side,\n\n\"select route from routes where asn = %s\" takes .15-.2 seconds on my \nlaptop, so it isn't where the time is spend here.\n\n> but more time seems to be consumed on the inner side. Plain index\n> scan would probably be faster for it. You can test it by setting\n> enable_bitmapscan to false.\n\nThis actually makes it go slower for inner join (31s -> 56s). Left outer \njoin is around the same.\n\n> The problem about bitmap index scan is selectivity estimation. 
The\n> planner estimates a lot more rows would match the condition, so it\n> chooses bitmap index scan. Selectivity estimation functions for inet\n> on PostgreSQL 9.4 just return some constants, so it is expected. We\n> developed better ones for 9.5. PostgreSQL 9.5 also supports index\n> only scans with GiST which can be even better than plain index scan.\n\nOK, that is interesting.\n\n> Can you try 9.5 to see if they help?\n\nI'll try installing it and report back.\n\n\n Best regards, Henrik\n\n Henrik Thostrup Jensen <htj at nordu.net>\n Software Developer, NORDUnet",
"msg_date": "Wed, 26 Aug 2015 09:58:10 +0200 (CEST)",
"msg_from": "Henrik Thostrup Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "On Wed, 26 Aug 2015, Henrik Thostrup Jensen wrote:\n\n>> Can you try 9.5 to see if they help?\n>\n> I'll try installing it and report back.\n\nI upgraded to 9.5 (easier than expected) and ran vacuum analyze.\n\nThe query planner now chooses index scan for outer and inner join. This \nseems to cut off roughly a second or so (31s -> 30s, and 17s->16s for when \nusing distint on initial route set).\n\nQuery:\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT rv.route, rv.asn, rv.source FROM\n (SELECT DISTINCT route FROM routes_view WHERE asn = %s) r\nINNER JOIN routes_view rv ON (r.route && rv.route)\nORDER BY rv.route;\n\nExplain analyze: http://explain.depesz.com/s/L7kZ\n\n\n9.5 also seems to fix the case with using CTE/WITH was actually slower. \nThe fastest I can currently do is this, which finds the minimal set of \ncovering routes before joining:\n\nSET enable_bitmapscan = false;\nEXPLAIN ANALYZE\nWITH\ndistinct_routes AS (SELECT DISTINCT route FROM routes_view WHERE asn = %s),\nminimal_routes AS (SELECT route FROM distinct_routes\n EXCEPT\n SELECT r1.route\n FROM distinct_routes r1 INNER JOIN distinct_routes r2 ON (r1.route << r2.route))\nSELECT rv.route, rv.asn, rv.source\nFROM routes_view rv\nJOIN minimal_routes ON (rv.route <<= minimal_routes.route);\n\nExplain analyze: http://explain.depesz.com/s/Plx4\n\nThe query planner chooses bitmap Index Scan for this query, which adds \naround .5 second the query time, so it isn't that bad of a decision.\n\nUnfortunately it still takes 15 seconds for my test case (a big network, \nbut still a factor 10 from the biggest).\n\nAre the coverage operatons just that expensive?\n\n\n Best regards, Henrik\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Aug 2015 11:00:38 +0200 (CEST)",
"msg_from": "Henrik Thostrup Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "> Are the coverage operatons just that expensive?\n\nThey shouldn't be. A similar query like yours works in 0.5 second on my laptop:\n\n># create table inner_side as select i, ((random() * 255.5)::int::text || '.' || (random() * 255.5)::int::text || '.' || (random() * 255.5)::int::text || '.' || (random() * 255.5)::int::text || '/' || (random() * 16 + 9)::int::text)::inet::cidr as network from generate_series(1, 2300000) as i;\n> SELECT 2300000\n>\n># create table outer_side as select i, ((random() * 255.5)::int::text || '.' || (random() * 255.5)::int::text || '.' || (random() * 255.5)::int::text || '.' || (random() * 255.5)::int::text || '/' || (random() * 16 + 9)::int::text)::inet::cidr as network from generate_series(1, 732) as i;\n> SELECT 732\n>\n># create index on inner_side using gist(network inet_ops);\n> CREATE INDEX\n>\n># analyze;\n> ANALYZE\n>\n># explain analyze select * from outer_side join inner_side on outer_side.network && inner_side.network;\n> QUERY PLAN\n> ----------\n> Nested Loop (cost=0.41..563927.27 rows=137310 width=22) (actual time=0.115..474.103 rows=561272 loops=1)\n> -> Seq Scan on outer_side (cost=0.00..11.32 rows=732 width=11) (actual time=0.011..0.096 rows=732 loops=1)\n> -> Index Scan using inner_side_network_idx on inner_side (cost=0.41..540.38 rows=23000 width=11) (actual time=0.031..0.553 rows=767 loops=732)\n> Index Cond: ((outer_side.network)::inet && (network)::inet)\n> Planning time: 0.830 ms\n> Execution time: 505.641 ms\n> (6 rows)\n\nMaybe, something we haven't expected about your dataset causes a\nperformance regression on the index. Did you see anything relevant on\nthe server logs on index creation time?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Aug 2015 11:47:13 +0200",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
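For reference, Emre's synthetic data set above can also be generated outside psql, e.g. from a benchmark script. This Python sketch mirrors the SQL expression (four random octets, prefix length /9 to /25); it is an approximation of the SQL semantics and not code from the thread — `strict=False` zeroes the host bits much as the inet-to-cidr cast does:

```python
import ipaddress
import random

def random_cidr(rng):
    """Mirror the SQL: (random()*255.5)::int octets, (random()*16+9)::int mask."""
    octets = ".".join(str(round(rng.random() * 255.5)) for _ in range(4))
    return f"{octets}/{round(rng.random() * 16 + 9)}"

def random_networks(n, seed=0):
    """Generate n pseudo-random networks, reproducibly via a fixed seed."""
    rng = random.Random(seed)
    # strict=False masks out host bits, like the inet -> cidr cast in SQL
    return [ipaddress.ip_network(random_cidr(rng), strict=False)
            for _ in range(n)]
```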
{
"msg_contents": "On Wed, 26 Aug 2015, Emre Hasegeli wrote:\n\n>> Are the coverage operatons just that expensive?\n>\n> They shouldn't be. A similar query like yours works in 0.5 second on my laptop:\n[snip]\n\nI get the same from your testcase.\n\n> Maybe, something we haven't expected about your dataset causes a\n> performance regression on the index. Did you see anything relevant on\n> the server logs on index creation time?\n\nI tried dropping and re-creating the index. The only log entry was for the \ndrop statement.\n\nThe distribution of the data is not uniform like the data set you produce. \nThough I find it hard to believe that it would affect this as much.\n\nselect masklen(route), count(*) from routes group by masklen(route);\n\n masklen | count\n---------+---------\n 8 | 47\n 9 | 30\n 10 | 84\n 11 | 225\n 12 | 580\n 13 | 1163\n 14 | 2401\n 15 | 4530\n 16 | 32253\n 17 | 20350\n 18 | 35583\n 19 | 76307\n 20 | 111913\n 21 | 132807\n 22 | 229578\n 23 | 286986\n 24 | 1149793\n\nRest is rather small, though with bumps at /32 and /48 (typical IPv6 prefix length).\n\nReal-world address space is very fragmented, where as some is unused.\n\nThen there is the mixed IPv6 and IPv4 data that might factor in.\n\n\nI tried the approach from your benchmark, to try make a more isolated test \ncase:\n\nirrexplorer=> SELECT DISTINCT route INTO hmm FROM routes_view WHERE asn = 2914;\nSELECT 732\n\nirrexplorer=> explain analyze select routes.route from routes join hmm on routes.route && hmm.route;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.41..511914.27 rows=2558 width=7) (actual time=8.096..17209.778 rows=8127 loops=1)\n -> Seq Scan on hmm (cost=0.00..11.32 rows=732 width=7) (actual time=0.010..0.609 rows=732 loops=1)\n -> Index Only Scan using route_gist on routes (cost=0.41..470.32 rows=22900 width=7) (actual time=4.823..23.502 rows=11 
loops=732)\n Index Cond: (route && (hmm.route)::inet)\n Heap Fetches: 0\n Planning time: 0.971 ms\n Execution time: 17210.627 ms\n(7 rows)\n\nThe only difference in the query plan is that the above used an index-only \nscan, whereas your test case used an index scan (it did this for me as \nwell). I tried without index-only scans:\n\nirrexplorer=> set enable_indexonlyscan =false;\nSET\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.41..571654.27 rows=2558 width=7) (actual time=6.406..15899.791 rows=8127 loops=1)\n -> Seq Scan on hmm (cost=0.00..11.32 rows=732 width=7) (actual time=0.011..0.615 rows=732 loops=1)\n -> Index Scan using route_gist on routes (cost=0.41..551.93 rows=22900 width=7) (actual time=4.490..21.712 rows=11 loops=732)\n Index Cond: ((route)::inet && (hmm.route)::inet)\n Planning time: 0.505 ms\n Execution time: 15900.669 ms\n(6 rows)\n\nSlightly faster, but nothing significant. Something seems wonky.\n\n\n Best regards, Henrik",
"msg_date": "Wed, 26 Aug 2015 13:29:01 +0200 (CEST)",
"msg_from": "Henrik Thostrup Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "> Then there is the mixed IPv6 and IPv4 data that might factor in.\n\nIt shouldn't be the problem. The index should separate them on the top level.\n\n> I tried the approach from your benchmark, to try make a more isolated test\n> case:\n\nCan you try to isolate it even more by something like this:\n\nselect * from routes where route && 'a.b.c.d/e';\n\nIt would be easier to debug, if we can reproduce performance\nregression like this. It would also be helpful to check where the\ntime is spent. Maybe \"perf\" on Linux would help, though I haven't\nused it before.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Aug 2015 15:46:07 +0200",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "On Wed, Aug 26, 2015 at 4:29 AM, Henrik Thostrup Jensen <[email protected]>\nwrote:\n\n> On Wed, 26 Aug 2015, Emre Hasegeli wrote:\n>\n> Are the coverage operatons just that expensive?\n>>>\n>>\n>> They shouldn't be. A similar query like yours works in 0.5 second on my\n>> laptop:\n>>\n> [snip]\n>\n> I get the same from your testcase.\n>\n> Maybe, something we haven't expected about your dataset causes a\n>> performance regression on the index. Did you see anything relevant on\n>> the server logs on index creation time?\n>>\n>\n> I tried dropping and re-creating the index. The only log entry was for the\n> drop statement.\n>\n> The distribution of the data is not uniform like the data set you produce.\n> Though I find it hard to believe that it would affect this as much.\n>\n> select masklen(route), count(*) from routes group by masklen(route);\n>\n\nAny chance you can share the actual underlying data? I noticed it wasn't\non github, but is that because it is proprietary, or just because you don't\nthink it is interesting?\n\n\n> irrexplorer=> explain analyze select routes.route from routes join hmm on\n> routes.route && hmm.route;\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.41..511914.27 rows=2558 width=7) (actual\n> time=8.096..17209.778 rows=8127 loops=1)\n> -> Seq Scan on hmm (cost=0.00..11.32 rows=732 width=7) (actual\n> time=0.010..0.609 rows=732 loops=1)\n> -> Index Only Scan using route_gist on routes (cost=0.41..470.32\n> rows=22900 width=7) (actual time=4.823..23.502 rows=11 loops=732)\n> Index Cond: (route && (hmm.route)::inet)\n> Heap Fetches: 0\n> Planning time: 0.971 ms\n> Execution time: 17210.627 ms\n> (7 rows)\n>\n\nIf you loop over the 732 rows yourself, issuing the simple query against\neach retrieved constant value:\n\nexplain (analyze,buffers) select routes.route from routes where route && 
$1\n\nDoes each one take about the same amount of time, or are there some outlier\nvalues which take much more time than the others?\n\nCheers,\n\nJeff",
"msg_date": "Wed, 26 Aug 2015 09:08:01 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
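Jeff's per-value timing loop can be scripted along these lines. The driver part assumes psycopg2 and reuses the table and query names from the thread, so treat it as a sketch; only the outlier detection is generic:

```python
import statistics

def find_outliers(timings, factor=3.0):
    """Given [(value, seconds), ...], return the pairs whose runtime
    exceeds `factor` times the median -- a crude way to spot inputs
    that are pathologically slow for the index."""
    if not timings:
        return []
    median = statistics.median(t for _, t in timings)
    return [(v, t) for v, t in timings if t > factor * median]

# Hypothetical driver (psycopg2; query shapes from the thread):
#
#   import time
#   cur.execute("SELECT DISTINCT route FROM routes_view WHERE asn = %s", (2914,))
#   timings = []
#   for (route,) in cur.fetchall():
#       start = time.perf_counter()
#       cur.execute("SELECT route FROM routes WHERE route && %s", (route,))
#       cur.fetchall()
#       timings.append((route, time.perf_counter() - start))
#   print(find_outliers(timings))
```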
{
"msg_contents": "On Wed, 26 Aug 2015, Emre Hasegeli wrote:\n\n> Can you try to isolate it even more by something like this:\n\nI tried some different bisection approaches:\n\n-- base query (time ~19 seconds)\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT rv.route, rv.asn, rv.source\nFROM\n (SELECT DISTINCT route FROM routes_view WHERE asn = 2914 AND [ stuff here ]) r\n JOIN routes_view rv ON (r.route && rv.route);\n\nSELECT DISTINCT route FROM routes_view WHERE asn = 2914; -> 732 rows, 0.2 seconds\n\nmasklen(route) <= 20; -> 356 rows, join time 9.2 seconds\nmasklen(route) > 20; -> 376 rows, join time 9.1 seconds\n\nfamily(route) = 6 -> 22 rows, join time 0.2 seconds\nfamily(route) = 4 -> 710 rows, join time 18.1 seconds\n\nroute <= '154.0.0.0' -> 362 rows, join time 9.2 seconds\nroute > '154.0.0.0' -> 370 rows, join time 9.5 seconds\n\nNothing really interesting here though.\n\n\n> select * from routes where route && 'a.b.c.d/e';\n>\n> It would be easier to debug, if we can reproduce performance\n> regression like this. It would also be helpful to check where the\n> time is spent. Maybe \"perf\" on Linux would help, though I haven't\n> used it before.\n\nHaven't used this before either (but seem like a nice tool). Output while \nrunning the query:\n\nSamples: 99K of event 'cpu-clock', Event count (approx.): 11396624870\n 14.09% postgres [.] inet_gist_consistent\n 10.77% postgres [.] 0x00000000000c05f7\n 10.46% postgres [.] FunctionCall5Coll\n 5.68% postgres [.] gistdentryinit\n 5.57% postgres [.] 0x00000000000c05c4\n 4.62% postgres [.] FunctionCall1Coll\n 4.52% postgres [.] MemoryContextReset\n 4.25% postgres [.] bitncmp\n 3.32% libc-2.19.so [.] __memcmp_sse4_1\n 2.44% postgres [.] 0x00000000000c08f9\n 2.37% postgres [.] 0x00000000000c0907\n 2.27% postgres [.] 0x00000000000c0682\n 2.12% postgres [.] pg_detoast_datum_packed\n 1.86% postgres [.] hash_search_with_hash_value\n 1.40% postgres [.] inet_gist_decompress\n 1.09% postgres [.] 0x00000000000c067e\n 1.03% postgres [.] 
0x00000000000c047e\n 0.77% postgres [.] 0x00000000002f0e57\n 0.75% postgres [.] gistcheckpage\n\nThis seemed to stay relatively consistent throughout the query.\n\n\n Best regards, Henrik\n\n Henrik Thostrup Jensen <htj at nordu.net>\n Software Developer, NORDUnet",
"msg_date": "Thu, 27 Aug 2015 11:21:12 +0200 (CEST)",
"msg_from": "Henrik Thostrup Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "On Wed, 26 Aug 2015, Jeff Janes wrote:\n\n> Any chance you can share the actual underlying data?\n\nSure. I added a snapshot to the repo:\nhttps://github.com/job/irrexplorer/blob/master/data/irrexplorer_dump.sql.gz?raw=true\n\n> I noticed it wasn't on github, but is that because it is proprietary, or\n> just because you don't think it is interesting?\n\nI hoped it wouldn't be this complicated :-).\n\nBGP and IRR data is (mostly) public, but it changes constantly, so there \nis little sense in putting it in the repo, as it is not the authoritative \nsource (we have a script to bootstrap with instead).\n\n\n> If you loop over the 732 rows yourself, issuing the simple query against each retrieved constant value:\n> \n> explain (analyze,buffers) select routes.route from routes where route && $1\n> \n> Does each one take about the same amount of time, or are there some outlier values which take much more time than the others?\n\nI wrote a small script to try this out. It queries for each route 20 times \nto try and suppress the worst noise. I've sorted the results by time and \nput it here: https://gist.github.com/htj/1817883f92a9cb17a4f8\n(ignore the ntp timing issue causing a negative value)\n\nSome observations:\n\n- v6 is faster than v4, which is expected.\n\n- The slowest prefixes all seem to start with bits '11'.\n However, it is only by a factor of 1.5x, which is not really significant.\n\n\n Best regards, Henrik\n\n Henrik Thostrup Jensen <htj at nordu.net>\n Software Developer, NORDUnet\n",
"msg_date": "Thu, 27 Aug 2015 11:35:31 +0200 (CEST)",
"msg_from": "Henrik Thostrup Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "> Nothing really interesting here though.\n\nI think the slowdown is not related to the key you searched for,\nbut the organisation of the index. We have a simple structure for\nthe index keys. Basically, common bits of the child nodes are stored\non the parent node. It leads to inefficient indexes, where there\nare too many values with the same prefix. I couldn't quite understand\nwhy it performs so badly, though. You might have better luck with\nthe ip4r extension [1] or creating an index using the range types like\nthis:\n\n> # create type inetrange as range (subtype = inet);\n> CREATE TYPE\n>\n> # create function cidr_to_range(cidr) returns inetrange language sql as 'select inetrange(set_masklen($1::inet, 0), set_masklen(broadcast($1), 0))';\n> CREATE FUNCTION\n>\n> # create index on routes using gist ((cidr_to_range(route)));\n> CREATE INDEX\n>\n> # explain analyze select * from routes where cidr_to_range(route) && cidr_to_range('160.75/16');\n> QUERY PLAN\n> ----------\n> Bitmap Heap Scan on routes (cost=864.50..18591.45 rows=21173 width=19) (actual time=7.249..7.258 rows=7 loops=1)\n> Recheck Cond: (inetrange(set_masklen((route)::inet, 0), set_masklen(broadcast((route)::inet), 0)) && '[160.75.0.0/0,160.75.255.255/0)'::inetrange)\n> Heap Blocks: exact=3\n> -> Bitmap Index Scan on routes_cidr_to_range_idx (cost=0.00..859.21 rows=21173 width=0) (actual time=7.242..7.242 rows=7 loops=1)\n> Index Cond: (inetrange(set_masklen((route)::inet, 0), set_masklen(broadcast((route)::inet), 0)) && '[160.75.0.0/0,160.75.255.255/0)'::inetrange)\n> Planning time: 1.456 ms\n> Execution time: 7.346 ms\n> (7 rows)\n\nI have examined it for the performance problem:\n\n* It splits pages by IP family [2] a lot of times, but deleting IPv6\n rows from the table doesn't make it faster.\n* It doesn't fail and do a 50-50 split [3] as I expected.\n* The previously posted version [4] of it works roughly twice as fast,\n but it is still too slow.\n\n[1] https://github.com/RhodiumToad/ip4r\n[2] network_gist.c:705\n[3] network_gist.c:754\n[4] CAE2gYzzioHNxdZXyWz0waruJuw7wKpEJ-2xPTihjd6Rv8YJF_w@mail.gmail.com\n",
"msg_date": "Thu, 27 Aug 2015 18:05:06 +0200",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gist indexing performance with cidr types"
},
{
"msg_contents": "Hi\n\nOn Thu, 27 Aug 2015, Emre Hasegeli wrote:\n\n> I think the slowdown is not related to the key you searched for,\n> but the organisation of the index. We have a simple structure for\n> the index keys. Basically, common bits of the child nodes are stored\n> on the parent node. It leads to inefficient indexes, where there\n> are too many values with the same prefix. I couldn't quite understand\n> why it performs so badly, though.\n\nI can see the issue. Unfortunately IP space tends to be fragmented in some \nranges, and very sparse in others.\n\nIt is unfortunate that something to index IP prefixes doesn't handle BGP \nand IRR data very well (the only largish \"real\" datasets with IP prefixes \nI can think of).\n\n\n> You might have better luck with ip4r extension [1] or creating an index \n> using the range types like this:\n[snip]\n\nUsing the range type index:\n\n Nested Loop (cost=0.42..603902.92 rows=8396377 width=26) (actual time=0.514..662.500 rows=8047 loops=1)\n -> Seq Scan on hmm (cost=0.00..11.32 rows=732 width=7) (actual time=0.015..0.119 rows=732 loops=1)\n -> Index Scan using routes_cidr_to_range_idx on routes (cost=0.42..595.58 rows=22941 width=19) (actual time=0.262..0.903 rows=11 loops=732)\n Index Cond: (inetrange(set_masklen((route)::inet, 0), set_masklen(broadcast((route)::inet), 0)) && inetrange(set_masklen((hmm.route)::inet, 0), set_masklen(broadcast((hmm.route)::inet), 0)))\n Planning time: 0.211 ms\n Execution time: 662.769 ms\n(6 rows)\n\nBoom. This is actually useful.\n\nIt does take 70 seconds for the biggest network though. The index is also \nrather large:\n\n public | routes_cidr_to_range_idx | index | htj | routes | 158 MB |\n\nThe table is 119 MB of data. The gist index was 99 MB.\n\n\n Best regards, Henrik\n\n Henrik Thostrup Jensen <htj at nordu.net>\n Software Developer, NORDUnet\n",
"msg_date": "Fri, 28 Aug 2015 16:05:14 +0200 (CEST)",
"msg_from": "Henrik Thostrup Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gist indexing performance with cidr types"
}
] |
[
{
"msg_contents": "I'm running 9.3.4 with slon 2.2.3, I did a drop add last night at 9pm, it\nstarted this particular tables index creation at 10:16pm and it's still\nrunning. 1 single core is at 100% (32 core box) and there is almost zero\nI/O activity.\n\nCentOS 6.6\n\n\n 16398 | clsdb | 25765 | 10 | postgres | slon.remoteWorkerThread_1 |\n10.13.200.232 | | 45712 | 2015-08-25 21:12:01.6\n\n19819-07 | 2015-08-25 21:22:08.68766-07 | 2015-08-25 22:16:03.10099-07 |\n2015-08-25 22:16:03.100992-07 | f | active | select \"_cls\".fini\n\nshTableAfterCopy(143); analyze \"torque\".\"impressions\";\n\nI was wondering if there were underlying tools to see how it's progressing,\nor if there is anything I can do to bump the performance mid creation?\nNothing I can do really without stopping postgres or slon, but that would\nstart me back at square one.\n\n\nThanks\n\nTory\n\n\n\niostat: sdb is the db directory\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 3.55 0.00 0.23 0.00 0.00 96.22\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 1.00 0.00 12.00 0 24\n\nsdb 0.00 0.00 0.00 0 0\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 3.57 0.00 0.06 0.00 0.00 96.37\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 0.00 0.00 0.00 0 0\n\nsdb 21.50 0.00 15484.00 0 30968\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 3.72 0.00 0.06 0.00 0.00 96.22\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 2.00 0.00 20.00 0 40\n\nsdb 0.00 0.00 0.00 0 0\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 4.06 0.00 0.05 0.02 0.00 95.87\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 4.00 0.00 64.00 0 128\n\nsdb 3.50 0.00 108.00 0 216\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 3.36 0.00 0.03 0.00 0.00 96.61\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 0.00 0.00 0.00 0 0\n\nsdb 0.00 0.00 0.00 0 0\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 3.41 0.00 0.06 
0.00 0.00 96.53\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 0.00 0.00 0.00 0 0\n\nsdb 0.00 0.00 0.00 0 0\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 3.45 0.00 0.27 0.00 0.00 96.28\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 0.00 0.00 0.00 0 0\n\nsdb 1.00 0.00 24.00 0 48\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n\n 3.50 0.00 0.30 0.00 0.00 96.20\n\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n\nsda 1.50 0.00 344.00 0 688\n\nsdb 0.00 0.00 0.00 0 0",
"msg_date": "Wed, 26 Aug 2015 12:14:04 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index creation running now for 14 hours"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Tory M Blue\r\nSent: Wednesday, August 26, 2015 3:14 PM\r\nTo: pgsql-performance <[email protected]>\r\nSubject: [PERFORM] Index creation running now for 14 hours\r\n\r\nI'm running 9.3.4 with slon 2.2.3, I did a drop add last night at 9pm, it started this particular tables index creation at 10:16pm and it's still running. 1 single core is at 100% (32 core box) and there is almost zero I/O activity.\r\n\r\nCentOS 6.6\r\n\r\n\r\n 16398 | clsdb | 25765 | 10 | postgres | slon.remoteWorkerThread_1 | 10.13.200.232 | | 45712 | 2015-08-25 21:12:01.6\r\n19819-07 | 2015-08-25 21:22:08.68766-07 | 2015-08-25 22:16:03.10099-07 | 2015-08-25 22:16:03.100992-07 | f | active | select \"_cls\".fini\r\nshTableAfterCopy(143); analyze \"torque\".\"impressions\";\r\nI was wondering if there were underlying tools to see how it's progressing, or if there is anything I can do to bump the performance mid creation? Nothing I can do really without stopping postgres or slon, but that would start me back at square one.\r\n\r\nThanks\r\nTory\r\n\r\n\r\niostat: sdb is the db directory\r\n\r\navg-cpu: %user %nice %system %iowait %steal %idle\r\n 3.55 0.00 0.23 0.00 0.00 96.22\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 1.00 0.00 12.00 0 24\r\nsdb 0.00 0.00 0.00 0 0\r\n\r\navg-cpu: %user %nice %system %iowait %steal %idle\r\n 3.57 0.00 0.06 0.00 0.00 96.37\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 0.00 0.00 0.00 0 0\r\nsdb 21.50 0.00 15484.00 0 30968\r\n\r\navg-cpu: %user %nice %system %iowait %steal %idle\r\n 3.72 0.00 0.06 0.00 0.00 96.22\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 2.00 0.00 20.00 0 40\r\nsdb 0.00 0.00 0.00 0 0\r\n\r\navg-cpu: %user %nice %system %iowait %steal %idle\r\n 4.06 0.00 0.05 0.02 0.00 95.87\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 4.00 0.00 64.00 0 128\r\nsdb 3.50 0.00 108.00 0 216\r\n\r\navg-cpu: 
%user %nice %system %iowait %steal %idle\r\n 3.36 0.00 0.03 0.00 0.00 96.61\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 0.00 0.00 0.00 0 0\r\nsdb 0.00 0.00 0.00 0 0\r\n\r\navg-cpu: %user %nice %system %iowait %steal %idle\r\n 3.41 0.00 0.06 0.00 0.00 96.53\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 0.00 0.00 0.00 0 0\r\nsdb 0.00 0.00 0.00 0 0\r\n\r\navg-cpu: %user %nice %system %iowait %steal %idle\r\n 3.45 0.00 0.27 0.00 0.00 96.28\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 0.00 0.00 0.00 0 0\r\nsdb 1.00 0.00 24.00 0 48\r\n\r\navg-cpu: %user %nice %system %iowait %steal %idle\r\n 3.50 0.00 0.30 0.00 0.00 96.20\r\n\r\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\r\nsda 1.50 0.00 344.00 0 688\r\nsdb 0.00 0.00 0.00 0 0\r\n\r\nCheck pg_locks in regards to the table in question.\r\n\r\nRegards,\r\nIgor Neyman\r\n",
"msg_date": "Wed, 26 Aug 2015 19:18:47 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "On Wed, Aug 26, 2015 at 12:18 PM, Igor Neyman <[email protected]>\nwrote:\n\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Tory M Blue\n> *Sent:* Wednesday, August 26, 2015 3:14 PM\n> *To:* pgsql-performance <[email protected]>\n> *Subject:* [PERFORM] Index creation running now for 14 hours\n>\n>\n>\n> I'm running 9.3.4 with slon 2.2.3, I did a drop add last night at 9pm, it\n> started this particular tables index creation at 10:16pm and it's still\n> running. 1 single core is at 100% (32 core box) and there is almost zero\n> I/O activity.\n>\n>\n>\n> CentOS 6.6\n>\n>\n>\n>\n>\n> 16398 | clsdb | 25765 | 10 | postgres | slon.remoteWorkerThread_1 |\n> 10.13.200.232 | | 45712 | 2015-08-25 21:12:01.6\n>\n> 19819-07 | 2015-08-25 21:22:08.68766-07 | 2015-08-25 22:16:03.10099-07 |\n> 2015-08-25 22:16:03.100992-07 | f | active | select \"_cls\".fini\n>\n> shTableAfterCopy(143); analyze \"torque\".\"impressions\";\n>\n> I was wondering if there were underlying tools to see how it's\n> progressing, or if there is anything I can do to bump the performance mid\n> creation? Nothing I can do really without stopping postgres or slon, but\n> that would start me back at square one.\n>\n>\n>\n> Thanks\n>\n> Tory\n>\n>\n>\n>\n>\n> i\n>\n>\n>\n> Check pg_locks in regards to the table in question.\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n\nthanks Igor I did, but not clear what that is telling me, there are 249\nrows in there, nothing has a table name , they are all for the PID in the\n\"analyze torque.impressions line that I listed above pid 25765.\n\nHere is one for an exclusive lock, but what should I be looking for? 
There\nare no other processes on this box other than slon and this index creation.\n\n\n transactionid | | | | | |\n93588453 | | | | 4/25823460 | 25765 |\nExclusiveL\n\nock | t | f\n\nThanks\nTory",
"msg_date": "Wed, 26 Aug 2015 12:25:39 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "From: Tory M Blue [mailto:[email protected]]\r\nSent: Wednesday, August 26, 2015 3:26 PM\r\nTo: Igor Neyman <[email protected]>\r\nCc: pgsql-performance <[email protected]>\r\nSubject: Re: [PERFORM] Index creation running now for 14 hours\r\n\r\n\r\n\r\nOn Wed, Aug 26, 2015 at 12:18 PM, Igor Neyman <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]<mailto:[email protected]>] On Behalf Of Tory M Blue\r\nSent: Wednesday, August 26, 2015 3:14 PM\r\nTo: pgsql-performance <[email protected]<mailto:[email protected]>>\r\nSubject: [PERFORM] Index creation running now for 14 hours\r\n\r\nI'm running 9.3.4 with slon 2.2.3, I did a drop add last night at 9pm, it started this particular tables index creation at 10:16pm and it's still running. 1 single core is at 100% (32 core box) and there is almost zero I/O activity.\r\n\r\nCentOS 6.6\r\n\r\n\r\n 16398 | clsdb | 25765 | 10 | postgres | slon.remoteWorkerThread_1 | 10.13.200.232 | | 45712 | 2015-08-25 21:12:01.6\r\n19819-07 | 2015-08-25 21:22:08.68766-07 | 2015-08-25 22:16:03.10099-07 | 2015-08-25 22:16:03.100992-07 | f | active | select \"_cls\".fini\r\nshTableAfterCopy(143); analyze \"torque\".\"impressions\";\r\nI was wondering if there were underlying tools to see how it's progressing, or if there is anything I can do to bump the performance mid creation? Nothing I can do really without stopping postgres or slon, but that would start me back at square one.\r\n\r\nThanks\r\nTory\r\n\r\n\r\ni\r\n\r\nCheck pg_locks in regards to the table in question.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\nthanks Igor I did, but not clear what that is telling me, there are 249 rows in there, nothing has a table name , they are all for the PID in the \"analyze torque.impressions line that I listed above pid 25765.\r\n\r\nHere is one for an exclusive lock, but what should I be looking for? 
There are no other processes on this box other than slon and this index creation.\r\n\r\n\r\n transactionid | | | | | | 93588453 | | | | 4/25823460 | 25765 | ExclusiveL\r\nock | t | f\r\n\r\nThanks\r\nTory\r\n\r\nThere are object OIDs in pg_locks, not names.\r\nFind the OID of the table that you create your index for, and search pg_locks for the records referencing your table.\r\nIt cannot be that all records in pg_locks are for the pid running “analyze”; there should be records with the pid running your “create index”.\r\nWhat’s the size of the table you are indexing?\r\nAlso, take a look at pg_stat_activity for long running transactions/queries.\r\n\r\nIgor Neyman\r\n",
"msg_date": "Wed, 26 Aug 2015 19:36:46 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "On Wed, Aug 26, 2015 at 12:36 PM, Igor Neyman <[email protected]>\nwrote:\n\n>\n>\n>\n>\n> *From:* Tory M Blue [mailto:[email protected]]\n> *Sent:* Wednesday, August 26, 2015 3:26 PM\n> *To:* Igor Neyman <[email protected]>\n> *Cc:* pgsql-performance <[email protected]>\n> *Subject:* Re: [PERFORM] Index creation running now for 14 hours\n>\n>\n>\n>\n>\n>\n>\n> On Wed, Aug 26, 2015 at 12:18 PM, Igor Neyman <[email protected]>\n> wrote:\n>\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Tory M Blue\n> *Sent:* Wednesday, August 26, 2015 3:14 PM\n> *To:* pgsql-performance <[email protected]>\n> *Subject:* [PERFORM] Index creation running now for 14 hours\n>\n>\n>\n> I'm running 9.3.4 with slon 2.2.3, I did a drop add last night at 9pm, it\n> started this particular tables index creation at 10:16pm and it's still\n> running. 1 single core is at 100% (32 core box) and there is almost zero\n> I/O activity.\n>\n>\n>\n> CentOS 6.6\n>\n>\n>\n>\n>\n> 16398 | clsdb | 25765 | 10 | postgres | slon.remoteWorkerThread_1 |\n> 10.13.200.232 | | 45712 | 2015-08-25 21:12:01.6\n>\n> 19819-07 | 2015-08-25 21:22:08.68766-07 | 2015-08-25 22:16:03.10099-07 |\n> 2015-08-25 22:16:03.100992-07 | f | active | select \"_cls\".fini\n>\n> shTableAfterCopy(143); analyze \"torque\".\"impressions\";\n>\n> I was wondering if there were underlying tools to see how it's\n> progressing, or if there is anything I can do to bump the performance mid\n> creation? 
Nothing I can do really without stopping postgres or slon, but\n> that would start me back at square one.\n>\n>\n>\n> Thanks\n>\n> Tory\n>\n>\n>\n>\n>\n> i\n>\n>\n>\n> Check pg_locks in regards to the table in question.\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n>\n>\n> thanks Igor I did, but not clear what that is telling me, there are 249\n> rows in there, nothing has a table name , they are all for the PID in the\n> \"analyze torque.impressions line that I listed above pid 25765.\n>\n>\n>\n> Here is one for an exclusive lock, but what should I be looking for? There\n> are no other processes on this box other than slon and this index creation.\n>\n>\n>\n>\n>\n> transactionid | | | | | |\n> 93588453 | | | | 4/25823460 | 25765 |\n> ExclusiveL\n>\n> ock | t | f\n>\n>\n>\n> Thanks\n>\n> Tory\n>\n>\n>\n> There are objects OIDs in pg_lock, not names.\n>\n> Find the OID of the table that you create your index for, and search\n> pg_locks for the records referencing your table.\n>\n> It cannot be that all records in pg_locks are for pid running “analyze”,\n> there should be records with pid running your “create index”.\n>\n> What’s the size of the table you are indexing?\n>\n> Also, take a look at pg_stat_activity for long running\n> transactions/queries.\n>\n>\n>\n> Igor Neyman\n>\n>\n>\n\nthe table is 90GB without indexes, 285GB with indexes and bloat, The row\ncount is not actually completing.. 125Million rows over 13 months, this\ntable is probably close to 600million rows.\n\nYes I have long running queries, my job started last night at 9pm, it\nappears 3 of the 6 indexes on this table are completed, but I'm about to\nblow out my disk space, so I won't be able to give it much longer to\nrun.... 
Bah!\n\n16398 | clsdb | 25765 | 10 | postgres | slon.remoteWorkerThread_1 |\n10.13.200.232 | | 45712 | 2015-08-25\n21:12:01.619819-07 | 2015-08-25 21:22:08.68766-07 | 2015-08-25\n22:16:03.10099-07 | 2015-08-25 22:16:03.100992-07 | f | active |\nselect \"_cls\".finishTableAfterCopy(143); analyze \"torque\".\"impressions\";\n\n 16398 | lsdb | 25777 | 10 | postgres | slon.local_cleanup |\n10.13.200.232 | | 45718 | 2015-08-25\n21:12:01.624032-07 | 2015-08-25 21:23:09.103395-07 | 2015-08-25\n21:23:09.103395-07 | 2015-08-25 21:23:09.103397-07 | t | active |\nbegin;lock table \"_cls\".sl_config_lock;select \"_cls\".cleanupEvent('10\nminutes'::interval);commit;\nthere is nothing else in the pg_stat table other than a bunch of slony\nconnections, these are the only 2 items that have been running since the\nindex started last night at 10:16pm\n\n2015-08-25 22:16:03 PDT CONFIG remoteWorkerThread_1: 67254824703 bytes\ncopied for table \"torque\".\"impressions\"\n\nThe above is when it had finished copying the table and started on the\nindex..\n\nWell as I said I'm running out of storage as the index is creating some\nserious data on the filesystem, I'll have to kill it, try to massage the\ndata a bit and increase maintenance_work_mem to use some of my 256GB of\nram to try to get through this. 
Right now the 100% cpu process which is\nthis index is only using 3.5GB and has been for the last 15 hours\n\n\nTory\n\nOn Wed, Aug 26, 2015 at 12:36 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n \nFrom: Tory M Blue [mailto:[email protected]]\n\nSent: Wednesday, August 26, 2015 3:26 PM\nTo: Igor Neyman <[email protected]>\nCc: pgsql-performance <[email protected]>\nSubject: Re: [PERFORM] Index creation running now for 14 hours\n \n\n \n\n \n\nOn Wed, Aug 26, 2015 at 12:18 PM, Igor Neyman <[email protected]> wrote:\n\n\n\n \n \nFrom:\[email protected] [mailto:[email protected]]\nOn Behalf Of Tory M Blue\nSent: Wednesday, August 26, 2015 3:14 PM\nTo: pgsql-performance <[email protected]>\nSubject: [PERFORM] Index creation running now for 14 hours\n \n\n\nI'm running 9.3.4 with slon 2.2.3, I did a drop add last night at 9pm, it started this particular tables index creation at 10:16pm and it's still running. 1 single core is at 100%\n (32 core box) and there is almost zero I/O activity.\n\n\n \n\n\nCentOS 6.6\n\n\n \n\n\n \n\n\n 16398 | clsdb | 25765 | 10 | postgres | slon.remoteWorkerThread_1 | 10.13.200.232 | | 45712 | 2015-08-25 21:12:01.6\n19819-07 | 2015-08-25 21:22:08.68766-07 | 2015-08-25 22:16:03.10099-07 | 2015-08-25 22:16:03.100992-07 | f | active | select \"_cls\".fini\nshTableAfterCopy(143); analyze \"torque\".\"impressions\"; \nI was wondering if there were underlying tools to see how it's progressing, or if there is anything I can do to bump the performance mid creation? 
Nothing I can do really without\n stopping postgres or slon, but that would start me back at square one.\n \nThanks\nTory\n \n \ni\n \nCheck pg_locks in regards to the table in question.\n \nRegards,\nIgor Neyman\n\n\n\n\n\n\n \n\n\nthanks Igor I did, but not clear what that is telling me, there are 249 rows in there, nothing has a table name , they are all for the PID in the \"analyze torque.impressions line that I listed above pid 25765.\n\n\n \n\n\nHere is one for an exclusive lock, but what should I be looking for? There are no other processes on this box other than slon and this index creation.\n\n\n \n\n\n \n\n\n transactionid | | | | | | 93588453 | | | | 4/25823460 | 25765 | ExclusiveL\n\nock | t | f\n\n \n\n\nThanks\n\n\n\nTory \n\n\n\n \nThere are objects OIDs in pg_lock, not names.\nFind the OID of the table that you create your index for, and search pg_locks for the records referencing your table.\nIt cannot be that all records in pg_locks are for pid running “analyze”, there should be records with pid running your “create index”.\nWhat’s the size of the table you are indexing?\nAlso, take a look at pg_stat_activity for long running transactions/queries.\n \nIgor Neyman\n the table is 90GB without indexes, 285GB with indexes and bloat, The row count is not actually completing.. 125Million rows over 13 months, this table is probably close to 600million rows.Yes I have long running queries, my job started last night at 9pm, it appears 3 of the 6 indexes on this table are completed, but I'm about to blow out my disk space, so I won't be able to give it much longer to run.... 
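Igor's suggestion above can be sketched in SQL. This is only a sketch: the schema-qualified table name is taken from earlier in this thread, and the queries assume a live 9.3 server.

```sql
-- OID of the table being indexed (name taken from this thread)
SELECT c.oid
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'torque' AND c.relname = 'impressions';

-- Locks referencing that table, joined to the backend holding or waiting on each
SELECT l.pid, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'torque.impressions'::regclass;

-- Longest-running transactions first
SELECT pid, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;
```

The backend building the index should show up in the second query with granted locks on the table; any row with granted = f is waiting behind it.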
",
"msg_date": "Wed, 26 Aug 2015 13:26:30 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue <[email protected]> wrote:\n>\n> Right now the 100% cpu process which is this index is only using 3.5GB\n> and has been for the last 15 hours\n>\n\nIf 100% cpu, you can do 'sudo perf top' to see what the CPU is busy about.\n\nRegards,\nQingqing\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Aug 2015 14:45:22 -0700",
"msg_from": "Qingqing Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "On Wed, Aug 26, 2015 at 2:45 PM, Qingqing Zhou <[email protected]>\nwrote:\n\n> On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue <[email protected]> wrote:\n> >\n> > Right now the 100% cpu process which is this index is only using 3.5GB\n> > and has been for the last 15 hours\n> >\n>\n> If 100% cpu, you can do 'sudo perf top' to see what the CPU is busy about.\n>\n> Regards,\n> Qingqing\n>\n\n\nI appreciate the attempted help, but I know what it's doing, it's creating\nindexes for the last 14+ hours. I've killed it now, as it was about to run\nmy machine out of disk space, stopped it at 97% full, could not go any\nlonger.\n\nI will now clean up the table a bit but will still have 500million rows\nwith 6 indexes on it. I will create the indexes after the data is laid down\nvs during, so it doesn't block my other table replications. I will then\nfire off my index creations in parallel for my other tables so I can\nactually use the hardware the DB is sitting on.\n\nBut I guess the answer is, no real way to tell what the box is doing when\nit's creating an index. Yes there was a lock, no I could not find a way to\nsee how it's progressing so there was no way for me to gauge when it would\nbe done.\n\nThanks\nTory",
"msg_date": "Wed, 26 Aug 2015 14:53:23 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "Hi,\n\nOn 08/26/2015 11:53 PM, Tory M Blue wrote:\n>\n>\n> On Wed, Aug 26, 2015 at 2:45 PM, Qingqing Zhou\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue <[email protected]\n> <mailto:[email protected]>> wrote:\n> >\n> > Right now the 100% cpu process which is this index is only using\n> 3.5GB\n> > and has been for the last 15 hours\n> >\n>\n> If 100% cpu, you can do 'sudo perf top' to see what the CPU is busy\n> about.\n>\n> Regards,\n> Qingqing\n>\n>\n>\n> I appreciate the attempted help, but I know what it's doing, it's\n> creating indexes for the last 14+ hours.\n\nSure, but what exactly was it doing? 'perf top' might give us a hint \nwhich function is consuming most of the time, for example.\n\n > I've killed it now, as it was\n> about to run my machine out of disk space, stopped it at 97% full, could\n> not go any longer.\n\nWhich suggests it's using a lot of temp files.\n\nIndexes are built by reading all the necessary data from the table (just \nthe columns), sorted and then an index is built using the sorted data \n(because it can be done very efficiently - much faster than when simply \ninserting the tuples into the btree index).\n\nThe fact that you ran out of disk space probably means that you don't \nhave enough space for the sort (it clearly does not fit into \nmaintenance_work_mem), and there's no way around that - you need enough \ndisk space.\n\n> I will now clean up the table a bit but will still have 500million rows\n> with 6 indexes on it. I will create the indexes after the data is laid\n> down vs during, so it doesn't block my other table replications. 
I will\n> then fire off my index creations in parallel for my other tables so I\n> can actually use the hardware the DB is sitting on.\n\nThat's a very bad idea, because each of the index builds will require \ndisk space for the sort, and you're even more likely to run out of disk \nspace.\n\n>\n> But I guess the answer is, no real way to tell what the box is doing\n> when it's creating an index. Yes there was a lock, no I could not find a\n> way to see how it's progressing so there was no way for me to gauge when\n> it would be done.\n\nHad it been waiting on a lock, it wouldn't consume 100% of CPU.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Thu, 27 Aug 2015 00:36:03 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "\n\nOn 08/26/2015 10:26 PM, Tory M Blue wrote:\n\n>\n> the table is 90GB without indexes, 285GB with indexes and bloat, The\n> row count is not actually completing.. 125Million rows over 13 months,\n> this table is probably close to 600million rows.\n\nYou don't need to do SELECT COUNT(*) if you only need an approximate \nnumber. You can look at pg_class.reltuples:\n\n  SELECT reltuples FROM pg_class WHERE relname = 'impressions';\n\nThat should be a sufficiently accurate estimate.\n\n> The above is when it had finished copying the table and started on the\n> index..\n>\n> Well as I said I'm running out of storage as the index is creating some\n> serious data on the filesystem, I'll have to kill it, try to massage the\n> data a bit and increase the maintenance_work mem to use some of my 256GB\n> of ram to try to get through this. Right now the 100% cpu process which\n> is this index is only using 3.5GB and has been for the last 15 hours\n\nPlease post details on the configuration (shared_buffers, work_mem, \nmaintenance_work_mem and such).\n\nBTW while the CREATE INDEX is reporting 3.5GB, it most likely wrote \na lot of data into on-disk chunks when sorting the data. So it's \nactually using the memory through page cache (i.e. don't increase \nmaintenance_work_mem too much, you don't want to force the data to disk \nneedlessly).\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Thu, 27 Aug 2015 00:42:02 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index creation running now for 14 hours"
},
{
"msg_contents": "On Wed, Aug 26, 2015 at 3:36 PM, Tomas Vondra\n<[email protected]> wrote:\n>> But I guess the answer is, no real way to tell what the box is doing\n>> when it's creating an index. Yes there was a lock, no I could not find a\n>> way to see how it's progressing so there was no way for me to gauge when\n>> it would be done.\n>\n>\n> Had it been waiting on a lock, it wouldn't consume 100% of CPU.\n\nWhen things are going out to disk anyway, you're often better off with\na lower maintenance_work_mem (or work_mem). It's actually kind of\nbogus that run size is dictated by these settings. Reducing it will\ntend to make tuplesort's maintenance of the heap invariant\ninexpensive, while not really making the merge phase more painful. I\nwould try 128MB of maintenance_work_mem. That could be significantly\nfaster. Check how the I/O load on the system compares with a higher\nmaintenance_work_mem setting. Often, this will make the sort less CPU\nbound, which is good here.\n\nI am currently working on making this a lot better in Postgres 9.6.\nAlso, note that text and numeric sorts will be much faster in 9.5.\n\nOf course, as Tomas says, if you don't have the disk space to do the\nsort, you're not going to be able to complete it. That much is very\nclear.\n\nIf you're really worried about these costs, I suggest enabling\ntrace_sort locally, and monitoring the progress of this sort in the\nlogs.\n\n-- \nRegards,\nPeter Geoghegan\n",
"msg_date": "Wed, 26 Aug 2015 15:58:05 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index creation running now for 14 hours"
}
] |
[
{
"msg_contents": "Hi,\n\nI am currently working on a data migration for a client.\nThe general plan is :\n - Read data from a postgresql database\n - Convert them to the new application\n - Insert in another database (same postgresql instance).\n\nThe source database is rather big (~40GB, wo indexes), and the\nconversion process takes some time. It is done by multiple workers\non a separate Linux environnement, piece by piece.\n\nWhen we start the migration, at first it looks good.\nPerformances are good, and it ran smoothly. After a few hours,\nwe noticed that things started to slow down. Some queries seemed\nto be stuck, so we waited for them to end, and restarted the server.\n\nAfter that it went well for some time (~10 minutes), then it slowed\ndown again. We tried again (a few times), and the pattern repeats.\n\nMy postgresql specific problem is that it looks like the server gets\nstuck. CPU usage is <10%, RAM usage is under 50% max, there is\nno noticeable disk usage. But, there are some (<10) active queries,\nsome of which may take several hours to complete. Those queries\nwork properly (i.e < 1min) right after the server restarts.\n\nSo my question is : What could slow the queries from ~1min to 2hours\nwhich does not involve CPU, Memory, or disk usage, and which would\n\"reset\" when restarting the server ?\n\nFor information, the number of processes does not seem to be the\nproblem, there are ~20 connections with max_connection set to 100.\nWe noticed at some point that the hard drive holding the target\ndatabase was heavily fragmented (100%...), but defrag did not\nseem to change anything.\n\nAlso, the queries that appear to get stuck are \"heavy\" queries,\nthough after a fresh restart they execute in a reasonable time.\n\nFinally, whatever causes the database to wait also causes the\nWindows instance to slow down. But restarting Postgresql fixes\nthis as well.\n\nConfiguration :\n\nThe Postgresql server runs on a Windows Virtual Machine under\nVMWare. 
The VM has dedicated resources, and the only other\nVM on the host is the applicative server (which runs idle while\nwaiting for the database). There is nothing else running on the\nserver except postgresql (well, there were other things, but we\nstopped everything to no avail).\n\nPostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit\nWindows 2008R2 (64 bits)\n10 Go RAM\n4 vCPU\n\nHost : VMWare ESXi 5.5.0 build-2068190\nCPU Intel XEON X5690 3.97GHz\nHDD 3x Nearline SAS 15K RAID0\n\nPlease let me know if any other information may be useful.\n\nJean Cavallo",
"msg_date": "Thu, 27 Aug 2015 19:21:22 +0200",
"msg_from": "Jean Cavallo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server slowing down over time"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Jean Cavallo\r\nSent: Thursday, August 27, 2015 1:21 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Server slowing down over time\r\n\r\nHi,\r\n\r\nI am currently working on a data migration for a client.\r\nThe general plan is :\r\n - Read data from a postgresql database\r\n - Convert them to the new application\r\n - Insert in another database (same postgresql instance).\r\n\r\nThe source database is rather big (~40GB, wo indexes), and the\r\nconversion process takes some time. It is done by multiple workers\r\non a separate Linux environnement, piece by piece.\r\n\r\nWhen we start the migration, at first it looks good.\r\nPerformances are good, and it ran smoothly. After a few hours,\r\nwe noticed that things started to slow down. Some queries seemed\r\nto be stuck, so we waited for them to end, and restarted the server.\r\n\r\nAfter that it went well for some time (~10 minutes), then it slowed\r\ndown again. We tried again (a few times), and the pattern repeats.\r\n\r\nMy postgresql specific problem is that it looks like the server gets\r\nstuck. CPU usage is <10%, RAM usage is under 50% max, there is\r\nno noticeable disk usage. But, there are some (<10) active queries,\r\nsome of which may take several hours to complete. 
Those queries\r\nwork properly (i.e < 1min) right after the server restarts.\r\n\r\nSo my question is : What could slow the queries from ~1min to 2hours\r\nwhich does not involve CPU, Memory, or disk usage, and which would\r\n\"reset\" when restarting the server ?\r\n\r\nFor information, the number of processes does not seem to be the\r\nproblem, there are ~20 connections with max_connection set to 100.\r\nWe noticed at some point that the hard drive holding the target\r\ndatabase was heavily fragmented (100%...), but defrag did not\r\nseem to change anything.\r\n\r\nAlso, the queries that appear to get stuck are \"heavy\" queries,\r\nthough after a fresh restart they execute in a reasonable time.\r\n\r\nFinally, whatever causes the database to wait also causes the\r\nWindows instance to slow down. But restarting Postgresql fixes\r\nthis as well.\r\n\r\nConfiguration :\r\n\r\nThe Postgresql server runs on a Windows Virtual Machine under\r\nVMWare. The VM has dedicated resources, and the only other\r\nVM on the host is the applicative server (which runs idle while\r\nwaiting for the database). 
There is nothing else running on the\r\nserver except postgresql (well, there were other things, but we\r\nstopped everything to no avail).\r\n\r\nPostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit\r\nWindows 2008R2 (64 bits)\r\n10 Go RAM\r\n4 vCPU\r\n\r\nHost : VMWare ESXi 5.5.0 build-2068190\r\nCPU Intel XEON X5690 3.97GHz\r\nHDD 3x Nearline SAS 15K RAID0\r\n\r\nPlease let me know if any other information may be useful.\r\n\r\nJean Cavallo\r\n\r\n\r\nHaving 4 CPUs, I’d try to decrease number of connections from ~20 to 8, and see if “slowing down” still happens.\r\n\r\nRegards,\r\nIgor Neyman\r\n",
"msg_date": "Thu, 3 Sep 2015 13:07:21 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server slowing down over time"
},
{
"msg_contents": "Could you check pg_locks table to see if there's any major difference\nbetween \"healthy\" state and \"slowing down\" state?\n\nOn 3 September 2015 at 21:07, Igor Neyman <[email protected]> wrote:\n\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Jean Cavallo\n> *Sent:* Thursday, August 27, 2015 1:21 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] Server slowing down over time\n>\n>\n>\n> Hi,\n>\n>\n>\n> I am currently working on a data migration for a client.\n>\n> The general plan is :\n>\n> - Read data from a postgresql database\n>\n> - Convert them to the new application\n>\n> - Insert in another database (same postgresql instance).\n>\n>\n>\n> The source database is rather big (~40GB, wo indexes), and the\n>\n> conversion process takes some time. It is done by multiple workers\n>\n> on a separate Linux environnement, piece by piece.\n>\n>\n>\n> When we start the migration, at first it looks good.\n>\n> Performances are good, and it ran smoothly. After a few hours,\n>\n> we noticed that things started to slow down. Some queries seemed\n>\n> to be stuck, so we waited for them to end, and restarted the server.\n>\n>\n>\n> After that it went well for some time (~10 minutes), then it slowed\n>\n> down again. We tried again (a few times), and the pattern repeats.\n>\n>\n>\n> My postgresql specific problem is that it looks like the server gets\n>\n> stuck. CPU usage is <10%, RAM usage is under 50% max, there is\n>\n> no noticeable disk usage. But, there are some (<10) active queries,\n>\n> some of which may take several hours to complete. 
Those queries\n>\n> work properly (i.e < 1min) right after the server restarts.\n>\n>\n>\n> So my question is : What could slow the queries from ~1min to 2hours\n>\n> which does not involve CPU, Memory, or disk usage, and which would\n>\n> \"reset\" when restarting the server ?\n>\n>\n>\n> For information, the number of processes does not seem to be the\n>\n> problem, there are ~20 connections with max_connection set to 100.\n>\n> We noticed at some point that the hard drive holding the target\n>\n> database was heavily fragmented (100%...), but defrag did not\n>\n> seem to change anything.\n>\n>\n>\n> Also, the queries that appear to get stuck are \"heavy\" queries,\n>\n> though after a fresh restart they execute in a reasonable time.\n>\n>\n>\n> Finally, whatever causes the database to wait also causes the\n>\n> Windows instance to slow down. But restarting Postgresql fixes\n>\n> this as well.\n>\n>\n>\n> Configuration :\n>\n>\n>\n> The Postgresql server runs on a Windows Virtual Machine under\n>\n> VMWare. The VM has dedicated resources, and the only other\n>\n> VM on the host is the applicative server (which runs idle while\n>\n> waiting for the database). 
There is nothing else running on the\n>\n> server except postgresql (well, there were other things, but we\n>\n> stopped everything to no avail).\n>\n>\n>\n> PostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit\n>\n> Windows 2008R2 (64 bits)\n>\n> 10 Go RAM\n>\n> 4 vCPU\n>\n>\n>\n> Host : VMWare ESXi 5.5.0 build-2068190\n>\n> CPU Intel XEON X5690 3.97GHz\n>\n> HDD 3x Nearline SAS 15K RAID0\n>\n>\n>\n> Please let me know if any other information may be useful.\n>\n>\n> Jean Cavallo\n>\n>\n>\n>\n>\n> Having 4 CPUs, I’d try to decrease number of connections from ~20 to 8,\n> and see if “slowing down” still happens.\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n>\n>\n\n\n\n-- \nRegards,\nAng Wei Shan",
"msg_date": "Fri, 4 Sep 2015 14:13:04 +0800",
"msg_from": "Wei Shan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server slowing down over time"
},
{
"msg_contents": "Hi,\n\nOn 08/27/2015 07:21 PM, Jean Cavallo wrote:\n> Hi,\n>\n> I am currently working on a data migration for a client.\n> The general plan is :\n> - Read data from a postgresql database\n> - Convert them to the new application\n> - Insert in another database (same postgresql instance).\n>\n> The source database is rather big (~40GB, wo indexes), and the\n> conversion process takes some time. It is done by multiple workers\n> on a separate Linux environnement, piece by piece.\n>\n> When we start the migration, at first it looks good.\n> Performances are good, and it ran smoothly. After a few hours,\n> we noticed that things started to slow down. Some queries seemed\n> to be stuck, so we waited for them to end, and restarted the server.\n>\n> After that it went well for some time (~10 minutes), then it slowed\n> down again. We tried again (a few times), and the pattern repeats.\n\nIf you're moving a lot of data (especially if the destination database \nis empty), one possible problem is statistics. This generally is not a \nproblem in regular operation, because the data growth is gradual and \nautovacuum analyzes the tables regularly, but in batch processes this is \noften a big issue.\n\nThe usual scenario is that there's an empty (or very small) table, where \nindexes are inefficient so PostgreSQL plans the queries with sequential \nscans. The table suddenly grows, which would make indexes efficient, but \nthe planner has no idea about that until autovacuum kicks in. But before \nthat happens, the batch process executes queries on that table.\n\nTry adding ANALYZE after steps that add a lot of data.\n\n>\n> My postgresql specific problem is that it looks like the server gets\n> stuck. CPU usage is <10%, RAM usage is under 50% max, there is no\n> noticeable disk usage. But, there are some (<10) active queries, some\n> of which may take several hours to complete. 
Those queries work\n> properly (i.e < 1min) right after the server restarts.\n\nThat's a bit strange. Essentially what you're saying is that the \nworkload is neither CPU nor I/O bound. To make it CPU bound, at least \none CPU would have to be 100% utilized, and with 4 CPUs that's 25%, but \nyou're saying there's only 10% used. But you're saying I/O is not the \nbottleneck either.\n\n> So my question is : What could slow the queries from ~1min to 2hours\n> which does not involve CPU, Memory, or disk usage, and which would\n> \"reset\" when restarting the server ?\n\nA lot of things, unfortunately, and the fact that this is a migration \nmoving data between two databases makes it even more complicated. The \nvirtualization does not make it less complex either.\n\nFor example, are you sure it's not stuck on the other database? I assume \nyou're running some long queries, so maybe it's stuck there and the \ndestination database is just waiting for data? That'd be consistent with \nthe low CPU and I/O usage you observe.\n\nLocking is another possibility, although it probably is not the only \ncause - it'd be utilizing at least one CPU otherwise.\n\n>\n> For information, the number of processes does not seem to be the\n> problem, there are ~20 connections with max_connection set to 100.\n> We noticed at some point that the hard drive holding the target\n> database was heavily fragmented (100%...), but defrag did not\n> seem to change anything.\n\nIf it was a problem, you'd see high I/O usage. And that's not the case.\n\n>\n> Also, the queries that appear to get stuck are \"heavy\" queries,\n> though after a fresh restart they execute in a reasonable time.\n\nDoes the plan change? If not, check waiting locks in pg_locks.\n\n>\n> Finally, whatever causes the database to wait also causes the\n> Windows instance to slow down. But restarting Postgresql fixes\n> this as well.\n\nThat's a bit strange, I guess. 
If you're only observing light CPU and I/O \nusage, then the instance should not be slow, unless there's something \nelse going on - possibly at the virtualization level (e.g. another busy \ninstance on the same hardware, some sort of accounting that limits the \nresources after a time, etc.)\n\n> Configuration :\n>\n> The Postgresql server runs on a Windows Virtual Machine under\n> VMWare. The VM has dedicated resources, and the only other\n> VM on the host is the applicative server (which runs idle while\n> waiting for the database). There is nothing else running on the\n> server except postgresql (well, there were other things, but we\n> stopped everything to no avail).\n>\n> PostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit\n\nYou're 4 versions behind.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 04 Sep 2015 13:59:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server slowing down over time"
},
{
"msg_contents": "On 9/4/15 6:59 AM, Tomas Vondra wrote:\n>> Finally, whatever causes the database to wait also causes the\n>> Windows instance to slow down. But restarting Postgresql fixes\n>> this as well.\n>\n> That's a bit strange, I guess. If you're not observing light CPU and I/O\n> usage, then the instance should not be slow, unless there's something\n> else going on - possibly at the virtualization level (e.g. another busy\n> instance on the same hardware, some sort of accounting that limits the\n> resources after a time, etc.)\n\nI've experienced something similar on linux before. Database is slow, \nbut neither CPU nor IO is maxed. IIRC there was nothing disturbing in \npg_locks either. I don't recall the server actually slowing down, but \nthis would have been on something with at least 24 cores, so...\n\nMy suspicion has always been that there's some form of locking that \nisn't being caught by the tools.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Sep 2015 17:08:07 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server slowing down over time"
}
] |
[
{
"msg_contents": "hi,\r\nWe are trying to use PG FDW to read data from external tables.\r\nWe’d like to use PG as a SQL parser and push predicates into backend NoSQL databases.\r\nIt works well for single-table queries: PG FDW is able to push all required predicates down to the backend database.\r\nHowever, things go odd after introducing a join operation. For example, using TPCH as a test case, with normal local tables and foreign tables:\r\ntpch=# \\d \r\n List of relations \r\n Schema | Name | Type | Owner \r\n--------+------------+---------------+---------- \r\n public | lineitem | foreign table | sdbadmin \r\n public | orders | foreign table | sdbadmin \r\n public | t_lineitem | table | sdbadmin \r\n public | t_orders | table | sdbadmin \r\n(4 rows) \r\n \r\nWith foreign tables, since PG doesn’t know whether an index exists on the inner table, it has to perform a merge join and fetch everything from the foreign table, which takes forever: \r\ntpch=# \r\nExplain analyze select count(*) from( select o_orderkey from orders where o_custkey=28547 ) AS T, lineitem as l where T.o_orderkey=l.l_orderkey; \r\n QUERY PLAN \r\n------------------------------------------------------------------------------------------------------------------------------------------------------ \r\n Aggregate (cost=5575229.23..5575229.24 rows=1 width=0) (actual time=67169.270..67169.271 rows=1 loops=1) \r\n -> Merge Join (cost=1621891.36..5012615.33 rows=225045562 width=0) (actual time=64543.730..67169.247 rows=31 loops=1) \r\n Merge Cond: (orders.o_orderkey = l.l_orderkey) \r\n -> Sort (cost=79230.48..79249.23 rows=7500 width=4) (actual time=0.427..0.433 rows=7 loops=1) \r\n Sort Key: orders.o_orderkey \r\n Sort Method: quicksort Memory: 25kB \r\n -> Foreign Scan on orders (cost=0.00..78747.75 rows=7500 width=4) (actual time=0.217..0.364 rows=7 loops=1) \r\n Filter: (o_custkey = 28547) \r\n Foreign Namespace: tpch.orders \r\n -> Materialize (cost=1542660.89..1572666.96 rows=6001215 
width=4) (actual time=64543.249..66281.325 rows=4028971 loops=1) \r\n -> Sort (cost=1542660.89..1557663.92 rows=6001215 width=4) (actual time=64543.234..65404.116 rows=4028971 loops=1) \r\n Sort Key: l.l_orderkey \r\n Sort Method: external sort Disk: 82136kB \r\n -> Foreign Scan on lineitem l (cost=0.00..620867.90 rows=6001215 width=4) (actual time=35.438..50376.884 rows=6001215 loops=1) \r\n Foreign Namespace: tpch.lineitem \r\n Total runtime: 67191.867 ms \r\n(16 rows) \r\n By contrast, the access plan for local table access works great. We can see that it uses a NL join and pushes the join predicate into the inner table, since an index exists (Index Cond: (l_orderkey = t_orders.o_orderkey)): \r\ntpch=# \r\nExplain analyze select count(*) from( select o_orderkey from t_orders where o_custkey=28547 ) AS T, t_lineitem as l where T.o_orderkey=l.l_orderkey; \r\n QUERY PLAN \r\n------------------------------------------------------------------------------------------------------------------ \r\n Aggregate (cost=223.15..223.16 rows=1 width=0) (actual time=255.385..255.385 rows=1 loops=1) \r\n -> Nested Loop (cost=4.99..222.98 rows=68 width=0) (actual time=48.397..255.356 rows=31 loops=1) \r\n -> Bitmap Heap Scan on t_orders (cost=4.56..71.47 rows=17 width=4) (actual time=24.567..54.906 rows=7 loops=1) \r\n Recheck Cond: (o_custkey = 28547) \r\n -> Bitmap Index Scan on fk_t_orders (cost=0.00..4.56 rows=17 width=0) (actual time=24.551..24.551 rows=7 loops=1) \r\n Index Cond: (o_custkey = 28547) \r\n -> Index Only Scan using pk_t_lineitem on t_lineitem l (cost=0.43..8.75 rows=16 width=4) (actual time=28.618..28.624 rows=4 loops=7) \r\n Index Cond: (l_orderkey = t_orders.o_orderkey) \r\n Heap Fetches: 31 \r\n Total runtime: 255.489 ms \r\n(10 rows) \r\n So, the question is, is there any way we can push join predicate into inner table ( we can disable merge join and hash join to get NL Join, but join predicate is not able to push into inner table )?\r\nThanks",
"msg_date": "Mon, 31 Aug 2015 21:51:54 +0800",
"msg_from": "\"=?gb18030?B?sKTM38jL?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "is there any way we can push join predicate into inner table "
},
{
"msg_contents": "\"=?gb18030?B?sKTM38jL?=\" <[email protected]> writes:\n> So, the question is, is there any way we can push join predicate into inner table ( we can disable merge join and hash join to get NL Join, but join predicate is not able to push into inner table )?\n\nYou probably need to turn on use_remote_estimate; postgres_fdw won't\nconsider parameterized paths without that, because it has no way to\ncompare costs.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 10:36:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is there any way we can push join predicate into inner table"
}
] |
[
{
"msg_contents": "I have the following three tables:\n\nDOCUMENT\n id (index)\n documenttype\n date_last_updated: timestamp(6) (indexed)\n\nEXTERNAL_TRANSLATION_UNIT\n id (indexed)\n fk_id_document (indexed)\n\nEXTERNAL_TRANSLATION\n id (indexed)\n fk_id_translation_unit (indexed)\n\nTable sizes:\n DOCUMENT: 381 000\n EXTERNAL_TRANSLATION_UNIT: 76 000 000\n EXTERNAL_TRANSLATION: 76 000 000\n\nNow the following query takes about 36 minutes to finish:\n\n SELECT u.id AS id_external_translation_unit,\n r.id AS id_external_translation, \n u.fk_id_language AS fk_id_source_language,\n r.fk_id_language AS fk_id_target_language,\n doc.fk_id_job\n FROM \"EXTERNAL_TRANSLATION_UNIT\" u\n JOIN \"DOCUMENT\" doc ON u.fk_id_document = doc.id\n JOIN \"EXTERNAL_TRANSLATION\" r ON u.id = r.fk_id_translation_unit\n WHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)\n ORDER BY r.id LIMIT 1000\n\nThis is the query plan:\n \n<http://postgresql.nabble.com/file/n5864045/qp1.png> \n\nIf I remove the WHERE condition, it returns immediately.\n\nAm I doing something obviously wrong?\n\nThank you for any ideas.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Query-1000-slowdown-after-adding-datetime-comparison-tp5864045.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 09:09:11 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Query_>_1000=C3=97_slowdown_after_adding_datetime_comparison?="
},
{
"msg_contents": "\n\nOn 08/31/2015 06:09 PM, twoflower wrote:\n> I have the following three tables:\n...\n> This is the query plan:\n>\n> <http://postgresql.nabble.com/file/n5864045/qp1.png>\n>\n> If I remove the WHERE condition, it returns immediately.\n>\n> Am I doing something obviously wrong?\n\nPlease share explain plans for both the slow and the fast query. That \nmakes it easier to spot the difference, and possibly identify the cause.\n\nAlso, what PostgreSQL version is this, and what are \"basic\" config \nparameters (shared buffers, work mem)?\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 18:19:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query > =?windows-1252?Q?1000=D7_slowdown_afte?=\n =?windows-1252?Q?r_adding_datetime_comparison?="
},
{
"msg_contents": "On Mon, Aug 31, 2015 at 12:09 PM, twoflower <[email protected]> wrote:\n\n> I have the following three tables:\n>\n> DOCUMENT\n> id (index)\n> documenttype\n> date_last_updated: timestamp(6) (indexed)\n>\n> EXTERNAL_TRANSLATION_UNIT\n> id (indexed)\n> fk_id_document (indexed)\n>\n> EXTERNAL_TRANSLATION\n> id (indexed)\n> fk_id_translation_unit (indexed)\n>\n> Table sizes:\n> DOCUMENT: 381 000\n> EXTERNAL_TRANSLATION_UNIT: 76 000 000\n> EXTERNAL_TRANSLATION: 76 000 000\n>\n> Now the following query takes about 36 minutes to finish:\n>\n> SELECT u.id AS id_external_translation_unit,\n> r.id AS id_external_translation,\n> u.fk_id_language AS fk_id_source_language,\n> r.fk_id_language AS fk_id_target_language,\n> doc.fk_id_job\n> FROM \"EXTERNAL_TRANSLATION_UNIT\" u\n> JOIN \"DOCUMENT\" doc ON u.fk_id_document = doc.id\n> JOIN \"EXTERNAL_TRANSLATION\" r ON u.id = r.fk_id_translation_unit\n> WHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)\n> ORDER BY r.id LIMIT 1000\n>\n> This is the query plan:\n>\n> <http://postgresql.nabble.com/file/n5864045/qp1.png>\n>\n> If I remove the WHERE condition, it returns immediately.\n>\n>\nSo does \"SELECT 1;\" - but since that doesn't give the same answer it is\nnot very relevant.\n\n\n> Am I doing something obviously wrong?\n>\n\nNot obviously...\n\n\n> Thank you for any ideas.\n>\n\nConsider updating the translation tables at the same time the document\ntable is updated. That way you can apply the WHERE and ORDER BY clauses\nagainst the same table.\n\nI presume you've run ANALYZE on the data.\n\nI would probably try something like:\n\nWITH docs AS ( SELECT ... WHERE date > ...)\nSELECT ... 
FROM (translations join translation_unit) t\nWHERE EXISTS (SELECT 1 FROM docs WHERE t.doc_id = docs.doc_id)\nORDER BY t.id LIMIT 1000\n\nYou are trying to avoid the NESTED LOOP and the above has a decent chance\nof materializing docs and then building either a bit or hash map for both\ndocs and translations thus performing a single sequential scan over both\ninstead of performing 70+ million index lookups.\n\nTake this with a grain of salt as my fluency in this area is limited - I\ntend to work with trial-and-error but without data that is difficult.\n\nI'm not sure if the planner could be smarter because you are asking a\nquestion it is not particularly suited to estimating - namely cross-table\ncorrelations. Rethinking the model is likely to give you a better outcome\nlong-term though it does seem like there should be room for improvement\nwithin the stated query and model.\n\nAs Tomas said you likely will benefit from increased working memory in\norder to make materializing and hashing/bitmapping favorable compared to a\nnested loop.\n\nDavid J.",
"msg_date": "Mon, 31 Aug 2015 12:35:03 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Query_=3E_1000=C3=97_slowdown_after_adding_d?=\n =?UTF-8?Q?atetime_comparison?="
},
{
"msg_contents": "Tomas Vondra-4 wrote\n> Please share explain plans for both the slow and the fast query. That \n> makes it easier to spot the difference, and possibly identify the cause.\n> \n> Also, what PostgreSQL version is this, and what are \"basic\" config \n> parameters (shared buffers, work mem)?\n\nI am running 9.4.4, here are the basic config parameters:\n\nwork_mem = 32 MB\nshared_buffers = 8196 MB\ntemp_buffers = 8 MB\neffective_cache_size = 4 GB\n\nI have run ANALYZE on all tables prior to running the queries. The query\nplan for the fast version (without the WHERE clause) follows:\n\n<http://postgresql.nabble.com/file/n5864075/qp2.png> \n\nWhat I don't understand is the difference between the inner NESTED LOOP\nbetween the slow and the fast query plan. In the fast one, both index scans\nhave 1000 as the actual row count. I would expect that, given the LIMIT\nclause. The slow query plan, however, shows ~ 75 000 000 as the actual row\ncount. Is the extra WHERE condition the only and *plausible* explanation for\nthis difference?\n\n\nDavid G. Johnston wrote\n> I would probably try something like:\n> \n> WITH docs AS ( SELECT ... WHERE date > ...)\n> SELECT ... FROM (translations join translation_unit) t\n> WHERE EXISTS (SELECT 1 FROM docs WHERE t.doc_id = docs.doc_id)\n> ORDER BY t.id LIMIT 1000\n\nDavid, I tried this and it is probably as slow as the original query. It did\nnot finish in 5 minutes anyway.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864075.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 12:03:50 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:_Query_>_1000=C3=97_slowdown_af?=\n =?UTF-8?Q?ter_adding_datetime_comparison?="
},
{
"msg_contents": "And another thing which comes out as a little surprising to me - if I replace\nthe *date_last_updated* condition with another one, say *doc.documenttype =\n4*, the query finishes immediately. *documenttype* is an unindexed integer\ncolumn.\n\nHere's the query plan:\n\n<http://postgresql.nabble.com/file/n5864080/qp3.png> \n\nWhat's so special about that *date_last_updated* condition that makes it so\nslow to use? Is it because it involves the *date()* function call that it\nmakes it difficult for the planner to guess the data distribution in the\nDOCUMENT table?\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864080.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 12:19:23 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:_Query_>_1000=C3=97_slowdown_af?=\n =?UTF-8?Q?ter_adding_datetime_comparison?="
},
{
"msg_contents": "On Mon, Aug 31, 2015 at 3:03 PM, twoflower <[email protected]> wrote:\n\n> Tomas Vondra-4 wrote\n> > Please share explain plans for both the slow and the fast query. That\n> > makes it easier to spot the difference, and possibly identify the cause.\n> >\n> > Also, what PostgreSQL version is this, and what are \"basic\" config\n> > parameters (shared buffers, work mem)?\n>\n> I am running 9.4.4, here are the basic config parameters:\n>\n> work_mem = 32 MB\n> shared_buffers = 8196 MB\n> temp_buffers = 8 MB\n> effective_cache_size = 4 GB\n>\n> I have run ANALYZE on all tables prior to running the queries. The query\n> plan for the fast version (without the WHERE clause) follows:\n>\n> <http://postgresql.nabble.com/file/n5864075/qp2.png>\n>\n> What I don't understand is the difference between the inner NESTED LOOP\n> between the slow and the fast query plan. In the fast one, both index scans\n> have 1000 as the actual row count. I would expect that, given the LIMIT\n> clause. The slow query plan, however, shows ~ 75 000 000 as the actual row\n> count. Is the extra WHERE condition the only and *plausible* explanation\n> for\n> this difference?\n>\n>\nIn the slow query it requires evaluating every single document to\ndetermine which of the 75 million translations can be discarded; after\nwhich the first 1000 when sorted by translation id are returned.\n\nIn the first query the executor simply scans the translation index in\nascending order and stops after retrieving the first 1,000.\n\nWhat you are expecting, I think, is for that same process to continue\nbeyond 1,000 should any of the first 1,000 be discarded due to the\ncorresponding document not being updated recently enough, until 1,000\ntranslations are identified. 
I'm not sure why the nested loop executor is\nnot intelligent enough to do this...\n\nThe important number in these plans is \"loops\", not \"rows\"\n\nDavid J.",
"msg_date": "Mon, 31 Aug 2015 15:22:53 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_Query_=3E_1000=C3=97_slowdown_after_addi?=\n =?UTF-8?Q?ng_datetime_comparison?="
},
{
"msg_contents": "On Mon, Aug 31, 2015 at 3:19 PM, twoflower <[email protected]> wrote:\n\n> And another thing which comes out as a little surprising to me - if I\n> replace\n> the *date_last_updated* condition with another one, say *doc.documenttype =\n> 4*, the query finishes immediately. *documenttype* is an unindexed integer\n> column.\n>\n>\nThe only index that matters here is the pkey on document. The problem is\nthe failure to exit the nested loop once 1,000 translations have been\ngathered. Translation is related to document via key - hence the nested\nloop. A hashing-based plan would make use of the secondary indexes but\nlikely would not be particularly useful in this query (contrary to my\nearlier speculation).\n\nHere's the query plan:\n>\n> <http://postgresql.nabble.com/file/n5864080/qp3.png>\n>\n> What's so special about that *date_last_updated* condition that makes it so\n> slow to use? Is it because it involves the *date()* function call that it\n> makes it difficult for the planner to guess the data distribution in the\n> DOCUMENT table?\n>\n\nWhat happens if you pre-compute the date condition and hard code it?\n\n\nDavid J.",
"msg_date": "Mon, 31 Aug 2015 15:29:46 -0400",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_Query_=3E_1000=C3=97_slowdown_after_addi?=\n =?UTF-8?Q?ng_datetime_comparison?="
},
{
"msg_contents": "David G Johnston wrote\n> What happens if you pre-compute the date condition and hard code it?\n\nI created a new boolean column and filled it for every row in DOCUMENT with\n*(doc.date_last_updated >= date(now() - '171:00:00'::interval))*, reanalyzed\nthe table and modified the query to just compare this column to TRUE. I\nexpected this to be very fast, considering that a (to me, anyway) similar\nquery also containing a constant value comparison finishes immediately.\nHowever, the query is running now for 4 minutes already. That's really\ninteresting.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864088.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 12:46:56 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:__Re:_Query_>_1000=C3=97_slow?=\n =?UTF-8?Q?down_after_adding_datetime_comparison?="
},
{
"msg_contents": "2015-08-31 21:46 GMT+02:00 twoflower <[email protected]> wrote:\n> I created a new boolean column and filled it for every row in DOCUMENT with\n> *(doc.date_last_updated >= date(now() - '171:00:00'::interval))*, reanalyzed\n> ...\n\n... and you've put an index on that new boolean column (say \"updated\")?\nCREATE INDEX index_name ON some_table (boolean_field);\nor tried a conditional index like\nCREATE INDEX index_name ON some_table (some_field) WHERE boolean_field;\n\n-S.\n\n\n2015-08-31 21:46 GMT+02:00 twoflower <[email protected]>:\n> David G Johnston wrote\n>> What happens if you pre-compute the date condition and hard code it?\n>\n> I created a new boolean column and filled it for every row in DOCUMENT with\n> *(doc.date_last_updated >= date(now() - '171:00:00'::interval))*, reanalyzed\n> the table and modified the query to just compare this column to TRUE. I\n> expected this to be very fast, considering that a (to me, anyway) similar\n> query also containing a constant value comparison finishes immediately.\n> However, the query is running now for 4 minutes already. That's really\n> interesting.\n>\n>\n>\n> --\n> View this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864088.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 22:24:35 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_Query_=3E_1000=C3=97_slowdown_after_addi?=\n =?UTF-8?Q?ng_datetime_comparison?="
},
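{
"msg_contents": "[Editor's note] Stefan's suggestion in the previous message can be sketched in SQL. This is a sketch only: the table and column names (\"DOCUMENT\", temp_eval, id) come from the thread, while the index names are illustrative assumptions.\n\n```sql\n-- Plain index on the boolean column (rarely selective on its own):\nCREATE INDEX document_temp_eval_idx ON \"DOCUMENT\" (temp_eval);\n\n-- Conditional (partial) index: only rows WHERE temp_eval is true are\n-- indexed, so the index stays small and matches a \"WHERE temp_eval\"\n-- clause directly:\nCREATE INDEX document_temp_eval_partial_idx ON \"DOCUMENT\" (id)\n    WHERE temp_eval;\n```\n\nA query filtering on WHERE temp_eval and ordering by id could then, in principle, be answered from the small partial index alone.",
"msg_from_op": false,
"msg_subject": "[editor sketch] conditional index suggestion"
},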
{
"msg_contents": "I did not. I wanted to compare this query to the one I tried before, having\n*documenttype = 4* as the sole condition. That one was very fast and the\n*documenttype* was not indexed either. \n\nBut this query, using the new temporary column, still runs, after 48\nminutes...\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864101.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 13:30:41 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:__Re:_Query_>_1000=C3=97_slow?=\n =?UTF-8?Q?down_after_adding_datetime_comparison?="
},
{
"msg_contents": "So, if I'm understanding you correctly, we're talking solely about\nfollowing clause in the query you gave initially:\n\nWHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)\nwhich initially was\nWHERE documenttype = 4\nand now is being replaced by a temporary (I'd say derived) column\nWHERE updated\n?\n\nIn any case - I have to go - but run http://explain.depesz.com/ and\ngive a weblink to the explain plans of your queries.\n\n-S.\n\n\n2015-08-31 22:30 GMT+02:00 twoflower <[email protected]>:\n> I did not. I wanted to compare this query to the one I tried before, having\n> *documenttype = 4* as the sole condition. That one was very fast and the\n> *documenttype* was not indexed either.\n>\n> But this query, using the new temporary column, still runs, after 48\n> minutes...\n>\n>\n>\n> --\n> View this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864101.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 31 Aug 2015 23:00:28 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_Query_=3E_1000=C3=97_slowdown_after_addi?=\n =?UTF-8?Q?ng_datetime_comparison?="
},
{
"msg_contents": "Stefan Keller wrote\n> So, if I'm understanding you correctly, we're talking solely about\n> following clause in the query you gave initially:\n> \n> WHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)\n> which initially was\n> WHERE documenttype = 4\n> and now is being replaced by a temporary (I'd say derived) column\n> WHERE updated\n> ?\n> \n> In any case - I have to go - but run http://explain.depesz.com/ and\n> give a weblink to the explain plans of your queries.\n\nHere's the query plan for the original query I struggled with: WHERE\ndoc.date_last_updated >= date(now() - '171:00:00'::interval)\n\nThe original slow query with date()\n<http://explain.depesz.com/d/WEy/SFfcd5esMuuI9wrLmQrZU33ma371uW2Nlh1PmeI9ZDRiAJ4wjv> \n\nThen, just out of curiosity, I replaced the WHERE condition with another\none: WHERE doc.documenttype = 4. documenttype is an unindexed integer\ncolumn. It's fast. Here's the query plan:\n\ndate() condition replaced with documenttype = 4\n<http://explain.depesz.com/d/kHG/u1dvsDyJPBK92xnKwNT3YkQsNW6WfG9aHmWAzVSOgmXikL3hiW> \n\nThen, as David suggested, I tried to precompute the doc.date_last_updated >=\ndate(now() - '171:00:00'::interval). I created a new boolean temp_eval\ncolumn on the DOCUMENT table and updated the rows like this:\n\nupdate \"DOCUMENT\" set temp_eval = date_last_updated >= date(now() -\n'171:00:00'::interval);\n\nThen I analyzed the table and modified the query to only have WHERE\ntemp_eval = TRUE. I expected this to be fast, but it took 41 minutes to\ncomplete. 
Here's the query plan:\n\nPrecomputed column\n<http://explain.depesz.com/d/2aFw/GLqQ8CMGDZOYx5134M8eallFluervKXDfOdnaz4nZTMXYNacji> \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864164.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Sep 2015 00:53:45 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:__Re:_Query_>_1000=C3=97_slow?=\n =?UTF-8?Q?down_after_adding_datetime_comparison?="
},
{
"msg_contents": "I think you should try putting the precomputed boolean temp_eval column\non the \"EXTERNAL_TRANSLATION\" r table.\n\nAnd if possible, try creating a conditional index on id, where temp_eval is\ntrue, on the \"EXTERNAL_TRANSLATION\" r table.\n\nThat way, checking only this index is enough to get the top 1000 records.",
"msg_date": "Tue, 1 Sep 2015 18:45:00 +0900",
"msg_from": "=?UTF-8?B?5p6X5aOr5Y2a?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_Query_=3E_1000=C3=97_slowdown_after_addi?=\n =?UTF-8?Q?ng_datetime_comparison?="
},
{
"msg_contents": "林士博 wrote\n> I think you should try putting the precomputed boolean temp_eval column\n> to \"EXTERNAL_TRANSLATION\" r table.\n> \n> And if possible, try creating a conditional index on id where temp_eval is\n> true,\n> on \"EXTERNAL_TRANSLATION\" r table.\n> \n> So that, only check this index can get the top 1000 records.\n\nI agree that might help. But I would still like to understand what's the\nreason for the difference between the second and the third query. Both contain\na simple <column> = <constant> expression, yet one finishes immediately and\none runs for 41 minutes.\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864173.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Sep 2015 02:51:50 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:__Re:_Query_>_1000=C3=97_slow?=\n =?UTF-8?Q?down_after_adding_datetime_comparison?="
},
{
"msg_contents": "It depends on the values in your table.\nIt seems that the documenttype of all the records with the smallest 1000\nids is 4.\nSo, the query ends after doing the nested loop 1000 times.\n\n\n\n2015-09-01 18:51 GMT+09:00 twoflower <[email protected]>:\n\n> 林士博 wrote\n> > I think you should try putting the precomputed boolean temp_eval column\n> > to \"EXTERNAL_TRANSLATION\" r table.\n> >\n> > And if possible, try creating a conditional index on id where temp_eval\n> is\n> > true,\n> > on \"EXTERNAL_TRANSLATION\" r table.\n> >\n> > So that, only check this index can get the top 1000 records.\n>\n> I agree that might help. But I would still like to understand what's the\n> reason for difference between the second and the third query. Both contain\n> a\n> simple <column> = <constant> expression, yet one finishes immediately and\n> one runs for 41 minutes.\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Query-1-000-000-slowdown-after-adding-datetime-comparison-tp5864045p5864173.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 1 Sep 2015 19:15:21 +0900",
"msg_from": "=?UTF-8?B?5p6X5aOr5Y2a?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_Query_=3E_1000=C3=97_slowdown_after_addi?=\n =?UTF-8?Q?ng_datetime_comparison?="
},
{
"msg_contents": "By the way, if you set the documenttype to a nonexistent value,\nthe query would check all the records, and it would run slower than the\noriginal one.",
"msg_date": "Tue, 1 Sep 2015 19:25:31 +0900",
"msg_from": "=?UTF-8?B?5p6X5aOr5Y2a?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_Query_=3E_1000=C3=97_slowdown_after_addi?=\n =?UTF-8?Q?ng_datetime_comparison?="
}
] |
[
{
"msg_contents": "Dear all,\n\nI have server as below:\n- OS Debian 7.0 (Wheezy)\n- CPU 24 cores\n- RAM 128GB\n- SSD 128GB\n- 1 Gigabit Ethernet\n\nI need to tune the Postgres with hardware specification above for my\nMessaging/Messenger/Chatting Application that will run separately in\nanother server.\n\nCan you give me the recommended configuration (postgresql.conf ?) for\nmy requirement above?\n\n\nBig thanks in advance,\nFattahRozzaq\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 2 Sep 2015 19:16:23 +0700",
"msg_from": "FattahRozzaq <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Tuning for Messaging/Messenger/Chatting Application"
},
{
"msg_contents": "From the server configuration and application purpose alone we cannot\nderive a postgresql.conf; the application's behavior needs to be considered\n(mandatory).\n\nBut we can give some rough rules of thumb:\n\n1) shared_buffers can be 7-8% of database size, under the condition of\nhaving proper indexes whose size is not greater than 10-15% of the\nrespective table size\n\n2) work_mem can be 8MB if database size is under 100-120GB, with point #1\n\n3) maintenance_work_mem can be 512MB if database size is under 100-120GB\nand the largest table is under 0-2GB\n\n4) checkpoint_segments = 16\n\n5) all scan parameters \"on\"\n\n6) autovacuum = on ##### if transactions on tables are under 0-30% per\nday\n autovacuum_vacuum_scale_factor = 0.2\n autovacuum_analyze_scale_factor = 0.1\n\n7) effective_cache_size = 64-96GB\n\nNote: 1) may need changes based on application behavior, especially if\nsorting, full-table scans and transaction volume are very high.\n 2) regular maintenance is required to control database size and\nperformance\n 3) only some parameters are defined here\n\n\n\nOn Wed, Sep 2, 2015 at 5:46 PM, FattahRozzaq <[email protected]> wrote:\n\n> Dear all,\n>\n> I have server as below:\n> - OS Debian 7.0 (Wheezy)\n> - CPU 24 cores\n> - RAM 128GB\n> - SSD 128GB\n> - 1 Gigabit Ethernet\n>\n> I need to tune the Postgres with hardware specification above for my\n> Messaging/Messenger/Chatting Application that will run separately in\n> another server.\n>\n> Can you give me the recommended configuration (postgresql.conf ?) for\n> my requirement above?\n>\n>\n> Big thanks in advance,\n> FattahRozzaq\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Thu, 3 Sep 2015 19:10:11 +0530",
"msg_from": "Sridhar N Bamandlapally <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Tuning for Messaging/Messenger/Chatting Application"
}
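,
{
"msg_contents": "[Editor's note] The rule-of-thumb values in the previous message can be sketched as ALTER SYSTEM statements. This is a sketch under the thread's assumptions (a roughly 100GB database on a 128GB-RAM machine); the values are illustrative and workload-dependent, ALTER SYSTEM requires PostgreSQL 9.4 or later, and checkpoint_segments exists only up to 9.4 (later versions use max_wal_size).\n\n```sql\nALTER SYSTEM SET work_mem = '8MB';\nALTER SYSTEM SET maintenance_work_mem = '512MB';\nALTER SYSTEM SET checkpoint_segments = 16;              -- pre-9.5 only\nALTER SYSTEM SET effective_cache_size = '96GB';\nALTER SYSTEM SET autovacuum = on;\nALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.2;\nALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.1;\n-- shared_buffers (~7-8% of database size) takes effect only after a\n-- full server restart, not a reload:\nALTER SYSTEM SET shared_buffers = '8GB';\nSELECT pg_reload_conf();\n```",
"msg_from_op": false,
"msg_subject": "[editor sketch] tuning values as ALTER SYSTEM statements"
}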
] |
[
{
"msg_contents": "Hi,\nI have a table for addresses:\n\nCREATE TABLE adressen.adresse\n(\n pool_pk integer NOT NULL,\n adressnr_pk integer NOT NULL,\n anrede varchar(8),\n vorname varchar(50) DEFAULT ''::character varying NOT NULL,\n name1 varchar(100) NOT NULL,\n name2 varchar(80) DEFAULT ''::character varying NOT NULL,\n name3 varchar(80) DEFAULT ''::character varying NOT NULL,\n strasse varchar(80) DEFAULT ''::character varying NOT NULL,\n plz varchar(8) DEFAULT ''::character varying NOT NULL,\n ort varchar(80) DEFAULT ''::character varying NOT NULL,\n changed timestamptz,\n id2 integer\n);\n\nThe primary key is on 'pool_pk' and 'adressnr_pk' and amongst some other \nindexes there is an index\n\nCREATE INDEX trgm_adresse ON adressen.adresse USING gist \n(normalize_string((btrim((((((((normalize_string((((COALESCE((vorname)::text, \n''::text) || ' '::text) || (name1)::text))::character varying, \n(-1)))::text || ' '::text) || \n(normalize_string((COALESCE((strasse)::text, ''::text))::character \nvarying, (-2)))::text) || ' '::text) || (plz)::text) || ' '::text) || \n(normalize_string((COALESCE((ort)::text, ''::text))::character varying, \n(-3)))::text)))::character varying) gist_trgm_ops);\n\nWhen I try to retrieve some addresses similar to a new address, I use \nthe following query\n\nSELECT pool_pk AS pool, adressnr_pk AS adrnr, vorname, name1,\n strasse, plz, ort, ratio_ld_adresse($1, $2, $3, $4, name1,\n strasse, plz, ort)::double precision AS ratio\nFROM adressen.adresse\nWHERE normalize_string(trim(normalize_string(coalesce(vorname::text, '')\n || ' ' || name1::text, -1) || ' ' ||\n normalize_string(coalesce(strasse::text, ''), -2) || ' ' || \nplz::text ||\n ' ' || normalize_string(coalesce(ort::text, ''), -3))) %\n normalize_string(trim(normalize_string($1::text) || ' ' ||\n normalize_string(coalesce($2::text, ''), -2) || ' ' || $3::text ||\n ' ' || normalize_string(coalesce($4::text, ''), -3)))\n ORDER BY 1, 8 DESC, 2;\n\nwhich means: take the normalized 
(lower case, no accents ...) parts of \nthe address, concatenate them to\n <name> <street> <zip> <city>\nand search for a similar address in the existing addresses. The \ndescribed index 'trgm_adresse' is built on the same expression.\n\nThe function 'normalize_string' is written in plpythonu and doesn't use \nany database calls. The same goes for the function 'ratio_ld_adresse', which \ncalculates the Levenshtein distance for two entire addresses.\n\nMost of the time everything works fine and one search (in about 500,000 \naddresses) lasts about 2 to 5 seconds. But sometimes the search takes 50 \nor even 300 seconds.\n\nWe have two machines which have the same software installation (Ubuntu \n14.10 server, Postgres 9.4.4); the production server has 4 GB memory and \n2 processors and the test server 2 GB memory and 1 processor. Both are \nvirtual machines on two different ESX-servers.\n\nOn both machines postgres was installed out of the box (apt-get ...) \nwithout configuration modifications (except network interfaces in \npg_hba.conf). 
The test machine is a little bit slower but the very slow \nsearches occur much more (~ 35 %) than on the production machine (~10 %).\n\nAn explain tells\n\nQUERY PLAN\nSort (cost=2227.31..2228.53 rows=489 width=65)\n Sort Key: pool_pk, ((ratio_ld_adresse('Test 2'::character varying, \n'Am Hang 12'::character varying, '12345'::character varying, \n'Irgendwo'::character varying, name1, strasse, plz, ort))::double \nprecision), adressnr_pk\n -> Index Scan using trgm_adresse on adresse (cost=1.43..2205.46 \nrows=489 width=65)\n Index Cond: \n((normalize_string((btrim((((((((normalize_string((((COALESCE((vorname)::text, \n''::text) || ' '::text) || (name1)::text))::character varying, \n(-1)))::text || ' '::text) || \n(normalize_string((COALESCE((strasse)::text, ''::text))::character \nvarying, (-2)))::text) || ' '::text) || (plz)::text) || ' '::text) || \n(normalize_string((COALESCE((ort)::text, ''::text))::character varying, \n(-3)))::text)))::character varying, 0))::text % 'test 2 am hang 12 12345 \nirgendwo'::text)\n\nwhich shows that the index is used and the result should arrive within \nsome seconds (2228 is not very expensive).\n\nWhen such a slow query is running, 'top' shows nearly '100 % wait' and \n'iotop' shows 3 - 8 MB/sec disk read by a process\n postgres: vb vb 10.128.96.25(60435) FETCH\n\nAlso the postgres log, which was told to log every task longer than 5000 \nms, shows\n\n 2015-09-02 13:44:48 CEST [25237-1] vb@vb LOG: duration: 55817.191 \nms execute <unnamed>: FETCH FORWARD 4096 IN \"py:0xa2d61f6c\"\n\nSince I never used a FETCH command in my life, this must be used by \npg_trgm or something inside it (gin, gist etc.)\n\nIf during a slow Query one (or several) more instance(s) of the same \nquery are started, all of them hang and return at the _same second_ some \nminutes later. 
Even if the other queries are on different addresses.\n\n2015-08-31 09:09:00 GMT LOG: duration: 98630.958 ms execute <unnamed>: \nFETCH FORWARD 4096 IN \"py:0x7fb780a07e10\"\n2015-08-31 09:09:00 GMT LOG: duration: 266887.136 ms execute \n<unnamed>: FETCH FORWARD 4096 IN \"py:0x7fb780a95dd8\"\n2015-08-31 09:09:00 GMT LOG: duration: 170311.627 ms execute \n<unnamed>: FETCH FORWARD 4096 IN \"py:0x7fb780a77e10\"\n2015-08-31 09:09:00 GMT LOG: duration: 72614.474 ms execute <unnamed>: \nFETCH FORWARD 4096 IN \"py:0x7fb780a0ce10\"\n2015-08-31 09:09:00 GMT LOG: duration: 78561.131 ms execute <unnamed>: \nFETCH FORWARD 4096 IN \"py:0x7fb780a08da0\"\n2015-08-31 09:09:00 GMT LOG: duration: 182392.792 ms execute \n<unnamed>: FETCH FORWARD 4096 IN \"py:0x7fb78170c2b0\"\n2015-08-31 09:09:00 GMT LOG: duration: 245632.530 ms execute \n<unnamed>: FETCH FORWARD 4096 IN \"py:0x7fb7809ddc50\"\n2015-08-31 09:09:00 GMT LOG: duration: 84760.400 ms execute <unnamed>: \nFETCH FORWARD 4096 IN \"py:0x7fb7809f7dd8\"\n2015-08-31 09:09:00 GMT LOG: duration: 176402.352 ms execute \n<unnamed>: FETCH FORWARD 4096 IN \"py:0x7fb7809fc668\"\n\nDoes anyone have an idea, how to solve this problem?\n\nregards Volker\n\n-- \nVolker Böhm Tel.: +49 4141 981155 www.vboehm.de\nVoßkuhl 5 Fax: +49 4141 981154\n21682 Stade mailto:[email protected]\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 02 Sep 2015 16:00:47 +0200",
"msg_from": "=?UTF-8?B?Vm9sa2VyIELDtmht?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "query with pg_trgm sometimes very slow"
},
{
"msg_contents": "On Wed, Sep 2, 2015 at 7:00 AM, Volker Böhm <[email protected]> wrote:\n\n>\n>\n> CREATE INDEX trgm_adresse ON adressen.adresse USING gist\n> (normalize_string((btrim((((((((normalize_string((((COALESCE((vorname)::text,\n> ''::text) || ' '::text) || (name1)::text))::character varying,\n> (-1)))::text || ' '::text) || (normalize_string((COALESCE((strasse)::text,\n> ''::text))::character varying, (-2)))::text) || ' '::text) || (plz)::text)\n> || ' '::text) || (normalize_string((COALESCE((ort)::text,\n> ''::text))::character varying, (-3)))::text)))::character varying)\n> gist_trgm_ops);\n>\n\n\nYou might have better luck with gin_trgm_ops than gist_trgm_ops. Have you\ntried that?\n\n...\n\n\n> When such a slow query is running, 'top' shows nearly '100 % wait' and\n> 'iotop' shows 3 - 8 MB/sec disk read by a process\n> postgres: vb vb 10.128.96.25(60435) FETCH\n>\n> Also the postgres log, which was told to log every task longer than 5000\n> ms, shows\n>\n> 2015-09-02 13:44:48 CEST [25237-1] vb@vb LOG: duration: 55817.191\n> ms execute <unnamed>: FETCH FORWARD 4096 IN \"py:0xa2d61f6c\"\n>\n> Since I never used a FETCH command in my life, this must be used by\n> pg_trgm or something inside it (gin, gist etc.)\n>\n\n\nThe FETCH is probably being automatically added by whatever python library\nyou are using to talk to PostgreSQL. Are you using a named cursor in\npython? In any event, that is not the cause of the problem.\n\nCan you get the result of the indexed expression for a query that is slow?",
"msg_date": "Wed, 2 Sep 2015 12:29:07 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query with pg_trgm sometimes very slow"
},
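{
"msg_contents": "[Editor's note] Jeff's suggestion in the previous message amounts to rebuilding the expression index with the GIN trigram operator class. A minimal sketch follows; normalized_address is a placeholder standing in for the full normalize_string(...) expression used by the original index, and the index name is an illustrative assumption.\n\n```sql\n-- GIN often answers % (trigram similarity) searches much faster than\n-- GiST, at the cost of slower index builds and updates.\nCREATE EXTENSION IF NOT EXISTS pg_trgm;\n\nCREATE INDEX trgm_adresse_gin ON adressen.adresse\n    USING gin (normalized_address gin_trgm_ops);\n```",
"msg_from_op": false,
"msg_subject": "[editor sketch] GIN trigram index"
},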
{
"msg_contents": "On Wed, Sep 2, 2015 at 4:29 PM, Jeff Janes <[email protected]> wrote:\n> On Wed, Sep 2, 2015 at 7:00 AM, Volker Böhm <[email protected]> wrote:\n>>\n>>\n>>\n>> CREATE INDEX trgm_adresse ON adressen.adresse USING gist\n>> (normalize_string((btrim((((((((normalize_string((((COALESCE((vorname)::text,\n>> ''::text) || ' '::text) || (name1)::text))::character varying,\n>> (-1)))::text || ' '::text) || (normalize_string((COALESCE((strasse)::text,\n>> ''::text))::character varying, (-2)))::text) || ' '::text) || (plz)::text)\n>> || ' '::text) || (normalize_string((COALESCE((ort)::text,\n>> ''::text))::character varying, (-3)))::text)))::character varying)\n>> gist_trgm_ops);\n>\n>\n>\n> You might have better luck with gin_trgm_ops than gist_trgm_ops. Have you\n> tried that?\n\n\nI just had the exact same problem, and indeed gin fares much better.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 3 Sep 2015 20:19:40 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query with pg_trgm sometimes very slow"
},
{
"msg_contents": "On Thu, Sep 3, 2015 at 6:19 PM, Claudio Freire <[email protected]> wrote:\n> On Wed, Sep 2, 2015 at 4:29 PM, Jeff Janes <[email protected]> wrote:\n>> On Wed, Sep 2, 2015 at 7:00 AM, Volker Böhm <[email protected]> wrote:\n>>>\n>>>\n>>>\n>>> CREATE INDEX trgm_adresse ON adressen.adresse USING gist\n>>> (normalize_string((btrim((((((((normalize_string((((COALESCE((vorname)::text,\n>>> ''::text) || ' '::text) || (name1)::text))::character varying,\n>>> (-1)))::text || ' '::text) || (normalize_string((COALESCE((strasse)::text,\n>>> ''::text))::character varying, (-2)))::text) || ' '::text) || (plz)::text)\n>>> || ' '::text) || (normalize_string((COALESCE((ort)::text,\n>>> ''::text))::character varying, (-3)))::text)))::character varying)\n>>> gist_trgm_ops);\n>>\n>>\n>>\n>> You might have better luck with gin_trgm_ops than gist_trgm_ops. Have you\n>> tried that?\n>\n>\n> I just had the exact same problem, and indeed gin fares much better.\n\nAlso, with 9.5 we will see much better worst case performance from gin\nvia Jeff's patch:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=97f3014647a5bd570032abd2b809d3233003f13f\n\n(I had to previously abandon pg_trgm for a previous project and go\nwith solr; had this patch been in place that would not have happened)\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 8 Sep 2015 14:15:43 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query with pg_trgm sometimes very slow"
},
{
"msg_contents": "\n\nOn 09/08/2015 09:15 PM, Merlin Moncure wrote:\n...\n>> I just had the exact same problem, and indeed gin fares much better.\n>\n> Also, with 9.5 we will see much better worst case performance from gin\n> via Jeff's patch:\n> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=97f3014647a5bd570032abd2b809d3233003f13f\n>\n> (I had to previously abandon pg_tgrm for a previous project and go\n> with solr; had this patch been in place that would not have happened)\n\nExcept that pg_trgm-1.2 is not included in 9.5, because it was committed \nin July (i.e. long after 9.5 was branched).\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 08 Sep 2015 23:21:21 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query with pg_trgm sometimes very slow"
},
{
"msg_contents": "On Tue, Sep 8, 2015 at 4:21 PM, Tomas Vondra\n<[email protected]> wrote:\n> On 09/08/2015 09:15 PM, Merlin Moncure wrote:\n> ...\n>>>\n>>> I just had the exact same problem, and indeed gin fares much better.\n>>\n>>\n>> Also, with 9.5 we will see much better worst case performance from gin\n>> via Jeff's patch:\n>>\n>> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=97f3014647a5bd570032abd2b809d3233003f13f\n>>\n>> (I had to previously abandon pg_tgrm for a previous project and go\n>> with solr; had this patch been in place that would not have happened)\n>\n> Except that pg_tgrm-1.2 is not included in 9.5, because it was committed in\n> July (i.e. long after 9.5 was branched).\n\noops! thinko. it shouldn't be too difficult to back patch though if\nyou're so inclined?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 8 Sep 2015 16:24:24 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query with pg_trgm sometimes very slow"
}
] |
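For reference, switching a trigram index from GiST to GIN as discussed in the thread above looks like the following sketch (the table and column names here are hypothetical; the `gin_trgm_ops`/`gist_trgm_ops` operator classes are provided by the pg_trgm extension):

```sql
-- pg_trgm must be installed in the database first.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- GiST variant (the one the thread reports as sometimes very slow):
-- CREATE INDEX docs_body_trgm_gist ON docs USING gist (body gist_trgm_ops);

-- GIN variant, which the posters found fares much better:
CREATE INDEX docs_body_trgm_gin ON docs USING gin (body gin_trgm_ops);

-- Either index can then serve trigram-accelerated queries such as:
-- SELECT * FROM docs WHERE body LIKE '%needle%';
-- SELECT * FROM docs WHERE body % 'fuzzy match';
```

GIN trades slower index builds and updates for faster lookups, which is usually the right trade for mostly-read text search workloads.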
[
{
"msg_contents": "Hi, \nWe have a table which consists of 3 millions of records and when we try to\ndelete them and run VACUUM VERBOSE ANALYZE on it in production environment ,\nit takes 6/7 hours to process.\n\nour understanding is that ,there are other processes running in the prod\nenvironment which may acquire lock on this table and due to this vacuum\nkeeps waiting continuously for getting lock and due to this it takes long\ntime to finish .\n\ncan any body suggest me a solution by which we can reduce the time to 1 to 2\nhours ?\n\n\nplease reply me on this with your suggestions .\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/VACUUM-VERBOSE-ANALYZE-taking-long-time-to-process-tp5865303.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 9 Sep 2015 03:43:01 -0700 (MST)",
"msg_from": "anil7385 <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM VERBOSE ANALYZE taking long time to process."
},
{
"msg_contents": "On Sep 9, 2015, at 3:43 AM, anil7385 <[email protected]> wrote:\n> \n> Hi, \n> We have a table which consists of 3 millions of records and when we try to\n> delete them and run VACUUM VERBOSE ANALYZE on it in production environment ,\n> it takes 6/7 hours to process.\n\nYou make it sound like you are deleting all records. If that's true, why not TRUNCATE? It'll be a lot faster, and also leave you with no need to vacuum.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 15 Sep 2015 16:52:10 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM VERBOSE ANALYZE taking long time to process."
}
] |
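The TRUNCATE suggestion in the thread above can be sketched as follows (the table name `big_table` is hypothetical; this applies only when *all* rows are being removed):

```sql
-- Slow path: DELETE marks every row dead, and a subsequent VACUUM must
-- scan the whole table to reclaim that space.
-- DELETE FROM big_table;
-- VACUUM VERBOSE ANALYZE big_table;

-- Fast path: TRUNCATE swaps in an empty relation file, so there is
-- nothing left for VACUUM to reclaim afterwards. Note it takes an
-- ACCESS EXCLUSIVE lock, so it blocks (and is blocked by) concurrent
-- sessions using the table, and it cannot target a subset of rows.
TRUNCATE TABLE big_table;
ANALYZE big_table;  -- refresh planner statistics for the now-empty table
```

If only part of the table is deleted, TRUNCATE does not apply and the DELETE + VACUUM route (or partitioning, so old data can be dropped a partition at a time) is the usual alternative.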
[
{
"msg_contents": "Hello PGSQL performance community,\nYou might remember me from these past postings:\n\n* In 2012, we announced that the TPC was using PostgreSQL in a benchmark for virtualized databases. Unlike the traditional TPC Enterprise benchmarks, which are defined in paper specifications, this Express benchmark comes with a complete end-to-end benchmarking kit, which at this point in time, runs on Linux VMs (we have tested it with PGSQL 9.3 on RHEL 7.1) on the X86 architecture.\nOn that occasion, I was looking for a PGSQL feature similar to MS SQL Server's \"clustered indexes\" or Oracle's \"index-only-tables\". (I was told about \"index-only scans\" which helped but not as much as clustered indexes or IOT)\n\n* In 2013, I asked about a performance hit during checkpoints, which was quickly diagnosed as the well-known dirty_background_bytes problem\n\n* Last year, I asked about the high rate of transaction aborts due to serialization failures. It turned out that even dbt5, which implements TPC-E, was running into the same issue, and simply resubmitting the transaction was an acceptable solution\n\nWe are done with the benchmark development, and it has entered TPC's \"Public Review\" phase. In parallel, the TPC council has approved the public release of a prototype of the benchmark kit to gather more experimental data, and to speed up the approval of the benchmark.\n\nThe Benchmark Specification, the Benchmark Kit, and the User's Guide are available from tpc.org/tpcx-v. 
In addition to these components of the benchmark standard, the subcommittee members have developed two other tools to help benchmarkers get a fast start: a downloadable VM in the form of an ovf-format VM template that contains a complete benchmarking environment with all the software already installed and pre-configured; and a PowerCLI script that, in the VMware vSphere environment, allows you to quickly clone a VM template into a large number of benchmark VMs.\n\nThe review period is open until Thursday, October 15th. The subcommittee intends to resolve the comments from the Formal Review by November 12th, and bring forward a motion to the Council to approve the Benchmark Standard.\n\nWe would love to get feedback from the PGSQL community. The subcommittee is taking feedback via FogBugz at www.tpc.org/bugdb under the project \"TPC-Virt\".\n\nAnticipating some of the questions/comments, here are some thoughts:\n\n- We wish we had a more representative workload; with multiple kits for multiple hardware architectures; etc. Those are questions for another day. Having an end-to-end database benchmarking kit for X86 virtualization with a workload derived from TPC-E is a pretty good first step.\n\n- You don't have to run the full multi-VM configuration of TPCx-V if you are just playing with the kit and not intending to have an audited result. You can run the benchmark on a single database on a single VM (or even a native, un-virtualized server!) This would be very similar to a simple TPC-E config. If we can get feedback based on a single-VM config, we will still be grateful.\n\n- Having said that, there are enough subtle differences between the TPC-E schema and the TPCx-V schema that running TPCx-V on a single VM doesn't exactly take you to TPC-E.\n\nThanks,\nReza Taheri for the TPCx-V subcommittee",
"msg_date": "Thu, 10 Sep 2015 18:59:11 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Announcing the public availability of the TPCx-V prototype"
}
] |